
October 28, 2022

MAGICAL, Part 1

hasrbrouck-cpdp2017.jpg"What's that for?" I asked. The question referred to a large screen in front of me, with my newly-captured photograph in the bottom corner. Where was the camera? In the picture, I was trying to spot it.

The British Airways gate attendant at Chicago's O'Hare airport tapped the screen and a big green checkmark appeared.

"Customs." That was all the explanation she offered. It had all happened so fast there was no opportunity to object.

Behind me was an unforgiving line of people waiting to board. Was this a good time to stop to ask:

- What is the specific purpose of collecting my image?

- What legal basis do you have for collecting it?

- Who will be storing the data?

- How long will they keep it?

- Who will they share it with?

- Who is the vendor that makes this system and what are its capabilities?

It was not.

I boarded, tamely, rather than argue with a gate attendant who certainly didn't make the decision to install the system and was unlikely to know much about its details. Plus, we were in the US, where the principles of data protection law don't really apply - and even if they did, they wouldn't apply at the border - even, it appears, in Illinois, the first US state to pass a biometric privacy law.

I *did* know that US Customs and Border Protection had begun trialing facial recognition at selected airports in 2017. Long-time readers may remember a net.wars report from the 2013 Biometrics Conference about the MAGICAL [sic] airport, circa 2020, through which passengers flow unimpeded because their face unlocks all. Unless, of course, they're "bad people" who need to be kept out.

I think I even knew - because of Edward Hasbrouck's indefatigable reporting on travel privacy - that at various airports airlines are experimenting with biometric boarding. This process does away with boarding cards entirely; the airline captures biometrics at check-in and uses them to automate the "boarding process" (a bit of airline-speak the late comedian George Carlin loved to mock). The linked explanation claims this will be faster because you can have four! automated lanes instead of one human-operated lane. (Presumably the four lanes then merge into a giant pile-up in the single-lane jetway.)

It was nonetheless startling to be confronted with it in person - and with no warning. CBP proposed taking non-US citizens' images in 2020, when none of us were flying, and Hasbrouck wrote earlier this year about the system's use in Seattle. There was, he complained, no signage to explain the system despite the legal requirement to do so, and the airport's website incorrectly claimed that Congress mandated capturing biometrics to identify all arriving and departing international travelers.

According to Biometric Update, as of last February, 32 airports were using facial recognition on departure, and 199 airports were using facial recognition on arrival. In total, 48 million people had their biometrics taken and processed in this way in fiscal 2021. Since the program began in 2018, the number of alleged impostors caught: 46.

"Protecting our nation, one face at a time," CBP calls it.

On its website, British Airways says passengers always have the ability to opt out except where biometrics are required by law. As noted, it all happened too fast. I saw no indication on the ground that opting out was possible, even though notice is required under the Paperwork Reduction Act (1980).

As Hasbrouck says, though, travelers, especially international travelers and even more so international travelers outside their home countries, go through so many procedures at airports that they have little way to know which are required by law and which are optional, and arguing may get you grounded.

He also warns that the system I encountered is only the beginning. "There is an explicit intention worldwide that's already decided that this is the new normal. All new airports will be designed and built with facial recognition built into them for all airlines. It means that those who opt out will find it more and more difficult and more and more delaying."

Hasbrouck, who is probably the world's leading expert on travel privacy, sees this development as dangerous. Largely, he says, it's happening unopposed because the government's desire for increased surveillance serves the airlines' own desire to cut costs through automating their business processes - which include herding travelers onto planes.

"The integration of government and business is the under-noticed aspect of this. US airports are public entities but operate with the thinking of for-profit entities - state power merged with the profit motive. State *monopoly* power merged with the profit motive. Automation is the really problematic piece of this. Once the infrastructure is built it's hard for airline to decide to do the right thing." That would be the "right thing" in the sense of resisting the trend toward "pre-crime" prediction.

"The airline has an interest in implying to you that it's required by government because it pressures people into a business process automation that the airline wants to save them money and implicitly put the blame on the government for that," he says. "They don't want to say 'we're forcing you into this privacy-invasive surveillance technology'."


Illustrations: Edward Hasbrouck in 2017.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 20, 2022

The laws they left behind

In the spring of 2020, as country after country instituted lockdowns, mandated contact tracing, and banned foreign travelers, many, including Britain, hastily passed laws enabling the state to take such actions. Even in the strange airlessness of the time, it was obvious that someday there would have to be a reckoning and a reevaluation of all that new legislation. Emergency powers should not be allowed to outlive the emergency. I spent many of those months helping Privacy International track those new laws across the world.

Here in 2022, although Western countries believe the acute emergency phase of the pandemic is past, the reality is that covid is still killing thousands of people a week across the world, and there is no guarantee we're safe from new variants with vaccine escape. Nonetheless, the UK and US at least appear to accept this situation as if it were the same old "normal". Except: there's a European war, inflation, strikes, a cost of living crisis, energy shortages, and a load of workplace monitoring and other privacy invasions that would have been heavily resisted in previous times. (And, in the UK, a government that has lost its collective mind; as I type no one dares move the news cameras away from the doors of Number 10 Downing Street in case the lettuce wins.)

Laws last longer than pandemics, as the human rights lawyer Adam Wagner writes in his new book, Emergency State: How We Lost Our Freedoms in the Pandemic and Why It Matters. For the last couple of years, Wagner has been a constant presence in my Twitter feed, alongside numerous scientists and health experts posting and examining the latest new research. Wagner studies a different pathology: the gaps between what the laws actually said and what was merely guidance, and between overactive police enforcement and people's reasonable beliefs of what the laws should be.

In Emergency State, Wagner begins by outlining six characteristics of the emergency-empowered state's power: mighty, concentrated, ignorant, corrupt, self-reinforcing, and, crucially, wanted - we want it to happen. As a comparison, Wagner notes the surveillance laws and technologies rapidly adopted after 9/11. Much of the rest of the book investigates a seventh characteristic: these emergency-expanded states are hard to reverse. In an example that's frequently come up here, see Britain's World War II ID card, which took until 1952 to remove - and even then it took Harry Willcock's court victory after he refused to show his papers on demand.

Most of us remember the shock and sudden silence of the first lockdown. Wagner remembers something most of us either didn't know or forgot: when Boris Johnson announced the lockdown and listed the few exceptional circumstances under which we were allowed to leave home, there was as yet no law in place on which law enforcement could rely. That only came days later. The emergency to justify this was genuine: dying people were filling NHS hospital beds. And yet: the government response overturned the basis of Britain's laws, which traditionally presume that everything is permitted unless it's specifically forbidden. Suddenly, the opposite - everything is forbidden unless explicitly permitted - was the foundation of daily life. And it happened with no debate.

Wagner then works methodically through Britain's Emergency State, beginning by noting that the ethos of Boris Johnson's government, continuing the Conservatives' direction of travel, was already disdainful of Parliamentary scrutiny (see also: prorogation of Parliament) and ready to weaken both the Human Rights Act and the judiciary. As the pandemic wore on, Parliamentary attention to successive waves of incoming laws did not improve; sometimes, the laws had already changed by the time they reached the chamber. In two years, Parliament failed to amend any of them. Meanwhile, Wagner notes, behind closed doors government members ignored the laws they made.

The press dubbed March 18, 2022 Freedom Day, to signify the withdrawal of all restrictions. And yet: if scientists' worst fears come true, we may need them again. Many covid interventions - masks, ventilation, social distancing, contact tracing - are centuries old, because they work. The novelty here was the comprehensive lockdowns and widespread business closures, which Wagner suggests may have come about because the first country to suffer and therefore to react was China, where this approach was more acceptable to its authoritarian government. Would things have gone differently had the virus surfaced in a democratic country? We will never know. Either way, the effects of the cruelest restrictions - the separation of families and friends, the isolation imposed on the elderly and dying - cannot be undone.

In Britain's case, Wagner points to flaws in the Public Health Act (1984) that made it too easy for a months-old prime minister with a distaste for formalities to bypass democratic scrutiny. He suggests four remedies: urgently amend the act to include safeguards; review all prosecutions and fines under the various covid laws; codify stronger human rights, either in a written constitution or a bill of rights; and place human rights at the heart of emergency decision making. I'd add: elect leaders who will transparently explain which scientific advice they have and haven't followed and why, and who will plan ahead. The Emergency State may be in abeyance, but current UK legislation in progress seeks to undermine our rights regardless.


Illustrations: The Daily Star's QE2 lettuce declaring victory as 44-day prime minister Liz Truss resigns.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 30, 2022

Regression

They got what they wanted, and now they're screwing it up.

"They" in that sentence, is entertainment industry rights holders, who campaigned for years in bad ways and worse ways to get rid of "piracy" - that is, unauthorized copying and digital distribution of their products. In pursuit of that ideal, they sued popular companies (Napster, MP3.com) out of existence; prosecuted users and demanded ISPs' help in doing so; applied digital rights management to everything from software and classic books to tractors and wheelchairs; and pursued national legislation and trade treaties to entrench their business model.

"What they wanted" was people paying for the cultural artefacts they finance. How they got it, in the end, was not through any of the above efforts. Instead, as many scholars and activists told them during those years it would be, the solution was legally authorized services for which people were willing to pay. And thus grew and flourished video services such as Netflix, YouTube, Hulu, and, latterly, Disney, Amazon Prime, Apple TV, and and music services like Apple iTunes, Spotify, and Amazon Music. The industry began making money from digital downloads. So yay?

You would think. Instead, we're going backwards. The reality now is that paid services are becoming a chore to use: users complain the interfaces are frustrating, and that the thing they want to watch is always on some other service. Newspapers now track where to find popular older shows, and know only a sliver of the mass audience will be able to see some of the new material they review.

Result: pirate sites are back on top. You can find almost anything in one search, it's yours to watch any way you want within minutes, and any ads have been neatly excised. Like I said: they got what they wanted and then...

This tiny rant had two immediate provocations. The first was the release of Glyn Moody's new book, Walled Culture (available here as a freely downloadable PDF). The other was two Guardian stories by Jim Waterson about Buckingham Palace's wrangle with the UK's national broadcasters over the footage of the recent state funeral of Queen Elizabeth II. The BBC, ITV, and Channel 4 are allowed future use of just one hour's worth of clips; for anything else they must ask permission.

This was a state occasion, paid for by taxpayers, held on public streets and in public buildings, and the video recording was made by broadcasters, which are financed by universal license fees (BBC) and their own commercial activities (all of them). It's particularly bonkers because the entirety of the day's footage is readily available on torrent sites. The palace literally cannot control the footage as it could at the 1953 coronation - though it can limit broadcast. Waterson also reveals that behind the scenes during the various services palace staff and broadcasters shared a WhatsApp group in which the staffers sent a message every five minutes to approve or refuse the use of the previous video block. In our world of 2022, this power to micromanage how they are seen is more power than most people think the monarchy has. The palace is also claiming the right to veto the use of footage of the new monarch's accession service. This is the rawest form of copyright as entrenched power.

In Walled Culture, Moody recounts the Internet's three decades of copyright wrangles, and the resulting shrinkage of public access to culture. It's a great romp through a legal regime that, as Jessica Litman said circa 1998, people would reject if they understood it. Moody begins with the shift from analogue to digital media, then goes through the lawsuits, the battle to make the results of publicly funded research open to the public, web blocking and other censorship, the EU's copyright directive, and the regulatory capture that, as Moody says, leaves impoverished the artists and creators copyright law was originally designed to benefit.

My favorite chapter, however, is the one on copyright absurdities. Half of the commercial movies ever made are unavailable to view. Because of the way streaming is licensed, Netflix 2022 has a library perhaps a tenth the size of Netflix 2012 - or 2002, when the rental service's copy of a DVD could not be withdrawn. Yet digital media have a notoriously short life before they must be migrated to newer media and formats. Copyright is even why statisticians continue to use suboptimal statistical methods: in the 1920s Karl Pearson refused fellow statistician Ronald A. Fisher permission to reuse his statistical tables.

As Moody shows, the impact of copyright law is widely felt, and its abuse even more so. Bear in mind that the original purpose was to balance the public interest (as opposed to the public's interest) in its own culture against the desirability of encouraging creators and artists to go on creating new works by giving them a relatively brief period of exclusivity in which to exploit their work. For that reason, a world in which piracy is the best option for accessing culture is not a good world. Moody proposes numerous fixes that roll back the worst elements and change the power imbalance. We do want to pay artists and creators, especially those whose voices have largely gone unheard in the past. Rights holders should not be - ahem - kings.


Illustrations: Queen Elizabeth II's funeral procession (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 23, 2022

Insert a human

Robots have stopped being robots. This is a good thing.

This is my biggest impression of this year's We Robot conference: we have moved from the yay! robots! of the first year, 2012, through the depressed doldrums of "AI" systems that make the already-vulnerable more vulnerable, circa 2018, to this year, when the phrase that kept twanging was "sociotechnical systems". For someone with my dilettantish conference-hopping habit, this seems like the necessary culmination of a long-running trend away from robots as autonomous mobile machines to robots/AI as human-machine partnerships. We Robot has never talked much about robot rights, instead focusing on the policy challenges that arise as robots and AI become embedded in our lives. This is realism; as We Robot co-founder Michael Froomkin writes, we're a long, long way from a self-aware and sentient machine.

The framing of sociotechnical systems is a good thing in part because so much of what passes for modern "artificial intelligence" is humans all the way down, as Mary L. Gray and Siddharth Suri documented in their book, Ghost Work. Even the companies that make self-driving cars, which a few years ago were supposed to be filling the streets by now, are admitting that full automation is a long way off. "Admitting" as in consolidating or being investigated for reckless hyping.

If this was the emerging theme, it started with the first discussion, of a paper on humans in the loop by Margot Kaminski, Nicholson Price, and Rebecca Crootof. Too often, the proposed policy solution to problems with decision-making systems is to insert a human - a "solution" they called the "MABA-MABA trap", for "Machines Are Better At / Men Are Better At". While obviously humans and machines have differing capabilities - people are creative and flexible, machines don't get bored - just dropping in a human without considering what role that human is going to fill doesn't necessarily take advantage of the best capabilities of either. Hybrid systems are of necessity more complex - this is why cybersecurity keeps getting harder - but policy makers may not take this into account or think clearly about what the human's purpose is going to be.

At this conference in 2016, Madeleine Claire Elish foresaw that the human would become a moral crumple zone or liability sponge, absorbing blame without necessarily being at fault. No one will admit that this is the human's real role - but it seems an apt description of the "safety driver" watching the road, trying to stay alert in case the software driving the car needs backup, or of the poorly-paid human given a scoring system and tasked with awarding welfare benefits. What matters, as Andrew Selbst said in discussing this paper, is the *loop*, not the human - and that may include humans with invisible control, such as someone who can massage the data they enter into a benefits system in order to help a particularly vulnerable child, or humans with wide discretion, such as a judge who is ultimately responsible for parole decisions no matter what the risk assessment system says.

This is not the moment to ask what constitutes a human.

It might be, however, the moment to note the commentator who said that a lot of the problems people are suggesting robots/AI can solve have other, less technological solutions. As they said, if you are putting a pipeline through a community without its consent, is the solution to deploy police drones to protect the pipeline and the people working on it - or is it to put the pipeline somewhere else (or to move to renewables and not have a pipeline at all)? Change the relationship with the community and maybe you can partly disarm the police.

One unwelcome forthcoming issue, discussed in a paper by Kate Darling and Daniella DiPaola, is the threat that merging automation and social marketing poses to consumer protection. A truly disturbing note came from DiPaola, who investigated manipulation and deception with personal robots and 75 children. The children had three options: no ads, ads allowed only if they are explicitly disclosed to be ads, or advertising through casual conversation. The kids chose casual conversation because they felt it showed the robot *knew* them. They chose this even though they knew the robot was intentionally designed to be a "friend". Oy. In a world where this attitude spreads widely and persists into adulthood, no amount of "media literacy" or learning to identify deception will save us; these programmed emotional relationships will overwhelm all that. As DiPaola said, "The whole premise of robots is building a social relationship. We see over and over again that it works better if it is more deceptive."

There was much more fun to be had - steamboat regulation as a source of lessons for regulating AI (Bhargavi Ganesh and Shannon Vallor), police use of canid robots (Carolin Kemper and Michael Kolain), and - a new topic - planning for the end of life of algorithmic and robot systems (Elin Björling and Laurel Riek). The robots won't care, but the humans will be devastated.

Illustrations: Hanging out at We Robot with Boston Dynamics' "Spot".

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 12, 2022

Nebraska story

This week saw the arrest of a Nebraska teenager and her mother, who are charged with multiple felonies for terminating the 17-year-old's pregnancy at 28 weeks and burying (and, apparently, trying to burn) the fetus. Allegedly, this was a home-based medication abortion...and the reason the authorities found out is that following a tip-off the police got a search warrant for the pair's Facebook accounts. There, the investigators found messages suggesting the mother had bought the pills and instructed her daughter how to use them.

Cue kneejerk reactions. "Abortion" is a hot button. Facebook privacy is a hot button. Result: in reporting these gruesome events most media have chosen to blame this horror story on Facebook for turning over the data.

As much as I love a good reason to bash Facebook, this isn't the right take.

Meta - Facebook's parent - has responded to the stories with a "correction" that says the company turned over the women's data in response to valid legal warrants issued by the Nebraska court *before* the Supreme Court ruling. The company adds, "The warrants did not mention abortion at all."

What the PR folks have elided is that both the Supreme Court's Dobbs decision, which overturned Roe v. Wade, and the wording of the warrants are entirely irrelevant. It doesn't *matter* that this case was about an abortion. Meta/Facebook will *always* turn over user data in compliance with a valid legal warrant issued by a court, especially in the US, its home country. So will every other major technology company.

You may dispute the justice of Nebraska's 2019 Pain-Capable Unborn Child Act, under which abortion is illegal after 20 weeks from fertilization (22 weeks in normal medical parlance). But that's not Meta's concern. What Meta cares about is legal compliance and the technical validity of the warrant. Meta is a business, not a social justice organization, and while many want Mark Zuckerberg to use his personal judgment and clout to refuse to do business with oppressive regimes (by which they usually mean China, or Myanmar), do you really want him and his company to obey only laws they agree with?

There will be many much worse cases to come, because states will enact and enforce the vastly more restrictive abortion laws that Dobbs enables, and there will be many valid legal warrants that force companies to hand data to police bent on prosecuting people in excruciating pregnancy-related situations - and in many more countries. Even in the UK, where (except for Northern Ireland) abortion has been mostly non-contentious for decades, lurking behind the 1967 law which legalized abortion until 24 weeks is an 1861 statute under which abortion is criminal. That law, as Shanti Das recently wrote at the Guardian, has been used to prosecute dozens of women and a few men in the last decade. (See also Skeptical Inquirer.)

So if you're going to be mad at Facebook, be mad that the platform hadn't turned on end-to-end encryption for its messaging. That, as security engineer Alec Muffett has been pointing out on Twitter, would have protected the messages against access by both the system itself and by law enforcement. At the Guardian, Johana Bhuiyan reports the company is now testing turning on end-to-end encryption by default. Doubtless, soon to be followed by law enforcement and governments demanding special access.

Others advocate switching to encrypted messaging platforms that, like Signal, provide a setting to make messages automatically delete themselves after a specified number of days. Such systems retain no data that can be turned over.

It's good advice, up to a point. For one thing, it ignores most people's preference for using the familiar services their friends use. Adopting a second service just for, say, medical contacts adds complications; getting everyone you know to switch is almost impossible.

Second, it's also important to remember the power of metadata - data about data, which includes everything from email headers to search histories. "We kill people based on metadata," former NSA head Michael Hayden said in 2014 in a debate on the constitutionality of NSA surveillance. (But not, he hastened to add, metadata collected from *Americans*.)

Logs of who has connected to whom and how frequently are often more revealing than the content of the messages sent back and forth. For example: the message content may be essentially meaningless to an outsider ("I can make it on Monday at two") until the system logs tell you that the sender is a woman of childbearing age and the recipient is an abortion clinic. This is why so many governments have favored retaining Internet connection data. Governments cite the usual use cases - organized crime, drug dealers, child abusers, and terrorists - when pushing for data retention, and they are helped by the fact that most people instinctively quail at the thought of others reading the *content* of their messages but overlook metadata's significance. That failure to grasp the importance of metadata has helped enable mass Internet surveillance.

The net result of all this is to make surveillance capitalism-driven technology services dangerous for the 65.5 million women of childbearing age in the US (2020). That's a fair chunk of their most profitable users, a direct economic casualty of Dobbs.


Illustrations: Facebook.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 5, 2022

Painting by numbers

My camera can see better than I can. I don't mean that it can take better pictures than I can because of its automated settings, although this is also true. I mean it can capture things I can't *see*. The heron above, captured on a grey day along the Thames towpath, was pretty much invisible to me. I was walking with a friend. Friend pointed and said, "Look. A heron." I pointed the camera more or less where she indicated, pushed zoom to maximum, hit the button, and when I got home there it was.

If the picture were a world-famous original, there might be a squabble about who owned the copyright. I pointed the camera and pushed the button, so in our world the copyright belongs to me. But my friend could stake a reasonable claim: without her, I wouldn't have known where or when to point the camera. The camera company (Sony) could argue, quite reasonably, that the camera and its embedded software, which took years to design and build, did all the work, while my entire contribution took but a second.

I imagine, however, that at the beginning of photography artists who made their living painting landscapes and portraits might have seen reason to be pretty scathing about the notion that photography deserved copyright at all. Instead of working for months to capture the right light and nuances...you just push a button? Where's the creative contribution in that?

This thought was inspired by a recent conversation on Twitter between two copyright experts - Lilian Edwards and Andres Guadamuz - who have been thinking for years about the allocation of intellectual property rights when an AI system creates or helps to create a new work. The proximate cause was Guadamuz's stunning experiments generating images using Midjourney.

If you try out Midjourney's image-maker via the bot on its Discord server, you quickly find that each detail you add to your prompt adds complexity to the resulting image; an expert at "prompt-craft" can come extraordinarily close to painting with the generation system. Writing prompts to control these generation systems and shape their output is becoming an art in itself, an expertise that will become highly valuable. Guadamuz calls it "AI whispering".

Guadamuz touches on this in a June 2022 blog posting, in which he asks about the societal impact of being able to produce sophisticated essays, artworks, melodies, or software code based on a few prompts. The best human creators will still be the crucial element - I don't care how good you are at writing prompts, unless you're the human known as Vince Gilligan, you plus generator are not going to produce Breaking Bad or Better Call Saul. However, generation systems *might*, as Guadamuz says, produce material that's good enough for many contexts, given that it's free (ish).

More recently, Guadamuz considers the subject he and Edwards were mulling on Twitter: the ownership of copyright in generated images. Guadamuz had been reading the generators' terms and conditions. OpenAI, owner of DALL-E, specifies that users assign the copyright in all "Generations" its system produces, which it then places in the public domain while granting users a permanent license to do whatever they want with the Generations their prompts inspire. Midjourney takes the opposite approach: the user owns the generated image, and licenses it back to Midjourney.

What Guadamuz found notable was the trend toward assuming that generated images are subject to copyright, even though lawyers have argued that they can't be and fall into the public domain. Earlier this year, the US Copyright Office rejected a request to allow an AI to copyright a work. The UK is an outlier, awarding copyright in computer-generated works to the "person by whom the arrangements necessary for the creation of the work are undertaken". This is ambiguous: is that person the user who wrote the prompt or the programmers who trained the model and wrote the code?

Much of the discussion revolved around how that copyright might be divided up. Should it be shared between the user and the company that owns the generating tool? We don't assign copyright in the words we write to our pens or word processors; but as Edwards suggested, the generator tool is more like an artist for hire than a pen. Of course, if you hire a human artist to create an image for you, contract terms specify who owns the copyright. If it's a work made for hire, the artist retains no further interest.

So whatever copyright lawyers say, the companies who produce and own these systems are setting the norms as part of choosing their business model. The business of selling today's most sophisticated cameras derives from an industry that grew up selling physical objects. In a more recent age, they might have grown up selling software add-on tools on physical media. Today, they may sell subscriptions and tiers of functionality. Nonetheless, if a company's leaders come to believe there is potential for a low-cost revenue stream of royalties for reusing generated images, the company will go for it. Corbis and Getty have already pioneered automated copyright enforcement.

For now, these terms and conditions aren't about developing legal theory; the companies just don't want to get sued. These are cover-your-ass exercises, like privacy policies.


Illustrations: Grey heron hanging out by the Thames in spring 2021.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 22, 2022

Parting gifts

All national constitutions are written to a threat model that is clearly visible if you compare what they say to how they are put into practice. Ireland, for example, has the same right to freedom of religion embedded in its constitution as the US Bill of Rights does. Both were reactions to English abuse, yet they chose different remedies. The nascent US's threat model was a power-abusing king, and that focus coupled freedom of religion with a bar on the establishment of a state religion. Although the Founding Fathers were themselves Protestants and likely imagined a US filled with people in their likeness, their threat model was not other beliefs or non-belief but the creation of a supreme power derived from merging state and church. In Ireland, for decades, "freedom of religion" meant "freedom to be Catholic". Campaigners for the separation of church and state in 1980s Ireland, when I lived there, advocated fortifying the constitutional guarantee with laws that would make it true in practice for everyone from atheists to evangelical Christians.

England, famously, has no written constitution to scrutinize for such basic principles. Instead, its present Parliamentary system has survived for centuries under a "gentlemen's agreement" - a term of trust that in our modern era translates to "the good chaps rule of government". Many feel Boris Johnson has exposed the limitations of this approach. Yet it's not clear that a written constitution would have prevented this: a significant lesson of Donald Trump's US presidency is how many of the systems protecting American democracy rely on "unwritten norms" - the "gentlemen's agreement" under yet another name.

It turns out that tinkering with even an unwritten constitution is tricky. One such attempt took place in 2011, with the passage of the Fixed-term Parliaments Act. Without the act, a general election must be held at least once every five years, but may be called earlier if the prime minister advises the monarch to do so; one may also be called at any time following a vote of no confidence in the government. Because past prime ministers were felt to have abused their prerogative by timing elections for their political benefit, the act removed it in favor of a set five-year interval, with an early election possible only if two-thirds of MPs voted for one or the government lost a confidence vote. There were general elections in 2010 and 2015 (the first under the act). The next should have been in 2020. Instead...

No one counted on the 2016 vote to leave the EU or David Cameron's next-day resignation. In 2017, Theresa May, trying to negotiate a deal with an increasingly divided Parliament and thinking an election would win her a more workable majority and a mandate, got the necessary super-majority to call a snap election. Her reward was a hung Parliament; she spent the rest of her time in office hamstrung by having to depend on the good will of Northern Ireland's Democratic Unionist Party to get anything done. Under the act, the next election should have been 2022. Instead...

In 2019, a Conservative party leadership contest replaced May with Boris Johnson, who, after several failed attempts blocked by opposition MPs determined to stop the most reckless Brexit possibilities, secured passage of a one-off act bypassing the super-majority requirement and called a snap election, winning a majority of 80 seats. The next election should be in 2024. Instead...

They repealed the act in March 2022. As we were. Now, Johnson is going, leaving both party and country in disarray. An election in 2023 would be no surprise.

Watching the FTPA in action led me to this conclusion: British democracy is like a live frog. When you pin down one bit of it, as the FTPA did, it throws the rest into distortion and dysfunction. The obvious corollary is that American democracy is a *dead* frog that is being constantly dissected to understand how it works. The disadvantage to a written constitution is that some parts will always age badly. The advantage is clarity of expectations. Yet both systems have enabled someone who does not care about norms to leave behind a generation's worth of continuing damage.

All this is a long preamble to saying that last year's concerns about the direction of the UK's computers-freedom-privacy travel have not abated. In this last week before Parliament rose for the summer, while the contest and the heat saturated the news, Johnson's government introduced the Data Protection and Digital Information bill, which will undermine the rights granted by 25 years of data protection law. The widely disliked Online Safety bill was postponed until September. The final two leadership candidates are, to varying degrees, determined to expunge EU law, revamp the Human Rights Act, and withdraw from the European Convention on Human Rights. In addition, lawyer Gina Miller warns, the Northern Ireland Protocol bill expands executive power by giving ministers the Henry VIII power to make changes without Parliamentary consent: "This government of Brexiteers are eroding our sovereignty, our constitution, and our ability to hold the government to account."

The British convention is that "government" is collective: the government *are*. Trump wanted to be a king; Johnson wishes to be a president. The coming months will require us to ensure that his replacement knows their place.


Illustrations: Final leadership candidates Rishi Sunak and Liz Truss in debate on ITV.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 15, 2022

Online harms

An unexpected bonus of the gradual-then-sudden disappearance of Boris Johnson's government, followed by his own resignation, is that the Online Safety bill is being delayed until after Parliament's September return with a new prime minister and, presumably, cabinet.

This is a bill almost no one likes. Child safety campaigners think it doesn't go far enough; digital and human rights campaigners - Big Brother Watch, Article 19, Electronic Frontier Foundation, Open Rights Group, Liberty, a coalition of 16 organizations (PDF) - oppose it because it threatens freedom of expression and privacy while failing to tackle genuine harms such as the platforms' business model; and technical and legal folks find it largely unworkable.

The DCMS Parliamentary committee sees it as wrongly conceived. The UK Independent Reviewer of Terrorism Legislation, Jonathan Hall QC, says it's muddled and confused. Index on Censorship calls it fundamentally broken, and The Economist says it should be scrapped. The minister whose job it has been to defend it, Nadine Dorries (C-Mid Bedfordshire), remains in place at the Department for Culture, Media, and Sport, but her insistence that resigning-in-disgrace Johnson was brought down by a coup probably won't do her any favors in the incoming everything-that-goes-wrong-was-Johnson's-fault era.

In Wednesday's Parliamentary debate on the bill, the most interesting speaker was Kirsty Blackman (SNP-Aberdeen North), whose Internet usage began 30 years ago, when she was younger than her children are now. Among passionate pleas that her children should be protected from some of the high-risk encounters she experienced was this: "Every person, nearly, that I have encountered talking about this bill who's had any say over it, who continues to have any say, doesn't understand how children actually use the Internet." She called this the bill's biggest failing. "They don't understand the massive benefits of the Internet to children."

This point has long been stressed by academic researchers Sonia Livingstone and Andy Phippen, both of whom actually do talk to children. "If the only horse in town is the Online Safety bill, nothing's going to change," Phippen said at last week's Gikii, noting that Dorries' recent cringeworthy TikTok "rap" promoting the bill focused on platform liability. "The liability can't be only on one stakeholder." His suggestion: a multi-pronged harm reduction approach to online safety.

UK politicians have publicly wished to make "Britain the safest place in the world to be online" all the way back to Tony Blair's 1997-2007 government. It's a meaningless phrase. Online safety - however you define "safety" - is like public health; you need it everywhere to have it anywhere.

Along those lines, "Where were the regulators?" Paul Krugman asked in the New York Times this week, as the cryptocurrency crash continues to unfold. The cryptocurrency market, which is now down to $1 trillion from its peak of $3 trillion, is recapitulating all the reasons why we regulate the financial sector. Given the ongoing collapses, it may yet fully vaporize. Krugman's take: "It evolved into a sort of postmodern pyramid scheme". The crash, he suggests, may provide the last, best opportunity to regulate it.

The wild rise of "crypto" - and the now-defunct Theranos - was partly fueled by high-trust individuals who boosted the apparent trustworthiness of dubious claims. The same, we learned this week was true of Uber 2014-2017, Based on the Uber files,124,000 documents provided by whistleblower Mark MacGann, a lobbyist for Uber 2014-2016, the Guardian exposes the falsity of Uber's claims that its gig economy jobs were good for drivers.

The most startling story - which transport industry expert Hubert Horan had already published in 2019 - is the news that the company paid academic economists six-figure sums to produce reports it could use to lobby governments to change the laws it disliked. Other things we knew about - for example, Greyball, the company's technology for denying regulators and police rides so they couldn't document Uber's regulatory violations, and Uber staff's abuse of customer data - are now shown to have been more widely used than we knew. Further appalling behavior, such as that of former CEO Travis Kalanick, who was ousted in 2017, has been thoroughly documented in Mike Isaac's 2019 book, Super Pumped, and the 2022 TV series based on it.

But those scandals - and Thursday's revelation that 559 passengers are suing the company for failing to protect them from rape and assault by drivers - aren't why Horan described Uber as a regulatory failure in 2019. For years, he has been indefatigably charting Uber's eternal unprofitability. In his latest, he notes that Uber has lost over $20 billion since 2015 while cutting driver compensation by 40%. The company's share price today is less than half its 2019 IPO price of $45 - and a third of its 2021 peak of $60. The "misleading investors" kind of regulatory failure.

So, returning to the Online Safety bill, if you undermine existing rights and increase the large platforms' power by devising requirements that small sites can't meet *and* do nothing to rein in the platforms' underlying business model...the regulatory failure is built in. This pause is a chance to rethink.

Illustrations: Boris Johnson on his bike (European Cyclists Federation via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 8, 2022

Orphan consciousness

What if, Paul Bernal asked late in this year's Gikii, someone uploaded a consciousness and then we forgot where we got it from? Taking an analogy from copyrighted works whose owners are unknown - orphan works - call it an orphan consciousness. What rights would it have? Can it commit crimes? Is it murder to erase it? What if it met a fellow orphan consciousness and together they created a third? Once it's up there without a link to humanity, then what?

These questions annoyed me less than proposals for robot rights, partly because they're more obviously a thought experiment, and partly because they specifically derived from Greg Daniels' science fiction series Upload, which inspired many of this year's Gikii presentations. The gist: Nathan (Robbie Amell), whose lung is collapsing after an autonomous vehicle crash, is offered two choices: take his chances in the operating room, or have his consciousness uploaded into Lakeview, a corporately owned and run "paradise" where he can enjoy an afterlife in considerable comfort. His girlfriend, Ingrid (Allegra Edwards), begs him to take the afterlife, at her family's expense. As he's rushed into signing the terms and conditions, I briefly expected him to land at the waystation in Albert Brooks' 1991 film Defending Your Life.

Instead, he wakes in a very nice country club hotel where he struggles to find his footing among his fellow uploaded avatars and wrangle the power dynamics in his relationship with Ingrid. What is she willing to fund? What happens if she stops paying? (A Spartan 2GB per day, we find later.) And, as Bernal asked, what are his neurorights?

Fictional works, as Gikii proves every year (2021), provide fully-formed use cases through which to explore the developing ethics and laws surrounding emergent technologies. For the current batch - the Digital Markets Act (EU, passed this week), the Digital Services Act (ditto), the Online Safety bill (UK, pending), the Platform Work Directive (proposed, EU), the platform-to-business regulations (in force 2020, EU and UK), and, especially, the AI Act (pending, EU) - Upload couldn't be more on point.

Side note: in-person attendees got to sample the Icelandverse, a metaverse of remarkable physical reality and persistence.

Upload underpinned discussions of deception and consent laws (Burkhard Schäfer and Chloë Kennedy), corporate objectification (Mauricio Figueroa), and property rights - English law bans perpetual trusts. Can uploads opt out? Can they be murdered? Maybe, like copyright, give them death plus 70 years?

Much of this has direct relevance to the "metaverse", which Anna-Maria Piskopani called "just one new way to do surveillance capitalism". The show's perfect example: when sex fails to progress, Ingrid yells out, "Tech support!".

In life, Nora (Andy Allo), the "angel" who arrives to help, works in an open plan corporate dystopia where her co-workers gossip about the avatars they monitor. As in this year's other notable fictional world, Dan Erickson's Severance, the company is always watching, a real pandemic-accelerated trend. In our paper, Andelka Phillips and I noted that although the geofenced chip implanted in Severance's workers prevents their work selves ("innies") from knowing anything about their out-of-hours selves ("outies"), their employer has no such limitation. Modern companies increasingly expect omniscience.

Both series reflect the growing ability of cyber systems to effect change in the physical world. Lachlan Urquhart, Lilian Edwards, and Derek McAuley used the science fiction comedy film Ron's Gone Wrong to examine the effect of errors at scale. The film's damaged robot, Ron, is missing safety features and spreads its settings to its counterparts. Would the AI Act view Ron as high or low risk? It may be a distinction without a difference; McAuley reminded us there will always be failures in the field. "A one-bit change can make changes of orders of magnitude." Then that chip ships by the billion, and can be embedded in millions of devices before it's found. Rinse, repeat, and apply to autonomous vehicles.

In Japan, however, as Naomi Lindvedt explained, the design culture surrounding robots has been far more influenced by the rules written for Astro Boy in 1951 by creator Tezuka Osamu than by Asimov's Laws. These rules are more restrictive and prescriptive, and designers aim to create robots that integrate into society and are user-friendly.

In other quick highlights, Michael Veale noted the Deliveroo ads that show food moving by itself, as if there are no delivery riders, and observed that technology now enforces the exclusivity that used to be contractual, so that drivers never see customer names and contact information, and so can't easily make direct arrangements; Tima Otu Anwana and Paul Eberstaller examined the business relationship between OnlyFans and its creators; Sandra Schmitz-Berndt and Paula Contreras showed the difficulty of reporting cyber incidents given the multiple authorities and their inconsistent requirements; Adrian Aronsson-Storrier produced an extraordinary long-lost training video (Super-Betamax!) for a 500-year-old Swedish copyright cult; Helen Oliver discussed attitudes to privacy as revealed by years of UK high school students' entries for a competition to design fictional space stations; and Andy Phippen, based on his many discussions with kids, favors a harm reduction approach to online safety. "If the only horse in town is the Online Safety bill, nothing's going to change."


Illustrations: Image from the Icelandverse (by Inspired by Iceland).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 24, 2022

Creepiness at scale

This week, Amazon announced a prospective new feature for its Alexa "smart" speakers: the ability to mimic anyone's voice from less than one minute of recording. Amazon is, incredibly, billing this as the chance to memorialize a dead loved one as a digital assistant.

As someone commented on Twitter, technology companies are not *supposed* to make ideas from science fiction dystopias into reality. As so often, Philip K. Dick got here first; in his 1969 novel Ubik, a combination of psychic powers and cryonics lets (rich) people visit and consult their dead, whose half-life fades with each contact.

Amazon can call this preserving "memories", but at The Overspill Charles Arthur is likely closer to reality, calling it "deepfake for voice". Except that where deepfakes emerged from a Reddit group and require some technical effort, Amazon's functionality will be right there in millions of people's homes, planted by one of the world's largest technology companies. Questions abound: who gets access to the data and models, and will Amazon link it to its Ring doorbell network and thousands of partnerships with law enforcement?

The answers, like the service, are probably years off. The lawsuits may not be.

This piece began as some notes on the company that so far has been the technology industry's creepiest: the facial image database company Clearview AI. Clearview, which has built its multibillion-item database by scraping images off social media and other publicly accessible sites, has fallen foul of regulators in the UK, Australia, France, Italy, Canada, and Illinois. In a world full of intrusive companies collecting mass amounts of personal data about all of us, Clearview AI still stands out.

It has few, if any, defenders outside its own offices. For one thing, unlike Facebook or Google, it offers us - citizens, consumers - nothing in return for our data, which it appropriates wholesale. It is the ultimate two-sided market in which we are nothing but salable data points. It came to public notice in January 2020, when Kashmir Hill exposed its existence and asked if this was the company that was going to end privacy.

Clearview, which bills itself as "building a secure world one face at a time", defends itself against both data protection and copyright laws by arguing that scraping and storing billions of images from what law enforcement likes to call "open source intelligence" is legitimate because the images are posted in public. Even if that were how data protection laws work, it's not how copyright works! Both Twitter and Facebook told Clearview to stop scraping their sites shortly after Hill's article appeared in 2020, as did Google, LinkedIn, and YouTube. It's not clear if the company stopped or deleted any of the data.

Among regulators, Canada was first, starting federal and provincial investigations in June 2020, when Clearview claimed its database held 3 billion images. In February 2021, the Canadian Privacy Commissioner, Daniel Therrien, issued a public warning that the company could not use facial images of Canadians without their explicit consent. Clearview, which had been selling its service to the Royal Canadian Mounted Police among dozens of others, opted to leave the country and mount a court challenge - but not to delete images of Canadians, as Therrien had requested.

In December 2021, the French data protection authority, CNIL, ordered Clearview to delete all the data it holds relating to French citizens within two months, and threatened further sanctions and administrative fines if the company failed to comply within that time.

In March 2022, with Clearview openly targeting 100 billion images and commercial users, Italian DPA Garante per la protezione dei dati personali fined Clearview €20 million, ordered it to delete any data it holds on Italians, and banned it from further processing of Italian citizens' biometrics.

In May 2022, the UK's Information Commissioner's Office fined the company £7.5 million and ordered it to delete the UK data it holds.

All these cases are based on GDPR and uphold the same complaints: Clearview has no legal basis for holding the data, and it is in breach of data retention rules and subjects' rights. Clearview appears not to care, taking the view that it is not subject to GDPR because it's not a European company.

It couldn't make that argument to the state of Illinois. In early May 2022, Clearview and the American Civil Liberties Union settled a court action filed in May 2020 under Illinois' Biometric Information Privacy Act. Result: Clearview has accepted a ban on selling its services or offering them for free to most private companies *nationwide* and a ban on selling access to its database to any private or state or local government entity, including law enforcement, in Illinois for five years. Clearview has also developed an opt-out form for Illinois residents to use to withdraw their photos from searches, and will continue to try to filter out photographs taken in or uploaded from Illinois. On its website, Clearview paints all this as a win.

Eleven years ago, Google's then-CEO, Eric Schmidt, thought automating facial recognition was too creepy to pursue, and synthesizing a voice from recordings took months. The problem is no longer that potentially dangerous technology develops faster than laws can be formulated to control it. It's that we now have well-funded companies that don't care about either.


Illustrations: HAL, from 2001: A Space Odyssey.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 27, 2022

Well may the bogeyman come

It's only an accident of covid that this year's Computers, Privacy, and Data Protection conference - delayed from late January - coincided with the fourth anniversary of the EU's General Data Protection Regulation. Yet its failures and frustrations were on everyone's mind as they considered new legislation forthcoming from the EU: the Digital Services Act, the Digital Markets Act, and, especially, the AI Act.

Two main frustrations: despite GDPR, privacy invasions continue to expand, and, relatedly, enforcement has been extremely limited. The first is obvious to everyone here. As for the second: as Max Schrems explained in a panel on GDPR enforcement, none of the cross-border cases his NGO, noyb, filed in May 2018, immediately after GDPR came into force, have been decided, and even decisions on simpler cases have failed to deal with broader questions.

In one of his examples, Spain rejected a complaint because it wasn't doing historic cases and Austria claimed the case was solved because the organization involved had changed its procedures. "But my rights were violated then." There was no redress.

Schrems is the data protection bogeyman; because legal actions he has brought have twice struck down US-EU agreements to enable data flows, the possibility of "Schrems III" if the next version gets it wrong is frequently mentioned. This particular panel highlighted numerous barriers that block effective action.

Other speakers highlighted numerous gaps between countries that impede cross-border complaints: some authorities have tight deadlines that expire while other authorities are working to more leisurely schedules; there are many conflicts between national procedural laws; each data protection authority has its own approach and requirements; and every cross-border complaint must be time-consumingly translated into English, even when both relevant authorities speak, say, German. "Getting an answer to a two-minute question takes four months," Nina Herbort said, highlighting the common underlying problem: underresourcing.

"Weren't they designed to fail?" Finn Myrstad asked.

Even successful enforcement has largely been limited to levying fines - and despite some of the eye-watering numbers, they're still just a cost of doing business to the major technology platforms.

"We have the tools for structural sanctions," Johnny Ryan said in a discussion on judicial actions. Some of that is beginning to happen. A day earlier, the UK'a Information Commissioner's Office fined Clearview AI £7.5 million and ordered it to delete the images it holds of UK residents. In February, Canada issued a similar order; a few weeks ago, Illinois permanently banned the company from selling its database to most private actors and businesses nationwide, and barred it from selling its service to any entity within Illinois for five years. Sanctions like these hurt more than fines as does requiring companies to delete the algorithms they've based on illegally acquired data.

Other suggestions included building sovereignty by ensuring that public procurement does not default to off-the-shelf products from a few foreign companies but is built on local expertise, an approach advocated by Jan-Philipp Albrecht, the former MEP, who told a panel on the impact of Schrems II that he is now building up cloud providers using locally-built hardware and open source software for the province of Schleswig-Holstein. Quang-Minh Lepescheux suggested requiring transparency in how people are trained to use automated decision-making systems and forcing technology providers to accept third-party testing. Cristina Caffarra, probably the only antitrust lawyer in sight, wants privacy advocates and antitrust lawyers to work together; the economists inside competition authorities insist that more data means better products, so it's good for consumers. Rebecca Slaughter wants to give companies the clarity they say they want (until they get it): clear, regularly updated rules banning a list of practices, plus a catchall. Ryan also noted that some sanctions can vastly improve enforcement efficiency: there's nothing to investigate after banning a company from making acquisitions. Enforcing purpose limitation and banning the single "OK to everything" is more complicated but, "Purpose limitation is Kryptonite to Big Tech when it's misusing data."

Any and all of these are valuable. But new kinds of thinking are also needed. A more complex issue, and another major theme, was the limitations of focusing on personal data and individual rights. This was long predicted as a particular problem for genetic data - the former science journalist Tom Wilkie was first to point out the implications, sounding a warning in his book Perilous Knowledge, published in 1994, at the beginning of the Human Genome Project. Singling out individuals who have been harmed can easily obscure collective damage. The obvious example is Cambridge Analytica and Facebook: the damage to national elections can't be captured one Friends list at a time; controls on the increasing use of aggregated data require protection at scale; and, perversely, monitoring for bias and discrimination requires data collection.

In response to a panel on harmful patterns in recent privacy proposals, an audience member suggested the African philosophy of ubuntu as a useful source of ideas for thinking about collective and, even more important, *interdependent* data. This is where we need to go. Many forms of data - including both genetic data and financial data - cannot be thought of any other way.


Illustrations: The Norwegian Consumer Council receives EPIC's International Privacy Champion award at CPDP 2022.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 6, 2022

Heartbeat

Three months ago, for a book Cybersalon is producing, called Twenty-Two Ideas About the Future, I wrote a provocation about a woman living in Heartbeat Act Texas who discovers she's pregnant. When she forgets to disable its chip, the "smart" home pregnancy test uploads the news to the state's health agency, which promptly shares it far and wide. Under the 2021 law's sanctions on intermediaries, payment services, travel companies, and supermarkets all fear being sued, and so they block her from doing anything that might lead to liability, like buying alcohol, cigarettes, or a bus ticket to the state line, or paying a website for abortion pills.

It wasn't supposed to come true, and certainly not so soon.

As anyone who's seen any form of news this week will know, in a leaked draft of the US Supreme Court's decision in Dobbs v. Jackson Women's Health Organization, author Justice Samuel Alito argues that its 1973 decision in Roe v. Wade was "wrongly decided". This is not the place to defend the right to choose or deplore the dangers of valuing the potential life of a fetus over the actual life of the person carrying it (Louisiana legislators have advanced a bill classifying abortion as homicide). But it is the place to consider the privacy loss if the decision proceeds as indicated, and not just in the approximately half of US states predicted to jump at the opportunity to adopt forced-childbirth policies.

On my shelf is Alan E. Nourse's 1965 book Intern, by Doctor X, an extraordinarily frank diary Nourse kept throughout his 1956 internship. Here he is during his OB/GYN rotation: "I don't know who the OB men have to answer to around here when they get back suspicious pathology reports...somebody must be watching them." In an update, he says the hospital's Tissue Committee reviewed pathology reports on all dilation and curettage procedures; the first "suspicious" report attracted a private warning, the second a censure, and the third permanent expulsion from the hospital staff.

I first read that when I was 12, and I did not understand that he was talking about abortion - although D&Cs were and are routine, necessary procedures, in that time and place each one was also suspected, like travelers today boarding a plane. Every miscarriage had to be cleared of suspicion, a process unlikely to help any of the estimated 1 million per year who grieve pregnancy loss. Elsewhere, he notes the number of patients labeled "NO INFORMATION"; they were giving their babies up for adoption. Then, it was sufficient to criminalize the doctors.

Part of Alito's argument is that abortion is not mentioned in either the Constitution or the First, Fourth, Fifth, Ninth, or Fourteenth Amendments Roe cited. Neither, he says, is privacy; that casual little aside is the Easter egg pointing to future human rights rollbacks.

The US has insufficient privacy law, even in the health sector. Worse, the data collected by period trackers, fitness gizmos, sleep monitoring apps, and the rest is not classed as health data to be protected under HIPAA. In 2015, employers' access to such data through "wellness" programs began raising privacy concerns; all types of employee monitoring have expanded since the pandemic began. Finally, as Johana Bhuiyan reported at the Guardian last month, US law enforcement has easy access to the consumer data we trustingly provide to companies like Apple and Meta. And even when we don't provide it, others do: in 2016, anti-choice activists were caught snapping pictures of women entering clinics, noting license plate numbers, and surveilling their smartphones via geofencing to target those deemed to be "abortion-minded".

"Leaving it to the states" - Alito writes of states' rights, not of women's rights - means any woman of child-bearing age at risk of living under a prohibitive regime dare not confide in any of these technologies. Also dangerous: insurance companies, support groups for pregnancy loss or for cancer patients whose treatment is incompatible with continuing a pregnancy, centers for health information, GPS-enabled smartphones, even search engines. Heterosexual men can look forward to diminished sex lives dominated by fear of pregnancy (although note that no one's threatening to criminalize ejaculating inside a vagina) and women may struggle to find doctors willing to treat them at all.

My character struggled to travel out of state. This was based on 1980s Ireland, where ending a pregnancy required a trip to England; in 1992 the courts famously barred a raped 14-year-old from traveling. At New York Magazine, Irin Carmon finds that some Republican politicians are indeed thinking about this.

Encryption, VPNs, Tor - women will need the same tools that aid dissidents in authoritarian countries. The company SafeGraph, Joseph Cox reports at Vice, sells location data showing who has visited abortion clinics. In response, SafeGraph promised to stop. By then Cox had found another one.

At Gizmodo, Shoshana Wodinsky has the privacy protection advice my fictional character needed. She dares not confide in anyone she knows lest she put them at risk of becoming an attackable intermediary, yet everyone she *doesn't* know has already been informed.

This is the exact near-future Parmy Olson outlines at Bloomberg, quoting US senator Ron Wyden (D-OR): "...every digital record - from web searches, to phone records and app data - will be weaponized in Republican states as a way to control women's bodies."


Illustrations: Map of the US states with "trigger laws" waiting to come into force if Roe v. Wade is overturned (via M. Bitton at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 29, 2022

The abundance of countries

This week, some updates.

First up is the Court of Justice of the European Union's ruling largely upholding Article 17 of the 2019 Copyright Directive. Article 17, also known as the "upload filter", was last seen leading many to predict it would break the web. Poland challenged the provision, arguing that requiring platforms to check user-provided material for legality infringed the rights to freedom of expression and information.

The CJEU dismissed Poland's complaint, and Article 17 stands. However, at a panel convened by Communia, former Pirate Party MEP Felix Reda found the disappointment outweighed by the court's opinion regarding safeguards, which bans general monitoring and, as João Pedro Quintais explained, restricts content removal to material whose infringing nature is obvious.

More than half of EU countries have failed to meet the June 2021 deadline to transpose the directive into national law, and some that have done so simply copied and pasted the directive's two most contentious articles - Article 17 and Article 15 (the "link tax") - rather than attempt to resolve the directive's internal contradictions. As Glyn Moody explains at Walled Culture, the directive requires the platforms both to block copyright-infringing content from being uploaded and to make sure legal content is not removed. Moody also reports that Finland's attempts at resolution have attracted complaints from the copyright industries, which want the country to make its law more restrictive. Among the other countries that have transposed the directive, Reda believes only Germany's and Austria's interpretations provide safeguards in line with the court's ruling - and Austria's only with some changes.

***

The best response I've seen to the potential sale of Twitter comes from writer Racheline Maltese, who tweeted, "On the Internet, your home will always leave you."

In a discussion sparked by the news, Twitter user Yishan argues that "free speech" isn't what it used to be. In the 1990s version, the threat model was religious conservatives in the US. This isn't entirely true; some feminist groups also sought to censor pornography, and 1980s Internet users had to bypass Usenet hierarchy administrators to create newsgroups for sex and drugs. However, the understanding that abuse and trolling drive people away and chill them into silence definitely took longer to accept as a denial of free speech rights. Today, Yishan writes, *everyone* feels their free speech is under threat from everyone else. And they're likely right.

***

It's also worth noting the early stages of the new cybercrime treaty. It's now 20 years since the Convention on Cybercrime was formulated; as of December 2020, 65 states had ratified it and four more had signed. The push for a new treaty is coming from countries that either opposed the original or weren't involved in drafting it - Russia in particular, ironically enough. At Human Rights Watch, Deborah Brown warns of risks to fundamental rights: "cybercrime" has no agreed definition, and some states want expansion to include "incitement to terrorism" and copyright infringement. In addition, while many states back including human rights protections, detail is lacking. However, we might get some clues from this week's White House declaration for the future of the Internet, which seeks to "reclaim the promise of the Internet" and embed human rights. It's backed by 60 countries - but not China or Russia.

There is general agreement that the vast escalation of cybercrime means better cross-border cooperation is needed, as Summer Walker writes at Foreign Policy. However, she notes that as work progressed in 2021 a number of states already felt excluded from the decision-making process.

The goal is to complete an agreement by early 2024.

***

Finally... 20 years ago I wrote (in a piece from the lost web) about the new opportunities for plagiarism afforded by the Internet. That led to a new industry sector: online services that check each new paper against a database of known material. The services do manage to find previously published text; six days after publication, even a free example service rates the first two paragraphs of last week's net.wars as "100% plagiarized". Even so, the concept is flawed, particularly for academics, whose papers have been flagged or rejected for citations, standardized descriptions of experimental methodology, or reused passages describing their own previous work - "self-plagiarism". In some cases, academics have reported on Twitter, the automated systems in use at some journals reject their work before an editor can see it.
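The core of such a checker is simpler than the industry's marketing suggests: break the text into overlapping word n-grams ("shingles") and measure how many also appear in a corpus of known material. Here is a minimal sketch in Python - the five-word shingle size and the tiny in-memory corpus are my illustrative assumptions; real services normalize text heavily and index billions of documents:

    # Toy plagiarism detector: score a text by the fraction of its word
    # 5-grams that also appear in a set of known source texts.
    def shingles(text, n=5):
        words = text.lower().split()
        return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

    def overlap_score(candidate, known_sources):
        cand = shingles(candidate)
        if not cand:
            return 0.0
        known = set()
        for source in known_sources:
            known |= shingles(source)
        # 1.0 means every 5-gram was seen before ("100% plagiarized").
        return len(cand & known) / len(cand)

A verbatim copy scores 1.0 - and so, inevitably, do reused-but-legitimate passages like citations and standard methodology, which is exactly why the concept trips up academics.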

Now there's a new twist in this little arms race: rephrasing services that freshen up published material so it will pass muster. The only problem is (of course) that the AI is supremely stupid and poorly educated. Last year, Nature reported on "tortured phrases" that indicated plagiarized research papers, particularly rife in computer science. This week Essex senior lecturer Matt Lodder reported on Twitter his sightings of AI-rephrased material in students' submissions. First clue: "It read oddly." Well, yes. When I ran last week's posting through several of these services, they altered direct quotes (bad journalism), rewrote active sentences into passive ones (bad writing), and changed the meaning (bad editing). In Lodder's student's text, the AI had substituted "graph" for "chart"; in a paper submitted to a friend of his, "the separation of powers" had been rendered as "the sundering of puissances" and Adam Smith's classic had become "The Abundance of Countries". People: when you plagiarize, read what you turn in!


Illustrations: Adam Smith, author of The Wealth of Nations (portrait from the National Gallery of Scotland, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

February 4, 2022

Consent spam

This week the system of adtech that constantly shoves banners in our face demanding consent to use tracking cookies was ruled illegal by the Belgian Data Protection Authority, acting as lead authority for 28 EU data protection authorities. The Interactive Advertising Bureau, whose Transparency and Consent Framework formed the basis of the complaint that led to the decision, now has two months to redesign its system to bring it into compliance with the General Data Protection Regulation.

The ruling marks a new level of enforcement that could begin to see the law's potential fulfilled.

Ever since May 2018, when GDPR came into force, people have been complaining that so far all we've really gotten from it is bigger! worse! more annoying! cookie banners, while the invasiveness of the online advertising industry has done nothing but increase. In a May 2021 report, for example, Access Now examined the workings of GDPR and concluded that so far the law's potential had yet to be fulfilled and daily violations were going unpunished - and unchanged.

There have been fines, some of them eye-watering, such as Amazon's 2021 fine of $877 million for its failure to get proper consent for cookies. But even Austrian activist lawyer Max Schrems' repeated European court victories have so far failed to force structural change, despite requiring the US and EU to rethink the basis of allowing data transfers.

To "celebrate" last week's data protection day, Schrems documented the situation: since the first data protection laws were passed,enforcement has been rare. Schrems' NGO, noyb, has plenty of its own experience to drawn on. Of the 51 individual cases noyb has filed in Europe since its founding in 2018, only 15% have been decided wthin a year, none of them pan-European. Four cases filed with the Irish DPA in May 2018, the day after GDPR came into force, have yet to be given a final decision.

Privacy International, which filed seven complaints against adtech companies in 2018, also has an enforcement timeline. Only one, against Experian, resulted in an investigation, and even in that case no action has been taken since Experian's appeal in 2021. A recent study of diet sites showed that they shared the sensitive information they collect with unspecified third parties, PI senior technologist Eliot Bendinelli told last week's Privacy Camp. PI's complaint has yet to be enforced, though it has led some companies to change their practices.

Bendinelli was speaking on a panel trying to learn from GDPR's enforcement issues in order to ensure better protection of fundamental rights from the EU's upcoming Digital Services Act. Among the complaints with respect to GDPR: the lack of deadlines to spur action and inconsistencies among the different national authorities.

The complaint at the heart of this week's judgment began in 2018, when Open Rights Group director Jim Killock, UCL researcher Michael Veale, and Irish Council for Civil Liberties senior fellow Johnny Ryan took the UK Information Commissioner's Office to court over the ICO's lack of action regarding real-time bidding, which the ICO itself had found illegal under the Data Protection Act (2018), which implements GDPR in the UK. In real-time bidding, your visit to a participating web page launches an instant mini-auction to find the advertiser willing to pay the most to fill the ad space you're about to see. Your value is determined by crunching all the data the site and its external sources have or can get about you.
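The auction mechanics themselves are trivial; the privacy problem is the profile that rides along with the bid request to every participant. A minimal sketch of the idea in Python - this is not the actual OpenRTB protocol, and the class, field names, and pricing rule are invented for illustration:

    # Toy real-time bidding auction. Every bidder receives the user profile
    # in the bid request - win or lose - which is the core privacy complaint.
    class Bidder:
        def __init__(self, name, targets):
            self.name, self.targets = name, targets

        def bid_for(self, profile):
            # Bid more the more the profile matches the advertiser's targets.
            return 0.10 * len(self.targets & set(profile["interests"]))

    def run_auction(profile, bidders):
        bids = sorted(((b.bid_for(profile), b) for b in bidders),
                      key=lambda pair: pair[0], reverse=True)
        if not bids or bids[0][0] <= 0:
            return None
        # A common second-price rule: the winner pays the runner-up's bid.
        price = bids[1][0] if len(bids) > 1 else bids[0][0]
        return bids[0][1].name, price

    print(run_auction({"interests": ["running", "travel"]},
                      [Bidder("shoe-ad", {"running"}),
                       Bidder("hotel-ad", {"travel", "flights"})]))

All of this happens in the milliseconds before the page finishes loading.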

If all this sounds like it oughtta be illegal under GDPR, well, yes. Enter the IAB's TCF, which extracts your permission via those cookie consent banners. With many of these, dark-pattern design makes "consent" instant and rejection painfully slow. The Big Tech sites, of course, handle all this by using logins; you agree to the terms and conditions when you create your account and then helpfully forget how much they learn about you every time you use the site.

In December 2021, the UK's Upper Tribunal refused to require the ICO to reopen the complaint, though it did award Killock and Veale concessions they hope will make the ICO more accountable in future.

And so back to this week's judgment that the IAB's TCF, which is used on 80% of the European Internet, is illegal. The Irish DPA is also investigating Google's similar system, as well as Quantcast's consent management system. On Twitter, Ryan explained the gist: cookie-consent pop-ups don't give publishers adequate user consent, and everyone must delete all the data they've collected.

Ryan and the Open Rights Group also point out that the judgment spikes the UK government's claim that revamping data protection law is necessary to get rid of cookie banners (at the expense of some of the human rights enshrined in the law). Ryan points to DuckDuckGo as an example of the non-invasive alternative: contextual advertising. He also observed that all that "consent spam" makes GDPR into merely "compliance theater".

Meanwhile, other moves are making their mark. Also this week, Facebook (Meta)'s latest earnings showed that Apple's new privacy controls, which let users opt out of tracking, will cost it $10 billion this year. Apparently 75% of Apple users opt out.

Moral: given the tools and a supportive legal environment, people will choose privacy.

Illustrations: Diagram of OpenRTB, from the Belgian decision.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 28, 2022

The user in charge

Last week, we learned that back in October, prosecutors in California filed two charges of vehicular manslaughter against the driver of a Tesla on Autopilot who ran a red light in 2019, hit another car, and killed two people.

As they say, we've had this date since the beginning.

Meanwhile, in the UK the Law Commission is trying to get ahead of the market by releasing a set of proposals covering liability for automated driving. Essentially, its report concludes that when automation is driving a car, liability for accidents and dangerous driving should shift to the "Authorized Self-Driving Entity" - that is, the company that was granted the authorization to operate on public roads. Perfectly logical; what's jarring is the report's linguistic shift that turns "drivers" into "users", with all the loss of agency that implies. Of course, that's always been the point, particularly for those who contend that automated driving will (eventually) be far safer. Still.

Ever since the first projects developing self-driving cars appeared, there have been questions about how liability would be assigned. The fantasy promulgated in science fiction and Silicon Valley is that someday cars will reach Level 5 automation, in which human intervention no longer exists. Ten years ago, when many people thought it was just a few years away, there were serious discussions about whether these future cars should have steering wheels when they could have bedroom-style accommodation instead. You also heard a lot about improving accessibility and inclusion for those currently barred from driving because of visual impairment, childhood, or blood alcohol level. And other benefits were mooted: less congestion, less pollution, better use of resources through sharing. And think, Clive Thompson wrote at Mother Jones in 2016, of all the urban parking space we could reclaim for cyclists, pedestrians, and parks.

In 2018, Christian Wolmar argued that a) driverless cars were far away, and b) they wouldn't deliver the benefits the hypesters were predicting. Last year, he added that self-driving cars won't solve the problems people care about, like congestion and pollution, that drivers will become deskilled, and that shared use is not going to be a winning argument. I agree with most of this. For example, if we take out all the parking, aren't we going to increase congestion as cars ferry themselves back home after dropping off their owners, to wait there for the end of the day?

So far, Wolmar appears to have been right. Several of the best-known initiatives have either closed down or been sold, and the big trend is consolidation into the hands of large companies that can afford to invest and wait. Full automation seems as far away as ever.

Instead, we are mired in what everyone eventually agreed would be the most dangerous period in the shift to automated driving: the years or decades of partial and inconsistent automation. As the Tesla crash shows, humans overestimating their cars' capabilities is one problem. A second is the predictability gap between humans and AIs. As humans ourselves, we're pretty good at guessing how other human drivers will likely behave. We all tend to put distance between ourselves and cars with lots of dents and damage or cars exhibiting erratic behavior, and pedestrians learn young to estimate the speed at which a car is approaching in order to decide whether it's safe to cross the street. We do not have the same insight into how a self-driving car is programmed to behave - and we do not appear predictable to its systems. One bit of complexity I imagine will increasingly matter is that the car's sensors will react to differences we can't perceive.

At the 2016 We Robot conference, Madeleine Clare Elish introduced the idea of moral crumple zones. In a hybrid AI-human system, she argued, the blame when anything goes wrong will be assigned to the human element. The Tesla Autopilot crash we began with is a perfect example, and inevitable under current US law: the US National Highway Traffic Safety Administration holds that the human in charge of the car is always responsible. Since a 2018 crash, Tesla has reportedly tried to make it clearer to customers that even its most sophisticated cars cannot drive themselves, and, according to the Associated Press, updated its software to make it "harder for drivers to abuse it".

Pause for bafflement. What does "abuse" mean in that sentence? That a driver expects something called "Autopilot" to...drive the car? It doesn't help the accuracy of people's perceptions of their car's capabilities that in December Tesla decided to add a gaming console to its in-car display. Following an announcement by the US National Highway Traffic Safety Administration that it would investigate, Tesla is updating the software so that the gaming feature locks when the car is moving. Shouldn't the distraction potential have been obvious? That's Theranos-level recklessness.

This is where the Law Commission's report makes a lot of sense. It pins the liability squarely on the ASDE for things like misleading marketing, and it sets requirements for handling transitions to human drivers, the difficulty of which was so elegantly explored in Dexter Palmer's Version Control. The user in charge is still responsible for things like insurance and getting kids into seatbelts. The proposals will now be considered by the UK's national governments.


Illustrations: Dominic Wilcox's concept driverless car.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 7, 2022

Resolutions

We start 2022 with some catch-ups.

On Tuesday, the verdict came down in the trial of Theranos founder Elizabeth Holmes: guilty on four counts of wire fraud, acquitted on four counts, jury hung on three. The judge said he would call a mistrial on those three, but given that Holmes will already go to prison, expectations are that there will be no retrial.

The sad fact is that the counts on which Holmes was acquitted were those regarding fraud against patients. While investment fraud should be punished, the patients were the people most harmed by Theranos' false claims to be able to perform multiple accurate tests on very small blood samples. The investors whose losses saw Holmes found guilty could by and large afford them (though that's no justification). I know the $350 million collectively lost by Trump education secretary Betsy DeVos, Rupert Murdoch, and the Cox family is a lot of money, but it's a vanishingly tiny percentage of their overall wealth (which may help explain DeVos family investment manager Lisa Peterson's startlingly casual approach to research). By contrast, for a woman who's already had three miscarriages, the distress of being told she's losing a fourth, despite the eventual happy ending, is vastly more significant.

I don't think this case by itself will make a massive difference to Silicon Valley's culture, despite Holmes's likely prison sentence - how much did bankers change after the 2008 financial crisis? Yet we really do need the case to make a substantial difference in how regulators approach diagnostic devices, as well as other cyber-physical hybrid offerings, so that future patients don't become experimental subjects for the unscrupulous.

***

On New Year's Eve, Mozilla, the most important browser that only 3% of the market uses, reminded people it accepts donations in cryptocurrencies through BitPay. The message set off an immediate storm, not least among two of the organization's co-founders, one of whom, Jamie Zawinski, tweeted that everyone involved in the decision should be "witheringly ashamed". At The Register, Liam Proven points out that it's not new for Mozilla to accept cryptocurrencies; it's just changed payment providers.

One reason to pay attention to this little fiasco is that while Mozilla (and other Internet-related non-profits and open software projects) appeal greatly to the same people who care about the environment and believe that cryptocurrency mining is wasteful and energy-intensive and deplore the anti-government rhetoric of its most vocal libertarian promoters, the richest people willing to donate to such projects are often those libertarians. Trying to keep both onside is going to become increasingly difficult. Mozilla has now suspended its acceptance of cryptocurrencies to consider its position.

***

In 2010, fatally frustrated with Google, I went looking for a replacement search engine and found DuckDuckGo. It took me a little while to get the hang of formulating successful queries, but both it and I got better. It's a long time since I needed to direct a search elsewhere.

At the time, a lot of people thought it was bananas for a small startup to try to compete against Google. In an interview, founder Gabriel Weinberg explained that the decision had been driven by his own frustration with Google's results. Weinberg talked most about getting to the source you want more efficiently.

Even at that early stage, embracing privacy was part of his strategy. Nearly 12 years on from the company's founding, its 35.3 billion searches last year - up 46% from 2020 - remain a rounding error compared to Google's trillions per year. But the company continues to offer things I actually want. I have its browser on my phone, and (despite still having a personal email server) have signed up for one of its email addresses because it promises to strip out the extensive tracking inserted into many email newsletters. And all without having to buy into Apple's ecosystem.

Privacy has long been a harder sell than most privacy advocates would like to admit, usually because it involves giving up a lot of convenience to get it. In this case...it's easy. So far.

***

Never doubt that tennis is where cultural clashes come home to roost. Tennis had the first transgender athlete; it was at the forefront of second-wave feminism; and it has been a venue for science versus anti-science. Now, as even people who *aren't* interested in tennis have seen, it is the foremost venue for the clash between vaccine mandates and anti-vaxx refuseniks. Result: the men's world number one, Serbian player Novak Djokovic (and, a day later, doubles specialist Renata Voracova), was diverted to a government quarantine hotel room like any non-famous immigrant awaiting deportation.

Every tennis watcher saw this coming months ago. On one side, Australian rules; on the other, a tennis tournament that apparently believed it could accommodate a star's balking at an immigration requirement as unyieldingly binary as pregnancy or the Northern Ireland protocol.

Djokovic is making visible to the world a reality that privacy advocates have been fighting to expose: you have no rights at borders. If you think Djokovic, with all his unique resources, should be getting better treatment, then demand better treatment for everyone, legal or illegal, at all borders, not just Australia's.


Illustrations: Winnie the Pooh, discovering the North Pole, by Ernest Howard Shepard, finally in the public domain (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 22, 2021

It's about power

It is tempting to view every legislative proposal that comes from the present UK government as an act of revenge against the people and institutions that have disagreed with it.

The UK's Supreme Court undid Boris Johnson's decision to prorogue Parliament in the 2019 stages of the Brexit debates; the government now proposes to limit judicial review. The Electoral Commission recommended codes of conduct to keep political advertising fair; the Elections Bill - one element in a long list of anti-democratic moves, as retiring House of Lords member David Puttnam writes at the Guardian - prioritizes registration and voter ID, measures likely, here as in the US, to disenfranchise opposition voters.

The UK government's proposals for reforming data protection law - the consultation is open until November 19 - also seem to fit this scenario. Granted, the UK wasn't a fan even in 2013, when the EU's General Data Protection Regulation was being negotiated. Today's proposals would roll back some aspects of the law. Notably, the consultation suggests discouraging individuals from filing subject access requests by introducing fees, last seen in the 1998 Data Protection Act that GDPR replaced, and giving organizations greater latitude to refuse. This thinking is familiar from the 2013 discussions about freedom of information requests. The difference here: it's *our* data we want to access.

More pervasive, though, is the consultation's general assumption that data protection is a burden that impedes innovation and needs to be lightened to unlock economic growth. The EU, reading it, may be relieved it only granted the UK's data protection regime adequacy for four years.

It is impossible to read the subject access rights section (page 69ff) without concluding that the "burden" the government seeks to relieve is its own. In a panel on the proposed changes at the UK Internet Governance Forum, speakers agreed that businesses are not calling for this. What they *do* want is guidance. Diverging from GDPR makes life more complicated by creating multiple regimes that all require compliance. If you're a business, you want consistency and clarity. It's hard to see how these proposals provide them.

This is even more true for individuals who depend on their rights under GDPR (and equivalent) to understand the decisions that have been made about them. As Renate Samson put it at UKIGF, viewing their data is crucial in obtaining redress for erroneous immigration and asylum decisions. "Understanding why the computer says no is critical for redress purposes." In May, the Open Rights Group and the3million won this very battle against the government - under GDPR.

These issues are familiar ground for net.wars. What's less so is the UK's behavior. As in other areas - the widely criticized covid response, its dealings throughout the Brexit negotiations - Britain seems to assume it can dictate terms. At UKIGF, Michael Veale tried to point out the reality: "The UK has to engage with GDPR in a way that shows it understands it's now a rule-taker." It's almost impossible to imagine this government understanding any such thing.

A little earlier, the MP Chris Philp had said the UK is determined to be a scientific and technology "superpower". This country, he said, is number three behind the US and China; we need to get to "an even better position".

Pause for puzzlement. Does Philp think the UK can pass either the US or China in AI? What would that even mean? AI, of all technologies, requires collaboration. Is he really overlooking the EU's technical weight as a bloc? Is the argument that data is essential for AI, and AI is the economic future of Britain, so individuals should roll over and open up for...Apple and Google? Do Apple and Google see their role in life as helping the UK become a world leader in AI?

After all, "the US" isn't really the US as a nation in this discussion; in AI "the US" is the six giant multinational companies Amy Webb that all want to dominate (Google, Microsoft, Apple, Facebook, IBM, Amazon). Data protection law is one of the essential tools for limiting their ability to slurp up everyone's data.

Meanwhile, this government's own policies seem to be in conflict with each other. Simultaneously, it's also at work on a digital identity framework. Getting people to use it will require trust, which proposals to reform data protection law undermine. And trust in this government with respect to data is already faltering because of the fiasco over our medical data back in June. It's not clear the government is making any of these connections.

Twenty years ago, data protection was about privacy and the big threat was governments. Gradually, as the online advertising industry formed and start-ups became giant companies, the view of data protection law expanded to include helping to redress the imbalance of power between individuals and large companies. Now, with those companies dominating the landscape, data protection is also about restructuring power and ensuring that small players have a chance faced with giant competitors who can corral everyone's devices and extract their data. The more complicated the regulations, as European Digital Rights keeps saying, the more it's only the biggest companies that can afford the infrastructure to comply with them. "Data protection" sounds abstract and boring. Don't be fooled. It's about power.


Illustrations: Vampire squid (via Anne-Lise Heinrichs, on Flickr, following Michael Veale's comparison to Big Tech at UKIGF).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 15, 2021

The future is hybrid

Every longstanding annual event-turned-virtual these days has a certain tension.

"Next year, we'll be able to see each other in person!" says the host, beaming with hope. Excited nods in the Zoom windows and exclamation points in the chat.

Unnoticed, about a third of the attendees wince. They're the folks in Alaska, New Zealand, or Israel, who in normal times would struggle to attend this event in Miami, Washington DC, or London because of costs or logistics.

"We'll be able to hug!" the hosts say longingly.

Those of us who are otherwhere hear, "It was nice having you visit. Hope the rest of your life goes well."

When those hosts are reminded of this geographical disability, they immediately say how much they'd hate to lose the new international connections all these virtual events have fostered and the networks they have built. Of course they do. And they mean it.

"We're thinking about how to do a hybrid event," they say, still hopefully.

At one recent event, however, it was clear that hybrid won't be possible without considerable alterations to the event as it's historically been conducted - at a rural retreat, with wifi available only in the facility's main building. With concurrent sessions in probably six different rooms and only one with the basic capability to support remote participants, it's clear that there's a problem. No one wants to abandon the place they've used every year for decades. So: what then? Hybrid in just that one room? Push the facility whose selling point is its woodsy distance from modern life to upgrade its broadband connections? Bring a load of routers and repeaters and rig up a system for the weekend? Create clusters of attendees in different locations and do node-to-node Zoom calls? Send each remote participant a hugging pillow and a note saying, "Wish you were here"?

I am convinced that the future is hybrid events, if only because businesses sound so reluctant to resume paying for so much international travel, but the how is going to take a lot of thought, collaboration, and customization.

***

Recent events suggest that the technology companies' own employees are a bigger threat to business-as-usual than impending regulation and legislation. Facebook has had two major whistleblowers - Sophie Zhang and Frances Haugen - in the last year, and basically everyone wants to fix the site's governance. But Facebook is not alone...

At Uber, a California court ruled in August that drivers are employees; a black British driver has filed a legal action complaining that Uber's driver identification face-matching algorithm is racist; and Kenyan drivers are suing over contract changes they say have cut their takehome pay to unsustainably low levels.

Meanwhile, at Google and Amazon, workers are demanding the companies pull out of contracts with the Israeli military. At Amazon India, a whistleblower has handed Reuters documents showing the company has exploited internal data to copy marketplace sellers' products and rig its search engine to display its own versions first. *And* Amazon's warehouse workers continue to consider unionizing - and some cities back them.

Unfortunately, the bigger threat of the legislation being proposed in the US, UK, New Zealand, and Canada is *also* less to the big technology companies than to the rest of the Internet. For example, in reading the US legislation Mike Masnick finds intractable First Amendment problems. Last week I liked the idea of focusing on the content social media companies' algorithms amplify, but Masnick persuasively argues it's not so simple, citing Daphne Keller, who has thought more critically about the First Amendment problems that will arise in implementing that idea.

***

The governor of Missouri, Mike Parson, has accused Josh Renaud, a journalist with the St Louis Post-Dispatch, of hacking into a government website to view several teachers' social security numbers. From the governor's description, it sounds like Renaud hit either CTRL-U or F12, looked at the HTML code, saw startlingly personal data, and decided correctly that the security flaw was newsworthy. (He also responsibly didn't publish his article until he had notified the website administrators and they had fixed the issue.)

Parson disagrees about the legitimacy of all this, and has called for a criminal investigation into this incident of "hacking" (see also scraping). The ability to view the code that makes up a web page and tells the browser how to display it is a crucial building block of the web; when it was young and there were no instruction manuals, that was how you learned to make your own page by copying. A few years ago, the Guardian even posted technical job ads in its pages' HTML code, where the right applicants would see them. No password, purloined or otherwise, is required. The code is just sitting there in plain sight on a publicly accessible server. If it weren't, your web page would not display.
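The point bears demonstrating: any HTTP client receives exactly the markup a browser renders, no credentials required. A minimal sketch using only Python's standard library - the URL is a placeholder:

    # Fetch a public page and print the start of its HTML source - the same
    # bytes a browser gets, and the same thing CTRL-U or F12 displays.
    from urllib.request import urlopen

    with urlopen("https://example.com/") as response:  # placeholder URL
        html = response.read().decode("utf-8", errors="replace")
    print(html[:500])

If sensitive data shows up in that output, the server sent it to everyone who asked.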

Twenty-five years ago, I believed that by now governments would be filled with 30-somethings who grew up with computers and the 2000-era exploding Internet and could restrain this sort of overreaction. I am very unhappy to be wrong about this. And it's only going to get worse: today's teens are growing up with tablets, phones, and closed apps, not the open web that was designed to encourage every person to roll their own.


Illustrations: Exhibit from Ben Grosser's "Software for Less", reimagining Facebook alerts, at the Arebyte Gallery until the end of October.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 1, 2021

Plausible diversions

If you want to shape a technology, the time to start is before it becomes fixed in the mindset of "'twas ever thus". This was the idea behind the creation of We Robot. At this year's event (see below for links to previous years), one clear example of this principle came from Thomas Krendl Gilbert and Roel I. J. Dobbe, whose study of autonomous vehicles pointed out the way we've privileged cars by coining "jaywalkification". On the blank page in the lawbook, we chose to make it illegal for pedestrians to get in cars' way.

We Robot's ten years began with enthusiasm, segued through several depressed years of machine learning and AI, and this year seemingly arrived at a twist on Arthur C. Clarke's famous dictum. To wit: maybe any technology sufficiently advanced to seem like magic can be well enough understood that we can assign responsibility and liability. You could say it's been ten years of progressively removing robots' glamor.

Something like this was at the heart of the paper by Andrew Selbst, Suresh Venkatasubramanian, and I. Elizabeth Kumar, which uses the computer science staple of abstraction as a model for assigning responsibility for the behavior of complex systems. Weed out debates over the innards - is the system's algorithm unfair, or was the training data biased? - and aim at the main point: this employer chose this system that produced these results. No one needs to be inside its "black box" if you can understand its boundaries. In one analogy, it's not the manufacturer's fault if a coffee maker fails to produce drinkable coffee from poisoned water and ground acorns; it *is* the manufacturer's fault if the machine turns potable water and ground coffee into toxic sludge. Find the decision points, and ask: how were those decisions made?

Gilbert and Dobbe used two other novel coinages: "moral crumple zoning" (from Madeleine Clare Elish's paper at We Robot 2016) and "rubblization", for altering the world to assist machines. Exhibit A, which exemplifies all three, is the 2018 incident in which a self-driving Uber test car killed a pedestrian in Tempe, Arizona. She was jaywalking; she and the inattentive safety driver were moral-crumple-zoned; and the rubblized environment prioritized cars.

Part of Gilbert and Dobbe's complaint was that much discussion of autonomous vehicles focuses on the trolley problem, which has little relevance to how either humans or AIs drive cars. It's more useful to focus instead on how autonomous vehicles reshape public space as they begin to proliferate.

This reshaping issue also arose in two other papers, one on smart farming in East Africa by Laura Foster, Katie Szilagyi, Angeline Wairegi, Chidi Oguamanam, and Jeremy de Beer, and one by Annie Brett on the rapid, yet largely overlooked expansion of autonomous vehicles in ocean shipping, exploration, and data collection. In the first case, part of the concern is the extension of colonization by framing precision agriculture and smart farming as more valuable than the local knowledge held by small farmers, the majority of whom are black women, and viewing that knowledge as freely available for appropriation. As in the Western world, where manufacturers like John Deere and Monsanto claim intellectual property rights in seeds and knowledge that formerly belonged to farmers, the arrival of AI alienates local knowledge by stowing it in algorithms, software, sensors, and equipment and makes the plants on which our continued survival depends into inert raw material. Brett, in her paper, highlights the growing gaps in international regulation as the Internet of Things goes maritime and changes what's possible.

A slightly different conflict - between privacy and the need to not be "mis-seen" - lies at the heart of Alice Xiang's discussion of computer vision. Elsewhere, Agathe Balayn and Seda Gürses make a related point in a new EDRi report that warns against relying on technical debiasing tweaks to datasets and algorithms at the expense of seeing the larger social and economic costs of these systems.

In a final example, Marc Canellas studied whole cybernetic systems and found that they create gaps where it's impossible for any plaintiff to prove liability, in part because of the complexity and interdependence inherent in these systems. Canellas proposes that the way forward is to redefine intentional discrimination and apply strict liability. You do not, Cynthia Khoo observed in discussing the paper, have to understand the inner workings of complex technology in order to understand that the system is reproducing the same problems and the same long history, if you focus on the outcomes and not the process - especially if you know the process is rigged to begin with. The wide spread of "move fast and break things", Canellas noted, mostly encumbers people who are already vulnerable.

I like this overall approach of stripping away the shiny distraction of new technology and focusing on its results. If, as a friend says, Facebook accurately described setting up an account as "adding a line to our database" instead of "connecting with your friends", who would sign up? Similarly, don't let Amazon get cute about its new "Astro" comprehensive in-home data collector.

Many look at Astro and see instead the science fiction robot butler of decades hence. As Frank Pasquale noted, we tend to overemphasize the far future at the expense of today's decisions. In the same vein, Deborah Raji called robot rights a way of absolving people of their responsibility. Today's greater threat is that gig employers are undermining workers' rights, not whether robots will become sentient overlords. Today's problem is not that one day autonomous vehicles may be everywhere, but that the infrastructure needed to make partly-autonomous vehicles safe will roll over us. Or, as Gilbert put it: don't ask how you want cars to drive; ask how you want cities to work.


Previous years: 2013; 2015; 2016 workshop; 2017; 2018 workshop and conference; 2019 workshop and conference; 2020.

Illustrations: Amazon photo of Astro.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 24, 2021

Is the juice worth the squeeze?

Last week, Gikii speakers pondered whether regulating magic could suggest how to regulate AI. This week, Woody Hartzog led a session at We Robot pondering how to regulate robots, and my best analogy was...old cars.

Bill Smart was explaining that "robot kits" wouldn't become a thing because of the complexity. Even the hassock-sized Starship delivery robots spotted on a Caltrain platform and delivering groceries in Milton Keynes are far too complex for a home build. "Like a car. There are no car kits."

Oh, yes, there are: old cars, made before electronics, that can be taken to pieces and rebuilt; you just need the motor vehicle people to pass it as roadworthy. See also: Cuba.

Smart's main point stands, though: the Starship robots have ten cameras, eight ultrasonic sensors, GPS, and radar, and that's just the hardware (which one could imagine someone plugging together). The software includes neural nets, 3D mapping, and a system for curb climbing, plus facilities to allow remote human operation. And yet, even with all that, one drove straight into a canal last year.

"There's a tendency seen in We Robot to think about a robot as a *thing* and to write around that thing," Cindy Grimm observed. Instead, it's important to consider what task you want the robot to accomplish, what it's capable of, what it *can't* do, and what happens when someone decides to use it differently. Starship warns not to disturb its robots if they're sitting doing nothing. "It may just be having a rest."

A rest? To do what? Reorder the diodes all down its left side?

The discussion was part of a larger exercise in creating a law to govern delivery robots and trying to understand the tradeoffs. A physical device that interacts with the real world is, as Smart and Grimm have been saying all the way back to the first We Robot, in 2012, dramatically different from the devices we've sought to regulate so far. We tend, like the Starship people above, to attribute intentionality to things that can move, I believe as a matter of ancestral safety: things that can move autonomously can attack you. Your washing machine is more intelligent than your Roomba, but which one gets treated like a pet?

Really, though, Grimm said, "They're just a box of 1s and 0s."

So Hartzog began with a piece of proposed legislation. Posit: small delivery robots that use sidewalks, roads, and bike lanes. A hypothetical city council doesn't want to ban them outright. But the things can disrupt daily lives and impede humans' use of public space. So, they propose a law: delivery robots must have a permit and must respect all city ordinances and the physical safety of all people. Speed limited to 15 miles an hour. No contact with humans except the designated recipient. Must remain 12 feet apart and prioritize human mobility by moving away from assistive devices and making their presence known via audio signals. Only allowed to collect data for core functions; may not collect data from inside homes without consent; may not use facial recognition, only face detection for safety. What's missing?

Well, for one thing, 15 miles an hour is *dangerous* on a crowded sidewalk, and even in some bike lanes. For another, what capabilities does the robot need to recognize the intended recipient? Facial recognition? Fingerprint scanner? How much do permits cost and who can (and can't) afford them? Is it better to limit robot density rather than set a specific number? How does it recognize assistive devices? How much noise should we tolerate? Who has right of way if there's only one narrow path? If every robot's location must be known at all times, what are the implications of all that tracking? How and when do permits get revoked?

Hartzog left us with a final question: "Is the juice worth the squeeze?" Are there opportunity costs inherent in accepting the robots in the first place?

As Grimm said, nothing is for free; every new robot capability brings tradeoffs. Adding awareness, so the robot "knows" to move out of the way of strollers and wheelchairs, means adding data-gathering sensors - and with them, privacy risk. Grimm's work with apple-picking robots has taught her their success depends on pruning apple trees to make their task simpler. The job is a lot harder in her backyard, where this hasn't been done. So legal considerations must include how and whether we change the environment so it's safer for robots to be around people. Grimm calls this making a "tunnel" for the robot: narrow and simplify the task rather than make the robot "smarter".

Personally, I like the idea of barring the robots from weighing more than an average human can lift, so you can always pick the thing up and move it out of the way.

No such issues mar the cheery Starship promotional video linked above. This seems impossible; why should delivery robots be less of a nuisance than abandoned dockless scooters and bikes? In the more realistic view to be found in Anywhere But Westminster's 2019 visit to Milton Keynes, the robots still seem mostly inoffensive as they roll through an unpopulated park and wait to cross the empty street. Then the filmmakers encounter one broadcasting Boris Johnson speeches. Suddenly, ad-spewing sidewalk robots seem inescapable. Maybe instead hire the unemployed people the filmmakers find at the food bank?


Illustrations: Screenshot from Anywhere But Westminster, "We must deliver: Brexit, Johnson, and the robots of Milton Keynes".

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 3, 2021

The trial

The trial of Theranos founder and former CEO Elizabeth Holmes, which began jury selection this week, offers a rare opportunity to understand in depth how lawyers select from and frame the available evidence to build and present a court case. The opportunity arises because investigative reporter John Carreyrou has the mountains of evidence he uncovered over the last seven years, and because true crime podcasts are now a thing. Most people facing the reality of the case, he observes, would have taken a plea deal. Not Holmes, or not yet.

The story of Theranos is well-known: Holmes dropped out of studying chemical engineering at Stanford at 19 and used her tuition money as seed funding to pursue the idea of developing diagnostic tests based on much smaller amounts of blood than was then possible - a finger stick rather than a venous blood draw, with many tests conducted at once on those few drops. Expert medical professors told her it was impossible. She persisted nonetheless.

Holmes's path through medicine and business seemed charmed. She populated the Theranos board with famous names: Henry Kissinger and former secretary of state George Shultz (who responded angrily when his Theranos employee grandson tried to warn him). She raised hundreds of millions of dollars from the Walmart family ($150 million), Rupert Murdoch ($125 million), Trump administration education secretary Betsy DeVos ($100 million), and the Cox family ($100 million). Then-boyfriend Sunny Balwani joined as chief operating officer. Theranos won contracts with Walgreens and Safeway, both anxious about remaining competitive. By 2014 she was everywhere on TV shows and magazine covers wearing a Steve Jobs-like all-black outfit of turtleneck and trousers, famous as the world's youngest self-made female billionaire.

And then, in 2015, Wall Street Journal reporter John Carreyrou began blowing it all up with a series of investigative articles that eventually underpinned his 2018 book, Bad Blood: Secrets and Lies in a Silicon Valley Startup. The Securities and Exchange Commission charged Holmes and Theranos with fraud; Holmes settled the case by paying $500,000, giving up her voting control over the company and surrendering her 18.9 million shares. She was barred from serving as an officer or director of a public company for ten years, and she and Balwani were indicted on criminal fraud charges. This is the trial that began this week; Balwani will be tried later.

Twitter reports suggest that it hasn't been easy to find jurors in Santa Clara County, California, where the trial is taking place, who haven't encountered at least some of the extensive media coverage, read Carreyrou's book, or seen Alex Gibney's HBO documentary The Inventor: Out for Blood in Silicon Valley. Holmes remains a media magnet as a prospective felon.

With the case approaching, Carreyrou has released the first three of a planned dozen episodes of Bad Blood: The Final Chapter. These cover, in order: Holmes's trial strategy as revealed by the papers her lawyers have filed; Theranos' foray into testing for Ebola and Zika during those epidemics; and Holmes' relationship with Balwani. There is enough new material to make the podcast worth your time (though it's difficult not to wince when Carreyrou damages his credibility by delivering the requisite podcast ads for dubious health drinks and hair loss remedies, and endorses meal kits).

What makes this stand out is the near real-time critique of the case's construction. When Carreyrou thinks, for example, that the "Svengali defense" Holmes's lawyers have filed - Holmes apparently intends to claim that Balwani's abuse and manipulation robbed her of personal choice - is a long shot, it's because he's seen extensive text messages between Holmes and Balwani (a selection of which is read out by actors). More speculative are his comments on the effect on the jury of Holmes's new persona: the Steve Jobs costume and stylized hair and makeup are replaced by a more natural look as a married woman and new mother. Carreyrou revisits Holmes and Balwani's relationship in more detail in the third episode.

The second episode offers a horrifying inside look at medical malfeasance. As explained here by microbiologist and former Theranos lab worker Lina Castro, neither Holmes nor Balwani understood the safety protocols necessary for handling infectious and lethal pathogens. Castro and Aaron Richardson, the scientist who led the effort to develop a test for Ebola, conclude that even if Theranos' "miniLab" testing device had worked, the company's culture was too dysfunctional to be able to create a successful Ebola test.

At the Washington Post, Rachel Lerman argues that the case puts Silicon Valley's culture on trial. Others argue that Theranos isn't *really* Silicon Valley at all, since neither its board nor its list of investors included Silicon Valley names. In fact, Theranos was a PR-friendly Silicon Valley copy: the eccentric but unvarying clothing (see also: Zuckerberg's hoodie), the emotive origin story (the beloved uncle who died too soon), and the enthusiastic promotion of vaporware until a real product can be demoed. In the days of pure software, bullshit could sort of work. But not in the medical context, where careful validation and clinical testing are essential, and it won't work in the future of hybrid cyber-physical systems, where safety and real-world function matter.

"First they call you crazy, then they fight you, and then you change the world," Holmes frequently said in defending her company against Carreyrou's reporting. Only if you have the facts on your side.

Illustrations: Elizabeth Holmes at TechCrunch Disrupt in 2014 (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 16, 2021

When software eats the world

One part of our brains knows that software can be fragile. Another part of our brains, when faced with the choice of trusting the human or trusting the machine...trusts the machine. It may have been easier to pry trust away from the machine twenty years ago, when systems crashed more often, sometimes ruining months of work, and the mantra, "Have you tried turning it off and back on again?" didn't yet work as a reliable way of restoring function. Perhaps more important, we didn't *have* to trust software because we had canonical hard copies. Then, as predicted, the copies became "backups". Now, often, they don't exist at all, with the result that much of what we think we know is becoming less well-attested. How many of us even print out our bank statements any more? Three recent stories highlight this.

First is the biggest UK computer-related scandal for many years, the outrageous Post Office prosecution of hundreds of subpostmasters for theft and accounting fraud, all while insisting that their protests of innocence must all be lies because its software, sourced from Fujitsu, could not possibly be wrong. Eventually, the Court of Appeal quashed 39 convictions and excoriated both the Post Office and Fujitsu for denying the existence of two known bugs that led to accounting discrepancies. They should never have been able to get away with their claim of infallibility - first, because generations of software engineers could have told the court that all software has bugs, and second, because Ross Anderson's work proved that software vulnerabilities were the cause of phantom ATM withdrawals, overriding the UK banking industry's insistence that its software, too, was infallible.

At Lawfare, Susan Landau, discussing work she did in collaboration with Steve Bellovin, Matt Blaze, and Brian Owsley, uses the Post Office fiasco as a jumping-off point to discuss the increasing problem of bugs in software used to produce evidence presented in court. Much of what we think of as "truth" - Breathalyzer readings, forensic tools, Hawkeye line calls in tennis matches - is not direct measurement but software-derived interpretation of measurements. Hawkeye at least publishes its margin for error, even though tennis has decided to pretend it doesn't exist. Manufacturers of evidence-producing software, however, claim commercial protection, leaving defendants unable to challenge the claims being made about them. Landau and her co-authors conclude that courts must recognize that they can't assume the reliability of evidence produced by software and that defendants must be able to conduct "adversarial audits".

Second story. At The Atlantic, Jonathan Zittrain complains that the Internet is "rotting". Link rot - broken links when pages get deleted or reorganized - and content drift, which sees the contents of a linked page change over time, are familiar problems for anyone who posts anything online. Gabriel Weinberg, the founder of search engine DuckDuckGo, has talked about API rot, which breaks dependent functionality. Zittrain's particular concern is legal judgments, which increasingly may incorporate disappeared or changed online references like TikTok videos and ebooks. Ebooks in particular can be altered on the fly, leaving no trace of that thing you distinctly remember seeing.

Zittrain's response has been to help create sites to track these alterations and provide permanent links. It probably doesn't matter much that the net.wars archive has (probably) thousands of broken links. As long as the Internet Archive's Wayback Machine continues to exist as a source for vaped web pages, most of the ends of those links can be recovered. The Archive is inevitably incomplete, and only covers the open web. But it *does* matter if the basis for a nation's legal reasoning and precedents - what Zittrain calls "long-term writing" - can't be established with any certainty. Hence the enormous effort put in by the UK's National Archives to convert millions of pages of EU legislation so all could understand the legitimacy of post-Brexit UK law.
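Recovering those dead ends is mechanical enough that the Internet Archive publishes an "availability" API for finding the closest snapshot of a given URL. A minimal sketch in Python, with a made-up target URL and no error handling:

```python
# Ask the Wayback Machine for the closest archived snapshot of a
# (possibly dead) URL, using the Internet Archive's availability API.
import json
import urllib.parse
import urllib.request

def closest_snapshot(url):
    query = urllib.parse.urlencode({"url": url})
    with urllib.request.urlopen(
            "https://archive.org/wayback/available?" + query) as response:
        data = json.load(response)
    snapshot = data.get("archived_snapshots", {}).get("closest")
    if snapshot and snapshot.get("available"):
        return snapshot["url"]  # a web.archive.org address
    return None

print(closest_snapshot("http://example.com/some-vanished-page"))
```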

Third story. It turns out the same is true for the brick-by-brick enterprise we call science. In the 2020 study Open is not forever, authors Mikael Laakso, Lisa Matthias, and Najko Jahn find journal rot. Print publications are carefully curated and preserved by librarians and archivists, as well as by the (admittedly well-funded) companies that publish them. Open access journals, however, have had a patchy record of success, and the study finds that between 2000 and 2019, 174 open access journals from all major research disciplines and all geographical regions vanished from the web. In science, as in law, it's not enough to retain the end result; you must be able to show your work and replicate your reasoning.

It's more than 20 years since I heard experts begin to fret about the uncertain durability of digital media; the Foundation for Information Policy Research included the need for reliable archives in its 1998 founding statement. The authors of the journal study note that the journals themselves are responsible for maintaining their archives and preserving their portion of the scholarly record; they conclude that solving this problem will require the participation of the entire scholarly community.

What isn't clear, at least to me, is how we assure the durability of the solutions. It seemed a lot easier when it was all on paper in a reassuringly solid building.

Illustrations: The UK National Archives, in Kew (photo by Erian Evans via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 28, 2021

Judgments day

This has been quite a week for British digital rights campaigners, who have won two significant cases against the UK government.

First is a case regarding migrants in the UK, brought by the Open Rights Group and the3million. The case challenged a provision in the Data Protection Act (2018) that exempted the Home Office from subject access requests, meaning that migrants refused settled status or immigration visas had no access to the data used to decide their cases, placing them at an obvious disadvantage. ORG and the3million argued successfully in the Court of Appeal that this was unfair, especially given that nearly half the appeals against Home Office decisions before the law came into effect were successful.

This is an important win, but small compared to the second case.

Eight years after Edward Snowden revealed the extent of government interception of communications, the reverberations continue. This week, the Grand Chamber of the European Court of Human Rights found Britain's data interception regime breached the rights to privacy and freedom of expression. Essentially, as Haroon Siddique sums it up at the Guardian, the court found deficiencies in three areas. First, bulk interception was authorized by the secretary of state but not by an independent body such as a court. Second, the application for a warrant did not specify the kinds of communication to be examined. Third, search terms linked to an individual were not subject to prior authorization. The entire process, the court ruled, must be subject to "end-to-end safeguards".

This is all mostly good news. Several of the 18 applicants (16 organizations and two individuals) argue the ruling didn't go far enough because it didn't declare bulk interception illegal in and of itself. Instead, it merely condemned the UK's implementation. Privacy International expects that all 47 members of the Council of Europe, all signatories to the European Convention on Human Rights, will now review their surveillance laws and practices and bring them into line with the ruling, giving the win much broader impact.

Particularly at stake for the UK is the adequacy decision it needs to permit seamless data sharing with EU member states under the General Data Protection Regulation. In February the EU issued a draft decision that would grant adequacy for four years. This judgment highlights the ways the UK's regime is non-compliant.

This case began as three separate cases filed between 2013 and 2015; they were joined together by the court. PI, along with ACLU, Amnesty International, Liberty, and six other national human rights organizations, was among the first group of applicants. The second included Big Brother Watch, Open Rights Group, and English PEN; the third added the Bureau of Investigative Journalism.

Long-time readers will know that this is not the first time the UK's surveillance practices have been ruled illegal. In 2008, the CJEU ruled against the UK's DNA database. More germane, in 2014, the CJEU invalidated the Data Retention Directive as a disproportionate intrusion on fundamental human rights, taking down with it the UK's supporting legislation. At the end of 2014, to solve the "emergency" created by that ruling, the UK hurriedly passed the Data Retention and Investigatory Powers Act (DRIPA). The UK lost the resulting legal case in 2016, when the CJEU largely struck it down again.

Currently, the legislation that enables the UK's communications surveillance regime is the Investigatory Powers Act (2016), which built on DRIPA and its antecedents, plus the Terrorism Prevention and Investigation Measures Act (2011), whose antecedents go back to the Anti-Terrorism, Crime, and Security Act (2001), passed two months after 9/11. In 2014, I wrote a piece explaining how the laws fit together.

Snowden's revelations were important in driving the post-2013 items on that list; the IPA was basically designed to put the practices he disclosed on a statutory footing. I bring up this history because I was struck by a comment in Albuquerque's dissent: "The RIPA distinction was unfit for purpose in the developing Internet age and only served the political aim of legitimising the system in the eyes of the British public with the illusion that persons within the United Kingdom's territorial jurisdiction would be spared the governmental 'Big Brother'".

What Albuquerque is criticizing here, I think, is the distinction made in RIPA between metadata, which the act allowed the government to collect, and content, which is protected. Campaigners like the late Caspar Bowden frequently warned that metadata is often more revealing than content. In 2015, Steve Bellovin, Matt Blaze, Susan Landau, and Stephanie Pell showed that the distinction is no longer meaningful (PDF) in any case.

I understand that in military-adjacent circles Snowden is still regarded as a traitor. I can't judge the legitimacy of all his revelations, but in at least one category it was clear from the beginning that he was doing the world a favor: alerting the world that the intelligence services had compromised crucial parts of the security systems that protect all of us. In ruling that the UK practices he disclosed are illegal, the ECtHR has gone a long way toward vindicating him as a whistleblower in a second category.


Illustrations: Map of cable data by Greg Mahlknecht, map by Openstreetmap contributors (CC-by-SA 2.0), from the Privacy International report on the ruling.


Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 23, 2021

Fast, free, and frictionless

"I want solutions," Sinan Aral challenged at yesterday's Social Media Summit, "not a restatement of the problems". Don't we all? How many person-millennia have we spent laying out the issues of misinformation, disinformation, harassment, polarization, platform power, monopoly, algorithms, accountability, and transparency? Most of these have been debated for decades. The big additions of the last decade are the privatization of public speech via monopolistic social media platforms, the vastly increased scale, and the transmigration from purely virtual into physical-world crises like the January 6 Capitol Hill invasion and people refusing vaccinations in the middle of a pandemic.

Aral, who leads the MIT Initiative on the Digital Economy and is author of the new book The Hype Machine, chose his panelists well enough that some actually did offer some actionable ideas.

The issues, as Aral said, are all interlinked (see also 20 years of net.wars). Maria Ressa connected the spread of misinformation to system design that enables distribution and amplification at scale. These systems are entirely opaque to us even while we are open books to them, as Guardian journalist Carole Cadwalladr noted, adding that while US press outrage is the only pressure that moves Facebook to respond, it no longer even acknowledges questions from anyone at her newspaper. Cadwalladr also highlighted the Securities and Exchange Commission's complaint that says clearly: Facebook misled journalists and investors. This dismissive attitude also shows in the leaked email in which Facebook plans to "normalize" the leak of 533 million users' data.

This level of arrogance is the result of concentrated power, and countering it will require antitrust action. That in turn leads back to questions of design and free speech: what can we constrain while respecting the First Amendment? Where is the demarcation line between free speech and speech that, like crying "Fire!" in a crowded theater, can reasonably be regulated? "In technology, design precedes everything," Roger McNamee said; real change for platforms at global or national scale means putting policy first. His Exhibit A of the level of cultural change that's needed was February's fad, Clubhouse: "It's a brand-new product that replicates the worst of everything."

In his book, Aral opposes breaking up social media companies as was done in cases such as Standard Oil and AT&T. Zephyr Teachout agreed in seeing breakup, whether horizontal (Facebook divests WhatsApp and Instagram, for example) or vertical (Google forced to sell Maps), as just one tool.

The question, as Joshua Gans said, is, what is the desired outcome? As Federal Trade Commission nominee Lina Khan wrote in 2017, assessing competition by the effect on consumer pricing is not applicable to today's "pay-with-data-but-not-cash" services. Gans favors interoperability, saying it's crucial to restoring consumers' lost choice. Lock-in is your inability to get others to follow when you want to leave a service, a problem interoperability solves. Yes, platforms say interoperability is too difficult and expensive - but so did the railways and telephone companies, once. Break-ups were a better option, Albert Wenger added, when infrastructures varied; today's universal computers and data mean copying is always an option.

Unwinding Facebook's acquisition of WhatsApp and Instagram sounds simple, but do we want three data hogs instead of one, like cutting off one of the Lernaean Hydra's heads? One idea that emerged repeatedly is slowing "fast, free, and frictionless"; Yael Eisenstat wondered why we allow experimental technology at global scale but permit policy only after it has been painfully perfected.

MEP Marietje Schaake (Democrats 66-NL) explained the EU's proposed Digital Markets Act, which aims to improve fairness by setting rules and responsibilities up front, preempting the too-long process of punishing bad behavior after the fact. Current proposals would bar platforms from combining user data from multiple sources without permission, self-preferencing, and spying (say, Amazon exploiting marketplace sellers' data), and would require data portability and interoperability for ancillary services such as third-party payments.

The difficulty with data portability, as Ian Brown said recently, is that even services that let you download your data offer no way to use data you upload. I can't add the downloaded data from my current electric utility account to the one I switch to, or send my Twitter feed to my Facebook account. Teachout finds that interoperability isn't enough because "You still have acquire, copy, kill" and lock-in via existing contracts. Wenger argued that the real goal is not interoperability but programmability, citing open banking as a working example. That is also the open web, where a third party can write an ad blocker for my browser, but Facebook, Google, and Apple built walled gardens. As Jared Sine told this week's antitrust hearing, "They have taken the Internet and moved it into the app stores."

Real change will require all four of the levers Aral discusses in his book - money, code, norms, and laws; Lawrence Lessig's 1999 book, Code and Other Laws of Cyberspace, called them market, software architecture, norms, and laws - pulling together. The national commission on democracy and technology Aral is calling for will have to be very broadly constituted in terms of disciplines and national representation. As Safiya Noble said, diversifying the engineers in development teams is important, but not enough: we need "people who know society and the implications of technologies" at the design stage.


Illustrations: Sinan Aral, hosting the summit.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

February 19, 2021

Vaccine connoisseurs

This is one of those weeks when numerous stories update. Australia's dispute over sharing news has spawned deals that are bad for everyone except Facebook, Google, and Rupert Murdoch; the EU is beginning the final stages of formulating the ePrivacy Regulation; the UK awaits its adequacy decision on data protection; 3D printed guns are back; and the arrival of covid vaccines has revived the push for some form of vaccination certificate, which may (or may not) revive governments' desires for digital identities tied to each of us via biometrics and personal data.

To start with Australia: after the lower house of the Australian parliament passed the law requiring Google and Facebook to negotiate licensing fees with publishers, Facebook began blocking Australian users from sharing "news content" - and the rest of the world from sharing links to Australian publishers - without waiting for final passage. The block is as overbroad as you might expect.

Google has instead announced a three-year deal under which it will pay Rupert Murdoch's News Corporation for the right to showcase its output - which is almost universally paywalled.

Neither announcement is good news. Google's creates a damaging precedent of paying for links, and small public interest publishers don't benefit - and any publisher that does becomes even more dangerously dependent on the platforms to keep them solvent. On Twitter, Kate Crawford calls Facebook's move deplatforming at scale.

Next, as Glyn Moody helpfully explains, where GDPR protects personal data at rest, the ePrivacy Regulation covers personal data in transit. It has been pending since 2017, when the European Commission published a draft, which the European Parliament then amended. Massive amounts of lobbying and internal squabbling over the text within the Council of the EU have finally been resolved, so the three legs of this legislative stool can begin negotiations. Moody highlights two areas to watch: provisions exempting metadata from the prohibition on changing use without consent, and the rules regarding cookie walls. As negotiations proceed, however, there may be more.

As a no-longer EU member, the UK will have to actively adopt this new legislation. The UK's motivation to do so is simple: it wants - or should want - an adequacy decision. That is, for data to flow between the UK and the EU, the EU has to agree that the UK's privacy framework matches the EU's. On Tuesday, The Register reported that such a decision is imminent, a small piece of good news for British businesses in the sea of Brexit issues arising since January 1.

The original panic over 3D-printed guns was in 2013, when the US Department of Justice ordered the takedown of Defcad. In 2018, Defcad's owner, Cody Wilson, won his case against the DoJ in a settlement. At the time, 3D-printed plastic guns were too limited to worry about, and even by 2018 3D printing had failed to take off at the consumer level. This week Gizmodo reported that home-printing alarmingly functional automatic weapons may now be genuinely possible for someone with the necessary obsession, home equipment, and technical skill.

Finally, ever since the beginning of this pandemic there has been concern that public health would become the vector for vastly expanded permanent surveillance that would be difficult to dislodge later.

The arrival of vaccinations has brought the weird new phenomenon of the vaccine connoisseur. They had never heard of mRNA until a couple of months ago, but if you say you've been vaccinated they'll ask which one. And then say something like, "Oh, that's not the best one, is it?" Don't be fussy! If you're offered a vaccination, just take it. Every vaccine should help keep you alive and out of the hospital; like Willie Nelson's plane landings you can walk away from, they're *all* perfect. All will also need updates.

Israel is first up with vaccination certificates, saying that these will be issued to everyone after their second shot. The certificate will exempt them from some of the requirements for testing and isolation associated with visiting public places.

None of the problems surrounding immunity passports (as they were called last spring) has changed. We are still not sure whether the vaccines halt transmission or how long they last, and access is still enormously limited. Certificates will almost certainly be inescapable for international travel, as for other diseases like yellow fever and smallpox. For ordinary society, however, they would be profoundly discriminatory. In agreement on this: the Ada Lovelace Institute, Privacy International, Liberty, and Germany's ethics council. At The Lancet some researchers suggest they may be useful when we have more data, as does the Royal Society; others reject them outright.

There is an ancillary concern. Ever since identity papers were withdrawn after the end of World War II, UK governments have repeatedly tried to reintroduce ID cards. The last attempt, which ended in 2010, came close. There is therefore legitimate concern about immunity passports as ID cards, a concern not allayed by the government's policy paper on digital identities, published last week.

What we need is clarity about what problem certificates are intended to solve. Are they intended to allow people who've been vaccinated greater freedom consistent with the lower risks they face and pose? Or is the point "health theater" for businesses? We need answers.


Illustrations: International vaccination certificates (from SimonWaldherr at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 22, 2021

Surveillance without borders

This time last year, the Computers, Privacy, and Data Protection conference was talking about inevitable technology. Two thousand people from all over the world enclosed in two large but unventilated spaces arguing closely over buffets and snacks for four days! I remember occasional nods toward a shadow out there on the Asian horizon, but it was another few weeks before the cloud of dust indicating the coronavirus's gallop westward toward London became visible to the naked eye. This week marks a year since I've traveled more than ten miles from home.

The virus laughs at what we used to call "inevitable". It also laughs at what we think of as "borders".

The concept of "privacy" was always going to have to expand. Europe's General Data Protection Regulation came into force in May 2018; by CPDP 2019 the conference had already moved on to consider its limitations in a world where privacy invasion was going physical. Since then, Austrian lawyer Max Schrems has poked holes in international data transfers, police and others began rolling out automated facial recognition without the least care for public consent...and emergency measures to contain the public health crisis have overwhelmed hard-won rights.

This year two themes are emerging. First is that, as predicted, traditional ideas about consent simply do not work in a world where technology monitors and mediates our physical movements, especially because most citizens don't know to ask what the "legal basis for processing" is when their local bar demands their name and address for contact tracing and claims the would-be drinker has no discretion to refuse. Second is the need for enforcement. This is the main point Schrems has been making through his legal challenges to the Safe Harbor agreement ("Schrems I") and then to its replacement, the EU-US Privacy Shield agreement ("Schrems II"). Schrems is forcing data protection regulators to act even when they don't want to.

In his panel on data portability, Ian Brown pointed out a third problem: access to tools. Even where companies have provided the facility for downloading your data, none provide upload tools, not even archives for academic papers. You can have your data, but you can't use it anywhere. By contrast, he said, open banking is actually working well in the UK. EFF's Christoph Schmon added a fourth: the reality that it's "much easier to monetize hate speech than civil discourse online".

Artist Jonas Staal and lawyer Jan Fermon have an intriguing proposal for containing Facebook: collectivize it. In an unfortunately evidence-free mock trial, witnesses argued that it should be neither nationalized nor privately owned nor broken up, but transformed into a space owned and governed by its 2.5 billion users. Fermon found a legal basis in the right to self-determination, "the basis of all other fundamental rights". In reality, given Facebook's wide-ranging social effects, non-users, too, would have to become part-owners. Lawyers love governing things. Most people won't even read the notes from a school board meeting.

Schmon favored finding ways to make it harder to monetize polarization, chiefly through moderation. Jennifer Cobbe, in a panel on algorithm-assisted decision making, suggested stifling some types of innovation. "Government should be concerned with general welfare, public good, human rights, equality, and fairness" and adopt technology only where it supports those values. Transparency is only one part of the answer - and it must apply to all parts of systems such as those controlling whether someone stays in jail or is released on parole, not just the final decision-making bit.

But the world in which these debates are taking place is also changing, and not just because of the coronavirus. In a panel on intelligence agencies and fundamental rights, for example, MEP Sophie in't Veld (NL) pointed out the difficulties of exercising meaningful oversight when talk begins about increasing cross-border cooperation. In her view, the EU pretends "national security" is outside its interests, but 20 years of legislation offers national security as a justification for bloc-wide action. The result is to leave national authorities to make their own decisions, and "There is little incentive for national authorities to apply safeguards to citizens from other countries." Plus, lacking an EU-wide definition of "national security", member states can claim "national security" for almost any exemption. "The walls between law enforcement and the intelligence agencies are crumbling."

A day later, Petra Molnar put this a different way: "Immigration management technologies are used as an excuse to infringe on people's rights". Molnar works to highlight the use of refugees and asylum-seekers as experimental subjects for new technologies - drones, AI lie detectors, automated facial recognition; meanwhile the technologies are blurring geographical demarcations, pushing the "border" away from its physical manifestation. Conversely, current UK policy moves the "border" into schools, rental offices, and hospitals by requiring teachers, landlords, and medical personnel to check immigration status.

Edin Omanovic pointed out a contributing factor: "People are concerned about the things they use every day" - like WhatsApp - "but not bulk data interception". Politicians have more to gain by signing off on more powers than from imposing limits - but the narrowness of their definition of "security" means that despite powers, access to technology, and top-class universities, "We've had 100,000 deaths because we were unprepared for the pandemic we knew was coming and possible."


Illustrations: Sophie in't Veld (via Arnfinn Petersen at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

December 11, 2020

Facebook in review

Led by New York attorney general Letitia James, this week 46 US states, plus Guam and Washington, DC, and, separately, the Federal Trade Commission filed suits against Facebook alleging that it has maintained an illegal monopoly while simultaneously reducing privacy protections and services to boost its bottom line. The four missing states: Alabama, Georgia, South Carolina, and South Dakota.

As they say, we've had this date from the beginning.

It's seemed likely for months that legal action against Facebook was on the way. There were the we-mean-business Congressional hearings and the subsequent committee report, followed by the suit against Google the Department of Justice filed in October.

Facebook seems peculiarly deserving. It began in 2004 as a Harvard-only network, using its snob appeal to expand to the other Ivy League schools, then thousands of universities and high schools, and finally the general public. Mass market adoption grew in tandem with the post-2009 explosion of smartphones. By then, Facebook had frequently tweaked its privacy settings and repeatedly annoyed users with new privacy-invasive features in the arrogant (and sadly correct) belief they'd never leave. By 2010, Zuckerberg was claiming that "privacy is no longer a social norm", adding that were he starting then he would make everything public by default, like Twitter.

It's hard to pick Facebook's creepiest moments out of so many, but here are a few: in 2011 it began auto-recognizing user photographs, in 2012 it dallied with in-network "democracy" - a forerunner of today's unsatisfactory oversight board - and in 2014 it tested emotionally manipulating its users.

In 2011, based on the rise and fall of earlier services like CompuServe, AOL, Geocities, LiveJournal, and MySpace - you can practically carbon-date people by their choice of social media - some of us wrongly surmised that perhaps Facebook had peaked. "The [online] party keeps moving" is certainly true; what was different was that Zuckerberg knew it and launched his program of aggressive and defensive acquisitions.

The 2012 $1 billion acquisition of Instagram and 2014 $19 billion purchase of WhatsApp are the heart of the suits. The lawsuits suggest that without Facebook's intervention we'd have social media successfully competing on privacy. In his summary, Matt Stoller credits this idea to Dina Srinivasan, who argued in 2019 that Facebook saw off then-dominant MySpace by presenting itself as "privacy-centered" at a time when the press was claiming that MySpace's openness made it unsafe for children. Once in pole position, Facebook began gradually pushing greater openness on its users - bait and switch, I called it in 2010.

I'm less convinced that MySpace's continued existence could have curbed Facebook's privacy invasion. In 2004, the year of Facebook's birth, Australian privacy activist Roger Clarke surveyed the earliest social networks - chiefly Plaxo - and predicted that all social networks would inevitably exploit their users. "The only logical business model is the value of consumers' data," he told me for the Independent (TXT). I think, therefore, that the privacy-destructive race to the bottom-of-the-business-model was inevitable given the US's regulatory desert. Google began heading that way soon after its 2004 IPO; by 2006 privacy advocates were already warning of its danger.

Srinivasan details Facebook's progressive privacy invasion: the cooption of millions of third-party sites, via logins and the Like button, to propagandize its service and to collect and leverage vast amounts of personal data, even as it became a vector for the unscrupulous to hack elections. This is all without considering non-US issues such as Free Basics, which has made Facebook effectively the only Internet service in parts of the world. Facebook also had Silicon Valley's venture capital ethos at its back, and its share structure awards Zuckerberg full and permanent control.

In a useful paper on nascent competitors, Tim Wu and C. Scott Hemphill discuss how to spot anticompetitive acquisitions. As I recall, though, many - notably the ever-prescient Jeff Chester - protested the WhatsApp and Instagram acquisitions at the time; the EU only agreed because Facebook promised not to merge the user databases, and issued a €110 million fine when it realized the company lied. Last year Facebook announced it would merge the databases, which critics saw as a preemptive move to block a potential breakup. Allowing the mergers to go ahead seems less dumb, however, if you remember that it took until 2017 and Lina Khan to realize that the era of two guys in a garage up-ending entrenched monopolists was over.

The suits ask the court to find Facebook guilty under Section 2 of the Sherman Act (which is a felony) and Section 7 of the Clayton Act, block it from making further acquisitions valued at $10 million or above, and require it to divest or restructure illegally acquired companies or current Facebook assets or business lines. Restoring some competition to the Internet ecosystem in general and social media in particular seems within reach of this action - though there are many other cases that also need attention. It won't be enough to fix the damage to democracy and privacy, but perhaps the change in attitude it represents will ensure the next Facebook doesn't become a monster.


Illustrations: Mark Zuckerberg's empty chair at last year's Grand Committee hearing.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

December 4, 2020

Scraped

Somehow I had missed the hiQ Labs v. LinkedIn case until this week, when I struggled to explain on Twitter why condemning web scraping is a mistake. Over the years, many have made similar arguments to ban ordinary security tools and techniques because they may also be abused. The usual real world analogy is: we don't ban cars just because criminals can use them to escape.

The basics: hiQ, which styles itself as a "talent management company", used automated bots to scrape public LinkedIn profiles, and analyzed them to build a service advising companies what training they should invest in or which employee might be on the verge of leaving. All together now: *so* creepy! LinkedIn objected that the practice violates its terms of service and harms its business. In return, hiQ accused LinkedIn of purely anti-competitive motives, and claimed it only objected now because it was planning its own version.

LinkedIn wanted the court to rule that hiQ's scraping its profiles constitutes felony hacking under the Computer Fraud and Abuse Act (1986). Meanwhile, hiQ argued that because the profiles it scraped are public, no "hacking" was involved. EFF, along with DuckDuckGo and the Internet Archive, which both use web scraping as a basic tool, filed an amicus brief arguing correctly that web scraping is a technique in widespread use to support research, journalism, and legitimate business activities. Sure, hiQ's version is automated, but that doesn't make it different in kind.

There are two separate issues here. The first is web scraping itself, which, as EFF says, has many valid uses that don't involve social media or personal data. The TrainTimes site, for example, is vastly more accessible than the National Rail site it scrapes and re-presents. Over the last two decades, its author, Matthew Somerville, has built numerous other such sites that avoid the heavy graphics and scripts that make so many information sites painful to use. He has indeed gotten in trouble for it sometimes; in this example, the Odeon movie theaters objected to his making movie schedules more accessible. (Query: what is anyone going to do with the Odeon movie schedule beyond choosing which ticket to buy?)
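To see why the technique is in such widespread use, it helps to see how little it involves: fetch a public page, parse the HTML, keep the useful bits, and re-present them. A minimal sketch in Python; the URL and page structure here are made up for illustration, and a real site's terms of service still apply:

```python
# Fetch a public page and re-present its table rows as plain text -
# the essence of an accessibility-minded scraper like TrainTimes.
# The URL and HTML structure below are hypothetical.
import requests
from bs4 import BeautifulSoup

page = requests.get("https://example.org/departures", timeout=10)
soup = BeautifulSoup(page.text, "html.parser")

for row in soup.select("table.departures tr"):
    cells = [cell.get_text(strip=True) for cell in row.find_all("td")]
    if cells:
        print(" | ".join(cells))
```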

As EFF writes in its summary of the case, web scraping has also been used by journalists to investigate racial discrimination on Airbnb and find discriminatory pricing on Amazon; in the early days of the web, civic-minded British geeks used web scraping to make information about Parliament and its debates more accessible. Web scraping should not be illegal!

However, that doesn't mean that all information that can be scraped should be scraped or that all information that can be scraped should be *legal* to scrape. Like so many other basic techniques, web scraping has both good and bad uses. This is where the tricky bit lies.

Intelligence agency personnel these days talk about OSINT - "open source intelligence". "Open source" in this context (not software!) means anything they can find and save, which includes anything posted publicly on social media. Journalists also tend to view anything posted publicly as fair game for quotation and reproduction - just look at the Guardian's live blog any day of the week. Academic ethics require greater care.

There is plenty of abuse-by-scraping. As Olivia Solon reported last year, IBM scraped Flickr users' innocently posted photographs and repurposed them into a database to train facial recognition algorithms, later used by Immigration and Customs Enforcement to identify people to deport. (In June, the protests after George Floyd's murder led IBM to pull back from selling facial recognition "for mass surveillance or racial profiling".) Clearview AI scraped billions of photographs off social media and collated them into a database service to sell to law enforcement. It's safe to say that no one posted their profile on LinkedIn with the intention of helping a third-party company get paid by their employer to spy on them.

Nonetheless, those abuse cases do not make web scraping "hacking" or a crime. They are difficult to rectify in the US because, as noted in last week's review of 30 years of data protection, the US lacks relevant privacy laws. Here in the UK, since the data Somerville was scraping was not personal, his complainants typically argued that he was violating their copyright. The hiQ case, if brought outside the US, would likely be based in data protection law.

In 2019, the Ninth Circuit ruled in favor of hiQ, saying it did not violate CFAA because LinkedIn's servers were publicly accessible. In March, LinkedIn asked the Supreme Court to review the case. SCOTUS could now decide whether scraping publicly accessible data is (or is not) a CFAA violation.

What's wrong in this picture is the complete disregard for the users in the case. As the National Review says, a ruling for hiQ could deprive users of all control over their publicly posted information. So, call a spade a spade: at its heart this case is about whether LinkedIn has an exclusive right to abuse its users' data or whether it has to share that right with any passing company with a scraping bot. The profile data hiQ scraped is public, to be sure, but to claim that opens it up for any and all uses is no more valid than claiming that because this piece is posted publicly it is not copyrighted.


Illustrations: I simply couldn't think of one.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 13, 2020

Copyright in review

As if on cue, after last week's conclusion that the battle over crypto will never reach a settlement, the Irish Times reports that the EU Council of Ministers has a draft council resolution demanding "lawful and targeted" access to encrypted communications. Has no one learned anything in the last four years?

Crypto was the first in a series of reviews of the most durable, intractable disputes of the last 20 years, highlighting how net.wars has written about them as the 1,000th column approaches. The second is copyright, which has been irredeemably altered by the arrival of digital technologies.

Where crypto is the same story endlessly repeated, copyright is a collection of interlinked conflicts that comprise a struggle by rightsholder industries (entertainment, music, publishing, news, software) to continue business as usual while the world changed. Loosely, these conflicts fall into three clusters: legislation, enforcement, and expansion.

New legislation beginning in the 1990s essentially sought to limit what many would see as the normal functioning of computer networks. The Digital Millennium Copyright Act (1998) in the US and the EU Copyright Directive (1996), modified in 2001 and 2019, both ban technology that can be used to bypass copy protection. Contemporary critics pointed out that this could as easily be scissors and Liquid Paper, but the intended target was software to break digital rights management and copy protection. Today, DRM is built into ebooks and Blu-Ray discs - but also HDMI TV cables, third-party ink cartridges, and even remote garage door openers.

These anti-circumvention provisions, however, have been abused to block security researchers from publishing unwanted findings, by John Deere to stop farmers from repairing their tractors, and by Apple to oppose modifying iPhones. They've also been used more creatively.

The DMCA and the EUCD are also vectors for censorship when rightsholders overreach in demanding the removal of copyrighted material or when automated takedown systems make mistakes. The 2019 revision of the EUCD expects sites to pay for even small news snippets accompanying links (an old EU obsession) and to filter copyrighted content at time of upload, requirements Poland has challenged in court.

Conflicts around enforcement have pursued each new method of sharing material in turn, beginning with bulletin boards and floppy disks and seguing through Usenet, Napster (2000), file-sharing, and torrents in the mid-2000s. The oft-forgotten case that originally created today's notice and takedown rules was the 1994-1995 fight between the Church of Scientology and Usenet critics that saw Scientology's secrets sprayed across the Internet. That case also heralded a period when rightsholders were decidedly hostile. The two biggest photo agencies pursued small businesses with licensing fee demands; recording companies and movie studios took downloaders to court; some rightsholders issued takedown notices against fan fiction and even knitting patterns based on Dr Who. Many of us said from the beginning that the best answer to pirate sites was building legal sites; by the 2010s this was proving correct.

The stage has shifted for both legislation and enforcement, as the US government in particular (but not solely) seeks to embed expansion of IP laws and anti-piracy enforcement in free trade agreements. In 2014, copyright was taken out of the Transatlantic Trade and Investment Partnership agreement, but digital rights NGOs know they have to keep watching carefully - when they can get a look at the text.

Expansion has two forms: length and scope. Term extension means that a song written in 1969 would have seen its copyright expire in 1997, renewable until 2025; now it lasts for the author's life plus 70 years (2088, for the song I have in mind). Scope has expanded inevitably as copyrightable software becomes embedded in every physical device.
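The arithmetic behind those dates, for the curious: under the old US rules a work got a 28-year term plus a 28-year renewal, where the current rule runs 70 years past the author's death. A quick sketch (the 2018 death year is only inferred from the 2088 figure above):

```python
# Copyright term arithmetic for a song written in 1969.
written = 1969
first_term = written + 28        # 1997: original 28-year term expires
renewed_term = first_term + 28   # 2025: expiry after one 28-year renewal
author_died = 2018               # inferred from 2088 - 70
current_term = author_died + 70  # 2088: life of the author plus 70 years
print(first_term, renewed_term, current_term)
```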

The fundamental conflict was predicted in 1996, when Pamela Samuelson published The Copyright Grab in Wired. Under "copyright maximalism", she warned, every piece of copyrighted work, no matter how small, would be chargeable, as suggested by Mark Stefik's Letting Loose the Light essay.

As Samuelson and others pointed out, until the Internet, IP law only mattered to a few specialists. By opening universal distribution, the Internet turned laws appropriate for geographically-delineated commercial publishers into laws that make no sense to consumers, as universities were the first to find out. Because of these mismatches, many copyright revisions of the 1990s and 2000s sought "harmonization", always in the most restrictive direction. The Canadian legal scholar Michael Geist established that these apparently distinct national initiatives had a common source.

There have been some exceptions, such as legal reviews and work to open up orphan works and protect parody. Challenges such as 3D printing still await.

The real story, though, is the very difficult landscape for artists and creators, who lost much control over their work because of media consolidation in the 1980s and 1990s, the economic shocks of 9/11 and the 2008 financial crash, and advertising's online shift. Creators seeking income are also facing floods of free blog postings, videos, music, and, especially, images. No amount of copyright shenanigans is solving the fundamental problem: how to help artists and creators make a living from their work. That is what copyright law was created to enable. Never forget that.


Illustrations: A still from Sita Sings the Blues, written and directed by Nina Paley, who believes copyright should be abolished.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 30, 2020

The reckoning

It seems clear that we're approaching a reckoning for Big Tech as the societal costs of their success keep becoming bigger and clearer. Like so many other things, the pandemic has made these issues more urgent, as the money these companies suck away from local businesses and communities is now badly needed to help rebuild suffering economies. Twenty-five years ago, some were celebrating the dawn of cyberspace as the approaching end of the nation-state. Today's crises remind us that some problems only governments can solve.

In the US, two types of legal actions are heading GAFA's way, as suggested by the recent two-pronged antitrust hearing. The first, which led to the Democrat-led antitrust report of a few weeks ago, has spawned a lawsuit against Google alleging anticompetitive behavior surrounding its search engine. The second, reflecting the Republican-led grievance that conservative voices are being suppressed, has led to this week's Commerce Committee hearing on platform censorship. Thoughts on that one, which will likely result in a push to reform S230, will have to wait for concrete proposals.

Pending elsewhere: both users and Epic Games are suing Apple over the 30% commissions charged by its App Store. Meanwhile, in France, a coalition of trade groups has filed an antitrust complaint ($) asking the French competition authority to stop Apple from following through on its plans to restrict mobile trackers for advertising. This is, as the FT puts it, "one of the first legal actions alleging that big tech groups are using privacy arguments to abuse their market power". On Twitter, Lukasz Olejnik rightly says that this case about "privacy-competition trade-off" will be fascinating. It will, not least because privacy has not in general been a market mover.

Tech-related antitrust suits are typically ten years late, largely because the industry's speed makes it hard to see where to push until the damage has become deeply entrenched. In 2014, I thought Google's purchase of Nest would be the antitrust case of 2024. Instead, Google is being accused of abusing its position by illegally tying its search engine, its main revenue source, to its Chrome browser and Android licensing agreements, and paying other browser makers such as Apple for pole position as their default search engine. (Query: if Google search is so great, why do they need to do this? The steady degradation of the Google experience is clearest to those of us who have stopped using it.)

Both Sarah Miller and Matt Stoller see the Google case as a near-copy of the late 1990s case against Microsoft, which also focused on tying. In that case, Microsoft used its Windows dominance to make its Internet Explorer the default for browsing the web. The current complaint specifically references that case, calling Google's tactics "the same playbook". Privacy is not among its concerns, though it does at least note that the key to Google's success and scale is the data it collects as the price consumers pay for its "free" services.

It's rare that an antitrust case scores a hit on an entirely different company. Google pays Apple $8 to $12 billion a year - compared to Apple's Q4 2019 $13.7 billion in profits. Apple will survive if Google is enjoined from making such payments. Firefox, however, might not, since its Google contract represents most of its income. Diversifying the search market is good for competition; shrinking the browser market is not.

My suspicion is that an additional factor in the answer to "why now?" is the arrogance and indifference to complaints that these companies have often displayed. Facebook founder Mark Zuckerberg has been particularly resistant, refusing in 2018 to show up to testify in front of representatives of nine countries.

It's tempting to divide these companies into those still run by their founders - Amazon and Facebook - and those that are on their second (Google) or later (Apple) generation of leaders. But the better division is between normal share structures (Apple and Amazon) and kingmaker share structures. Google has ensured that founders Sergey Brin and Larry Page, along with original company chair Eric Schmidt, could never lose control of the company. Facebook's share structure is even more tightly controlled, giving Zuckerberg 60% of the voting rights; he is the company's king.

Neither the hearings nor the complaint mention this, but I think it's crucial. The benefit of these structures was supposed to be to keep the companies nimble and innovative. It's not clear it's worked. The downside is that the showrunners can be unresponsive to complaints; Facebook will never change as long as Zuckerberg is in charge - and no one can push him out. For this reason, ownership structures should be a consideration in modernizing antitrust law.

In the end, the Microsoft case was largely abandoned - but it reportedly nonetheless left a mark by changing the company's culture into one vastly more cautious and risk-averse, like IBM before it. Today's biggest technology companies have been less easily intimidated by big and bigger fines or adverse decisions. But governments won't give up; these cases, like others before them, are all part of the long arc of the power struggle between global technology and national governments. We are just at the beginning.


Illustrations: Mark Zuckerberg's empty chair in front of the Grand Committee.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 16, 2020

The rights stuff

It took a lake to show up the fatuousness of the idea of granting robots legal personality rights.

The story, which the AI policy expert Joanna Bryson highlighted on Twitter, goes like this: in February 2019 a small group of people, frustrated by their inability to reduce local water pollution, successfully spearheaded a proposition in Toledo, Ohio that created the Lake Erie Bill of Rights. Its history since has been rocky. In February 2020, a farmer sued the city and a US district judge invalidated the bill. This week, a three-judge panel from Ohio's Sixth District Court of Appeals ruled the February judge made a mistake. For now, the lake still has its rights. Just.

We will leave aside the question of whether granting rights to lakes and the other ecosystems listed in the above-linked Vox article is an effective means of environmental protection. But given that the idea of giving robots rights keeps coming up - the EU is toying with the possibility - it seems worth teasing out the difference.

In response to Bryson, Nicholas Bohm noted the difference between legal standing and personality rights. The General Data Protection Regulation, for example, grants legal standing in two new ways: collective action and civil society representing individuals seeking redress. Conversely, even the most-empowered human often lacks legal standing; my outrage that a brick fell on your head from the top of a nearby building does not give me the right to sue the building's owner on your behalf.

Rights as a person, however, would allow the brick to sue on its own behalf for the damage done to it by landing on a misplaced human. We award that type of legal personhood to quite a few things that aren't people - corporations, most notoriously. In India, idols have such rights, and Bohm cites a case in which a temple's trustee, because the idol they represented had these rights in India, was allowed to join an English case claiming the idol had been improperly removed.

Or, as Bohm put it more succinctly, "Legal personality is about what you are; standing is about what it's your business to mind."

So if lakes, rivers, forests, and idols, why not robots? The answer lies in what these things represent. The lakes, rivers, and forests on whose behalf people seek protection were not human-made; they are parts of the larger ecosystem that supports us all, and most intimately the people who live on their banks and verges. The Toledoans who proposed granting legal rights to Lake Erie were looking for a way to force municipal action over the lake's pollution, which was harming them and all the rest of the ecosystem the lake feeds. At the bottom of the lake's rights, in other words, are humans in existential distress. Granting the lake rights is a way of empowering the humans who depend on it. In that sense, even though the Indian idols are, like robots, human-made, giving them personality rights enables action to be taken on behalf of the human community for whom they have significance. Granting the rights does not require either the lake or the idol to possess any form of consciousness.

In a paper to which Bryson linked, S.G. Solaiman argues that animals don't qualify for rights, even though they have some consciousness, because a legal personality must be able to "enjoy rights and discharge duties". The Smithsonian National Zoo's giant panda, who has been diligently caring for her new cub for the last two months, is not doing so out of legal obligation.

Nothing like any of this can be said of rights for robots, certainly not now and most likely not for a long time into the future, if ever. Discussions such as David Gunkel's How to Survive a Robot Invasion, which compactly summarizes the pros and cons, generally assume that robots will only qualify for rights after a certain threshold of intelligent consciousness has been met. Giving robots rights in order to enable suffering humans to seek redress does not come up at all, even when the robots' owners hold funerals because the manufacturer has discontinued the product. Those discussions rightly focus on manufacturer liability.

In the 2015 British TV series Humans (a remake of the 2012 Swedish series Äkta människor), an elderly Alzheimer's patient (William Hurt) is enormously distressed when his old-model carer robot is removed, taking with it the only repository of his personal memories, which he can no longer recall unaided. It is not necessary to give the robot the right to sue to protect the human it serves, since family or health workers could act on his behalf. The problem in this case is an uncaring state.

The broader point, as Bryson wrote on Twitter, is that while lakes are unique and can be irreparably damaged, digital technology - including robots - "is typically built to be fungible and upgradeable". Right: a compassionate state merely needs to transfer George's memories into a new model. In a 2016 blog posting, Bryson also argues against another commonly raised point, which is whether the *robots* suffer: if designers can install suffering as a feature, they can take it out again.

So, the tl;dr: sorry, robots.


Illustrations: George (William Hurt) and his carer "synth", in Humans.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 25, 2020

The zero on the phone

Among the minor casualties of the pandemic has been the appearance of a Swiss prototype robot at this year's We Robot, the ninth year of this unique conference that crosses engineering, technology policy, and law to identify future conflicts and pre-emptively suggest solutions. The result was to leave the robots considered by this virtual We Robot remarkably (appropriately) abstract.

We Robot was founded to get a jump on the coming conflicts that robots will bring to law and policy, in part so that we don't repeat the Internet experience of rehashing the same arguments for decades on end. This year's event pre-empted the Internet experience in a new way: many authors have drawn on the failed optimism and cooperation of the 1990s to begin defining ways to ensure that robotics and AI do not follow the same path. Where at the beginning we were all eager to embrace robots, this year disembodied AIs are being done *to* us.

In the one slight exception to this rule, Hallie Siegel's exploration of senior citizens' attitudes towards new technologies found that the seniors she studies are pragmatic: concerned about their privacy and autonomy, and only really interested in technologies that provide benefits they genuinely need.

Jason Millar and Elizabeth Gray drew directly on the Internet experience by comparing network neutrality to the issues surrounding the mapping software that controls turn-by-turn navigation systems in a discussion of "mobility shaping". Should navigation services be common carriers, as telephone lines are? The idea appeals to me, if only because the potential for physical control of where our vehicles are allowed to go seems so clear.

The theme of exploitation was particularly visible in the two papers on Africa. In the first, Arthur Gwagwa (Strathmore University, Nairobi), Erika Kraemer-Mbula, Nagla Rizk, Isaac Rutenberg, and Jeremy de Beer warn that the combination of foreign capital and local resources is likely to reproduce the power structures of previous forms of colonialism, an argument also seen recently in a paper by Abeba Birhane. Women in particular, who run the majority of start-ups in some African countries, may be ignored, and the authors suggest that a GDPR-like rule awarding individuals control over their own data could be crucial in creating value for, rather than extracting it from, Africa.

In the second, Laura Foster (Indiana University), Bram Van Wiele, and Tobias Schönwetter extracted a database of press stories about AI in Africa from LexisNexis, to find the familiar set of claims for new technology: happy, value-neutral disruption, yay! The failure of most of these articles to consider gender and race, they observed, doesn't make the emerging picture neutral, but serves to reinforce the default of the straight, white male.

One way we push back against AI/robot control is the "human in the loop" to whom the final decision is delegated. This human has featured in every We Robot conference, most notably in 2016 as Madeleine Elish's moral crumple zone. In his paper, Liam McCoy argues for the importance of meaningful control, because the middle ground, where the human is expected to solve the most complex situations where AI fails without support or authority, is truly dangerous. The middle ground may be profitable; at UK IGF a few weeks ago, Gus Hosein noted that automating dispute resolution is part of what's made GAFA rich. But in the higher stakes of cyber-physical systems, the human you summon by pushing zero has to be able to make a difference.

Silvia de Conca's idea of "human-centered legal design", which sought to give autonomous agents a duty of care as a way of filling the gap in liability that presently exists, and Cynthia Khoo's interest in vulnerable communities who are harmed by behavior that emerges from combined business models, platform scale, human nature, and algorithm design, presented different methods of putting a human in the loop. Often, Khoo has found in investigating this idea, the potential harm was in fact known and simply ignored; how much can and should be foreseen when system parts interact in unexpected ways is a rising issue.

Several papers explored previously unnoticed vectors for bias and control. Sentiment analysis, last seen being called "the snake oil of 2011", and its successor, emotion analysis, which I first saw explored in the 1990s by Rosalind Picard at MIT, are creeping into AI systems. Some are particularly dubious: aggression detection systems and emotion recognition cameras.

Emily McBain-Ashfield and Jason Millar are the first I'm aware of to study how stereotyping gets into these systems. Yes, it's in the data - but the problem lies in the process of analyzing and tagging it. The authors found three methods of doing this: manual (human, slow), dictionary-based using seed words (automated), and crowdsourced (see also Mary L. Gray and Siddharth Suri's 2019 book, Ghost Work). All have problems: automated dictionary methods make notoriously crude mistakes, and the participants in crowdsourcing may be from very different linguistic and cultural contexts.
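As a rough illustration of why the dictionary-based ("seed word") method makes such crude mistakes, here is a minimal Python sketch; the seed lists are invented for illustration and taken from no real system.

# Minimal sketch of dictionary-based (seed word) sentiment tagging.
# The seed lists are invented for illustration.

POSITIVE_SEEDS = {"happy", "joy", "love", "great"}
NEGATIVE_SEEDS = {"sad", "angry", "hate", "terrible"}

def tag_sentiment(text: str) -> str:
    # Label text by counting seed-word hits; negation, sarcasm, and
    # dialect are all invisible to this method, by construction.
    words = text.lower().split()
    positive = sum(word in POSITIVE_SEEDS for word in words)
    negative = sum(word in NEGATIVE_SEEDS for word in words)
    if positive > negative:
        return "positive"
    if negative > positive:
        return "negative"
    return "neutral"

print(tag_sentiment("not happy at all"))  # "positive" - the negation is missed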

The discussant for this paper, Osonde Osoba, sounded appalled: "By having these AI models of emotion out in the wild in commercial products we are essentially sanctioning the unregulated experimentation on humans and their emotional processes without oversight or control."

Remedies have to contend, however, with the legacy infrastructure. Alice Xiang discovered a conflict between traditional anti-discrimination law, which bars decision-making based on a set of protected classes, and the technical methods of mitigating algorithmic bias. "If we're not careful," she said, "the vast majority of approaches proposed in machine learning literature might actually be illegal if they are ever tested in court."

We Robot 2020 was the first to be held outside the US, and chairs Florian Martin-Bariteau, Jason Millar, and Katie Szilagyi set out to widen its international character and diversity. When the pandemic hit, the exceptional geographic breadth of authors and discussants made it infeasible to ask everyone to pretend they were in Ottawa's time zone. The conference therefore recorded the authors' and discussants' conversations as if live - which means that you, too, can experience the originals. Just follow the links. We Robot events not already linked here: 2013; 2015; 2016 workshop; 2017; 2018 workshop and conference; 2019 workshop and conference.


Illustrations: Our robot avatars attend the conference for us on the We Robot 2020 poster.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 31, 2020

Driving while invisible

The point is not whether it's ludicrous but whether it breaks the law.

Until Hannah Smethurst began speaking at this week's gikii event - the year's chance to mix law, digital rights, and popular culture - I had not realized just how many invisible vehicles there are in our books and films. A brief trawl turns up: Wonder Woman's invisible jet, Harry Potter's invisibility cloak and other invisibility devices, and James Bond's invisible Aston Martin. Do not trouble me with your petty complaints about physics. This is about the law.

Every gikii (see here for links to writeups of previous years) ranges from deeply serious-with-a-twist to silly-with-an-insightful undercurrent. This year's papers included the need for a fundamental rethink of how we regulate power (Michael Veale), the English* "bubble" law that effectively granted flatmates permanent veto power over each other's choice of sex partner (gikii founder Lilian Edwards), and the mistaken-identity frustrations of having early on used your very common name as your Gmail address (Jat Singh).

In this context, Smethurst's paper is therefore business as usual. As she explained, there is nothing in highway legislation that requires your car to be visible. The same is not true of number plates, which the law says must be visible at all times. But can you enforce it? If you can't see the car, how do you know you can't see the number plate? More uncertain is the highway code's requirement to indicate braking and turns when people don't know you're there; Smethurst suggested that a good lawyer could argue successfully that turning on the lights unexpectedly would dazzle someone. No, she said, the main difficulty is the dangerous driving laws. Well, that and the difficulty of getting insurance to cover the many accidents when people - pedestrians, cyclists, other cars - collide with it.

This raised the possibility of "invisibility lanes", an idea that seems like it should be the premise for a sequel to Death Race 2000. My overall conclusion: invisibility is like online anonymity. People want it for themselves, but not for other people - at least, not for other people they don't trust to behave well. If you want an invisible car so you can drive 100 miles an hour with impunity, I suggest a) you probably aren't safe to have one, and b) try driving across Kansas.

We then segued into the really important question: if you're riding an invisible bike, are *you* visible? (General consensus: yes, because you're not enclosed.)

On a more serious note, people have a tendency to laugh nervously when you mention that numerous jurisdictions are beginning to analyze sewage for traces of coronavirus. Actually, wastewater epidemiology, as this particular public health measure is known, is not a new surveillance idea born of just this pandemic, though it does not go all the way back to John Snow and the Broadwick Street pump. Instead, Snow plotted known cases on a map, and spotted the pump as the source of contagion when they formed a circle around it. Still, epidemiology did start with sewage.

In the decades since wastewater epidemiology was developed, some of its uses have definitely had an adversarial edge, such as establishing the level of abuse of various drugs and doping agents or the prevalence of particular diseases in a given area. The goal, however, is not supposed to be trapping individuals; instead it's to provide population-wide data. Because samples are processed at the treatment plant along with everyone else's, there's a reasonable case to be made that the system is privacy-preserving; even though you could analyze samples for an individual's DNA and exact microbiome, matching any particular sample to its owner seems unlikely.

However, Reuben Binns argued, that doesn't mean there are no privacy implications. Like anything segmented by postcode, the catchment areas defined for such systems are likely to vary substantially in the number of households and individuals they contain, and a lot may depend on where you put the collection points. This isn't so much an issue for the present purpose, which is providing an early-warning system for coronavirus outbreaks, but will be later, when the system is in place and people want to use it for other things. A small neighborhood with a noticeable concentration of illegal drugs - or a small section of an Olympic athletes village with traces of doping agents above a particular threshold - could easily find itself a frequent target of more invasive searches and investigations. Also, unless you have your own septic field, there is no opt-out.

Binns added this unpleasant prospect: even if this system is well-intentioned and mostly harmless, it becomes part of a larger "surveillant assemblage" whose purpose is fundamentally discriminatory: "to create distinctions and hierarchies in populations to treat them differently," as he put it. The direction we're going, eventually every part of our infrastructure will be a data source, for our own good.

This was also the point of Veale's paper: we need to stop focusing primarily on protecting privacy by regulating the use and collection of data, and start paying attention to the infrastructure. A large platform can throw away the data and still have the models and insights that data created - and the exceptional computational power to make use of it. All that infrastructure - there's your invisible car.

Illustrations: James Bond's invisible car (from Die Another Day).

*Correction: I had incorrectly identified this law as Scottish.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 10, 2020

Trading digital rights

Until this week I hadn't fully appreciated the number of ways Brexiting UK is trapped between the conflicting demands of major international powers of the size it imagines itself still to be. On the question of whether to allow Huawei to participate in building the UK's 5G network, the UK is caught between the US and China. On conditions of digital trade - especially data protection - the UK is trapped between the US and the EU, with Northern Ireland most likely to feel the effects. This was spelled out on Tuesday in a panel on digital trade and trade agreements convened by the Open Rights Group.

ORG has been tracking the US-UK trade negotiations and their effect on the UK's continued data protection adequacy under the General Data Protection Regulation. As discussed here before, the basic problem with respect to privacy is that outside the state of California, the US has only sector-specific (mainly health, credit scoring, and video rentals) privacy laws, while the EU regards privacy as a fundamental human right, and for 25 years data protection has been an essential part of implementing that right.

In 2018, when the General Data Protection Regulation came into force, it automatically became part of British law. On exiting the EU at the end of January, the UK replaced it with equivalent national legislation. Four months ago, Boris Johnson said the UK intends to develop its own policies. This is risky; according to Oliver Patel and Nathan Lea at UCL, 75% of the UK's data flows are with the EU (PDF). Deviation from GDPR will mean the UK will need the EU to issue an adequacy ruling confirming that the UK's data protection framework is compatible. The UK's data retention and surveillance policies may make obtaining that adequacy decision difficult; as Anna Fielder pointed out in Tuesday's discussion, this didn't arise before because national security measures are the prerogative of EU member states. The alternatives - standard contractual clauses and binding corporate rules - are more expensive to operate, are limited to the organizations that use them, and are being challenged in the European Court of Justice.

So the UK faces a quandary: does it remain compatible with the EU, or choose the dangerous path of deviation in order to please its new best friend, the US? The US, says Public Citizen's Burcu Kilic, wants unimpeded data flows and prohibitions on requirements for data localization and disclosure of source code and algorithms (as proposals for regulating AI might mandate).

It is easy to see these issues purely in terms of national alliances. The bigger issue for Kilic - and for others such as Transatlantic Consumer Dialogue - is the inclusion of these issues in trade agreements at all, a problem we've seen before with intellectual property provisions. Even when the negotiations aren't secret, which they generally are, international agreements are relatively inflexible instruments, changeable only via the kinds of international processes that created them. The result is to severely curtail the ability of national governments and legislatures to make changes - and the ability of civil society to participate. In the past, most notably with respect to intellectual property rights, corporate interests' habit of shopping their desired policies around from country to country until one bit and then using that leverage to push the others to "harmonize" has been called "policy laundering". This is a new and updated version, in which you bypass all that pesky, time-consuming democracy nonsense. Getting your desired policies into a trade agreement gets you two - or more - countries for the price of one.

In the discussion, Javier Ruiz called it "forum shifting" and noted that the latest example is intermediary liability, which is included in the US-Mexico-Canada agreement that replaced NAFTA. This is happening just as countries - including the US - are responding to longstanding problems of abuse on online platforms by considering how to regulate the big online platforms. In the US, the debate is whether and how to amend S230 of the Communications Decency Act, which offers a shield against intermediary liability; in the UK, it's the online harms bill and the age-appropriate design code.

Every country matters in this game. Kilic noted that the US is also in the process of negotiating a trade deal with Kenya that will include digital trade and intellectual property - small in and of itself, but potentially the model for other African deals - and for whatever deal Kenya eventually makes with the UK.

Kilic traces the current plans to the Trans-Pacific Partnership, which included the US during the Obama administration and which attracted public anger over provisions for investor-state dispute settlement. On assuming the presidency, Trump withdrew, leaving the other countries to recreate it as the Comprehensive and Progressive Agreement for Trans-Pacific Partnership, which was formally signed in March 2018. There has been some discussion of the idea that a newly independent Britain could join it, but it's complicated. What the US wanted in TPP, Kilic said, offers a clear guide to what it wants in trade agreements with the UK and everywhere else - and the more countries enter into these agreements, the harder it becomes to protect digital rights. "In trade world, trade always comes first."


Illustrations: Medieval trade routes (from The Story of Mankind, 1921).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 29, 2020

Tweeted

Anyone who's ever run an online forum has at some point grappled with a prolific poster who deliberately spreads division, takes over every thread of conversation, and aims for outraged attention. When your forum is a few hundred people, one alcohol-soaked obsessive bent on suggesting that anyone arguing with him should have their shoes filled with cement before being dropped into the nearest river is enormously disruptive, but the decision you make about whether to ban, admonish, or delete their postings matters only to you and your forum members. When you are a public company, your forum is several hundred million people, and the poster is a world leader...oy.

Some US Democrats have been calling Donald Trump's outrage this week over having two tweets labeled with a fact-check an attempt to distract us all from the terrible death toll of the pandemic under his watch. While this may be true, it's also true that the tweets Trump is so fiercely defending form part of a sustained effort to spread misinformation that effectively acts as voter suppression for the upcoming November election. In the 12 hours since I wrote this column, Trump has signed an Executive Order to "prevent online censorship", and Twitter has hidden, for "glorifying violence", Trump tweets suggesting shooting protesters in Minneapolis. It's clear this situation will escalate over the coming week. Twitter has a difficult balance to maintain: it's important not to hide the US president's thoughts from the public, but it's equally important to hold the US president to the same standards that apply to everyone else. Of course he feels unfairly picked on.

Rewind to Tuesday. Twitter applied its recently-updated rules regarding election integrity by marking two of Donald Trump's tweets. The tweets claimed that conducting the November presidential election via postal ballots would inevitably mean electoral fraud. Trump, who moved his legal residence to Florida last year, voted by mail in the last election. So did I. Twitter added a small, blue line to the bottom of each tweet: "! Get the facts about mail-in ballots". The link leads to numerous articles debunking Trump's claim. At OneZero, Will Oremus explains Twitter's decision making process. By Wednesday, Trump was threatening to "shut them down" and sign an Executive Order on Thursday.

Thursday morning, a leaked draft of the proposed executive order had been found, and Daphne Keller had color-coded it to show which bits matter. In a fact-check of what power Trump actually has for Vox, Shirin Ghaffary quotes a tweet from Laurence Tribe, who calls Trump's threat "legally illiterate". Unlike Facebook, Twitter doesn't accept political ads that Trump can threaten to withdraw, and unlike Facebook and Google, Twitter is too small for an antitrust action. Plus, Trump is addicted to it. At the Washington Post, Tribe adds that Trump himself *is* violating the First Amendment by continuing to block people who criticize his views, a direct violation of a 2019 court order.

What Trump *can* do - and what he appears to intend to do - is push the FTC and Congress to tinker with Section 230 of the Communications Decency Act (1996), which protects online platforms from liability for third-party postings spreading lies and defamation. S230 is widely credited with having helped create the giant Internet businesses we have today; without liability protection, it's generally believed that everything from web comment boards to big social media platforms will become non-viable.

On Twitter, US Senator Ron Wyden (D-OR), one of S230's authors, explains what the law does and does not do. At the New York Times, Peter Baker and Daisuke Wakabayashi argue, I think correctly, that the person a Trump move to weaken S230 will hurt most is...Trump himself. Last month, the Washington Post put the count of Trump's "false or misleading claims" while in office at 18,000 - and the rate has grown over time. Probably most of them have been published on Twitter.

As the lawyer Carrie A. Goldberg points out on Twitter, there are two very different sets of issues surrounding S230. The victims she represents cannot sue the platforms where they met serial rapists who preyed on them or that continue to tolerate the revenge porn their exes have posted. Compare that very real damage to the victimhood conservatives are claiming: that the social media platforms are biased against them and disproportionately censor their posts. Goldberg wants access to justice for the victims she represents, who are genuinely harmed, and warns against altering S230 for purposes such as "to protect the right to spread misinformation, conspiracy theory, and misinformation".

However, while Goldberg's focus on her own clients is understandable, Trump's desire to tweet unimpeded about mail-in ballots or shooting protesters is not trivial. We are going to need to separate the issue of how and whether S230 should be updated from Trump's personal behavior and his clearly escalating war with the social medium that helped raise him from joke to viable presidential candidate. The S230 question and how it's handled in Congress is important. Calling out Trump when he flouts clearly stated rules is important. Trump's attempt to wield his power for a personal grudge is important. Trump versus Twitter, which unfortunately is much easier to write about, is a sideshow.


Illustrations: Drunk parrot in a Putney garden (by Simon Bisson; used by permission).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 15, 2020

Quincunx

In the last few weeks, unlike any other period in the 965 (!) previous weeks of net.wars columns, there were *five* pieces of (relatively) good news in the (relatively) restricted domain of computers, freedom, and privacy.

One: Google sibling Sidewalk Labs has pulled out of the development it had planned with Waterfront Toronto. This project has been contentious ever since the contract was signed in 2017 to turn a 12-acre section of Toronto's waterfront into a data-driven, sensor-laden futuristic city. In 2018, leading Canadian privacy pioneer Ann Cavoukian quit the project after Sidewalk Labs admitted that instead of ensuring the data it collected wouldn't be identifiable, it would actually grant third parties access to it. At a panel on smart city governance at Computers, Privacy, and Data Protection 2019, David Murakami Wood gave the local back story (go to 43:30) on the public consultations and the hubris on display. Now, blaming the pandemic-related economic conditions, Sidewalk Labs has abandoned the plan altogether; its public opponents believe the scheme was never really viable in the first place. This is good news, because although technology can help solve some of urban centers' many problems, it should always be in the service of the public, not an opportunity for a private company to seize control.

Two: The Internet Corporation for Assigned Names and Numbers has rejected the Internet Society's proposal to sell PIR, the owner of the .org generic top-level domain, to the newly created private equity firm Ethos Capital, Timothy B. Lee reports at Ars Technica. Among its concerns, ICANN cited the $360 million in debt that PIR would have been required to take on, Ethos' lack of qualifications to run such a large gTLD, and the lack of transparency around the whole thing. The decision follows an epistolary intervention by California's Attorney General, who warned ICANN that it thought the deal "puts profit above the public interest" and that ICANN was "abandoning its core duty to protect the public interest". As the overseer of both ICANN (a non-profit) and the sale, the AG was in a position to make its opinion hurt. At the time the sale was announced, the Internet Society claimed there were other suitors. Perhaps now we'll find out who those were.

Three: The textbook publishers Cengage and McGraw-Hill have abandoned their plan to merge, saying that antitrust enforcers' requirements that they divest their overlapping businesses made the merger uneconomical. The plan had attracted pushback from students, consumer groups, libraries, universities, and bookstores, as well as lawmakers and antitrust authorities.

Four: Following a similar ruling from the UK Intellectual Property Office, the US Patent and Trademark Office has rejected two patents listing the Dabus AI system as the inventor. The patent offices argue that innovations must be attributed to humans in order to avoid the complications that would arise from recognizing corporations as inventors. There's been enough of a surge in such applications that the World Intellectual Property Organization held a public consultation on this issue that closed in February. Here again my inner biological supremacist asserts itself: I'd argue that the credit for anything an AI creates belongs with the people who built the AI. It's humans all the way down.

Five: The US Supreme Court has narrowly upheld the right to freely share the official legal code of the state of Georgia. Carl Malamud, who's been liberating it-ought-to-be-public data for decades - he was the one who first got Securities and Exchange Commission company reports online in the 1990s, and on and on - had published the Official Code of Georgia Annotated. The annotations in question, which include summaries of judicial opinions, citations, and other information about the law, are produced by LexisNexis under contract to the state of Georgia. No one claimed the law itself could be copyrighted, but the state argued it owned copyright in the annotations, with LexisNexis as its contracted commercial publisher. The state makes no other official version of its code available, meaning that someone consulting the non-annotated free version LexisNexis does make available would be unaware of later court decisions rejecting parts of some of the laws the legislature passed. So Malamud paid the hundreds of dollars to buy a full copy of the official annotated version, and published it in full on his website for free access. The state sued. Public.Resource lost in the lower courts but won on appeal - and, in a risky move, urged the Supreme Court to take the case and set the precedent. The vote went five to four. The impact will be substantial. Twenty-two other states publish their legal code under similar arrangements with LexisNexis. They will now have to rethink.

All these developments offer wins for the public in one way or another. None should be cause for complacence. Sidewalk Labs and other "surveillance city" purveyors will try again elsewhere with less well-developed privacy standards - and cities still have huge problems to solve. The future of .org, the online home for the world's non-profits and NGOs, is still uncertain. Textbook publishing is still disturbingly consolidated. The owners of AIs will go on seeking ways to own their output. And ensuring that copyright does not impede access to the law that governs those 23 American states does not make those laws any more just. But, for a brief moment, it's good.

Illustrations: Sidewalk Labs' street-crossing rendering.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


April 2, 2020

Uncontrolled digital unlending

The Internet has made many aspects of intellectual property contentious at the best of times. In this global public health emergency, it seems inarguable that some of them should be set aside. Who can seriously object to copying ventilator parts so they can be used to save lives in this crisis? Similarly, if there were ever a moment for scientific journals to open up access to all paywalled research on coronaviruses to aid scientists all over the world, this is it.

But what about book authors, the vast majority of whom make only modest sums from their writing? This week, National Public Radio set off a Twitter storm when it highlighted the Internet Archive's "National Emergency Library". On Twitter, authors demanded to know why NPR was promoting a "pirate site". One wrote, "They stole [my book]." Another called it "Flagrant and wilful stealing." Some didn't mind: "Thrilled there's 15 of my books". Longtime open access campaigner Cory Doctorow endorsed it.

The background: the Internet Archive's Open Library originally launched in 2006 with a plan to give every page of every book its own URL. Early last year, public conflict over the project built enough for net.wars to notice, when dozens of authors', creators', and publishers' organizations accused the site of mass copyright violation and demanded it cease distributing copyrighted works without permission.

The Internet Archive finds self-justification in a novel argument: that because the state of California has accepted it as a library it can buy and scan books and "lend" the digital copies without requiring explicit permission. On this basis, the Archive offers anyone two weeks to read any of the 1.4 million copyrighted books in its collection either online as images or downloaded as copy-protected Adobe Digital Editions. Meanwhile, the book is unavailable to others, who wait on a list, as in a physical library. The Archive's white paper by lawyers David Hansen and Kyle K. Courtney argues that this "controlled digital lending" is legal.
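For the mechanics, here is a minimal Python sketch of the lending model as described - one borrower per scanned copy, a 14-day loan, and a waitlist - plus an emergency mode that suspends the waitlist; the class and method names are invented for illustration and do not describe any actual Archive code.

# Minimal sketch of "controlled digital lending" as described above.
# Names are invented for illustration.

from collections import deque
from dataclasses import dataclass, field

@dataclass
class DigitizedBook:
    title: str
    copies_owned: int = 1   # physical copies bought (or donated) and scanned
    on_loan: int = 0
    waitlist: deque = field(default_factory=deque)

    def borrow(self, reader: str, controlled: bool = True) -> str:
        # Controlled mode: one digital loan circulating per owned copy.
        if not controlled:  # "National Emergency Library" mode: no waitlist
            return f"{reader} gets '{self.title}' immediately for 14 days"
        if self.on_loan < self.copies_owned:
            self.on_loan += 1
            return f"{reader} borrows '{self.title}' for 14 days"
        self.waitlist.append(reader)
        return f"{reader} joins the waitlist at position {len(self.waitlist)}"

book = DigitizedBook("An Example Title")
print(book.borrow("alice"))                    # borrows for 14 days
print(book.borrow("bob"))                      # joins waitlist, position 1
print(book.borrow("carol", controlled=False))  # emergency mode: no wait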

Enter the coronavirus. On the basis that the emergency has removed access to library books from both school kids and adults for teaching, research, scholarship, and "intellectual stimulation", the Archive is dropping the controls - "suspending waitlists" - and is presenting those 1.4 million books as the globally accessible National Emergency Library. "An opportunistic attack", the Association of American Publishers calls it.

The anger directed at the Archive has led it to revise its FAQ (Google Doc) and publish a blog posting. In both it explains that you can still only "borrow" a book for 14 days, but no waitlists means others can, too, and you can renew immediately if you want more time. The change will last until June 30, 2020 or the end of the US national emergency, whichever is later. It claims support "from across the library and educational communities". According to the FAQ, the collection includes very few current textbooks; the collection is primarily ordinary books published between 1922 and the early 2000s.

The Archive still justifies all this as "fair use" by saying it's what libraries do: buy (or accept as donations) and lend books. Outside the US, however, library lending pays authors a small but real royalty on those loans, payments the Archive ignores. For the National Writers Union, Edward Hasbrouck objects strenuously: besides not paying authors or publishers, the Archive takes no account of whether the works are still in print or available elsewhere in authorized digital editions. Authors who have updated digital editions specifically for the current crisis have no way to annotate the holdings to redirect people. Authors *can* opt out - but opt-out is the opposite of how copyright law works. "Do librarians and archivists really want to kick authors while our incomes are down?" he asks, pointing to the NWU's 2019 explanation of why CDL is a harmful divergence from traditional library lending. Instead, he suggests that public funds should be spent to purchase or license the books for public use.

Other objectors make similar points: many authors make very little in the first place; authors with new books, the result of years of work, are seeing promotional tours and paid speaking engagements collapse. Others' books are being delayed or canceled. Everyone else involved in the project is being paid - just not the people who created the works in the first place.

At the New Yorker, writer Jill Lepore again cites Courtney, who argues that in exigent circumstances libraries have "superpowers" that allow them to grant exceptional access "for research, scholarship, and study". This certainly seems a reason for libraries of scientific journal articles, like JSTOR, to open up their archives. But is the Archive's collection comparable?

Overall, it seems to me there are two separate issues. The first is the service itself - the unique legal claim, the service's poor image quality and typo-ridden uncorrected ebooks, and the refusal to engage with creators and publishers. The second - that it's an emergency stop-gap - is more defensible; no one expected the abrupt closure of libraries and schools. A digital service is ideally placed to fill the resulting gaps, and ensuring universal access to books should be part of our post-crisis efforts to rebuild with better resilience. For the first, however, the Internet Archive should engage with authors and publishers. The result could be a better service for all sides.


Illustrations: Books (Abhi Sharma, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

February 14, 2020

Pushy algorithms

One consequence of the last three and a half years of British politics, which saw everything sucked into the Bermuda Triangle of Brexit debates, is that things that appeared to have fallen off the back of the government's agenda are beginning to reemerge like so many sacked government ministers hearing of an impending cabinet reshuffle and hoping for reinstatement.

One such is age verification, which was enshrined in the Digital Economy Act (2017) and last seen being dropped to wait for the online harms bill.

A Westminster Forum seminar on protecting children online, held shortly before the UK's December 2019 general election, reflected that uncertainty. "At one stage it looked as if we were going to lead the world," Paul Herbert lamented before predicting it would be back "sooner or later".

The expectation for this legislation was set last spring, when the government released the Online Harms white paper. The idea was that a duty of care should be imposed on online platforms, effectively defined as any business-owned website that hosts "user-generated content or user interactions, for example through comments, forums, or video sharing". Clearly they meant to target everyone's current scapegoat, the big social media platforms, but "comments" is broad enough to include any ecommerce site that accepts user reviews. A second difficulty is the variety of harms they're concerned about: radicalization, suicide, self-harm, bullying. They can't all have the same solution even if, like one bereaved father, you blame "pushy algorithms".

The consultation exercise closed in July, and this week the government released its response. The main points:

- There will be plentiful safeguards to protect freedom of expression, including distinguishing between illegal content and content that's legal but harmful; the new rules will also require platforms to publish and transparently enforce their own rules, with mechanisms for redress. Child abuse and exploitation and terrorist speech will have the highest priority for removal.

- The regulator of choice will be Ofcom, the agency that already oversees broadcasting and the telecommunications industry. (Previously, enforcing age verification was going to be pushed to the British Board of Film Classification.)

- The government is still considering what liability may be imposed on senior management of businesses that fall under the scope of the law, which it believes is less than 5% of British businesses.

- Companies are expected to use tools to prevent children from accessing age-inappropriate content "and protect them from other harms" - including "age assurance and age verification technologies". The response adds, "This would achieve our objective of protecting children from online pornography, and would also fulfill the aims of the Digital Economy Act."

There are some obvious problems. The privacy aspects of the mechanisms proposed for age verification remain disturbing. The government's 5% estimate of businesses that will be affected is almost certainly a wild underestimate. (Is a Patreon page with comments the responsibility of the person or business that owns it, or of Patreon itself?) At the Guardian, Alex Hern explains the impact on businesses. The nastiest tabloid journalism is not within scope.

On Twitter, technology lawyer Neil Brown identifies four fallacies in the white paper: the "Wild West web"; that privately operated computer systems are public spaces; that those operating public spaces owe their users a duty of care; and that the offline world is safe by default. The bigger issue, as a commenter points out, is that the privately operated computer systems the UK government seeks to regulate are foreign-owned. The paper suggests enforcement could include punishing company executives personally and ordering UK ISPs to block non-compliant sites.

More interesting and much less discussed is the push for "age-appropriate design" as a method of harm reduction. This approach was proposed by Lorna Woods and Will Perrin in January 2019. At the Westminster eForum, Woods explained, "It is looking at the design of the platforms and the services, not necessarily about ensuring you've got the latest generation of AI that can identify nasty comments and take it down."

It's impossible not to sympathize with her argument that the costs of move fast and break things are imposed on the rest of society. However, when she started talking about doing risk assessments for nascent products and services I could only think she's never been close to software developers, who've known for decades that from the instant software goes out into the hands of users they will use it in ways no one ever imagined. So it's hard to see how it will work, though last year the ICO proposed a code of practice.

The online harms bill also has to be seen in the context of all the rest of the monitoring that is being directed at children in the name of keeping them - and the rest of us - safe. DefendDigital.me has done extensive work to highlight the impact of such programs as Prevent, which requires schools and libraries to monitor children's use of the Internet to watch for signs of radicalization, and the more than 20 databases that collect details of every aspect of children's educational lives. Last month, one of these - the Learning Records Service - was caught granting betting companies access to personal data about 28 million children. DefendDigital.me has called for an Educational Rights Act. This idea could be usefully expanded to include children's online rights more broadly.


Illustrations: Time magazine's 1995 "Cyberporn" cover, which marked the first children-Internet panic.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 24, 2020

The inevitability narrative

new-22portobelloroad.jpg"We could create a new blueprint," Woody Hartzog said in a rare moment of hope on Wednesday at this year's Computers, Privacy, and Data Protection in a panel on facial recognition. He went on stress the need to move outside of the model of privacy for the last two decades: get consent, roll out technology. Not necessarily in that order.

A few minutes earlier, he had said, "I think facial recognition is the most dangerous surveillance technology ever invented - so attractive to governments and industry to deploy in many ways and so ripe for abuse, and the mechanisms we have so weak to confront the harms it poses that the only way to mitigate the harms is to ban it."

This week, a leaked draft white paper revealed that the EU is considering, as one of five options, banning the use of facial recognition in public places. In general, the EU has been pouring money into AI research, largely in pursuit of economic opportunity: if the EU doesn't develop its own AI technologies, the argument goes, Europe will have to buy it from China or the United States. Who wants to be sandwiched between those two?

This level of investment is not available to most of the world's countries, as Julia Powles elsewhere pointed out with respect to AI more generally. Her country, Australia, is destined to be a "technology importer and data exporter", no matter how the three-pronged race comes out. "The promises of AI are unproven, and the risks are clear," she said. "The real reason we need to regulate is that it imposes a dramatic acceleration on the conditions of the unrestrained digital extractive economy." In other words, the companies behind AI will have even greater capacity to grind us up as dinosaur bones and use the results to manipulate us to their advantage.

At this event last year there was a general recognition that, less than a year after the passage of the general data protection regulation, it wasn't going to be an adequate approach to the growth of tracking through the physical world. This year, the conference is awash in AI to a truly extraordinary extent. Literally dozens of sessions: if it's not AI in policing, it's AI and data protection, ethics, human rights, algorithmic fairness, or AI embedded in autonomous vehicles. Hartzog's panel was one of at least half a dozen on facial recognition, which is AI plus biometrics plus CCTV and other cameras. As interesting are the omissions: in two full days I have yet to hear anything about smart speakers or Amazon Ring doorbells, both proliferating wildly in the soon-to-be non-EU UK.

These technologies are landing on us shockingly fast. This time last year, automated facial recognition wasn't even on the map. It blew up just last May, when Big Brother Watch pushed the issue into everyone's consciousness by launching a campaign to stop the police from using what is still a highly flawed technology. But we can't lean too heavily on the ridiculous - 98%! - inaccuracy of its real-world trials, because as it becomes more accurate it will become even more dangerous to anyone on the wrong list. Here, it has become clear that it's being rapidly followed by "emotional recognition", a build-out of technology pioneered 25 years ago at MIT by Rosalind Picard under the rubric "affective computing".
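
The arithmetic behind that caution is worth making concrete. The sketch below uses invented numbers - they come from no real trial - but it shows why crowd scanning misidentifies so often even with a seemingly accurate algorithm: when almost nobody in the crowd is on the watchlist, a small false-alarm rate swamps the true matches.

    # Base-rate arithmetic; all figures invented for illustration.
    crowd = 100_000           # faces scanned at an event
    on_list = 10              # people present who really are on the watchlist
    hit_rate = 0.99           # chance a listed face is flagged (generous)
    false_alarm_rate = 0.001  # chance an innocent face is flagged

    true_hits = on_list * hit_rate                       # ~10
    false_alarms = (crowd - on_list) * false_alarm_rate  # ~100
    wrong = false_alarms / (true_hits + false_alarms)
    print(f"Alerts that point at the wrong person: {wrong:.0%}")  # ~91%

Nine out of ten alerts wrong, from an algorithm that is right about any individual face 99% of the time.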

"Is it enough to ban facial recognition?" a questioner asked. "Or should we ban cameras?"

Probably everyone here is carrying at least three cameras (pause to count: two on phone, one on laptop).

Everyone here is also conscious that last week, Kashmir Hill broke the story that the previously unknown, Peter Thiel-backed company Clearview AI had scraped 3 billion facial images off social media and other sites to create a database that enables its law enforcement customers to grab a single photo and get back matches from dozens of online sites. As Hill reminds us, companies like Facebook have been able to do this since 2011, though at the time - just eight and a half years ago! - this was technology that Google (though not Facebook) thought was "too creepy" to implement.

In the 2013 paper A Theory of Creepy, Omer Tene and Jules Polonetsky cite three kinds of "creepy" that apply to new technologies or new uses: it breaks traditional social norms; it shows the disconnect between the norms of engineers and those of the rest of society; or applicable norms don't exist yet. AI often breaks all three. Automated, pervasive facial recognition certainly does.

And so it seems legitimate to ask: do we really want to live in a world where it's impossible to go anywhere without being followed? "We didn't ban dangerous drugs or cars," has been a recurrent rebuttal. No, but as various speakers reminded, we did constrain them to become much safer. (And we did ban some drugs.) We should resist, Hartzog suggested, "the inevitability narrative".

Instead, the reality is that, as Lokke Moerel put it, "We have this kind of AI because this is the technology and expertise we have."

One panel pointed us at the AI universal guidelines, and encouraged us to sign. We need that - and so much more.


Illustrations: Orwell's house at 22 Portobello Road, London, complete with CCTV camera.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 29, 2019

Open season

A_Large_Bird_Attacking_a_Stag_LACMA_65.37.315.jpgWith no ado, here's the money quote:

The [US Trade Representative] team is keen to move into the formal phase of negotiations. Ahead of the publication of UK negotiating objectives, there [is] now little that we will be able to achieve in further pre-negotiation engagement. USTR officials noted continued pressure from their political leadership to pursue an FTA [free trade agreement] and a desire to be fully prepared for the launch of negotiations after the end of October. They envisage a high cadence negotiation - with rounds every 6 weeks - but it was interesting that my opposite number thought that there would remain a political and resource commitment to a UK negotiation even if it were thought that the chances of completing negotiations in a Trump first term were low. He felt that being able to point to advanced negotiations with the UK was viewed as having political advantages for the President going in to the 2020 elections. USTR were also clear that the UK-EU situation would be determinative: there would be all to play for in a No Deal situation but UK commitment to the Customs Union and Single Market would make a UK-U.S. FTA a non-starter.

This quote appears on page two of one of the six leaked reports that UK Labour leader Jeremy Corbyn flourished at a press conference this week. The reports summarize the US-UK Trade and Investment Working Group's efforts to negotiate a free trade agreement between the US and post-Brexit Britain (if and when). The quote dates to mid-July 2019; to recap, Boris Johnson became prime minister on July 24 swearing the UK would exit the EU on October 31.

Three key points jump out:

- Donald Trump thinks a deal with Britain will help him win re-election next year. This is not a selling point to most people in Britain.

- The US negotiators condition the agreement on a no-deal Brexit - the most damaging option for the UK and European economies. Despite the last Parliament's efforts, this could still happen because two cliff edges still loom: the revised January 31 exit date, and December 2020, when the transition period is due to end (and which Johnson swears he won't extend). Whose interests is Johnson prioritizing here?

- Wednesday's YouGov model poll predicts that Johnson will win a "comfortable" majority, suggesting that the cliff edge remains a serious threat.

At Open Democracy, Nick Dearden sums up the worst damage. Among other things, it shows the revival of some of the most-disliked provisions in the abandoned Transatlantic Trade Investment Partnership treaty, most notably investor-state dispute resolution (ISDS), which grants corporations the right to sue governments that pass laws they oppose in secret tribunals. As Dearden writes, these documents make clear that "taking back control" means "giving the US control". The Trade Justice Movement's predictions from earlier this year seem accurate enough.

On Twitter, UKTrade Forum co-founder David Henig has posted a thread explaining why adopting a US-first trade policy will be disastrous for British farmers and manufacturers.

Global Justice's analysis highlights both the power imbalance, and the US's demands for free rein. It's also clear that Johnson can say the NHS is not on the table, Trump can say the opposite, and both can be telling some value of truth, because the focus is on pharmaceutical pricing and patent extension. An unscrupulous government filled with short-term profiteers might figure that they'll be gone by the time the costs become clear.

For net.wars, this is all background and outside our area of expertise. The picture is equally alarming for digital rights. In 1999, Simon Davies predicted that data protection would become a trade war between the US and EU. Even a partial reading of these documents suggests that now, 20 years on, may be the moment. Data protection is a hinge, in that you might, at some expense, manage varying food standards for different trading regions, but data regimes want to be unitary. The UK can either align with the EU and GDPR, which enshrines privacy and data protection as human rights, or with the US and its technology giants. This goes double if Max Schrems, whose legal action brought down the Safe Harbor agreement, wins his NOYB case against Privacy Shield. Choose the EU and GDPR, and the US likely walks, as the February 2019 summary of negotiation objectives (PDF) makes plain. That document is also clear that the US wants to bar the UK from mandating local data storage, restricting cross-border data flows, imposing customs duties on digital products, requiring the disclosure of computer code or algorithms, and holding online platforms liable for third-party content. Many of these are opposite to the EU's general direction of travel.

The other hinge issue is the absolute US ban on mentioning climate change. The EU just declared a climate emergency and set out an action list.

The UK cannot hope to play both sides. It's hard to overstress how much worse a position these negotiations seem to offer the UK, which *is* a full partner within the EU but will always be viewed by the US as a lesser entity.

Illustrations: A large bird attacking a stag (Hendrik Hondius, 1610; from LA County Museum of Art, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 21, 2019

The choices of others

vlcsnap-2019-11-21-21h32m40s545.pngFor the last 30 years, I've lived in the same apartment on a small London street. So small, in fact, that even though London now has so many CCTV cameras - an estimated 627,707 - that the average citizen is captured on camera 300 times a day, it remains free of these devices. Camera surveillance and automated facial recognition are things that happen when I go out to other places.

Until now.

It no longer requires state-level resources to put a camera in place to watch your front door. This is a function that has been wholly democratized. And so it is that my downstairs neighbors, whose front door is side by side with mine, have inserted surveillance into the alleyway we share via an Amazon Ring doorbell.

Now, I understand there are far worse things, both as neighbors go and as intrusions go. My neighbors are mostly quiet. We take in each other's packages. They would never dream of blocking up the alleyway with stray furniture. And yet it never occurred to them that a 180-degree camera watching their door is, given the workings of physics and geography, also inevitably watching mine. And it never occurred to them to ask me whether I minded.

I do mind.

I have nothing to hide, and I mind.

Privacy advocates have talked and written for years about the many ways that our own privacy is limited by the choices of others. I use Facebook very little - but less-restrained friends nonetheless tag me in photographs, and in posts about shared activities. My sister's decision to submit a DNA sample to a consumer DNA testing service in order to get one of those unreliable analyses of our ancestry inevitably means that if I ever want to do the same thing the system will find the similarity and identify us as relatives, even though it may think she's my aunt.

We have yet to develop social norms around these choices. Worse, most people don't even see there's a problem. My neighbor is happy and enthusiastic about the convenience of being able to remotely negotiate with package-bearing couriers and be alerted to possible thieves. "My office has one," he said, explaining that they got it after being burgled several times to help monitor the premises.

We live down an alleyway so out of the way that both we and couriers routinely leave packages on our doorsteps all day.

I do not want to fight with my neighbor. We live in a house with just two flats, one up, one down, on a street with just 20 households. There is no possible benefit to be had from being on bad terms. And yet.

I sent him an email: would he mind walking me through the camera's app so I can see what it sees? In response, he sent a short video; the image above, taken from it, shows clearly that the camera sees all the way down the alleyway in both directions.

So I have questions: what does Amazon say about what data it keeps and for how long? If the camera and microphone are triggered by random noises and movements, how can I tell whether they're on and if they're recording?

Obviously, I can read the terms and conditions for myself, but I find them spectacularly unclear. Plus, I didn't buy this device or agree to any of this. The document does make mention of being intended for monitoring a single-family residence, but I don't think this means Amazon is concerned that people will surveil their neighbors; I think it means they want to make sure they sell a separate doorbell to every home.

Examination of the video and the product description reveals that camera, microphone, and recording are triggered by movement next to his - and therefore also next to my - door. So it seems likely that anyone with access to his account can monitor every time I come or go, and all my visitors. Will my privacy advocate friends ever visit me again? How do my neighbors not see why I think this is creepy?

Even more disturbing is the cozy relationship Amazon has been developing with police, especially in the US, where the company has promoted the doorbells by donating units for neighborhood watch purposes, effectively allowing police to build private surveillance networks with no public oversight. The Sun reports similar moves by UK police forces.

I don't like the idea of the police being able to demand copies of recordings of innocent people - couriers, friends, repairfolk - walking down our alleyway. I don't want surveillance-by-default. But as far as I can tell, this is precisely what this doorbell is delivering.

A lawyer friend corrects my impression that GDPR does not apply. The Information Commissioner's Office is clear that cameras should not be pointed at other people's property or shared spaces, and under GDPR my neighbor is now a data controller. My friends can make subject access requests. Even so: do I want to pick a fight with people who can make my life unpleasant? All over the country, millions of people are up against the reality that no matter how carefully they think through their privacy choices, they are exposed by the insouciance of other people and robbed of agency not by police or government action but by their intimate connections - their neighbors, friends, and family.

Yes, I mind. And unless my neighbor chooses to care, there's nothing I can practically do about it.

Illustrations: Ring camera shot of alleyway.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 27, 2019

Balancing acts

800px-Netherlands-4589_-_Lady_of_Justice_&_William_of_Orange_Coat-o-Arms_(12171086413).jpgThe Court of Justice of the European Union had an important moment on Tuesday, albeit overshadowed by another court elsewhere, ruling that the right to be forgotten can be limited to the EU. To recap: in 2014, in its ruling in Google Spain v. AEPD and Mario Costeja González ("Costeja"), the CJEU required Google to delist results returned by searches on a person's name under certain circumstances. Costeja had complained that the fact that a newspaper record of the foreclosure on his house in 1998 was the first thing people saw when they searched for him gave them a false impression. In an effort to balance freedom of expression and privacy, the court's ruling left the original newspaper announcement intact, but ordered Google to remove the link from its index of search results. Since then, Google says it has received 845,501 similar requests representing 3.3 million links, of which it has dereferenced 45%.

Well, now. Left unsettled was the question of territorial jurisdiction: one would think that a European court doesn't have the geographical reach to require Google to remove listings worldwide - but if Google doesn't, then the ability to switch to a differently-located version of the search engine trivially defeats the ruling. What is a search engine to do?

This is a dispute we've seen before, beginning in 2000, when, in a case brought by the Ligue contre le racisme et l'antisémitisme et Union des étudiants juifs de France (LICRA), a French tribunal ordered Yahoo to block sales of Nazi memorabilia on its auction site. Yahoo argued that it was a US company, therefore the sales were happening in the US, and don't-break-the-Internet; the French court claimed jurisdiction anyway. Yahoo appealed *in the US*, where the case was dismissed for lack of jurisdiction. Eventually, Yahoo stopped selling the memorabilia everywhere, and the fuss died down.

Costeja offered the same conundrum with a greater degree of difficulty; the decision has been subsumed into GDPR as Article 17, "right to erasure". Google began delisting Costeja's unwanted result, along with those many others, from EU versions of its search engine but left them accessible in the non-EU domains. The French data protection regulator, CNIL, however, felt this didn't go far enough and in May 2015 it ordered Google to expand dereferencing to all its servers worldwide. Google's version of compliance was to deny access to the listings to anyone coming from the country where the I-want-to-be-forgotten complaint originated. In March 2016 CNIL fined Google €100,000 (pocket change!), saying that the availability of content should not depend on the geographic location of the person seeking to view it. In response to Google's appeal, the French court referred several questions to CJEU, leading to this week's ruling.
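
The mechanics in dispute are simpler than the law. A hypothetical toy - not Google's actual code; the table and names are invented - shows the shape of geographically scoped delisting. The argument is over how far the lookup table should reach, not over the difficulty of the filter.

    # Hypothetical sketch of geo-scoped delisting.
    DELISTED = {
        # URL -> countries where this result has been dereferenced
        "https://example.com/1998-foreclosure-notice": {"ES", "FR", "DE"},
    }

    def filter_results(results, requester_country):
        """Drop links delisted for the searcher's apparent country."""
        return [u for u in results
                if requester_country not in DELISTED.get(u, set())]

    hits = ["https://example.com/1998-foreclosure-notice",
            "https://example.com/unrelated-page"]
    print(filter_results(hits, "ES"))  # foreclosure link hidden
    print(filter_results(hits, "US"))  # both links visible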

The headlines announcing this judgment - for example, the Guardian's - give the impression that the judgment is more comprehensive than it is. Yes, the court ruled that search engines are not required to delist results worldwide in right to be forgotten cases, citing the need to balance the right to be forgotten against other fundamental rights such as freedom of expression. But it also ruled that search engines are not prohibited from doing so. The judgment suggests that they should take into account the details of the particular case and the complainant, as well as the need to balance data protection and privacy rights against the public interest.

The remaining ambiguity means we should expect there will be another case along any minute. Few are going to be much happier than they were in 2013, when putting the right to be forgotten into law was proposed, or in 2014, when Costeja was decided, or shortly afterwards, when Google first reported on its delisting efforts. Freedom of speech advocates and journalists are still worried that the system is an invitation to censorship, as it has proved to be in at least one case; the French regulator, and maybe some other privacy advocates and data protection authorities, is still unhappy; and we still have a situation where a private company is being asked to make even more nuanced decisions on our behalf. The reality, however, is that given the law there is no solution, only compromise.

This is a good moment for a couple of other follow-ups:

- Mozilla has announced it will not turn on DNS-over-HTTPS by default in Firefox in the UK. This is in response to the complaints noted in May that DoH will break workarounds used in the UK to block child abuse images.

- Uber and Transport for London aren't getting along any better than they were in 2017, when TfL declined to renew its license to operate. Uber made a few concessions, and on appeal it was granted a 15-month extension. With that on the verge of running out, TfL has given the company two months to produce additional information before it makes a final decision. As Hubert Horan continues to point out, the company's aggressive regulation-breaking approach is a strategy, not the work of a rogue CEO, and its long-term prospects remain those of a company with "terrible underlying economics".


Illustrations: Justitia outside the Delft Town Hall, the Netherlands (via Dennis Jarvis at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 13, 2019

Purposeful dystopianism

Truman-Show-exist.pngA university comparative literature class on utopian fiction taught me this: all utopias are dystopias underneath. I was reminded of this at this week's Gikii, when someone noted the converse, that all dystopias contain within themselves the flaw that leads to their destruction. Of course, I also immediately thought of the bare patch on Smaug's chest in The Hobbit because at Gikii your law and technology come entangled with pop culture. (Write-ups of past years: 2018; 2016; 2014; 2013; 2008.)

Granted, as was pointed out to me, fictional utopias would have no dramatic conflict without dystopian underpinnings, just as dystopias would have none without their misfits plotting to overcome. But the context for this subdiscussion was the talk by Andres Guadamuz, which he began by locating "peak Cyber-utopianism" at 2006 to 2010, when Time magazine celebrated the power the Internet had brought each of us, Wikileaks was doing journalism, bitcoin was new, and social media appeared to have created the Arab Spring. "It looked like we could do anything." (Ah, youth.)

Since then, serially, every item on his list has disappointed. One startling statistic Guadamuz cited: streaming now creates more carbon emissions than airplanes. Streaming online video generates as much carbon dioxide per year as Belgium; bitcoin uses as much energy as Austria. By 2030, the Internet is projected to account for 20% of all energy consumption. Cue another memory, from 1995, when MIT Media Lab founder Nicholas Negroponte was feted for predicting in Being Digital that wired and wireless would switch places: broadcasting would move to the Internet's series of tubes, and historically wired connections such as the telephone network would become mobile and wireless. Meanwhile, all physical forms of information would become bits. No one then queried the sense of doing this. This week, the lab Negroponte was running then is in trouble, too. This has deep repercussions beyond any one institution.

Twenty-five years ago, in Tainted Truth, journalist Cynthia Crossen documented the extent to which funders get the research results they want. Successive generations of research have backed this up. What the Media Lab story tells us is that they also get the research they want - not just, as in the cases of Big Oil and Big Tobacco, the *specific* conclusions they want promoted but the research ecosystem. We have often told the story of how the Internet's origins as a cooperative have been coopted into a highly centralized system with central points of failure, a process Guadamuz this week called "cybercolonialism". Yet in focusing on the drivers of the commercial world we have paid insufficient attention to those driving the academic underpinnings that have defined today's technological world.

To be fair, fretting over centralization was the most mundane topic this week: presentations skittered through cultural appropriation via intellectual property law (Michael Dunford, on Disney's use of Māui), a case study of moderation in a Facebook group that crosses RuPaul and Twin Peaks fandoms (Carolina Are), and a taxonomy of lying and deception intended to help decode deepfakes of all types (Andrea Matwyshyn and Miranda Mowbray).

Especially, it is hard for a non-lawyer to do justice to the discussions of how and whether data protection rights persist after death, led by Edina Harbinja, Lilian Edwards, Michael Veale, and Jef Ausloos. You can't libel the dead, they explained, because under common law, personal actions die with the person: your obligation not to lie about someone dies when they do. This conflicts with information rights that persist as your digital ghost: privacy versus property, a reinvention of "body" and "soul". The Internet is *so many* dystopias.

Centralization captured so much of my attention because it is ongoing and threatening. One example is the impending rollout of DNS-over-HTTPS. We need better security for the Internet's infrastructure, but DoH further concentrates centralized control. In his presentation Derek MacAuley noted that individuals who need the kind of protection DoH is claimed to provide would do better to just use Tor. It, too, is not perfect, but it's here and it works. This adds one more to the many historical examples in which improving technology we already had that worked would have spared us the level of control now exercised by the largest technology companies.
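
For the curious, this is roughly what a DoH lookup looks like - a minimal Python sketch using Cloudflare's public JSON resolver API (the endpoint and header are real; everything else is illustration). The thing to notice is that the query travels as an ordinary HTTPS request to one very large provider, which is precisely the concentration being worried about.

    import requests  # third-party HTTP library

    resp = requests.get(
        "https://cloudflare-dns.com/dns-query",      # one big resolver
        params={"name": "example.com", "type": "A"},
        headers={"accept": "application/dns-json"},
        timeout=10,
    )
    resp.raise_for_status()
    for answer in resp.json().get("Answer", []):
        print(answer["name"], answer["data"])  # the resolved records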

Centralization completely undermines the Internet's original purpose: to withstand a bomb outage. Mozilla and Google surely know this. The third DoH partner, Cloudflare, the content delivery network in the middle, certainly does: when it goes down, as it did for 15 minutes in July, millions of websites become unreachable. The only sensible response is to increase resilience with multiple pathways. Instead, we have Facebook proposing to further entrench its central role in many people's lives with its nascent Libra cryptocurrency. "Well, *I*'m not going to use it" isn't an adequate response when in some countries Facebook effectively *is* the Internet.

So where are the flaws in our present Internet dystopias? We've suggested before that advertising saturation may be one; the fakery that runs all the way through the advertising stack is probably another. Government takeovers and pervasive surveillance provide motivation to rebuild alternative pathways. The built-in lack of security is, as ever, a growing threat. But the biggest flaw built into the centralized Internet may be this: boredom.


Illustrations: The Truman Show.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 6, 2019

Traffic stop

rotated-dead-end.jpgIn a week when Brexit has been at peak chaos generation, it's astonishing how little attention has been paid to what would happen to data flows if the UK exits the EU on October 31 with no agreement in place. At a stroke, the UK would become a "third country" in data protection parlance. Granted, at the instant of withdrawal, under the Withdrawal Act (2018), all EU law is immediately incorporated into UK law - which in turn means that the General Data Protection Regulation, which came into force in 2018, is recreated as a UK law. But as far as I can tell, there still has to be a decision that the UK's data protection regime qualifies under EU law as adequate for data flows to continue unimpeded from the EU27 into the UK.

Which means that at the very least a no-deal Brexit will deliver a lengthy delay while the European Commission makes that decision. Most of the other things people are worrying about since the leaked "Yellowhammer" documents outlining the government's expectations in case of a no-deal exit alerted the country to the likely disruption - food, medicines, Customs and immigration clearance - have widespread impact but are comparatively confined to one or a few sectors. Data is *everything*. Food and medicine supply chains, agriculture, national security, immigration, airline systems...there is hardly an aspect of this country's life that won't be disrupted if data flows can't continue. As DP Network explains it, the process of assessing the adequacy of the UK's data protection regime can't even start until the UK has left - and can take months or even years. During that time, the UK can send data to the EU perfectly well - but transfers the other way will require a different legal framework. The most likely is Standard Contractual Clauses - model clauses that are already approved that can be embedded in contracts with suppliers and partners. I haven't seen any assessment of what kind of progress companies have made in putting these in place.

But this, too, is not assured. These clauses form part of the second case brought to the Court of Justice of the European Union by Max Schrems, the Austrian lawyer whose court action brought down Safe Harbor in 2015. Schrems 2.0 calls into question the legal validity of those SCCs as part of his challenge to Privacy Shield, the EU/US agreement that replaced Safe Harbor in 2016. Schrems himself believes that SCCs can meet the adequacy standard if they are properly enforced, and that they can be used to stop specific illegal transfers. For larger companies with lawyers on call, SCCs may be a reasonable option. It's harder to see how smaller companies will cope. The Information Commissioner's Office has advice. Its guidance on international transfers refers businesses to the European Data Protection Board's note on the subject (PDF), which outlines the options.

That's if there's a no-deal crash-out. The Withdrawal Agreement, which Theresa May tried three times to get through Parliament and saw voted down three times, has provisions preserving the status quo - unimpeded data flows - until at least 2020 as part of the transition period. This is the agreement that Boris Johnson is grandstanding about, insisting that the EU must and will make changes and that negotiations are ongoing - which the EU denies. I believe the EU, if only because for the last three years it has consistently done what it said it would do, whereas Boris Johnson...

While the UK of course participated in the massive legislative exercise that led to GDPR, it's worth remembering that a number of the business-oriented ministers of the day were not fans of some of its provisions and wanted it watered down. No matter how Brexit comes out, however, the UK will not get to do this: GDPR, like Richard Stallman's GNU General Public License, carries with it, like a stowaway, the pay-it-forward requirement that future use of the same material must be subject to its rules. The UK can choose: it can be a "vassal state" and "surrender" to ongoing EU enhancements to data protection - OR it can cut itself off entirely from the modern international business world.

It's not clear if any of the data issues have filtered through into the public consciousness, perhaps because stopped data flows, as SA Mathieson writes at The Register, don't sound like much compared to the specter of bare supermarket shelves. Mathieson goes into some detail about the fun businesses are going to have: EU-based travel agencies that can't transfer tourists' data to the hotels they've booked, internal transfers within companies with offices spread across several countries, financial services... If "data is the new oil", then we're talking about banning all the tankers. No wonder the EU is reportedly regarding no-deal Brexit as the equivalent of a natural disaster, and accordingly setting aside funds to mitigate the damage.


Illustrations: Dead-end sign.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 2, 2019

Unfortunately recurring phenomena

JI-sunrise--2-20190107_071706.jpgIt's summer, and the current comprehensively bad news is all stuff we can do nothing about. So we're sweating the smaller stuff.

It's hard to know how seriously to take it, but US Senator Josh Hawley (R-MO) has introduced the Social Media Addiction Reduction Technology (SMART) Act, intended as a disruptor to the addictive aspects of social media design. *Deceptive* design - which figured in last week's widely criticized $5 billion FTC settlement with Facebook - is definitely wrong, and the dark patterns site has long provided a helpful guide to those practices. But the bill is too feature-specific (ban infinite scroll and autoplay) and fails to recognize that one size of addiction disruption cannot possibly fit all. Spending more than 30 minutes at a stretch reading Twitter may be a dangerous pastime for some but a business necessity for journalists, PR people - and Congressional aides.

A better approach might be to require sites to replay the first video someone chooses at regular intervals until they get sick of it and turn off the feed. This is about how I feel about the latest regular reiteration of the demand for back doors in encrypted messaging. The fact that every new home secretary - in this case, Priti Patel - calls for this suggests there's an ancient infestation in their office walls that needs to be found and doused with mathematics. Don't Patel and the rest of the Five Eyes realize the security services already have bulk device hacking?

Ever since Microsoft announced it was acquiring the software repository GitHub, it should have been obvious the community would soon be forced to change. And here it is: Microsoft is blocking developers in countries subject to US trade sanctions. The formerly seamless site supporting global collaboration and open source software is being fractured at the expense of individual PhD students, open source developers, and others who trusted it, and everyone who relies on the software they produce.

It's probably wrong to solely blame Microsoft; save some for the present US administration. Still, throughout Internet history the communities bought by corporate owners wind up destroyed: CompuServe, Geocities, Television without Pity, and endless others. More recently, Verizon, which bought Yahoo and AOL for its Oath subsidiary (now Verizon Media), de-porned Tumblr. People! Whenever the online community you call home gets sold to a large company it is time *right then* to begin building your own replacement. Large companies do not care about the community you built, and this is never gonna change.

Also never gonna change: software is forever, as I wrote in 2014, when Microsoft turned off life support for Windows XP. The future is living with old software installations that can't, or won't, be replaced. The truth of this resurfaced recently, when a survey by Spiceworks (PDF) found that a third of all businesses' networks include at least one computer running XP and 79% of all businesses are still running Windows 7, which dies in January. In the 1990s the installed base updated regularly because hardware was upgraded so rapidly. Now, a computer's lifespan exceeds the length of a software generation, and the accretion of applications and customization makes updating hazardous. If Microsoft refuses to support its old software, at least open it to third parties. Now, there would be a law we could use.

The last few years have seen repeated news about the many ways that machine learning and AI discriminate against those with non-white skin, typically because of the biased datasets they rely on. The latest such story is startling: Wearables are less reliable in detecting the heart rate of people with darker skin. This is a "huh?" until you read that the devices use colored light and optical sensors to measure the volume of your blood in the vessels at your wrist. Hospital-grade monitors use infrared. Cheaper devices use green light, which melanin tends to absorb. I know it's not easy for people to keep up with everything, but the research on this dates to 1985. Can we stop doing the default white thing now?
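
The principle is simple enough to sketch, even though no real device works from a dozen lines of Python. In the toy below (synthetic signal, invented parameters), heart rate is just the number of peaks the sensor sees per minute in the reflected-light waveform; when less light comes back - as with green light on darker skin - the pulse amplitude shrinks relative to the noise and peaks start being missed.

    import numpy as np
    from scipy.signal import find_peaks

    fs = 50                       # sample rate in Hz (assumed)
    t = np.arange(0, 30, 1 / fs)  # 30 seconds of sensor readings
    amplitude = 1.0               # pulse strength; lower when less light returns
    rng = np.random.default_rng(0)
    ppg = amplitude * np.sin(2 * np.pi * (72 / 60) * t)  # 72 bpm pulse wave
    ppg += 0.3 * rng.normal(size=t.size)                 # sensor noise

    # Count peaks at least 0.4 s apart and clearly above the noise floor.
    peaks, _ = find_peaks(ppg, distance=int(0.4 * fs), height=amplitude / 2)
    print(f"Estimated heart rate: {len(peaks) / 0.5:.0f} bpm")  # 30 s = 0.5 min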

Meanwhile, at the Barbican exhibit AI: More than Human... In a video, a small, medium-brown poodle turns his head toward the camera with a - you should excuse the anthropomorphism - distinct expression of "What the hell is this?" Then he turns back to the immediate provocation and tries again. This time, the Sony Aibo he's trying to interact with wags its tail, and the dog jumps back. The dog clearly knows the Aibo is not a real dog: it has no dog smell, and although it attempts a play bow and moves its head in vaguely canine fashion, it makes no attempt to smell his butt. The researcher begins gently stroking the Aibo's back. The dog jumps in the way. Even without a thought bubble you can see the injustice forming, "Hey! Real dog here! Pet *me*!"

In these two short minutes the dog perfectly models the human reaction to AI development: 1) what is that?; 2) will it play with me?; 3) this thing doesn't behave right; 4) it's taking my job!

Later, I see the Aibo slumped, apparently catatonic. Soon, a staffer strides through the crowd clutching a woke replacement.

If the dog could talk, it would be saying "#Fail".


Illustrations: Sunrise from the 30th floor.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 26, 2019

Hypothetical risks

Great Hack - data connections.png"The problem isn't privacy," the cryptography pioneer Whitfield Diffie said recently. "It's corporate malfeasance."

This is obviously right. Viewed that way, when data profiteers claim that "privacy is no longer a social norm", as Facebook CEO Mark Zuckerberg did in 2010, the correct response is not to argue about privacy settings or plead with users to think again, but to find out if they've broken the law.

Diffie was not, but could have been, talking specifically about Facebook, which has blown up the news this week. The first case grabbed most of the headlines: the US Federal Trade Commission fined the company $5 billion. As critics complained, the fine was insignificant to a company whose Q2 2019 revenues were $16.9 billion and whose quarterly profits are approximately equal to the fine. Medium-term, such fines have done little to dent Facebook's share prices. Longer-term, as the cases continue to mount up...we'll see. Also this week, the US Department of Justice launched an antitrust investigation into Apple, Amazon, Alphabet (Google), and Facebook.

The FTC fine and ongoing restrictions have been a long time coming; EPIC executive director Marc Rotenberg has been arguing ever since the Cambridge Analytica scandal broke that Facebook had violated the terms of its 2011 settlement with the FTC.

If you needed background, this was also the week when Netflix released the documentary The Great Hack, in which directors Karim Amer and Jehane Noujaim investigate the role Cambridge Analytica and Facebook played in the 2016 EU referendum and US presidential election votes. The documentary focuses primarily on three people: David Carroll, who mounted a legal action against Facebook to obtain his data; Brittany Kaiser, a director of Cambridge Analytica who testified against the company; and Carole Cadwalladr, who broke the story. In his review at the Guardian, Peter Bradwell notes that Carroll's experience shows it's harder to get your "voter profile" out of Facebook than from the Stasi, as per Timothy Garton Ash. (Also worth viewing: the 2006 movie The Lives of Others.)

Cadwalladr asks, in her own piece about The Great Hack and in her 2019 TED talk, whether we can ever have free and fair elections again. It's a difficult question to answer because although it's clear from all these reports that the winning side of both the US and UK 2016 votes used Facebook and Cambridge Analytica's services, unless we can rerun these elections in a stack of alternative universes we can never pinpoint how much difference those services made. In a clip taken from the 2018 hearings on fake news, Damian Collins (Conservative, Folkestone and Hythe), the chair of the Digital, Culture, Media, and Sport Committee, asks Chris Wylie, a whistleblower who worked for Cambridge Analytica, that same question (The Great Hack, 00:25:51). Wylie's response: "When you're caught doping in the Olympics, there's not a debate about how much illegal drug you took or, well, he probably would have come in first, or, well, he only took half the amount, or - doesn't matter. If you're caught cheating, you lose your medal. Right? Because if we allow cheating in our democratic process, what about next time? What about the time after that? Right? You shouldn't win by cheating."

Later in the film (1:08:00), Kaiser, testifying to DCMS, sums up the problem this way: "The sole worth of Google and Facebook is the fact that they own and possess and hold and use the personal data from people all around the world." In this statement, she unknowingly confirms the prediction made by the veteran Australian privacy advocate Roger Clarke, who commented in a 2009 interview about his 2004 paper, Very Black "Little Black Books", warning about social networks and privacy: "The only logical business model is the value of consumers' data."

What he got wrong, he says now, was that he failed to appreciate the importance of micro-pricing, highlighted in 1999 by the economist Hal Varian. In his 2017 paper on the digital surveillance economy, Clarke explains the connection: large data profiles enable marketers to gauge the precise point at which buyers begin to resist and pitch their pricing just below it. With goods and services, this approach allows sellers to extract greater overall revenue from the market than pre-set pricing would; with politics, you're talking about a shift from public sector transparency to private sector black-box manipulation. Or, as someone puts it in The Great Hack, a "full-service propaganda machine". Load, aim at "persuadables", and set running.
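
A toy example makes Clarke's point concrete (the numbers are mine, not his). With a single posted price, the seller leaves money with every buyer who would have paid more; with per-profile pricing, almost nothing is left.

    # Invented numbers: the maximum prices a profiler has inferred for three buyers.
    willingness_to_pay = [10.00, 20.00, 30.00]

    def revenue_at(price):
        """Revenue if everyone faces the same posted price."""
        return price * sum(1 for wtp in willingness_to_pay if wtp >= price)

    best_posted = max(revenue_at(p) for p in willingness_to_pay)
    # Micro-pricing: charge each buyer just below their resistance point.
    micro = sum(wtp - 0.01 for wtp in willingness_to_pay)

    print(f"Best single posted price yields {best_posted:.2f}")  # 40.00
    print(f"Per-buyer pricing yields {micro:.2f}")               # 59.97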

Less noticed than either of these is the Securities and Exchange Commission settlement with Facebook, also announced this week. While the fine is relatively modest - a mere $100 million - the SEC has nailed the company's conflicting statements. On Twitter, Jason Kint has helpfully highlighted the SEC's statements laying out the case that Facebook knew in 2016 that it had sold Cambridge Analytica some of the data underlying the 30 million personality profiles CA had compiled - and then "misled" both the US Congress and its own investors. Besides the fine, the SEC has permanently enjoined Facebook from further violations of the laws it broke in continuing to refer to actual risks as "hypothetical". The mills of trust have been grinding exceeding slow; they may yet grind exceeding small.


Illustrations: Data connections in The Great Hack.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 12, 2019

Public access

WestWing-Bartlet-campaign-phone.pngIn the fantasy TV show The West Wing, when fictional US president Jed Bartlet wants to make campaign phone calls, he departs the Oval Office for the "residence", a few feet away, to avoid confusing his official and political roles. In reality, even before the show began in 1999, the Internet was altering the boundaries between public and private; the show's end in 2006 coincided with the founding of Twitter, which is arguably completing the job.

The delineation of public and private is at the heart of a case filed in 2017 by seven Twitter users backed by the Knight First Amendment Institute against US president Donald Trump. Their contention: Trump violated the First Amendment by blocking them for responding to his tweets with criticism. That Trump is easily offended is not news. But, their lawyers argued, because Trump uses his Twitter account in his official capacity as well as for personal and campaign purposes, barring their access to his feed means effectively barring his critics from participating in policy. I liked their case. More important, lawyers liked their case; the plaintiffs cited many instances where Trump or members of his administration had characterized his tweets as official policy.

In May 2018, Trump lost in the Southern District of New York. This week, the US Court of Appeals for the Second Circuit unanimously upheld the lower court. Trump is perfectly free to block people from a personal account where he posts his golf scores as a private individual, but not from an account he uses for public policy announcements, however improvised and off-the-cuff they may be.

At The Volokh Conspiracy, Stuart Benjamin finds an unexplored tension between the government's ability to designate a space as a public forum and the fact that a privately-owned company sets the forum's rules. Here, as Lawrence Lessig showed in 1999, system design is everything. The government's lawyers contended that Twitter's lack of tools for account-holders leaves Trump with the sole option of blocking them. Benjamin's answer is: Trump didn't have to choose Twitter for his forum. True, but what other site would so reward his particular combination of impulsiveness and desperate need for self-promotion? A moderated blog, as Benjamin suggests, would surely have all the life sucked out of it by being ghost-written.

Trump's habit of posting comments that would get almost anyone else suspended or banned has been frequently documented - see for example Cory Scarola at Inverse in November 2016. In 2017, Jack Moore at GQ begged Twitter to delete his account to keep us all safer after a series of tweets in which Trump appeared to threaten North Korea with nuclear war. The site's policy team defended its decision not to delete the tweets on the grounds of "public interest". At the New York Times, Kara Swisher (heralding the piece on Twitter with the neat twist on Sartre, Hell is other tweeters) believes that the ruling will make a full-on Trump ban less likely.

Others have wondered whether the case gives Americans whom Twitter has banned for racism and hate speech the right to demand readmission by claiming that they are being denied their First Amendment rights. Trump was already known to be trying to prove that social media sites are systemically biased towards banning far-right voices; those are the people he invited to the White House this week for a summit on social media.

It seems to me, however, that the judges in this case have correctly understood the difference between being banned from a public forum because of your own behavior and being banned because the government doesn't like your kind. The first can and does happen in every public space anywhere; as a privately-owned space, Twitter is free to make such decisions. But when the government decides to ban its critics, that is censorship, and the First Amendment is very clear about it. It's logical enough, therefore, to feel that the court was right.

Female politicians, however, probably already see the downside. Recently, Amnesty International highlighted the quantity and ferocity of abuse they get. No surprise that within a day the case was being cited by a Twitter user suing Alexandria Ocasio-Cortez for blocking him. How this case resolves will be important; we can't make soaking up abuse the price of political office, while the social media platforms are notoriously unresponsive to such complaints.

No one needs an account to read any Twitter user's unprotected tweets. Being banned costs the right to interact, not the right to read. But because many tweets turn into long threads of public discussion it makes sense that the judges viewed the plaintiffs' loss as significant. One consequence, though, is that the judgment conceptually changes Trump's account from a stream through an indivisible pool into a subcommunity with special rules. Simultaneously, the company says it will obscure - though not delete - tweets from verified accounts belonging to politicians and government officials with more than 100,000 followers that violate its terms and conditions. I like this compromise: yes, we need to know if leaders are lighting matches, but it shouldn't be too easy to pour gasoline on them - and we should be able to talk (non-abusively) back.


Illustrations: The West Wing's Jed Bartlet making phone calls from the residence.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 5, 2019

Legal friction

ny-public-library-lions.JPGWe normally think of the Internet Archive, founded in 1996 by Brewster Kahle, as doing good things. With a mission of "universal access to all knowledge", it archives the web (including many of my otherwise lost articles), archives TV news footage and live concerts, and provides access to all sorts of information that would otherwise be lost.

Equally, authors usually love libraries. Most grew up burrowed into the stacks, and for many, libraries are an important channel to a wider public. A key element of the Archive's position in what follows rests on the 2007 California decision officially recognizing it as a library.

Early this year, myriad authors and publishers organizations - including the UK's Society of Authors and the US's Authors Guild - issued a joint statement attacking the Archive's Open Library project. In this "controlled digital lending" program, borrowers - anyone, via an Archive account - get two weeks to read ebooks, either online in the Archive's book reader or offline in a copy-protected format in Adobe Digital Editions.
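
The "controlled" part is the crux: the idea is that the library never has more digital copies out on loan than physical copies it owns, with each loan expiring after the two-week term. A toy sketch of that rule (hypothetical code, not the Archive's actual system):

    from datetime import datetime, timedelta

    LOAN_TERM = timedelta(weeks=2)

    class CdlTitle:
        """Toy model of one title under controlled digital lending."""
        def __init__(self, copies_owned):
            self.copies_owned = copies_owned
            self.due = {}  # borrower -> date their loan expires

        def borrow(self, borrower, now):
            # Drop loans whose two-week term has run out.
            self.due = {b: d for b, d in self.due.items() if d > now}
            if len(self.due) >= self.copies_owned:
                return False  # every owned copy is already on loan
            self.due[borrower] = now + LOAN_TERM
            return True

    title = CdlTitle(copies_owned=1)
    print(title.borrow("alice", datetime(2019, 7, 1)))   # True
    print(title.borrow("bob",   datetime(2019, 7, 2)))   # False: copy out
    print(title.borrow("bob",   datetime(2019, 7, 16)))  # True: term expired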

What offends rights holders is that unlike the Gutenberg Project, which offers downloadable copies of works in the public domain, Open Library includes still-copyrighted modern works (including net.wars-the-book). The Archive believes this is legal "fair use".

You may, like me, wonder if the Archive is right. The few precedents are mixed. In 2000, "My MP3.com" let users stream CDs after proving ownership of a physical copy by inserting it in their CD drive. In the resulting lawsuit the court ruled MP3.com's database of digitized CDs an infringement, partly because it was a commercial, ad-supported service. Years later, Amazon does practically the same thing.

In 2004, Google Books began scanning libraries' book and magazine collections into a giant database that allows searchers to view scraps of interior text. In 2015, publishers lost their lawsuit. Google is a commercial company - but Google Books carries no ads (though it presumably does collect user data), and directs users to source copies from libraries or booksellers.

A third precedent, cited by the Authors Guild, is Capitol Records v. ReDigi. In that case, rulings have so far held that ReDigi's resale process, which transfers music purchased on iTunes from old to new owners, means making new and therefore infringing copies. Since the same is true of everything from cochlear implants to reading a web page, this reasoning seems wrong.

Cambridge University Press v. Patton, filed in 2008 and still ongoing, has three publishers suing Georgia State University over its e-reserves system, which loans out course readings on CDL-type terms. In 2012, the district court ruled that most of this is fair use; appeal courts have so far mostly upheld that view.

The Georgia case is cited by David R. Hansen and Kyle K. Courtney in their white paper defending CDL. They argue that CDL, as "format-shifting", is fair use because it replicates existing library lending. In their view, authors don't lose income because the libraries already bought copies, and it's all covered by fair use, no permission needed. One section of their paper focuses on helping libraries assess and minimize their legal risk. They concede their analysis is US-only.

From a geek standpoint, deliberately introducing friction into ebook lending in order to replicate the time it takes the book to find its way back into the stacks (for example) is silly, like requiring a guy with a flag on a horse to escort every motor car. And it doesn't really resolve the authors' main complaints: lack of permission and no payment. Of equal concern ought to be user complaints about zillions of OCR errors. The Authors Guild's complaint that saved ebooks "can be made readable by stripping DRM protection" is true, but it's just as true of publishers' own DRM - so, wash.

To this non-lawyer, the white paper appears to make a reasonable case - for the US, where libraries enjoy wider fair use protection and there is no public lending right, which elsewhere pays royalties on borrowing that collection societies distribute proportionately to authors.

Outside the US, the Archive is probably screwed if anyone gets around to bringing a case. In the UK, for example, the "fair dealing" exceptions allowed in the Copyright, Designs, and Patents Act (1988) are narrowly limited to "private study", and unless CDL is limited to students and researchers, its claim to legality appears much weaker.

The Authors Guild also argues that scanning in physical copies allows libraries to evade paying for library ebook licenses. The Guild's preference, extended collective licensing, has collection societies negotiating on behalf of authors. So that's at least two possible solutions to compensation: ECL, PLR.

Differentiating the Archive from commercial companies seems to me fair, even though the ask-forgiveness-not-permission attitude so pervasive in Silicon Valley is annoying. No author wants to be an indistinguishable bunch of bits in an undifferentiated giant pool of knowledge, but we all consume far more knowledge than we create. How little authors earn in general is sad, but not a legal argument: no one lied to us or forced us into the profession at gunpoint. Ebook lending is a tiny part of the challenges facing anyone in the profession now, and my best guess is that whatever the courts decide now, eventually this dispute will just seem quaint.

Illustrations: New York Public Library (via pfhlai at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 7, 2019

The right to lie

Sand_Box-wikimedia.JPGPrivacy, pioneering activist Simon Davies writes in his new book, Privacy: A Personal Chronicle, "varies widely according to context and environment to the extent that even after decades of academic interest in the subject, the world's leading experts have been unable to agree on a single definition." In 2010, I suggested defining it as being able to eat sand without fear. The reference was to the prospect, presented to small children and their parents by detailed electronic school records, of permanently stored data on everything they do. It didn't occur to me at the time, but in a data-rich future when eating sand has been outlawed (because some pseudoscientist believes it leads to criminality) and someone asks, "Did you eat sand as a child?", saying no because you forgot the incident (because you were *three* and now you're 65) will make you a dangerous liar.

The fact that even innocent pastimes - like eating sand - look sinister when the beholder is already prejudiced is the kind of reason why sometimes we need privacy even from the people we're supposed to be able to trust. This year's Privacy Law Scholars tossed up two examples, provided by Najarian Peters, whose project examines the reasons why black Americans adopt educational alternatives - home-schooling, "un-schooling" (children follow their own interests, Summerhill-style), and self-directed education (children direct their own activities) - and Carleen M. Zubrzycki, who has been studying privacy from doctors. Cue Greg House: Everybody lies. Judging from the responses Zubrzycki is getting from everyone she talks to about her projects, House is right, but, as he would not accept, we have our reasons.

Sometimes lying is essential to get a new opinion untainted by previous incorrect diagnoses or dismissals (women in pain, particularly). In some cases, the problem isn't the doctor but the electronic record and the wider health system that may see it. In some cases, lying may protect the doctor, too; under the new, restrictive Alabama law that makes performing an abortion after six weeks a felony, doctors would depend on their patients' silence. This last topic raised a question: given that women are asked the date of their last period at every medical appointment, will states with these restrictive laws (if they are allowed to stand) begin demanding to inspect women's menstrual apps?

The intriguing part of Peters' project is that most discussions of home-schooling and other alternative approaches to education focus on the stereotype of parents who don't want their kids to learn about evolution, climate change, or sex. But her interviewees have a different set of concerns: they want a solid education for their children, but they also want to protect them from prejudice, stigmatization, and the underachievement that comes with being treated as though you can't achieve much. The same infraction that is minor for a white kid may be noted and used to confirm teachers' prejudices against a black child. And so on. It's another reminder of how little growing up white in America may tell you about growing up black in America.

Zubrzycki and Peters were not alone in finding gaps in our thinking: Anne Toomey McKenna, Amy C. Gaudion, and Jenni L. Evans have discovered that existing laws do not cover the use of data collected by satellites and aggregated via apps - think last year's Strava incident, in which a heat map published by the company from aggregated data exposed the location of military bases and the identities of personnel - while PLSC co-founder Chris Hoofnagle began the initial spadework on the prospective privacy impacts of quantum computing.

Both of these are gaps in current law. GDPR covers processing data; it says little about how the predictions derived from that data may be used. GDPR also doesn't cover the commercial aggregation of satellite data, an intersectional issue requiring expertise in both privacy law and satellite technology. Yet all data may eventually be personal data, as 100,000 porn stars may soon find out. (Or they may not; the claim that a programmer has been able to use facial recognition to match porn performers to social media photographs is considered dubious, at least for now.) For this reason, Margot Kaminski is proposing "binary governance", in which one prong governs the use of data and the other ensures due process.

Tl;dr: it's going to be rough. Quantum computing is expected to expose things that today can successfully be hidden - including stealth surveillance technologies. It's long been mooted, for example, that quantum computing will render all of today's encryption crackable, opening up all our historical encrypted data. PLSC's discussion suggests it will also vastly increase the speed of communications. More interesting was a comment from Pam Dixon, whose research shows that high-speed biometric analysis is already beginning to happen, as companies in China find new, much faster search methods that are bringing "profound breakthroughs" in mass surveillance.

"The first disruption was the commodification of data and data breakers," she said. "What's happening now is the next phase, the commodification of prediction. It's getting really cheap." If the machine predicts that you fit the profile of people who ate sand, what will it matter if you say you didn't? Even if it's true.


Illustrations: Sand box (via Janez Novak at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 18, 2019

Math, monsters, and metaphors

"My iPhone won't stab me in my bed," Bill Smart said at the first We Robot, attempting to explain what was different about robots - but eight years on, We Robot seems less worried about that than about the brains of the operation. That is, AI, which conference participant Aaron Mannes described as, "A pile of math that can do some stuff".

But the math needs data to work on, and so a lot of the discussion goes toward possible consequences: delivery drones displaying personalized ads (Ryan Calo and Stephanie Ballard); the wrongness of researchers who defend their habit of scraping publicly posted data by saying it's "the norm" when their unwitting experimental subjects have never given permission; the unexpected consequences of creating new data sources in farming (Solon Barocas, Karen Levy, and Alexandra Mateescu); and how to incorporate public values (Alicia Solow-Neiderman) into the control of...well, AI, but what is AI without data? It's that pile of math. "It's just software," Bill Smart (again) said last week. Should we be scared?

The answer seems to be "sometimes". Two types of robots were cited for "robotic space colonialism" (Kristen Thomasen), because they are here enough and now enough for legal cases to be emerging. These are 1) drones, and 2) delivery robots. Mostly. Mason Marks pointed out Amazon's amazing Kiva robots, but they're working in warehouses where their impact is more a result of the workings of capitalism than that of AI. They don't scare people in their homes at night or appropriate sidewalk space like delivery robots, which Paul Colhoun described as "unattended property in motion carrying another person's property". Which sounds like they might be sort of cute and vulnerable, until he continues: "What actions may they take to defend themselves?" Is this a new meaning for move fast and break things?

Colhoun's comment came during a discussion of using various forecasting methods - futures planning, design fiction, the futures wheel (which someone suggested might provide a usefully visual alternative to privacy policies) - that led Cindy Grimm to pinpoint the problem of when you regulate. Too soon, and you risk constraining valuable technology. Too late, and you're constantly scrambling to revise your laws while being mocked by technical experts calling you an idiot (see 25 years of Internet regulation). Still, I'd be happy to pass a law right now barring drones from advertising and data collection and damn the consequences. And then be embarrassed; as Levy pointed out, other populations have a lot more to fear from drones than being bothered by some ads...

The question remains: what, exactly, do you regulate? The Algorithmic Accountability Act recently proposed by Senators Cory Booker (D-NJ) and Ron Wyden (D-OR) would require large companies to audit machine learning systems to eliminate bias. Discrimination is much bigger than AI, said conference co-founder Michael Froomkin in discussing Alicia Solow-Neiderman's paper on regulating AI, but special to AI is unequal access to data.

Grimm also pointed out that there are three different aspects: writing code (referring back to Petros Terzis's paper proposing to apply the regime of negligence laws to coders); collecting data; and using data. While this is true, it doesn't really capture the experience Abby Jacques suggested could be a logical consequence of following the results collected by MIT's Moral Machine: save the young, fit, and wealthy, but splat the old, poor, and infirm. If, she argued, you followed the mandate of the popular vote, old people would be scrambling to save themselves in parking lots while kids ran wild knowing the cars would never hit them. An entertaining fantasy spectacle, to be sure, but not quite how most of us want to live. As Jacques tells it, the trolley problem the Moral Machine represents is basically a metaphor that has eaten its young. Get rid of it! This was a rare moment of near-universal agreement. "I've been longing for the trolley problem to die," robotics pioneer Robin Murphy said. Jacques herself was more measured: "Philosophers need to take responsibility for what happens when we leave our tools lying around."

The biggest thing I've learned in all the law conferences I go to is that law proceeds by analogy and metaphor. You see this everywhere: Kate Darling is trying to understand how we might integrate robots into our lives by studying the history of domesticating animals; Ian Kerr and Carys Craig are trying to deromanticize "the author" in discussions of AI and copyright law; the "property" in "intellectual property" draws an uncomfortable analogy to physical objects; and Hideyuki Matsumi is trying to think through robot registration by analogy to Japan's Koseki family registration law.

Getting the metaphors right is therefore crucial, which explains, in turn, why it's important to spend so much effort understanding what the technology can really do and what it can't. You have to stop buying the images of driverless cars to produce something like the "handoff model" proposed by Jake Goldenfein, Deirdre Mulligan, and Helen Nissenbaum to explore the permeable boundaries between humans and the autonomous or connected systems driving their cars. Similarly, it's easy to forget, as Mulligan said in introducing her paper with Daniel N. Kluttz, that in "machine learning" algorithms learn only from the judgments at the end; they never see the intermediary reasoning stages.

So metaphor matters. At this point I had a blinding flash of realization. This is why no one can agree about Brexit. *Brexit* is a trolley problem. Small wonder Jacques called the Moral Machine a "monster".

Previous We Robot events as seen by net.wars: 2018 workshop and conference; 2017; 2016 workshop and conference; 2015; 2013; and 2012. We missed 2014.

Illustrations: The Moral Labyrinth art installation, by Sarah Newman and Jessica Fjeld, at We Robot 2019; Google driverless car.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 12, 2019

The Algernon problem

Last week we noted that it may be a sign of a maturing robotics industry that it's possible to have companies specializing in something as small as fingertips for a robot hand. This week, the workshop day kicking off this year's We Robot conference provides a different reason to think the same thing: more and more disciplines are finding their way to this cross-the-streams event. This year, joining engineers, computer scientists, lawyers, and the odd philosopher are sociologists, economists, and activists.

The result is oddly like a meeting of the Research Institute for the Science of Cyber Security, where a large part of the point from the beginning has been that human factors and economics are as important to good security as technical knowledge. This was particularly true in the face-off between the economist Rob Seamans and the sociologist Beth Bechky, which pitted quantitative "things we can count" against qualitative "study the social structures" thinking. The range of disciplines needed to think about what used to be "computer" security keeps growing as the ways we use computers become more complex; robots are computer systems whose mechanical manifestations interact with humans. This move has to happen.

One sign is a change in language. Madeline Elish, currently in the news for her newly published 2016 We Robot paper, Moral Crumple Zones, said she's trying to replace the term "deploying" with "integrating" for arriving technologies. "They are integrated into systems," she explained, "and when you say 'integrate' it implies into what, with whom, and where." By contrast, "deployment" is military-speak, devoid of context. I like this idea, since by 2015, it was clear from a machine learning conference at the Royal Society that many had begun seeing robots as partners rather than replacements.

Later, three Japanese academics - the independent researcher Hideyuki Matsumi, Takayuki Kato, and Fumio Shimpo - tried to explain why Japanese people like robots so much - more, it seems, than "we" do (whoever "we" are). They suggested three theories: the influence of TV and manga; the influence of the mainstream Shinto religion, which sees a spirit in everything; and the Japanese government's strategy to make the country a robotics powerhouse. The latter has produced a 356-page guideline for research development.

"Japanese people don't like to draw distinctions and place clear lines," Shinto said. "We think of AI as a friend, not an enemy, and we want to blur the lines." Shimpo had just said that even though he has two actual dogs he wants an Aibo. Kato dissented: "I personally don't like robots."

The MIT researcher Kate Darling, who studies human responses to robots, found positive reinforcement in studies showing that autistic kids respond well to robots. "One theory is that they're social, but not too social." An experiment that placed these robots in homes for 30 days last summer had "stellar results". But: when the robots were removed at the end of the experiment, follow-up studies found that the kids were losing the skills the robots had brought them. The story evokes the 1959 Daniel Keyes story Flowers for Algernon, but then you have to ask: what were the skills? Did they matter to the children or just to the researchers, and how is "success" defined?

The opportunities anthropomorphization opens for manipulation are an issue everywhere. Woody Hartzog called the tendency to believe what the machine says "automation bias", but that understates the range of motivations: you may believe the machine because you like it, because it's manipulated you, or because you're working in a government benefits agency where you can't be sure you won't get fired if you defy the machine's decision. Would that everyone could see Bill Smart and Cindy Grimm follow up their presentation from last year to show: AI is just software; it doesn't "know" things; and it's the complexity that gets you. Smart hates the term "autonomous" for robots "because in robots it means deterministic software running on a computer. It's teleoperation via computer code."

This is the "fancy hammer" school of thinking about robots, and it can be quite valuable. Kevin Bankston soon demonstrated this: "Science fiction has trained us to worry about Skynet instead of housing discrimination, and expect individual saviors rather than communities working together to deal with community problems." AI is not taking our jobs; capitalists are using AI to take our jobs - a very different problem. As long as we see robots and AI as autonomous, we miss that instead they ares agents carrying out others' plans. This is a larger example of a pervasive problem with smartphones, social media sites, and platforms generally: they are designed to push us to forget the data-collecting, self-interested, manipulative behemoth behind them.

Returning to Elish's comment, we are one of the things robots integrate with. At the moment, this is taking the form of making random people research subjects: the pedestrian killed in Arizona by a supposedly self-driving car, the hapless prisoners whose parole is decided by it's-just-software, the people caught by the Metropolitan Police's staggeringly flawed facial recognition, the homeless people who feel threatened by security robots, the Caltrain passengers sharing a platform with an officious delivery robot. Did any of us ask to be experimented on?


Illustrations: Cliff Robertson in Charly, the movie version of "Flowers for Algernon".

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

February 14, 2019

Copywrong

Just a couple of weeks ago it looked like the EU's proposed reform of the Copyright Directive, last updated in 2001, was going to run out of time. In the last three days, it's revived, and it's heading straight for us. As Joe McNamee, the outgoing director of European Digital Rights (EDRi), said last year, the EU seems bent on regulating Facebook and Google by creating an Internet in which *only* Facebook and Google can operate.

We'll start with copyright. As previously noted, the EU's proposed reforms include two particularly contentious clauses: Article 11, the "link tax", which would require anyone using more than one or two words to link to a news article elsewhere to get a license, and Article 13, the "upload filter", which requires any site older than three years *or* earning more than €10,000,000 a year in revenue to ensure that no user posts anything that violates copyright; sites that allow user-generated content must make "best efforts" to buy licenses for anything users might post. So even a tiny site - like net.wars, which is 13 years old - that hosted comments would logically be required to license all copyrighted content in the known universe, just in case. In reviewing the situation at Techdirt, Mike Masnick writes, "If this becomes law, I'm not sure Techdirt can continue publishing in the EU." Article 13, he continues, makes hosting comments impossible, and Article 11 makes their own posts untenable. What's left?

To these known evils, the German Pirate Party MEP Julia Reda finds that the final text adds two more: limitations on text and data mining that allow rights holders to opt out under most circumstances, and - wouldn't you know it? - the removal of provisions that would have granted authors the right to proportionate remuneration (that is, royalties) instead of continuing to allow all-rights buy-out contracts. Many younger writers, particularly in journalism, now have no idea that as recently as 1990 limited contracts were the norm; the ability to resell and exploit their own past work was one reason the writers of the mid-20th century made much better livings than their counterparts do now. Communia, an association of digital rights organizations, writes that at least this final text can't get any *worse*.

Well, I can hear Brexiteers cry, what do you care? We'll be out soon. No, we won't - at least, we won't be out from under the Copyright Directive. For one thing, the final plenary vote is expected in March or April - before the May European Parliament general election. The good side of this is that UK MEPs will have a vote, and can be lobbied to use that vote wisely; from all accounts the present agreed final text settled differences between France and Germany, against which the UK could provide some balance. The bad side is that the UK, which relies heavily on exports of intellectual property, has rarely shown any signs of favoring either Internet users or creators against the demands of rights holders. The ugly side is that presuming this thing is passed before the UK brexits - assuming that happens - it will be the law of the land until or unless the British Parliament can be persuaded to amend it. And the direction of travel in copyright law for the last 50 years has very much been toward "harmonization".

Plus, the UK never seems to be satisfied with the amount of material its various systems are blocking, as the Open Rights Group documented this week. If the blocks in place weren't enough, Rebecca Hill writes at the Register: under the just-passed Counter-Terrorism and Border Security Act, clicking on a link to information likely to be useful to a person committing or preparing an act of terrorism is an offense. It seems to me that could be almost anything - automotive listings on eBay, chemistry textbooks, a *dictionary*.

What's infuriating about the copyright situation in particular is that no one appears to be asking the question that really matters, which is: what is the problem we're trying to solve? If the problem is how the news media will survive, this week's Cairncross Review, intended to study that exact problem, makes some suggestions. Like them or loathe them, they involve oversight and funding; none involve changing copyright law or closing down the Internet.

Similarly, if the problem is market dominance, try anti-competition law. If the problem is the increasing difficulty of making a living as an author or creator, improve their rights under contract law - the very provisions that Reda notes have been removed. And, finally, if the problem is the future of democracy in a world where two companies are responsible for poisoning politics, then delving into campaign finances, voter rights, and systemic social inequality pays dividends. None of the many problems we have with Facebook and Google are actually issues that tightening copyright law solves - nor is their role in spreading anti-science, such as this, just in from Twitter: anti-vaccination ads targeted at pregnant women.

All of those are problems we really do need to work on. Instead, the only problem copyright reform appears to be trying to solve is, "How can we make rights holders happier?" That may be *a* problem, but it's not nearly as worth solving as the others.


Illustrations: Anti-copyright symbol (via Wikimedia); Julia Reda MEP in 2016.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

February 1, 2019

Beyond data protection

For the group assembled this week in Brussels for Computers, Privacy, and Data Protection, the General Data Protection Regulation that came into force in May 2018 represented the culmination of years of effort. The mood, however, is not so much self-congratulatory as "what's next?".

The first answer is a lot of complaints. An early panel featured a number of these. Max Schrems, never one to shirk, celebrated GDPR day in 2018 by joining with La Quadrature du Net to file two complaints against Google, WhatsApp, Instagram, and Facebook over "forced consent". Last week, he filed eight more complaints against Amazon, Apple, Spotify, Netflix, YouTube, SoundCloud, DAZN, and Flimmit regarding their implementation of subject access rights. A day or so later, the news broke: the French data protection regulator, CNIL, has fined Google €50 million (PDF) on the basis of their complaint - the biggest fine so far under the new regime that sets the limit at 4% of global turnover. Google is considering an appeal.

It's a start. We won't know for probably five years whether GDPR will have the intended effect of changing the balance of power between citizens and data-driven companies (though one site is already happy to call it a failure). Meanwhile, one interesting new development is Apple's crackdown on first Facebook and then Google for abusing its enterprise app system to collect comprehensive data on end users. While Apple is certainly far less dependent on data collection than the rest of GAFA/FAANG, this action is a little like those types of malware that download anti-virus software to clean your system of the competition.

The second - more typical of a conference - is to stop and think: what doesn't GDPR cover? The answers are coming fast: AI, automated decision-making, household or personal use of data, and (oh, lord) blockchain. And, a questioner asked late on Wednesday, "Is data protection privacy, data, or fairness?"

Several of these areas are interlinked: automated decision-making is currently what we mean when we say "AI", and we talk a lot about the historical bias stored in data and the discrimination that algorithms derive from training data and bake into their results. Discussions of this problem, Ansgar Koene said, tend to portray accuracy and fairness as a tradeoff, with accuracy presented as a scientifically neutral reality and fairness as a fuzzy human wish. Instead, he argued, accuracy depends on values we choose to judge it by. Why shouldn't fairness just be one of those values?

A bigger limitation - which we've written about here since 2015 - is that privacy law tends to focus on the individual. Seda Gürses noted that focusing on the algorithm - how to improve it and reduce its bias - similarly ignores the wider context and network externalities. Optimize the Waze algorithm so each driver can reach their destination in record time, and the small communities whose roads were not built for speedy cut-throughs bear the costs of the extra traffic, noise, and pollution those drivers generate. Next-generation privacy will have to reflect that wider context; as Dennis Hirsch put it, social protection rather than individual control. As Schrems' and others' complaints show, individual control is rarely ours on today's web in any case.

Privacy is not the only regulation that suffers from that problem. At Tuesday's pre-conference Privacy Camp, several speakers deplored the present climate in which platforms' success in removing hate speech, terrorist content, and unauthorized copyright material is measured solely in numbers: how many pieces, how fast. Such a regime does not foster thoughtful consideration, nuance, respect for human rights, or the creation of a robust system of redress for the wrongly accused. "We must move away from the idea that illegal content can be perfectly suppressed and that companies are not trying hard enough if they aren't doing it," Mozilla Internet policy manager Owen Bennett said, going on to advocate for a wider harm reduction approach.

The good news, in a way, is that privacy law has fellow warriors: competition, liability, and consumer protection law. The first two of those, said Mireille Hildebrandt, need to be rethought, in part because some problems will leave us no choice. She cited, for example, the energy market: as we are forced to move to renewables, both supply and demand will fluctuate enormously. "Without predictive technology I don't see how we can solve it." Continuously predicting the energy use of each household will, she wrote in a paper in 2013 (PDF), pose new threats to privacy, data protection, non-discrimination, and due process.

One of the more interesting new (to me, at least) players on this scene is Algorithm Watch, which has just released a report on algorithmic decision-making in the EU that recommends looking at other laws that are relevant to specific types of decisions, such as applying equal pay legislation to the gig economy. Data protection law doesn't have to do it all.

Some problems may not be amenable to law at all. Paul Nemitz posed this question: given that machine learning training data is always historical, and that therefore the machines are always perforce backward-looking, how do we as humans retain the drive to improve if we leave all our decisions to machines? No data protection law in the world can solve that.

Illustrations: The CPDP 2019 welcome sign in Brussels.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 17, 2019

Misforgotten

"It's amazing. We're all just sitting here having lunch like nothing's happening, but..." This was on Tuesday, as the British Parliament was getting ready to vote down the Brexit deal. This is definitely a form of privilege, but it's hard to say whether it's confidence born of knowing your nation's democracy is 900 years old, or aristocrats-on-the-verge denial as when World War I or the US Civil War was breaking out.

Either way, it's a reminder that for many people historical events proceed in the background while they're trying to get lunch or take the kids to school. This despite the fact that all of us in the UK and the US are currently hostages to a paralyzed government. The only winner in either case is the politics of disgust, and the resulting damage will be felt for decades. Meanwhile, everything else is overshadowed.

One of the more interesting developments of the past digital week is the European advocate general's preliminary opinion that the right to be forgotten, part of data protection law, should not be enforceable outside the EU. In other words, Google, which brought the case, should not have to prevent access to material for those searching from the rest of the world. The European Court of Justice - one of the things British prime minister Theresa May has most wanted the UK to leave behind since her days as Home Secretary - typically follows these preliminary opinions.

The right to be forgotten is one piece of a wider dispute that one could characterize as the Internet versus national jurisdiction. The broader debate includes who gets access to data stored in another country, who gets to crack crypto, and who gets to spy on whose citizens.

This particular story began in France, where the Commission Nationale de l'Informatique et des Libertés (CNIL), the French data protection regulator, fined Google €100,000 for selectively removing a particular person's name from its search results on just its French site. CNIL argued that instead the company should delink it worldwide. You can see their point: otherwise, anyone can bypass the removal by switching to .com or .co.jp. On the other hand, following that logic imposes EU law on other countries, overriding protections such as the US's First Amendment. Americans in particular tend to regard the right to be forgotten with the sort of angry horror of Lady Bracknell contemplating a handbag. Google applied to the European Court of Justice to override CNIL and vacate the fine.

A group of eight digital rights NGOs, led by Article 19 and including Derechos Digitales, the Center for Democracy and Technology, the Clinique d'intérêt public et de politique d'Internet du Canada (CIPPIC), the Electronic Frontier Foundation, Human Rights Watch, Open Net Korea, and Pen International, welcomed the opinion. Many others would certainly agree.

The arguments about jurisdiction and censorship were, like so much else, foreseen early. By 1991 or thereabouts, the question of whether the Internet would be open everywhere or devolve to lowest-common-denominator censorship was frequently debated, particularly after the United States v. Thomas case that featured a clash of community standards between Tennessee and California. If you say that every country has the right to impose its standards on the rest of the world, it's unclear what would be left other than a few Disney characters and some cat videos.

France has figured in several of these disputes: in (I think) the first international case of this kind, in 2000, it was a French court that ruled that the sale of Nazi memorabilia on Yahoo!'s site was illegal; after trying to argue that France was trying to rule over something it could not control, Yahoo! banned the sales on its French auction site and then, eventually, worldwide.

Data protection law gave these debates a new and practical twist. The origins of this particular case go back to 2014, when the European Court of Justice ruled in Google Spain v AEPD and Mario Costeja González that search engines must remove links to web pages that turn up in a name search and contain information that is irrelevant, inadequate, or out of date. The ruling arguably sought to redress the imbalance of power between individuals and the corporations publishing information about them while preserving free expression. Finding this kind of difficult balance, the law scholar Judith Rauhofer argued at that year's Computers, Freedom, and Privacy, is what courts *do*. The court required search engines to remove from the results of a *name* search the link to the original material; it did not require the original websites to remove it entirely or require the link's removal from other search results. The ruling removed, if you like, a specific type of power amplification, but not the signal.

How far the search engines have to go is the question the ECJ is now trying to settle. This is one of those cases where no one gets everything they want because the perfect is the enemy of the good. The people who want their past histories delinked from their names don't get a complete solution, and no one country gets to decide what people in other countries can see. Unfortunately, the real winner appears to be geofencing, which everyone hates.


Illustrations: The European Court of Justice in Luxembourg (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

December 14, 2018

Entirely preventable

This week, the US House of Representatives Committee on Oversight and Government Reform used this phrase to describe the massive 2017 Equifax data breach: "Entirely preventable." It's not clear that the ensuing recommendations, while all sensible and valuable stuff - improve consumers' ability to check their records, reduce the use of Social Security numbers as unique identifiers, improve oversight of credit reporting agencies, increase transparency and accountability, hold federal contractors liable, and modernize IT security - will really prevent another similar breach from taking place. A key element was a bit of unpatched software that left open a vulnerability used by the attackers to gain a foothold - in part, the report says, because the legacy IT systems made patching difficult. Making it easier to do the right thing is part of the point of the recommendation to modernize the IT estate.

How closely is it feasible to micromanage companies the size and complexity of Equifax? What protection against fraud will we have otherwise?

The massive frustration is that none of this is new information or radical advice. On the consumer rights side, the committee is merely recommending practices that have been mandated in the EU for more than 20 years in data protection law. Privacy advocates have been saying for more than *30* years that the SSN is the perfect example of how a unique identifier should *not* be used. Patching software is so basic that you can pick any random list of top ten security tips and find it in the top three. We sort of make excuses for small businesses because their limited resources mean they don't have dedicated security personnel, but what excuse can there possibly be for a company the size of Equifax that holds the financial frailty of hundreds of millions of people in its grasp?

The company can correctly say this: we are not its customers. It is not its job to care about us. Its actual customers - banks, financial services, employers, governments - are all well served. What's our problem? Zeynep Tufekci summed it up correctly on Twitter when she commented that we are not Equifax's customers but its victims. Until there are proportionate consequences for neglect and underinvestment in security, she said later, the companies and their departing-with-bonuses CEOs will continue scrimping on security even though the smallest infraction means consumers struggle for years to reclaim their credit rating.

If Facebook and Google should be regulated as public utilities, the same is even more true for the three largest credit agencies, Equifax, Experian, and TransUnion, who all hold much more power over us, and who are much less accountable. We have no opt-out to exercise.

But even the punish-the-bastards approach merely smooths over and repaints the outside of a very ugly tangle of amyloid plaques. Real change would mean, as Mydex CEO David Alexander is fond of arguing, adopting a completely different approach that puts each of us in charge of our own data and avoids creating these giant attacker-magnet databases in the first place. See also data brokers, which are invisible to most people.

Meanwhile, in contrast to the committee, other parts of the Five Eyes governments seem set on undermining whatever improvements to our privacy and security we can muster. Last week the Australian parliament voted to require companies to back-door their encryption when presented with a warrant. While the bill stops short of requiring technology companies to build in such backdoors as a permanent fixture - it says the government cannot require companies to introduce a "systemic weakness" or "systemic vulnerability" - the reality is that being able to break encryption on demand *is* a systemic weakness. Math is like that: either you can prove a theorem or you can't. New information can overturn existing knowledge in other sciences, but math is built on proven bedrock. The potential for a hole is still a hole, with no way to ensure that only "good guys" can use it - even if you can agree who the good guys are.

In the UK, GCHQ has notified the intelligence and security committee that it will expand its use of "bulk equipment interference". In other words, having been granted the power to hack the world's computers - everything from phones and desktops to routers, cars, toys, and thermostats - when the 2016 Investigatory Powers Act was being debated, GCHQ now intends to break its promise to use that power sparingly.

As I wrote in a submission to the consultation, bulk hacking is truly dangerous. The best hackers make mistakes, and it's all too easy to imagine a hacking error becoming the cause of a 100-car pile-up. As smart meters roll out, albeit delayed, and the smart grid takes shape, these, too, will be "computers" GCHQ has the power to hack. You, too, can torture someone in their own home just by controlling their thermostat. Fun! And important for national security. So let's do more of it.

In a time when attacks on IT infrastructure are growing in sophistication, scale, and complexity, the most knowledgeable people in government, whose job it is to protect us, are deliberately advocating weakening it. The consequences that are doubtless going to follow the inevitable abuse of these powers - because humans are humans and the mindset inside law enforcement is to assume the worst of all of us - will be entirely preventable.


Illustrations: GCHQ listening post at dawn (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


November 16, 2018

Septet

This week catches up on some things we've overlooked. Among them, in response to a Twitter comment: two weeks ago, on November 2, net.wars started its 18th unbroken year of Fridays.

Last year, the writer and documentary filmmaker Astra Taylor coined the term "fauxtomation" to describe things that are hyped as AI but that actually rely on the low-paid labor of numerous humans. In The Automation Charade she examines the consequences: undervaluing human labor and making it both invisible and insecure. Along these lines, it was fascinating to read that in Kenya, workers drawn from one of the poorest places in the world are paid to draw outlines around every object in an image in order to help train AI systems for self-driving cars. How many of us look at a self-driving car and see someone tracing every pixel?

***

Last Friday, Index on Censorship launched Demonising the media: Threats to journalists in Europe, which documents journalists' diminishing safety in western democracies. Italy takes the EU prize, with 83 verified physical assaults, followed by Spain with 38 and France with 36. Overall, the report found 437 verified incidents of arrest or detention and 697 verified incidents of intimidation. It's tempting - as in the White House dispute with CNN's Jim Acosta - to hope for solidarity in response, but it's equally likely that years of politicization have left whole sectors of the press as divided as any bullying politician could wish.

***

We utterly missed the UK Supreme Court's June decision in the dispute pitting ISPs against "luxury" brands including Cartier, Mont Blanc, and International Watch Company. The goods manufacturers wanted to force BT, EE, and the three other original defendants, which jointly provide 90% of Britain's consumer Internet access, to block more than 46,000 websites that were marketing and selling counterfeits. In 2014, the High Court ordered the blocks. In 2016, the Court of Appeal upheld that on the basis that without ISPs no one could access those websites. The final appeal was solely about who pays for these blocks. The Court of Appeal had said: ISPs. The Supreme Court decided instead that under English law innocent bystanders shouldn't pay for solving other people's problems, especially when solving them benefits only those others. This seems a good deal for the rest of us, too: being required to pay may constrain blocking demands to reasonable levels. It's particularly welcome after years of expanded blocking for everything from copyright, hate speech, and libel to data retention and interception that neither we nor ISPs much want in the first place.

***

For the first time the Information Commissioner's Office has used the Computer Misuse Act rather than data protection law in a prosecution. Mustafa Kasim, who worked for Nationwide Accident Repair Services, will serve six months in prison for using former colleagues' logins to access thousands of customer records and spam the owners with nuisance calls. While the case reminds us that the CMA still catches only the small fry, we see the ICO's point.

***

In finally catching up with Douglas Rushkoff's Throwing Rocks at the Google Bus, the section on cashless societies and local currencies reminded us that in the 1960s and 1970s, New Yorkers considered it acceptable to tip with subway tokens, even in the best restaurants. Who now would leave a Metro Card? Currencies may be local or national; cashlessness is global. It may be great for those who don't need to think about how much they spend, but it means all transactions are intermediated, with a percentage skimmed off the top for the middlefolk. The costs of cash have been invisible to us, as Dave Birch says, but it is public infrastructure. Cashlessness privatizes that without any debate about the social benefits or costs. How centralized will this new infrastructure become? What happens to sectors that aren't commercially valuable? When do those commissions start to rise? What power will we have to push back? Even on-the-brink Sweden is reportedly rethinking its approach for just these reasons. In a survey, only 25% wanted a fully cashless society.

***

Incredibly, 18 years after chad hung and people disposed in Bush versus Gore, ballots are still being designed in ways that confuse voters, even in Broward County, which should have learned better. The Washington Post tells us that in both New York and Florida ballot designs left people confused (seeing them, we can see why). For UK voters accustomed to a bit of paper with big names and boxes to check with a stubby pencil, it's baffling. Granted, the multiple federal races, state races, local officers, judges, referendums, and propositions in an average US election make ballot design a far more complex problem. There is advice available, from the US Election Assistance Commission, which publishes design best practices, but I'm reliably told it's nonetheless difficult to do well. On Twitter, Dana Chisnell provides a series of links that taken together explain some background. Among them is this one from the Center for Civic Design, which explains why voting in the US is *hard* - and not just because of the ballots.

***

Finally, a word of advice. No matter how cool it sounds, you do not want a solar-powered, radio-controlled watch. Especially not for travel. TMOT.

Illustrations: Hanging chad, Bush versus Gore, Florida 2000.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 11, 2018

Lost in transition

"Why do I have to scan my boarding card?" I demanded loudly of the machine that was making this demand. "I'm buying a thing of milk!"

The location was Heathrow Terminal 5. The "thing of milk" was a pint of milk being purchased with a view to a late arrival in a continental European city where tea is frequently offered with "Kaffeesahne", a thick, off-white substance that belongs with tea about as much as library paste does.

A human materialized out of nowhere, and typed in some codes. The transaction went through. I did not know you could do that.

The incident sounds minor - yes, I thanked her - but has a real point. For years, UK airport retailers secured discounts for themselves by demanding to scan boarding cards at the point of purchase while claiming the reason was to exempt the customers from VAT when they are taking purchases out of the country. Just a couple of years ago the news came out: the companies were failing to pass the resulting discounts on to customers and simply pocketing the VAT. Legally, you are not required to comply with the request.

They still ask, of course.

If you're dealing with a human retail clerk, refusing is easy: you say "No" and they move on to completing the transaction. The automated checkout (which I normally avoid), however, is not familiar with No. It is not designed for No. No is not part of its vocabulary unless a human comes along with an override code.

My legal right not to scan my boarding card therefore relies on the presence of an expert human. Take the human out of that loop - or overwhelm them with too many stations to monitor - and the right disappears, engineered out by automation and enforced by the time pressure of having to catch a flight and/or the limited resource of your patience.

This is the same issue that has long been machinified by DRM - digital rights management - and the locks it applies to commercially distributed content. The text of Alice in Wonderland is in the public domain, but wrap it in DRM and your legal rights to copy, lend, redistribute, and modify all vanish, automated out with no human to summon and negotiate with.

Another example: the discount railcard I pay for once a year is renewable online. But if you go that route, you are required to upload your passport, photo driver's license, or national ID card. None of these should really be necessary. If you renew at a railway station, you pay your money and get your card, no identification requested. In this example the automation requires you to submit more data and take greater risk than the offline equivalent. And, of course, when you use a website there's no human to waive the requirement and restore the status quo.

Each of these services is designed individually. There is no collusion, and yet the direction is uniform.

Most of the discussion around this kind of thing - rightly - focuses on clearly unjust systems with major impact on people's lives. The COMPAS recidivism algorithm, for example, is used to risk-assess the likelihood that a criminal defendant will reoffend. A ProPublica study found that the algorithm tended to produce biased results of two kinds: first, black defendants were more likely than white defendants to be incorrectly rated as high risk; second, white reoffenders were incorrectly classified as low-risk more often than black ones. Other such systems show similar biases, all for the same basic reason: decades of prejudice are baked into the training data these systems are fed. Virginia Eubanks, for example, has found similar issues in systems such as those that attempt to identify children at risk and that appear to see poverty itself as a risk factor.

By contrast, the instances I'm pointing out seem smaller, maybe even insignificant. But the potential is that over time wide swathes of choices and rights will disappear, essentially automated out of our landscape. Any process can be gamed this way.

At a Royal Society meeting last year, law professor Mireille Hildebrandt outlined the risks of allowing the atrophy of governance through the text-driven law that today is negotiated in the courts. The danger, she warned, is that through machine deployment and "judgemental atrophy" it will be replaced with administration, overseen by inflexible machines that enforce rules with no room for contestability, which Hildebrandt called "the heart of the rule of law".

What's happening here is, as she said, administration - but it's administration in which our legitimate rights dissipate in a wave of "because we can" automated demands. There are many ways we willingly give up these rights already - plenty of people are prepared to give up anonymity in financial transactions by using all manner of non-cash payment systems, for example. But at least those are conscious choices from which we derive a known benefit. It's hard to see any benefit accruing from the loss of the right to object to unreasonable bureaucracy imposed upon us by machines designed to serve only their owners' interests.


Illustrations: "Kill all the DRM in the world within a decade" (via Wikimedia.).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 14, 2018

Hide by default

Last week, defenddigitalme, a group that campaigns for children's data privacy and other digital rights, and Sonia Livingstone's group at the London School of Economics assembled a discussion of the Information Commissioner's Office's consultation on age-appropriate design for information society services, which is open for submissions until September 19. The eventual code will be used by the Information Commissioner when she considers regulatory action, may be used as evidence in court, and is intended to guide website design. It must take into account both the child-related provisions of the General Data Protection Regulation and the United Nations Convention on the Rights of the Child.

There are some baseline principles: data minimization, comprehensible terms and conditions and privacy policies. The last is a design question: since most adults either can't understand or can't bear to read terms and conditions and privacy policies, what hope of making them comprehensible to children? The summer's crop of GDPR notices is not a good sign.

There are other practical questions: when is a child not a child any more? Do age bands make sense when the capabilities of one eight-year-old may be very different from those of another? Capacity might be a better approach - but would we want Instagram making these assessments? Also, while we talk most about the data aggregated by commercial companies, government and schools collect much more, including biometrics.

Most important, what is the threat model? What you implement and how is very different if you're trying to protect children's spaces from ingress by abusers than if you're trying to protect children from commercial data aggregation or content deemed harmful. Lacking a threat model, "freedom", "privacy", and "security" are abstract concepts with no practical meaning.

There is no formal threat model, as the Yes, Minister episode The Challenge (series 3, episode 2) would predict. Too close to "failure standards". The lack is particularly dangerous here, because "protecting children" means such different things to different people.

The other significant gap is research. We've commented here before on the stratification of social media demographics: you can practically carbon-date someone by the medium they prefer. This poses a particular problem for academics, in that research from just five years ago is barely relevant. What children know about data collection has markedly changed, and the services du jour have different affordances. Against that, new devices have greater spying capabilities, and, the Norwegian Consumer Council finds (PDF), Silicon Valley pays top-class psychologists to deceive us with dark patterns.

Seeking to fill the research gap are Sonia Livingstone and Mariya Stoilova. In their preliminary work, they are finding that children generally care deeply about their privacy and the data they share, but often have little agency and think primarily in interpersonal terms. The Cambridge Analytica scandal has helped inform them about the corporate aggregation that's taking place, but they may, through familiarity, come to trust people such as their favorite YouTubers and constantly available things like Alexa in ways their adults dislike. The focus on Internet safety has left many thinking that's what privacy means. In real-world safety, younger children are typically more at risk than older ones; online, the situation is often reversed because older children are less supervised, explore further, and take more risks.

The breath of passionate fresh air in all this is Beeban Kidron, an independent - that is, appointed - member of the House of Lords who first came to my attention by saying intelligent and measured things during the post-referendum debate on Brexit. She refuses to accept the idea that oh, well, that's the Internet, there's nothing we can do. However, she *also* genuinely seems to want to find solutions that preserve the Internet's benefits and incorporate the often-overlooked child's right to develop and make mistakes. But she wants services to incorporate the idea of childhood: if all users are equal, then children are treated as adults, a "category error". Why should children have to be resilient against systemic abuse and indifference?

Kidron, who is a filmmaker, began by doing her native form of research: in 2013 she made the full-length documentary InRealLife, which studied a number of teens using the Internet. While the film concludes on a positive note, many of the stories depressingly confirm some parents' worst fears. Even so it's a fine piece of work because it's clear she was able to gain the trust of even the most alienated of the young people she profiles.

Kidron's 5Rights framework proposes five essential rights children should have: remove, know, safety and support, informed and conscious use, digital literacy. To implement these, she proposes that the industry should reverse its current pattern of defaults which, as is widely known, 95% of users never change (while 98% never read terms and conditions). Companies know this, and keep resetting the defaults in their favor. Why shouldn't it be "hide by default"?

This approach sparked ideas. A light that tells a child they're being tracked or recorded so they can check who's doing it? Collective redress is essential: what 12-year-old can bring their own court case?

The industry will almost certainly resist. Giving children the transparency and tools with which to protect themselves, resetting the defaults to "hide"...aren't these things adults want, too?


Illustrations: Beeban Kidron (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 30, 2018

Ghosted

Three months after the arrival into force of Europe's General Data Protection Regulation, Nieman Lab finds that more than 1,000 US newspapers are still blocking EU visitors.

"We are engaged on the issue", says the placard that blocks access to even the front pages of the New York Daily News and the Chicago Tribune, both owned by Tronc, as well as the Los Angeles Times, which was owned by Tronc until very recently. Ironically, Wikipedia tells us that the silly-sounding name "Tronc" was derived from "Tribune Online Content"; you'd think a company whoe name includes "online" would grasp the illogic of blocking 500 million literate readers. Nieman Lab also notes that Tronc is for sale, so I guess the company has more urgent problems.

Also apparently unable to cope with remediating its systems, despite years of notice, is Lee Enterprises, which owns numerous newspapers including the Carlisle, PA Sentinel and the Arizona Daily Star; these return "Error 451: Unavailable due to legal reasons", and blame GDPR as the reason "access cannot be granted at this time". Even the giant retail chain Williams-Sonoma has decided GDPR is just too hard, redirecting would-be shoppers to a UK partner site that is almost, but not quite, entirely unlike Williams-Sonoma - and useless if you want to ship a gift to someone in the US.

If you're reading this in the US, and you want to see what we see, try any of those URLs in a free proxy such as Hide Me, setting your location to Amsterdam. Fun!
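
If you'd rather script the experiment than click around a web proxy, the check is simple: request each front page and look at the status code. A minimal sketch in Python, assuming the third-party requests library and some EU vantage point - the proxy address below is a made-up placeholder, not a real service:

    import requests

    URLS = [
        "https://www.latimes.com/",
        "https://www.chicagotribune.com/",
    ]

    # Hypothetical EU-based proxy; substitute a real one, or just run
    # this from a machine with an EU IP address.
    PROXIES = {"https": "http://eu-proxy.example.net:8080"}

    for url in URLS:
        resp = requests.get(url, proxies=PROXIES, timeout=10)
        # The Lee Enterprises papers return HTTP 451, "Unavailable For
        # Legal Reasons" (RFC 7725); others redirect to a placard page.
        print(url, resp.status_code, resp.url)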

Less humorously, shortly after GDPR came into force a major publisher issued new freelance contracts that shift the liability for violations onto freelances. That is, if I do something that gets the company sued for GDPR violations, in their world I indemnify them.

And then there are the absurd and continuing shenanigans of ICANN, which is supposed to be a global multi-stakeholder organization modeling a new type of international governance, but seems so unable to shake its American origins that it can't conceive of laws it can't bend to its will.

Years ago, I recall that the New York Times, which now embraces being global, paywalled non-US readers because we were of no interest to their advertisers. For that reason, it seems likely that Tronc and the others see little profit in a European audience. They're struggling already; it may be hard to justify the expenditure on changing their systems for a group of foreign deadbeats. At the same time, though, their subscribers are annoyed that they can't access their home paper while traveling.

On the good news side, the 144 local daily newspapers and hundreds of other publications belonging to GateHouse Media seem to function perfectly well. The most fun was NPR, which briefly offered two alternatives: accept cookies or view in plain text. As someone commented on Twitter, it was like time-traveling back to 1996.

The intended consequence has been to change a lot of data practices. The Reuters Institute finds that the use of third-party cookies is down 22% on European news sites in the three months GDPR has been in force - and 45% on UK news sites. A couple of days after GDPR came into force, web developer Marcel Freinbichler did a US-vs-EU comparison on USA Today: load time dropped from 45 seconds to three, JavaScript files from 124 to zero, and requests from more than 500 to 34.
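
Proper load-time measurement needs a full browser, but you can approximate the script-count part of such a comparison with a few lines of Python. A rough sketch, assuming the third-party requests and beautifulsoup4 libraries; run it once directly and once through an EU proxy or VPN and compare:

    import time
    import requests
    from bs4 import BeautifulSoup

    def page_stats(url):
        start = time.monotonic()
        resp = requests.get(url, timeout=30)
        elapsed = time.monotonic() - start
        # Count the script tags in the delivered HTML; trackers pulled
        # in later by other scripts at runtime won't show up here.
        soup = BeautifulSoup(resp.text, "html.parser")
        return elapsed, len(soup.find_all("script"))

    elapsed, scripts = page_stats("https://www.usatoday.com/")
    print("fetched in %.1fs with %d script tags" % (elapsed, scripts))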

But many (and not just US sites) are still not getting the message, or are mangling it. For example, numerous sites now display boxes listing the many types of cookies they use and offering chances to opt in or out. A very few of these are actually well-designed, so you can quickly opt out of whole classes of cookies (advertising, tracking...) and get on with reading whatever you came to the site for. Others are clearly designed to make it as difficult as possible to opt out; these sites want you to visit a half-dozen other sites to set controls. Still others say that if you click the button or continue using the site your consent will be presumed. Another group say here's the policy ("we collect your data"), click to continue, and offer no alternative other than to go away. Not a lawyer - but sites are supposed to obtain explicit consent for collecting data on an opt-in basis, not assume consent on an opt-out basis while making it onerous to object.

The reality is that it is far, far easier to install ad blockers - such as EFF's Privacy Badger - than to navigate these terrible user interfaces. In six months, I expect to see surveys coming from American site owners saying that most people agree to accept advertising tracking, and what they will mean is that people clicked OK, trusting their ad blockers would protect them.

None of this is what GDPR was meant to do. The intended consequence is to protect citizens and redress the balance of power; exposing exploitative advertising practices and companies' dependence on "surveillance capitalism" is a good thing. Unfortunately, many Americans seem to be taking the view that if they just refuse service the law will go away. That approach hasn't worked since Usenet.


Illustrations: Personally collected screenshots.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 20, 2018

Competing dangerously

It is just over a year since the EU fined Google what seemed a huge amount, and here we are again: this week the EU commissioner for competition, Margrethe Vestager, levied an even bigger €4.34 billion fine over "serious illegal behavior". At issue was Google's licensing terms for its Android apps and services, which essentially leveraged its ownership of the operating system to ensure its continued market dominance in search as the world moved to mobile. Google has said it will appeal; it is also appealing the 2017 fine. The present ruling gives the company 90 days to change behavior or face further fines of up to 5% of daily worldwide turnover.

Google's response is that its licensing rules have enabled it to offer Android to manufacturers free of charge, have made Android phones easier to use, and are efficient for both developers and consumers. The ruling, writes CEO Sundar Pichai, will "upset the balance of the Android ecosystem".

Google's claim that users are free to install other browsers and search engines and are used to downloading apps is true but specious. It's widely known that 95% of users never change default settings. Defaults *matter*, and Google certainly knows this. When you reach a certain size - Android holds 80% of European and worldwide smart mobile devices, and 95% of the licensable mobile market outside of China - the decisions you make about choice architecture determine the behavior of large populations.

Also, the EU's ruling isn't about a user's specific choice on their individual smartphone. Instead, it's based on three findings: 1) Google's licensing terms made access to the Play Store contingent on pre-installing Google's search app and Chrome; 2) Google paid some large manufacturers and network operators to exclusively pre-install Google's search app; 3) Google prevented manufacturers that pre-install Google apps from selling *any* devices using non-Google-approved ("forked") versions of Android. It puts the starting date at 2011, "when Google became dominant".

There are significant similarities here to the US government's 1998 antitrust case against Microsoft over tying Internet Explorer to Windows. Back then, Microsoft was the Big Evil on the block, and there were serious concerns that it would use Internet Explorer as a vector for turning the web into a proprietary system under its control. For a good account, see Charles H. Ferguson's 1999 book, High St@kes, No Prisoners. Ferguson would know: his web page design start-up, Vermeer, was the subject of an acquisition battle between Microsoft and Netscape. Google, which was founded in 1998, ultimately benefited from that case, because it helped keep the way open for "alternative" browsers such as Google's own Chrome.

There are also similarities to the EU's 2004 ruling against Microsoft, which required the company to stop bundling its media player with Windows and to disclose the information manufacturers needed to integrate non-Microsoft networking and streaming software. The EU's fine was the largest-ever at the time: €497 million. At that point, media players seemed like important gateways to content. The significant gateway drug turned out to be Web browsers; either way, Microsoft and streaming have both prospered.

Since 1998, however, in another example of EU/US divergence, the US has largely abandoned enforcing anti-competition law. As Lina M. Khan pointed out last year, it's no longer the case that waiting will produce two guys in a garage with a new technology that up-ends the market and its biggest players. The EU explains carefully in its announcement that Android is different from Apple's iOS or Blackberry because, as vertically integrated companies that do not license their products, Apple and Blackberry are not part of the same market. In the Android market, however, it says, "...it was Google - and not users, app developers, and the market - that effectively determined which operating systems could prosper."

Too little, too late, some are complaining, and more or less correctly: the time for this action was 2009; even better, says the New York Times, block in advance the mergers that are creating these giants. Antitrust actions against technology companies are almost always a decade late. Others buy Google's argument that consumers will suffer, but Google is a smart company full of smart engineers who are entirely capable of figuring out well-designed yet neutral ways to present choices, just as Microsoft did before it.

There's additional speculation that Google might have to recoup lost revenues by charging licensing fees; that Samsung might be the big winner, since it already has its own full competitive suite of apps; and that the EU should fine Apple, too, on the basis that the company's closed system bars users from making *any* unapproved choices.

Personally, I wish the EU had applied more attention to the ways Google leverages the operating system to enable user tracking to fuel its advertising business. The requirement to tie every phone to a Gmail address is an obvious candidate for regulatory disruption; so is the requirement to use it to access the Play Store. The difficulty of operating a phone without being signed into Google has ratcheted up over time - and it seems wholly unnecessary *unless* the purpose is to make it easier to do user tracking. This issue may yet find focus under GDPR.

Illustrations: Margrethe Vestager.


Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 13, 2018

Exporting the Second Amendment

One thing about a fast-moving world in a time of technological change is that it's easy to lose track of things in the onslaught. This week alone, the UK Information Commissioner's Office fined Facebook the pre-GDPR maximum £500,000; Uber is firing human safety drivers because it's scaling back its tests of autonomous vehicles; and Twitter is currently deleting more than 1 million fake accounts a *day*.

Until a couple of days ago, one such forgotten moment in internet history was the 2013 takedown of the 3D printing designs site Defcad after the US government demanded the removal of its blueprints for various guns. In the years since, Andy Greenberg writes at Wired, Defcad owner Cody Wilson went on to sue the US State Department, arguing that in demanding removal of his gun blueprints from the internet the government was violating both the First Amendment (freedom of speech) and the Second (the right to bear arms). Wilson has now effectively won his case via a settlement.

It's impossible for anyone with a long memory of the internet's development to read this and not be immediately reminded of the early 1990s battles surrounding the PC-based encryption software PGP. In a 1993 interview for a Guardian piece about the investigation into him, PGP creator Phil Zimmermann explicitly argued that keeping strong cryptography available for public use, like the right to bear arms enshrined in the Second Amendment, was essential to limit the power of the state.

The reality is that crypto is much more of a leveler than guns are. Few governments are so small that a group of civilians can match their military might. With crypto, the balance is different: in World War II, only governments had enough resources to devise and crack the strongest encryption, but today, which government has a cluster the size of GAFA's?

More immediately relevant is the fact that the law the US government invoked in both cases - Wilson and Zimmermann - is the same one: the International Traffic in Arms Regulations. Based on crypto's role in World War II, ITAR classified strong encryption as a weapon of strategic importance and restricted its export. The Zimmermann investigation focused on whether he had exported PGP to other countries by uploading it to the internet. The contemporaneous Computers, Freedom, and Privacy conferences quivered with impassioned fury over the US's insistence that export restrictions were essential. It all changed around 1996, when cryptographer Daniel Bernstein won his court case against the US government over ITAR's restrictions. By then cryptography's importance in ecommerce made restrictions untenable anyway. Lifting the restrictions did not end the arguments over law enforcement access; these continue today.

The battles over cryptography, however, are about a technology that is powerfully important in preserving the privacy and security of everyone's data, from banks to retailers to massive numbers of innocent citizens. Human rights organizations argue that the vast majority of us, who are innocent, have a right to protect the confidentiality of the records we keep of our conversations with our doctors, lawyers, and best friends. In addition, the issues surrounding encryption are the same irrespective of location and timing. For nearly three decades myriad governments have cited the dangers of terrorists, drug dealers, pedophiles, and organized crime in demanding free access to encrypted data. Similarly, privacy activists worldwide have responded with the need to protect journalists, whistleblowers, human rights activists, victims of domestic violence, and other vulnerable people from secret snooping, and with the wrongness of mass surveillance.

Arguments over guns, however, play out as differently outside the US as arguments about data protection, competition, and antitrust laws do. Put simply, outside the US there is no Second Amendment, and the idea that guns should be restricted is much less controversial. European friends often comment on how little Americans trust their government.

For this reason, it's likely that publishing blueprints for DIY guns, though now effectively legal in the US, will give other governments a new excuse for censoring the internet. In the US, the Electronic Frontier Foundation backed Wilson as a matter of protecting free speech; it's doubtful that human rights organizations elsewhere will see gun designs in the same way.

One major change since this case first came up: 3D printing has not become anything like the mass phenomenon its proponents were predicting in 2013, when many thought it was the coming thing and scientists like Hod Lipson were imagining the new shapes and functions that strange materials, composited molecule by molecule, would imminently create. Even then, few people had 3D printers in their homes.

But today...although 3D printing has made some inroads in manufacturing and prototyping, consumers still find 3D printers too expensive for their limited usefulness, even though they can be fun. Some gain access to them through Hackspaces/FabLabs/Makerspaces, but that movement, though important and valuable, seems similarly to have largely stalled a few years back. Lipson's future may still happen. But it isn't happening yet to any appreciable degree.

Instead, the future that's rushing at us is the Internet of Things, where the materials are largely familiar and what's different is that they're laced with electronics that make them programmable. There is more to worry about in "smart" guns than in readily downloadable designs for guns.


Illustrations: Ultimaker 3D printer in London, 2014 (via Wikimedia)

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 6, 2018

This is us

After months of anxiety among digital rights campaigners such as the Open Rights Group and the Electronic Frontier Foundation, the European Parliament has voted 318-278 against fast-tracking a particularly damaging set of proposed changes to copyright law.

There will be a further vote on September 10, so as a number of commentators are reminding us on Twitter, it's not over yet.

The details of the European Commission's alarmingly wrong-headed approach have been thoroughly hashed out for the last year by Glyn Moody. The two main bones of contention are euphoniously known as Article 11 and Article 13. Article 11 (the "link tax") would give publishers the right to require licenses (that is, payment) for the text accompanying links shared on social media, and Article 13 (the "upload filter") would require sites hosting user content to block uploads of copyrighted material.

Muffett quite rightly points out the astonishing characterization, in a Billboard interview with MEP Helga Trüpel, of the objections to Articles 11 and 13 as "pro-Google". There's a sudden outburst of people making a similar error: even the Guardian's initial report saw the vote as letting tech giants (specifically, YouTube) off the hook for sharing their revenues. Paul McCartney's last-minute plea hasn't helped this perception. What was an argument about the open internet is now being characterized as a tussle over revenue share between a much-loved billionaire singer/songwriter and a greedy tech giant that exploits artists.

Yet the opposition was never about Google. In fact, probably most of the active opponents of this expansion of copyright and liability would be lobbying *against* Google on subjects like privacy, data protection, tax avoidance, and market power. We just happen to agree with Google on this particular topic because we are aware that forcing all sites to assume liability for the content their users post will damage the internet for everyone *else*. Google - and its YouTube subsidiary - has both the technology and the financing to play the licensing game.

But licensing and royalties are a separate issue from mandating that all sites block unauthorized uploads. The former is about sharing revenues; the latter is about copyright enforcement, and conflating them helps no one. The preventive "copyright filter" that appears essential for compliance with Article 13 would fail the "prior restraint" test of the US First Amendment - not that the EU needs to care about that. As copyright-and-technology consultant Bill Rosenblatt writes, licensing is a mess that this law will do nothing to fix. If artists and their rights holders want a better share of revenues, they could make it a *lot* easier for people to license their work. This is a problem they have to fix themselves, rather than requiring lawmakers to solve it for them by placing the burden on the rest of us. The laws are what they are because for generations rights holders made them.

Article 11, which is or is not a link tax depending on who you listen to, is another matter. Germany (2013) and Spain (2014) have already tried something similar, and in both cases it was widely acknowledged to have been a mistake - so much so that one of the opponents of this new attempt is the Spanish newspaper El País.

My guess is that the emphasis, by those who want these laws passed, on Google's role in lobbying against them - for example, Digital Music News reports that Google spent more than $36 million opposing Article 13 - is preparation for the next round in September. Google and Facebook are increasingly the targets people focus on when they're thinking about internet regulation. Recast the battle as one between deserving artists and a couple of greedy American big businesses, the thinking seems to go, and it will be an easier sell to legislators.

But there are two of them and billions of us, and the opposition to Articles 11 and 13 was never about them. The 2012 SOPA and PIPA protests and the street protests against ACTA were certainly not about protecting Google or any other large technology company. No one goes out on the street or dresses up their website in protest banners in order to advocate for *Google*. They do it because what's been proposed threatens to affect them personally.

There's even a sound economic argument: had these proposed laws been in place in 1998, when Sergey Brin and Larry Page were meeting in dorm rooms, Google would not exist. Nor would thousands of other big businesses. Granted, most of these have not originated in the EU, but that's not a reason to wreck the open internet. Instead, that's a reason to find ways to make the internet hospitable to newcomers with bright ideas.

This debate is about the rest of us and our access to the internet. We - for some definition of "we" - were against these kinds of measures when they first surfaced in the early 1990s, when there were no tech giants to oppose them, and for the same reasons: the internet should be open to all of us.

Let the amendments begin.

Illustrations: Protesters against ACTA in London, 2012 (via Wikimedia)

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 28, 2018

Divergence

Last September, Open Markets Institute director of legal policy Lina M. Khan published a lengthy law review article discussing two related topics: American courts' narrowing of antitrust law to focus on pricing and profits rather than promoting competition and balancing market power, and the application of that approach to Amazon in particular. The US's present conception of antitrust law, she writes and we see in action, provides no lens through which to curb the power of data-driven platforms. If cheaper is always better, what can possibly be wrong with free?

This week, the US Supreme Court provided another look at this sort of reasoning when it issued its opinion in Ohio v. American Express. The short version, as Henry Farrell explains on Twitter: SCOTUS sided with American Express, and in doing so made it harder to bring antitrust action against the big technology companies in the US *and* widened the gulf between the EU and US approaches to such things.

As Erik Hovenkamp explains in a paper analyzing the case (PDF), credit card transactions, like online platforms, are multi-sided markets. That is, they connect two distinguishable groups of customers whose relationships with the platform are markedly different. For Facebook and Google, these groups are consumers and advertisers; for credit card companies they're consumers and retailers. The intermediary platforms make their money by taking a small slice of each transaction. Credit card companies' slice is a percentage of the value of the transaction, paid directly by the merchant and indirectly by card holders through fees and interest; social media's slice is the revenue from the advertising it can display when you interact with your friends. Historically, American Express has charged merchants a higher commission than other cards, money the merchant can't reclaim by selectively raising prices. Network effects - the fact that the more people use them the more useful they are to users - mean all these platforms benefit hugely from scale.

Ohio v. American Express began in 2010, when the US Department of Justice, eventually joined by 17 states, filed a civil antitrust suit against American Express, Visa, and Mastercard. At issue were the "anti-steering" clauses in their merchant contracts, which barred merchants from steering customers toward cheaper (to the merchant) forms of payment.

Visa and Mastercard settled and removed the language. In 2015, the District Court ruled in favor of the DoJ. American Express then won in the 2nd Circuit Appeals Court in 2016; 11 of the states appealed. Now, SCOTUS has upheld the circuit court, and the precedent it sets, Beth Farmer writes at SCOTUSblog, suggests that the plaintiffs in future antitrust cases covering two-sided markets will have to show that both sides have suffered harm in order to succeed. Applied to Facebook, this judgment would appear to say that harm to users (the loss of privacy) or to society at large (gamed elections) wouldn't count if no advertisers were harmed.

Farrell goes on to note the EU's very different tack, such as last year's fine against Google for abusing its market dominance. Americans also underestimate the importance of Max Schrems's case against Google, Instagram, WhatsApp, and Facebook, launched the day the General Data Protection Regulation came into force. For 20 years, American companies have tried to cut a deal with data protection law, but, as Simon Davies warned in 1999, this is no more feasible than Europeans doing the same to the US First Amendment.

Schrems's case is that the all-or-nothing approach ("give us your data or go away") is not the law's required meaningful consent. In its new report, Deceived by Design, the Norwegian Consumer Council finds plenty of evidence to back up this contention. After studying the privacy settings provided by Windows 10, Facebook, and Google, the NCC argues that the latter two in particular deliberately make it easier for users to accept the most intrusive options than to opt out; they also use "nudge" techniques to stress the benefits of accepting the intrusive defaults.

"This is not privacy by default," the authors conclude after showing that opting into Facebook's most open privacy setting requires only a single click, while opting out requires foraging through "Manage your privacy settings". These are dark patterns - nudges intended to mislead users into doing things they wouldn't normally choose. In advising users to turn on photo tagging, for example, Facebook implies that choosing otherwise will harm the visually impaired using screenreaders, a technique the Dark Patterns website calls confirmshaming. Five NGOs have written to EU Data Protection Board chair Andrea Jelinek to highlight the report and asking her to investigate further.

The US has no equivalent of GDPR on which to base regulatory action, even if it were inclined to take any, and the American Express case makes clear that it has little interest in applying antitrust law to curb market power. For now, the EU is the only other region or government large enough to push back and make it stick. The initial response from some American companies, ghosting everyone in the EU, is unlikely to be sufficient. It's hard to see a reconciliation of these diverging approaches any time soon.

Illustrations: Silicon Valleyopoly game, 1996.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 3, 2018

Data protection panic

Wherever you go at the moment someone is asking panicked questions about the General Data Protection Regulation, which comes into effect on May 25, 2018. The countdown above appeared at a privacy engineering workshop on April 27, and looked ominous enough for Buffy to want to take a whack at it.

Every day new emails arrive asking me to confirm I want to stay on various mailing lists and announcing new privacy policies. Most seem to have grasped the idea that positive consent is required, but some arrive saying you need do nothing to stay on their list. I am not a lawyer, but I know that's backwards. The new regime is opt-in, not opt-out. You cannot extract consent from silence.

At the local computer repair place (hard drive failure, don't ask), where my desktop was being punished with diagnostics, the owner asks, "Is encryption necessary? A customer is asking." We agree, from our own reading, that encryption is not *required*, but that liability is less if the data is encrypted and therefore can't be read, and as a consequence sold, reidentified, sprayed across the internet, or used for blackmail. And you don't have to report it as a data breach or notify customers. I explain this to my tennis club and another small organization. Then I remember: crypto is ridiculously hard to implement.
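
To be fair, the mechanics of encrypting a data file at rest are the easy part; key management is what makes crypto ridiculously hard. A minimal sketch using the Fernet recipe from Python's third-party cryptography library (file names are illustrative):

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # the hard part: store this somewhere
                                  # safer than the data it protects
    fernet = Fernet(key)

    with open("members.csv", "rb") as f:
        ciphertext = fernet.encrypt(f.read())   # authenticated encryption
    with open("members.csv.enc", "wb") as f:
        f.write(ciphertext)

    # Later, with the same key:
    # plaintext = Fernet(key).decrypt(ciphertext)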

The UK's Information Commissioner's Office has a helpful 12-step guide to assessing what you have to do. My reading, for example, is that a small community interest organization does not have to register or appoint a data protection officer, though it does need to agree who will answer any data protection complaints it gets. The organization's web host, however, has sent a contract written in data-protectionese, a particularly arcane subset of lawyerese. Asked to look at it, I blanched and started trying to think which of my privacy lawyer friends might be most approachable. Then I realized: tear up that contract and write a new one in English that says who's responsible for what. Someone probably found a model contract somewhere that was written for businesses with in-house lawyers who understood it.

So much is about questioning your assumptions. You think the organization you're involved with has acquired all its data one record at a time when people have signed up to become members. Well, is that true? Have you ever used anyone else's mailing list to trawl for new members? Have you ever shared yours with another organization because you were jointly running a conference? How many copies of the data exist and where are they stored, and how? These are audits few ever stop to do. The threat of the loss of 4% of global revenues is very effective in making them happen.

The computer repair shop owner was beginning to grasp this. The shop asks new customers to fill out a form, and then adds their information to its database, which means that the next time you bring your machine in they have its whole service history. We mulled over this form for a bit. "I should add a line at the bottom," he said. Yes: a line that asks for permission to include the person on their mailing list for offers and discounts and that says the data won't be shared.

Then I asked him, "How much benefit does the shop get from emailing these offers?" Um, well...none, really. People sometimes come in and ask about them, but they don't buy. So why do them? Good point. The line shrank to something on the order of: "We do not share your data with any third parties".

This is in fact the effect GDPR is intended to have: make people rethink their practices. Some people don't need to keep all the data they have - one organization I'm involved with has a few thousand long-lapsed members in its database with no clear way to find and delete them. For others, the marketing they do isn't really worth the customer irritation. Getting organizations to clean up just those two things seems worth the trouble.

But then he asked, "Who is going to enforce this?" And the reality is there is probably no one until there's a complaint. In the UK, the ICO's budget (PDF) is widely held to be inadequate, and it's not increasing. Elsewhere, it took the tenacity of Max Schrems to get regulators to take the actions that eventually brought down Safe Harbor. A small shop would be hugely unlucky to be a target of regulatory action unless customers were complaining, and possibly not even then. Except in rare cases these aren't the people we want targeted; we want the regulators to focus first on egregious harms, repeat offenders with great power, such as Google, and incessant offenders, such as Facebook, whose list of apologies and missteps includes multiple entries for every year of its existence. No wonder the WhatsApp CEO quit (though there's little else he can do, since he sold his company).

Nonetheless, it's the smallest companies and charities who are in the greatest panic about this. Possibly for good reason: there is mounting concern that GDPR will be the lever via which the big data-driven companies lock out small competitors and start-ups. Undesirable unintended consequences, if that's the outcome.


Illustrations: GDPR countdown clock on April 27.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 30, 2018

Conventional wisdom

One of the problems the internet was always likely to face as a global medium was the conflict over who gets to make the rules and whose rules get to matter. So far, it's been possible to kick the can down the road for Future Government to figure out while each country makes its own rules. It's clear, though, that this is not a workable long-term strategy, if only because the longer we go on without equitable ways of solving conflicts, the more entrenched the ad hoc workarounds and because-we-can approaches will become. We've been fighting the same battles for nearly 30 years now.

I didn't realize how much I longed for a change of battleground until last week's Internet Law Works-in-Progress paper workshop, when for the first time I heard an approach that sounded like it might move the conversation beyond the crypto wars, the censorship battles, and the what-did-Facebook-do-to-our-democracy anguish. The paper was presented by Asaf Lubin, a Yale JSD candidate whose background includes a fellowship at Privacy International. In it, he suggested that while each of the many cases of international legal clash has been considered separately by the courts, the reality is that together they all form a pattern.

The cases Lubin is talking about include the obvious ones, such as United States v. Microsoft, currently under consideration in the US Supreme Court, and Apple v. FBI. But they also include the prehistoric cases that created the legal environment we've lived with for the last 25 years: 1996's US v. Thomas, the first jurisdictional dispute, which pitted the community standards of California against those of Tennessee (making it a toss-up whether the US would export the First Amendment or Puritanism); 1995's Stratton Oakmont v. Prodigy, which established that online services could be held liable for the content their users posted; and 1991's Cubby v. CompuServe, which ruled that CompuServe was a distributor, not a publisher, and could not be held liable for user-posted content. The difference in those last two cases: Prodigy exercised some editorial control over postings; CompuServe did not. In the UK, notice-and-takedown rules were codified after Godfrey v. Demon Internet extended defamation law to the internet.

Both access to data - whether encrypted or not - and online content were always likely to repeatedly hit jurisdictional boundaries, and so it's proved. Google is arguing with France over whether right-to-be-forgotten requests should be deindexed worldwide or just in France or the EU. The UK is still planning to require age verification for pornography sites serving UK residents later this year, and is pondering what sort of regulation should be applied to internet platforms in the wake of the last two weeks of Facebook/Cambridge Analytica scandals.

The biggest jurisdictional case, United States v. Microsoft, may have been rendered moot in the last couple of weeks by the highly divisive Clarifying Lawful Overseas Use of Data (CLOUD) Act. Divisive because: the technology companies seem to like it, EFF and CDT argue that it's an erosion of privacy laws because it lowers the standard of review for issuing warrants, and Peter Swire and Jennifer Daskal think it will improve privacy by setting up a mechanism by which the US can review what foreign governments do with the data they're given; they also believe it will serve us all better than if the Supreme Court rules in favor of the Department of Justice (which they consider likely).

Looking at this landscape, "They're being argued in a siloed approach," Lubin said, going on to imagine the thought process of the litigants involved. "I'm only interested in speech...or I'm a Mutual Legal Assistance person and only interested in law enforcement getting data. There are no conversations across fields and no recognition that the problems are the same." In conversations at conferences, he's catalogued reasons for this. Most cases are brought against companies too small to engage in complex litigation and fearful of antagonizing the judge. Larger companies are strategic about which cases they argue and in front of whom; they seek to avoid having "sticky precedents" issued by judges who don't understand the conflicts or the unanticipated consequences. Courts, he said, may not even be the right forums for debating these issues.

The result, he went on to say, is that these debates conflate first-order rules, such as the right balance on privacy and freedom of expression, with second-order rules, such as the right procedures to follow when there's a conflict of laws. To solve the first-order rules, we'd need something like a Geneva Convention, which Lubin thought unlikely to happen.

To reach agreement on the second-order rules, however, he proposes a Hague Convention, which he described as "private international law treaties" that could address the problem of agreeing the rules to follow when laws conflict. To me, as neither a lawyer nor a policy wonk, the idea sounded plausible and interesting: these are not debates that should be solved by either "Our lawyers are bigger and more expensive than your lawyers" or "We have bigger bombs." (Cue Tom Lehrer: "But might makes right...") I have no idea if such an idea can work or be made to happen. But it's the first constructive new suggestion I've heard anyone make for changing the conversation in a long, long time.


Illustrations: The Hague's Grote Markt (via Wikimedia); Asaf Lubin.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 16, 2018

Homeland insecurity

"To the young people," a security practitioner said at a recent meeting, speaking of a group he'd been working with, "it's life lived on their phone."

He was referring to the tendency for adults to talk to kids about fake news, or sexting, or sexual abuse and recruitment, and so on as "online" dangers the adults want to protect them from. But, as this practitioner was trying to explain (and we have said here before), "online" isn't separate to them. Instead, all these issues are part of the context of pressures, relationships, economics, and competition that makes up their lives. This will become increasingly true as widely deployed sensors and hybrid cyber-physical systems and tracking become the norm.

This is a real generation gap. Older adults have taken on board each of these phenomena as we've added it into our existing understanding of the world. Watching each arrive singly over time allows the luxury of consideration and the mental space in which to plot a strategy. If you're 12, all of these things are arriving at once as pieces that are coalescing into your picture of the world. Even if you only just finally got your parents to let you have your own phone you've been watching videos on YouTube, FaceTiming your friends, and playing online games all your life.

An important part of "life lived on the phone" is at stake in the UK's data protection bill, which implements the General Data Protection Regulation and is now going through Parliament. The bill carves out some very broad exemptions. Most notably, and opposed by the Open Rights Group and the3million, the bill would remove a person's rights as a data subject in the interests of "effective immigration control". In other words, under this exemption the Home Office could make decisions about where and whether you were allowed to live but never have to tell you the basis for its decisions. Having just had *another* long argument with a different company about whether or not I've ever lived in Iowa, I understand the problem of being unable to authenticate yourself because of poor-quality data.

It's easy for people to overlook laws that "only" affect immigrants, but as Gracie Mae Bradley, an advocacy and policy officer, made clear at this week's The State of Data 2018 event, hosted by Jen Persson, one of the consequences is to move the border from Britain's ports into its hospitals, schools, and banks, which are now supposed to check once a quarter that their 70 million account holders are legitimate. NHS Digital is turning over confidential patient information to help the Home Office locate and deport undocumented individuals. Britain's schools are being pushed to collect nationality data. And, as Persson noted, remarkably few parents even know the National Pupil Database exists, and yet it catalogues highly detailed records of every schoolchild.

"It's obviously not limited to immigrants," Bradley said of the GDPR exemption. "There is no limit on the processes that might apply this exemption". It used to be clear when you were approaching a national border; under these circumstances the border is effectively gummed to your shoe.

The data protection bill also has the usual broad exemptions for law enforcement and national security.

Both this discussion (implicitly) and the security conversation we began with (explicitly) converged on security as a felt, emotional state. Even a British citizen living in their native country in conditions of relative safety - a rich country with good health care, stable governance, relatively little violence, mostly reasonable weather - may feel insecure if they're constantly being required to prove the legitimacy of their existence. Conversely, people may live in objectively more dangerous conditions and yet feel more secure because they know the local government is not eying them suspiciously with a view to telling them to repatriate post-haste.

Put all these things together with other trends, and you have the potential for a very high level of social insecurity that extends far outwards from the enemy class du jour, "illegal immigrants". This in itself is a damaging outcome.

And the potential for social control is enormous. Transport for London is progressively eliminating both cash and its Oyster payment cards in favor of direct payment via credit or debit card. What happens to people who fail the bank's inspection one quarter? How do they pay the bus or tube fare to get to work?

Like gender, immigration status is not the straightforward state many people think. My mother, brought to the US when she was four, often talked about the horror of discovering in her 20s that she was stateless: marrying my American father hadn't, as she imagined, automatically made her an American, and Switzerland had revoked her citizenship because she had married a foreigner. In the 1930s, she was naturalized without question. Now...?

Trying to balance conflicting securities is not new. The data protection bill-in-progress offers the opportunity to redress a serious imbalance, which Persson called, rightly, a "disconnect between policy, legislation, technological change, and people". It is, as she and others said, crucial that the balance of power that data protection represents not be determined by a relatively small, relatively homogeneous group.


Illustrations: 2008 map of nationalities of UK residents (via Wikipedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

February 2, 2018

Schrödinger's citizen

One of the more intriguing panels at this year's Computers, Privacy, and Data Protection (obEgo: I moderated) began with a question from Peter Swire: Can the nationality of the target ever be a justified basis for different surveillance rules?

France, the Netherlands, Sweden, Germany, and the UK, explained Mario Oetheimer, an expert on data protection and international human rights with the European Union Agency for Fundamental Rights, do apply a lower level of safeguards for international surveillance as compared to domestic surveillance. He believes Germany is the only EU country whose surveillance legislation includes nationality criteria.

The UK's Investigatory Powers Act (2016), parts of which were struck down this week in the European Court of Justice, was an example. Oetheimer, whose agency has a report on fundamental rights in surveillance, said introducing nationality-based differences will "trickle down" into an area where safeguards are already relatively underdeveloped and hinder developing further protections.

In his draft paper, Swire favors allowing greater surveillance of non-citizens than citizens. While some countries - he cited the US and Germany - provide greater protection from surveillance to their own citizens than to foreigners, there is little discussion about why that's justified. In the US, he traces the distinction to Watergate, when Nixon's henchmen were caught unacceptably snooping on the opposition political party. "We should have very strong protections in a democracy against surveilling the political opposition and against surveilling the free press." But granting everyone else the same protection, he said, is unsustainable politically and incorrect as a matter of law and philosophy.

This is, of course, a very American view, as the late Caspar Bowden impatiently explained to me in 2013. Elsewhere, human rights - including privacy - are meant to be universal. Still, there is a highly practical reason for governments and politicians to prefer their own citizens: foreigners can't vote them out of office. For this reason (besides being American), I struggle to believe in the durability of any rights granted to non-citizens. The difference seems to me the whole point of having citizens in the first place. At the very least, citizens have the unquestioned right to live in and enter the country, which non-citizens do not have. But, as Bowden might have said, there is a difference between *fewer* rights and *no* rights. Before that conversation, I did not really understand American exceptionalism.

Like so many other things, citizenship and nationality are multi-dimensional rather than binary. Swire argues that it's partly a matter of jurisdiction: governments have greater ability and authority to ask for information about their own citizens. Here is my reference to Schrödinger's cat: one may be a dual citizen, simultaneously both foreign and not-foreign and regarded suspiciously by all.

Joseph Cannataci disagreed, saying that nationality does not matter: "If a person is a threat, I don't care if he has three European passports...The threat assessment should reign supreme."

German privacy advocate Thorsten Wetzling outlined Germany's surveillance law, recently reformulated in response to the Snowden revelations. Germany applies three categories to data collection: domestic, domestic-foreign (or "international"), and foreign. "International" means that one end of the communication is in Germany; "foreign" means that both ends are outside the country. The new law specifically limits data collected on those outside Germany and subjects non-targeted foreign data collection to new judicial oversight.
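
Expressed as toy code, the three categories reduce to a rule about a communication's two endpoints (the function and labels below are my formulation, not the statute's terms):

    def classify(country_a, country_b):
        ends_in_germany = [c == "DE" for c in (country_a, country_b)]
        if all(ends_in_germany):
            return "domestic"        # both ends in Germany
        if any(ends_in_germany):
            return "international"   # one end in Germany ("domestic-foreign")
        return "foreign"             # both ends outside Germany

    assert classify("DE", "DE") == "domestic"
    assert classify("DE", "US") == "international"
    assert classify("FR", "US") == "foreign"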

Wetzling believes we might find benefits in extending greater protection to foreigners than accrues to domestic citizens. Extending human rights protection would mean "the global practice of intelligence remains within limits", and would give a country the standing to suggest to other countries that they reciprocate. This had some resonance for me: I remember hearing the computer scientist George Danezis say something to the effect that, since each of us has only a few nationalities, at any given time we can each be surveilled by a couple of hundred other countries. We can have a race to the bottom...or to the top.

One of Swire's points was that one reason to allow greater surveillance of foreigners is that it's harder to conduct. Given that technology is washing away that added difficulty, Amie Stepanovich asked, shouldn't we recognize that? Like Wetzling, she suggested that privacy is a public good; the greater the number of people who have it the more we may benefit.

As abstruse as these legal points may sound, ultimately the US's refusal to grant human rights to foreigners is part of what's at stake in determining whether the US's privacy regime is strong enough for the EU-US Privacy Shield to pass its legal challenges. As the internet continues to raise jurisdictional disputes, Swire's question will take its place alongside others, such as how much location should matter when law enforcement wants access to data (Microsoft v. United States, due to be heard in the US Supreme Court on February 27) and whether countries will follow the UK's lead in claiming extraterritorial jurisdiction over data and the right to bulk-hack computers around the world.

But, said Cannataci in disputing Swire's arguments, the US Constitution says, "All men are created equal". Yes, it does. But in "men" the Founding Fathers did not include women, black people, slaves, people who didn't own property.... "They didn't mean it," I summarized. Replied Cannataci: "But they *should* have." Indeed.


Illustrations: The panel, left to right: Cannataci, Swire, Stepanovich, Grossman, Wetzling, Oetheimer.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 26, 2018

Bodies in the clouds

This year's Computers, Privacy, and Data Protection conference had the theme "The Internet of Bodies". I chaired the "Bodies in the Clouds" panel, which was convened by Lucie Krahulcova of Access Now, and this is something like what I may have said to introduce it.

The notion of "cyberspace" as a separate space derives from the early days of the internet, when most people outside of universities or large science research departments had to dial up and wait while modems mated to get there. Even those who had those permanent connections were often offline in other parts of their lives. Crucially, the people you met in that virtual land were strangers, and it was easy to think there were no consequences in real life.

In 2013, New America Foundation co-founder Michael Lind called cyberspace an idea that makes you dumber the moment you learn of it and begged us to stop believing the internet is a mythical place that governments and corporations are wrongfully invading. While I disagreed, I can see that those with no memory of those early days might see it that way. Today's 30-year-olds were 19 when the iPhone arrived, 18 when Facebook became a thing, 16 when Google went public, and eight when Netscape IPO'd. They have grown up alongside iTunes, digital maps, and GPS, surrounded online by everyone they know. "Cyberspace" isn't somewhere they go; online is just an extension of their phones or laptops.

And yet, many of the laws that now govern the internet were devised with the separate space idea in mind. "Cyberspace", unsurprisingly, turned out not to be exempt from the laws governing consumer fraud, copyright, defamation, libel, drug trafficking, or finance. Many new laws passed in this period are intended to contain what appeared to legislators with little online experience to be a dangerous new threat. These laws are about to come back to bite us.

At the moment there is still *some* boundary: we are aware that map lookups, video sites, and even Siri requests require online access to answer, just as we know when we buy a device like a "smart coffee maker" or a scale that tweets our weight that it's externally connected, even if we don't fully understand the consequences. We are not puzzled by the absence of online connections as we would be if the sun disappeared and we didn't know what an eclipse was.

Security experts had long warned that traditional manufacturers were not grasping the dangers of adding wireless internet connections to their products, and in 2016 they were proved right, when the Mirai botnet harnessed video recorders, routers, baby monitors, and CCTV cameras to deliver monster attacks on internet sites and service providers.

For the last few years, I've called this the invasion of the physical world by cyberspace. The cyber-physical construct of the Internet of Things will pose many more challenges to security, privacy, and data protection law. The systems we are beginning to build will be vastly more complex than the systems of the past, involving many more devices, many more types of devices, and many more service providers. An automated city parking system might have meters, license plate readers, a payment system, middleware gateways to link all these, and a wireless ISP. Understanding who's responsible when such systems go wrong or how to exercise our privacy rights will be difficult. The boundary we can still see is vanishing, as is our control over it.

For example, how do we opt out of physical tracking when there are sensors everywhere? It's clear that the Cookie Directive approach to consent won't work in the physical world (though it would give a new meaning to "no-go areas").

Today's devices are already creating new opportunities to probe previously inaccessible parts of our lives. Police have asked for data from Amazon Echos in an Arkansas murder case. In Germany, investigators used the suspect's Apple Health app while re-enacting the steps they believed he took, and compared the results to the data the app collected at the time of the crime to prove his guilt.

A friend who buys and turns on an Amazon Echo is deemed to have accepted its privacy policy. Does visiting their home mean I've accepted it too? What happens to data about me that the Echo has collected if I am not a suspect? And if it controls their whole house, how do I get it to work after they've gone to bed?

At Privacy Law Scholars in 2016, Andrea Matwyshyn introduced a new idea: the Internet of Bodies, the theme of this year's CPDP. As she spotted then, the Internet of Bodies makes us dependent for our bodily integrity and ability to function on this hybrid ecosystem. At that first discussion of what I'm sure will be an important topic for many years to come, someone commented, "A pancreas has never reported to the cloud before."

A few weeks ago, a small American ISP sent a letter to warn a copyright-infringing subscriber that continuing to attract complaints would cause the ISP to throttle their bandwidth, potentially interfering with devices requiring continuous connections, such as CCTV monitoring and thermostats. The kind of conflict this suggests - copyright laws designed for "cyberspace" touching our physical ability to stay warm and alive in a cold snap - is what awaits us now.

Illustrations: Andrea Matwyshyn.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


January 19, 2018

Expressionism

"Regulatory oversight is going to be inevitable," Adam Kinsley, Sky's director of policy, predicted on Tuesday. He was not alone in saying this is the internet's direction of travel, and we shouldn't feel too bad about it. "Regulation is not inherently bad," suggested Facebook's UK public policy manager, Karim Palant.

The occasion was the Westminster eForum's seminar on internet regulation (PDF). The discussion focused on the key question, posed at the outset by digital policy consultant Julian Coles: who is responsible, and for what? Free speech fundamentalists find it easy to condemn anything smacking of censorship. Yet even some of them are demanding proactive removal of some types of content.

Two government initiatives sparked this discussion. The first is the UK's Internet Safety Strategy green paper, published last October. Two aspects grabbed initial attention: a levy on social media companies and age verification for pornography sites, now assigned to the British Board of Film Classification to oversee. But there was always more to pick at, as Evelyn Douek helpfully summarized at Lawfare. Coles' question is fundamental, and 2018 may be its defining moment.

The second, noted by Graham Smith, was raised by the European Commission at the December 2017 Global Internet Forum, and aims to force technology companies to take down extremist content within one to two hours of posting. Smith's description: "...act as detective, informant, arresting officer, prosecutor, defense, judge, jury, and prison warder all at once." Open Rights Group executive director Jim Killock added later that it's unreasonable to expect technology companies to do the right thing perfectly within a set period at scale, making no mistakes.

As Coles said - and as Old Net Curmudgeons remember - the present state of the law was largely set in the mid-to-late 1990s, when the goal of fostering innovation led both the US Congress (via Section 230 of the Communications Decency Act, 1996) and the EU (via the Electronic Commerce Directive, 2000) to hold that ISPs are not liable for the content they carry.

However, those decisions also had precedents of their own. The 1991 US case Cubby v. CompuServe ended in CompuServe's favor, holding it not liable for defamatory content posted to one of its online forums. In 2000, the UK's Godfrey v. Demon Internet successfully applied libel law to Usenet postings, ultimately creating the notice and takedown rules we still live by today. Also crucial in shaping those rules was Scientology's actions in 1994-1995 to remove its top-level secret documents from the internet.

In the simpler landscape when these laws were drafted, the distinction between access providers and content providers was cleaner. Before then, the early online services - CompuServe, AOL, and smaller efforts such as the WELL, CIX, and many others - were hybrids, social media platforms by a different name, providing access and a platform for content providers, who curated user postings and chat.

Eventually, when social media were "invented" (Coles's term; more correctly, when everything migrated to the web), today's GAFA (or, in the US, FAANG) inherited that freedom from liability. GAFA/FAANG straddle that briefly sharp boundary between pipes and content like the dead body on the Quebec-Ontario boundary sign in the Canadian film Bon Cop, Bad Cop. The vertical integration that is proceeding apace - Verizon buying AOL and Yahoo!; Comcast buying NBC Universal; BT buying TV sports rights - is setting up the antitrust cases of 2030 and ensuring that the biggest companies - especially Amazon - play many roles in the internet ecosystem. They might be too big for governments to regulate on their own (see also: paying taxes), but public and advertisers' opinions are joining in.

All of this history has shaped the status quo that Kinsley seems to perceive as somewhat unfair when he noted that the same video that is regulated for TV broadcast is not for Facebook streaming. Palant noted that Facebook isn't exactly regulation-free. Contrary to popular belief, he said, many aspects of the industry, such as data and advertising, are already "heavily regulated". The present focus, however, is content, a different matter. It was Smith who explained why change is not simple: "No one is saying the internet is not subject to general law. But if [Kinsley] is suggesting TV-like regulation...where it will end up is applying to newspapers online." The Authority for Television on Demand, active from 2010 to 2015, already tested this, he said, and the Sun newspaper got it struck down. TV broadcasting's regulatory regime was the exception, Smith argued, driven by spectrum scarcity and licensing, neither of which applies to the internet.

New independent Internet Watch Foundation chair Andrew Puddephatt listed five key lessons from the IWF's accumulated 21 years of experience: removing content requires clear legal definitions; independence is essential; human analysts should review takedowns, which have to be automated for reasons of scale; outside independent audits are also necessary; companies should be transparent about their content removal processes.

If there is going to be a regulatory system, this list is a good place to start. So far, it's far from the UK's present system. As Killock explained, PIPCU, CTIRU, and Nominet all make censorship decisions - but transparency, accountability, oversight, and the ability to appeal are lacking.


Illustrations: "Escher background" (from Discarding Images, Boccaccio, "Des cleres et nobles femmes" (French version of "De mulieribus claris"), France ca. 1488-1496, BnF, Français 599, fol. 89v).


Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


November 23, 2017

Twister

"We were kids working on the new stuff," said Kevin Werbach. "Now it's 20 years later and it still feels like that."

Werbach was opening last weekend's "radically interdisciplinary" (Geoffrey Garrett) After the Digital Tornado, at which a roomful of internet policy veterans tried to figure out how to fix the internet. As Jaron Lanier showed last week, there's a lot of this where-did-we-all-go-wrong happening.

The Digital Tornado in question was a working paper Werbach wrote in 1997, when he was at the Federal Communications Commission. In it, Werbach sought to pose questions for the future, such as what the role of regulation would be around...well, around now.

Some of the paper is prescient: "The internet is dynamic precisely because it is not dominated by monopolies or governments." Parts are quaint now. Then, the US had 7,000 dial-up ISPs and AOL was the dangerous giant. It seemed reasonable to think that regulation was unnecessary because public internet access had been solved. Now, with minor exceptions, the US's four ISPs have carved up the country among themselves to such an extent that most people have only one ISP to "choose" from.

To that, Gigi Sohn, the co-founder of Public Knowledge, named the early mistake from which she'd learned: "Competition is not a given." Now, 20% of the US population still have no broadband access. Notably, this discussion was taking place days before current FCC chair Ajit Pai announced he would end the network neutrality rules adopted in 2015 under the Obama administration.

Everyone had a pet mistake.

Tim Wu, regarding decisions that made sense for small companies but are damaging now they're huge: "Maybe some of these laws should have sunsetted after ten years."

A computer science professor bemoaned the difficulty of auditing protocols for fairness now that commercial terms and conditions apply.

Another wondered if our mental image of how competition works is wrong. "Why do we think that small companies will take over and stay small?"

Yochai Benkler argued that the old way of reining in market concentration, by watching behavior, no longer works; we understood scale effects but missed network effects.

Right now, market concentration looks like Google-Apple-Microsoft-Amazon-Facebook. Rapid change has meant that the past Big Tech we feared would break the internet has typically been overrun. Yet we can't count on that. In 1997, market concentration meant AOL and, especially, desktop giant Microsoft. Brett Frischmann paused to reminisce that in 1997 AOL's then-CEO Steve Case argued that Americans didn't want broadband. By 2007 the incoming giant was Google. Yet, "Farmville was once an enormous policy concern," Christopher Yoo reminded; so was Second Life. By 2007, Microsoft looked overrun by Google, Apple, and open source; today it remains the third largest tech company. The garage kids can only shove incumbents aside if the landscape lets them in.

"Be Facebook or be eaten by Facebook", said Julia Powles, reflecting today's venture capital reality.

Wu again: "A lot of mergers have been allowed that shouldn't have been." On his list, rather than AOL and Time-Warner, cause of much 1999 panic, was Facebook and Instagram, which the Office of Fair Trading approved because Facebook didn't have cameras and Instagram didn't have advertising. Unrecognized: they were competitors in the Wu-dubbed attention economy.

Both Bruce Schneier, who considered a future in which everything is a computer, and Werbach, who found early internet-familiar rhetoric hyping the blockchain, saw more oncoming gloom. Werbach noted two vectors: remediable catastrophic failures, and creeping recentralization. His examples of the DAO hack and the Parity wallet bug led him to suggest the concept of governance by design. "This time," Werbach said, adding his own entry onto the what-went-wrong list, "don't ignore the potential contributions of the state."

Karen Levy's "overlooked threat" of AI and automation is a far more intimate and intrusive version of Shoshana Zuboff's "surveillance capitalism"; it is already changing the nature of work in trucking. This resonated with Helen Nissenbaum's "standing reserves": an ecologist sees a forest; a logging company sees lumber-in-waiting. Zero hours contracts are an obvious human example of this, but look how much time we spend waiting for computers to load so we can do something.

Levy reminded that surveillance has a different meaning for vulnerable groups, linking back to Deirdre Mulligan's comparison of algorithmic decision-making in healthcare and the judiciary. The first is operated cautiously with careful review by trained professionals who have closely studied its limits; the second is off-the-shelf software applied willy-nilly by untrained people who change its use and lack understanding of its design or problems. "We need to figure out how to ensure that these systems are adopted in ways that address the fact that...there are policy choices all the way down," Mulligan said. Levy, later: "One reason we accept algorithms [in the judiciary] is that we're not the ones they're doing it to."

Yet despite all this gloom - cognitive dissonance alert - everyone still believes that the internet has been and will be positively transformative. Julia Powles noted, "The tornado is where we are. The dandelion is what we're fighting for - frail, beautiful...but the deck stacked against it." In closing, Lauren Scholz favored a return to basic ethical principles following a century of "fallen gods" including really big companies, the wisdom of crowds, and visionaries.

Sohn, too, remains optimistic. "I'm still very bullish on the internet," she said. "It enables everything important in our lives. That's why I've been fighting for 30 years to get people access to communications networks."


Illustrations: After the Digital Tornado's closing panel (left to right): Kevin Werbach, Karen Levy, Julia Powles, Lauren Scholz; tornado (Justin1569 at Wikipedia)

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 10, 2017

Regulatory disruption

The financial revolution due to hit Britain in mid-January has had surprisingly little publicity and has little to do with the money-related things making news headlines over the last few years. In other words, it's not a new technology, not even a cryptocurrency. Instead, this revolution is regulatory: banks will be required to open up access to their accounts to third parties.

The immediate cause of this change is two difficult-to-distinguish pieces of legislation, one UK-specific and one EU-wide. The EU piece is Payment Services Directive 2, which is intended to foster standards and interoperability in payments across Europe. In the UK, Open Banking requires the nine biggest retail banks to create APIs that, given customer consent, will give third parties certified by the Financial Conduct Authority direct access to customer accounts. Account holders have begun getting letters announcing new terms and conditions, although recipients report that the parts that refer to open banking and consent are masterfully vague.
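
For readers who have never seen one, an API call of this general shape is roughly what "direct access to customer accounts" means in practice. This is a hedged sketch only: the URL, token, and field names below are invented stand-ins, not the actual Open Banking specification:

    # Hypothetical sketch of a certified third party reading account
    # data after customer consent. The endpoint, token, and JSON fields
    # are invented for illustration; the real Open Banking APIs define
    # their own schemas, certificates, and authorization flows.
    import requests

    BANK_API = "https://api.example-bank.co.uk/accounts"   # hypothetical
    CONSENT_TOKEN = "token-issued-after-customer-consent"  # hypothetical

    response = requests.get(
        BANK_API,
        headers={"Authorization": f"Bearer {CONSENT_TOKEN}"},
    )
    for account in response.json().get("accounts", []):
        print(account["id"], account["balance"])

The customer's consent is embodied in that token - revoke it and the tap turns off - which is why the vagueness of those new terms and conditions matters.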

As anyone attending the annual Tomorrow's Transactions Forum knows, open banking has been creeping up on us for the last few years. Consult Hyperion's Tim Richards has a good explanation of the story so far. At this year's event, Dave Birch, who has a blog posting outlining PSD2's background and context, noted that in China, where the majority of non-cash payments are executed via mobile, Alipay and Tencent are already executing billions of transactions a year, bypassing banks entirely. While the banks aren't thrilled about losing the transactions and their associated (dropping) revenue, the bigger issue is that they are losing the data and insight into their customers that traditionally has been exclusively theirs.

We could pick an analogy from myriad internet-disrupted sectors, but arguably the best fit is telecoms deregulation, which saw AT&T (in the US) and BT (in the UK) forced to open up their networks to competitors. Long distance revenues plummeted and all sorts of newcomers began leaching away their customers.

For banks, this story began the day Elon Musk's x.com merged with Peter Thiel's money transfer business to create the first iteration of Paypal so that anyone with an email address could send and receive money. Even then, the different approach of cryptocurrencies was the subject of experiments, but for most people the rhetoric of escaping government was less a selling point than being able to trade small sums with strangers who didn't take credit cards. Today's mobile payment users similarly don't care whether a bank is involved or not as long as they get their money.

Part of the point is to open up competition. In the UK, consumer-bank relationships tend to be lifelong, partly because so much of banking here has been automated for decades. For most people, moving their account involves not only changing arrangements for inbound payments like salary, but also all the outbound payments that make up a financial life. The upshot is to give the banks impressive customer lock-in, which the Competition and Markets Authority began trying to break with better account portability.

The larger point of Open Banking, however, is to drive innovation in financial services. Why, the reasoning goes, shouldn't it be easier to aggregate data from many sources - bank and other financial accounts, local transport, government benefits - and provide a dashboard to streamline management or automatically switch to the cheapest supplier of unavoidable services? At Wired, Rowland Manthorpe has a thorough outline of the situation and its many uncertainties. Among these are the impact on the banks themselves - will they become, as the project's leader and the telecoms analogy suggest, plumbing for the financial sector or will they become innovators themselves? Or, despite the talk of fintech startups, will the big winners be Google and Facebook?

The obvious concerns in all this are security and privacy. Few outside the technology sector understand what an API is; how do we explain it to the broad range of the population so they understand how to protect themselves? Assuming that start-ups emerge, what mechanisms will we have to test how well our data is secured or trace how it's being used? What about the potential for spoof apps that steal people's data and money?

It's also easy to imagine that "consent" may be more than ordinarily mangled, a problem a friend calls the "tendency to mandatory". The companies to whom we apply for insurance, a loan, or a job may demand an opened gateway to account data as part of the approvals process, which is extortion rather than consent.

This is also another situation where almost all of "my" data inevitably involves exposing third parties, the other halves of our transactions who have never given consent for that to happen. Given access to a large enough percentage of the population's banking data, triangulation should make it possible to fill in a fair bit of the rest. Amazon already has plenty of this kind of data from its own customers; for Facebook and Google this must be an exciting new vista.
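
A toy example of that triangulation, under obvious simplifying assumptions (all names and figures invented): even if only one party to each payment has consented, the records necessarily describe the other party too.

    # Toy illustration: transaction records from consenting customers
    # inevitably describe their non-consenting counterparties as well.
    # All names and amounts are invented.
    consented_records = [
        {"payer": "alice", "payee": "corner_shop", "amount": 4.50},
        {"payer": "alice", "payee": "bob", "amount": 250.00},
        {"payer": "carol", "payee": "bob", "amount": 250.00},
    ]

    # Bob never opted in, yet a profile of him emerges anyway.
    bob_inflows = [r for r in consented_records if r["payee"] == "bob"]
    print("Known payments to bob:", bob_inflows)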

Understanding what this will all mean will take time. But it represents a profound change, not only in the landscape of financial services but in the area of technical innovation. This time, those fusty old government regulators are the ones driving disruption.


Illustrations: Northern Rock in 2007 (Dominic Alves); Dave Birch.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 3, 2017

Life forms

Would you rather be killed by a human or a machine?

At this week's Royal Society meeting on AI and Society, Chris Reed recounted asking this question of an audience in Singapore. They all picked the human, even though they knew it was irrational, because they thought at least they'd know *why*.

A friend to whom I related this had another theory: maybe they thought there was a chance they could talk the human killer out of it, whereas the machine would be implacable. It's possible.

My own theory pins this distaste for machine killing on a different, crucial underlying factor: a sense of shared understanding. The human standing over you with the axe or driving the oncoming bus may be a professional paid to dispatch you, a serial killer, an angry ex, or mentally ill, but they all have a personal understanding of what a human life means because they all have one they know they, too, will one day lose. The meaning of removing someone else's life is thoroughly embedded in all of us. Not having that is more or less the definition of a machine, or was until Philip K. Dick and his replicants. But there is no reason to assume that every respondent had the same reason.

A commenter in the audience reported similar responses to an Accenture poll he encountered on Twitter that asked whether he would be in favor of AI making health decisions. When he checked the voting results, 69% had said no. Here again, the death of a patient by medical mistake keeps a human doctor awake at night (if television is to be believed), while to a machine it's a statistic, no matter how heavily weighted in its inner backpropagating neural networks.

These two anecdotes resonated because earlier, Marion Oswald had opened her talk by asking whether, like Peter Godfrey-Smith's observation of cephalopods, interacting with AI was the closest we can come to interacting with an intelligent alien. Arguably, unless the aliens are immortal, on issues of life and death we can actually expect to have more shared understanding with them, as per above, than with machines.

The primary focus of Oswald's talk was actually to discuss her work studying HART, an algorithmic model used by Durham Constabulary to decide whether offenders qualified for deferred prosecution and help with their problems. The study raises all sorts of questions we're going to have to consider over the coming years about the role of police in society.

These issues were somewhat taken up later by Mireille Hildebrandt, who warned of the risks of transforming text-driven law - the messy stuff centuries of court cases have contested and interpreted - into data-driven law. Allowing that to happen, she argued, transforms law into administration. "Contestability is the heart of the rule of law," she said. "There is more to the law than predictability and expedience." A crucial part of that is being able to test the system, and here Hildebrandt was particularly gloomy: although systems that comb the legal corpus are currently being marketed as aids for lawyers, she views it as inevitable that at some point they will become replacements. Some time after that, the skills necessary to test the inner workings of these systems will have vanished from the systems' human owners' firms.

At the annual We Robot conference, a recurring theme is the hard edges of computer systems, an aspect Ellen Ullman examined closely in her 1997 book, Close to the Machine. In Bill Smart's example, the difference between 59.99 miles an hour and 60.01 miles an hour is indistinguishable to a human, but to a computer fitted with the right sensors the difference is a speeding ticket. An aspect of this that is insufficiently discussed is that all biological beings have some level of unpredictability. Robots and AI with far greater sensing precision than is available to humans will respond to changes we can't detect, making them appear less predictable, and therefore more intelligent, than they actually are. This is a deception we will have to learn to decode.
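
A minimal sketch of Smart's point (my illustration, with an invented threshold and rule): the law as the computer encodes it is a strict inequality, so two speeds no human could tell apart produce categorically different outcomes.

    # A minimal sketch of a "hard edge": the encoded rule is a strict
    # threshold, so near-identical inputs get opposite outcomes.
    SPEED_LIMIT_MPH = 60.0

    def enforcement_decision(measured_speed_mph):
        # Return the system's verdict for a single sensor reading.
        if measured_speed_mph > SPEED_LIMIT_MPH:
            return "ISSUE TICKET"
        return "NO ACTION"

    # 59.99 and 60.01 mph are indistinguishable to a human observer,
    # but the computer's verdicts differ categorically.
    for speed in (59.99, 60.01):
        print(speed, "->", enforcement_decision(speed))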

Already, machines that are billed as tools to aid human judgement are often much more trusted than they should be. Danielle Citron's 2008 paper Technological Due Process studied this in connection with benefits scoring systems in Texas and California, and found two problems. First, humans tended to trust the machine's decisions rather than apply their own judgement, a problem Hildebrandt referred to as "judgemental atrophy". Second, computer programmers are not trained lawyers, and are therefore not good at accurately translating legal text into decision-making systems. How do you express a fuzzy but widely understood and often-used standard like the UK's "reasonable person" in computer code? You'd have to precisely define the attopoint at which "reasonable" abruptly flicks to "unreasonable".

Ultimately, Oswald came down against the "intelligent alien" idea: "These are people-made, and it's up to us to find the benefits and tackle the risks," she said. "Ignorance of mathematics is no excuse."

That determination rests on the notion that the people building AI systems and the people using them have shared values. We already know that's not true, but even so: I vote less alien than a cephalopod on everything but the fear of death.

Illustrations: Cephalopod (via Obsidian Soul); Marion Oswald.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 13, 2017

Cost basis

There's plenty to fret about in the green paper released this week outlining the government's Internet Safety Strategy (PDF) under the Digital Economy Act (2017). The technical working group is predominantly made up of child protection folks, with just one technical expert and no representatives of civil society or consumer groups. It lacks definitions: what qualifies as "social media"? And issues discussed here before persist, such as age verification and the mechanisms to implement it. Plus there are picky details, like requiring parental consent for the use of information services by children under 13, which apparently fails to recognize how often parents help their kids lie about their ages. However.

The attention-getting item we hadn't noticed before is the proposal of an "industry-wide levy which could in the future be underpinned with legislation" in order to "combat online harms". This levy is not, the paper says, "a new tax on social media" but instead "a way of improving online safety that helps businesses grow in a sustainable way while serving the wider public good".

The manifesto commitment on which this proposal is based compares this levy to those in the gambling and alcohol industries. The Gambling Act 2005 provides for legislation to support such a levy, though to date the industry's contributions, most of which go to GambleAware to help problem gamblers, are still voluntary. Similarly, the alcohol industry funds the Drinkaware Trust.

The problem is that these industries aren't comparable in business model terms. Alcohol producers and retailers make and sell a physical product. The gambling industry's licensed retailers also sell a product, whether it's physical (lottery tickets or slot machine rolls) or virtual (online poker). Either way, people pay up front and the businesses pay their costs out of revenues. When the government raises taxes or adds a levy or new restriction that has to be implemented, the costs are passed on directly to consumers.

No such business model applies in social media. Granted, the profits accruing to Facebook and Google (that is, Alphabet) look enormous to us, especially given the comparatively small amounts of tax they pay to the UK - 5% of UK profits for Facebook and a controversial but unclear percentage for Alphabet. But no public company adds costs without planning how to recoup them, so then the question is: how do companies that offer consumers a pay-with-data service do that, given that they can't raise prices?

The first alternative is to reduce costs. The problem is how. Reducing staff won't help with the kinds of problems we're complaining about, such as fake news and bad behavior, which require humans to solve. Machine learning and AI are not likely to improve enough to provide a substitute in the near term, though no doubt the companies hope they will in the longer term.

The second is to increase revenues, which would mean either raising prices to advertisers or finding new ways to exploit our data. The need to police user behavior doesn't seem like a hot selling point to convince advertisers that it's worth paying more. That leaves the likelihood that applying a levy will create a perverse incentive to gather and crunch yet more user data. That does not represent a win; nor does it represent "taking back control" in any sense.

It's even more unclear who would be paying the levy. The green paper says the intention is to make it "proportionate" and ensure that it "does not stifle growth or innovation, particularly for smaller companies and start-ups". It's not clear, however, that the government understands just how vast and varied "social media" are. The term includes everything from the services people feel they have little choice about using (primarily Facebook, but also Google to some extent) to the web boards on news and niche sites, to the comments pages on personal blogs, to long-forgotten precursors of the web like Usenet and IRC. Designing a levy to take account of all business models and none while not causing collateral damage is complex.

Overall, there's sense in the principle that industries should pay for the wider social damage they cause to others. It's a long-standing approach for polluters, for example, and some have suggested there's a useful comparison to make between privacy and the environment. The Equifax breach will be polluting the privacy waters for years to come as the leaked data feeds into more sophisticated phishing attacks, identity fraud, and other widespread security problems. Treating Equifax the way we treat polluters makes sense.

It's less clear how to apply that principle to sites that vary from self-expression to publisher to broadcaster to giant data miners. Since the dawn of the internet any time someone's created a space for free expression someone else has come along and colonized a corner of it where people could vent and be mean and unacceptable; 4chan has many ancestors. In 1994, Wired captured an early example: The War Between alt.tasteless and rec.pets.cats. Those Usenet newsgroups created revenue for no one, while Facebook and Google have enough money to be the envy of major governments.

Nonetheless, that doesn't make them fair targets for every social problem the government would like to dump off onto someone else. What the green paper needs most is a clear threat model, because it's only after you have one that you can determine the right tools for solving it.


Illustrations: Social network diagram.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 6, 2017

Send lawyers, guns, and money

There are many reasons why, Bryan Schatz finds at Mother Jones, people around Las Vegas disagree with President Donald Trump's claim that now is not the time to talk about gun control. The National Rifle Association probably agrees; in the past, it's been criticized for saving its public statements for proposed legislation and staying out of the post-shooting - you should excuse the expression - crossfire.

Gun control doesn't usually fit into net.wars' run of computers, freedom, and privacy subjects. There are two reasons for making an exception now. First, the discovery of the Firearm Owners Protection Act, which prohibits the creation of *any* searchable registry of firearms in the US. Second, the rhetoric surrounding gun control debates.

To take the second first: in a civil conversation on the subject, it was striking that the arguments we typically use to protest the knee-jerk demands for ramped-up surveillance legislation that follow atrocious incidents are the same ones used to oppose gun control legislation. Namely: don't pass bad laws out of fear that do not make us safer; tackle underlying causes such as mental illness and inequality; put more resources into law enforcement/intelligence. In the 1990s crypto wars, John Perry Barlow deliberately and consciously adapted the NRA's slogan to create "You can have my encryption algorithm...when you pry my cold, dead fingers from my private key".

Using the same rhetoric doesn't mean both are right or both are wrong: we must decide on evidence. Public debates over surveillance do typically feature evidence about the mathematical underpinnings of how encryption works, day-to-day realities of intelligence work, and so on. The problem with gun control debates in the US is that evidence from other countries is automatically written off as irrelevant, and, as with copyright reform, lobbying money hugely distorts the debate.

The first issue touches directly on privacy. Soon after the news of the Las Vegas shooting broke, a friend posted a link to the 2016 GQ article Inside the Federal Bureau of Way Too Many Guns. In it, writer Jeanne Marie Laskas pays a comprehensive visit to Martinsburg, West Virginia, where she finds a "low, flat, boring building" with a load of shipping containers kept out in the parking lot so the building's floors don't collapse under the weight of the millions of gun purchase records they contain. These are copies of federal form 4473, which is filled out at the time of gun purchases and retained by the retailer. If a retailer goes out of business, the forms it holds are shipped to the tracing center. When a law enforcement officer anywhere in the US finds a gun at a crime scene, this is where they call to trace it. The kicker: all those records are eventually photographed and stored on microfilm. Miles and miles of microfilm. Charlie Houser, the tracing center's head, has put enormous effort into making his human-paper-microfilm system as effective and efficient as possible; it's an amazing story of what humans can do.

Why microfilm? Gun control began in 1968, five years after the shooting of President John F. Kennedy. Even at that moment of national grief and outrage, the only way President Lyndon B. Johnson could get the Gun Control Act passed was to agree not to include a clause he wanted that would have set up a national gun registry to enable speedy tracing. In 1986, the NRA successfully lobbied for the Firearm Owners Protection Act, which prohibits the creation of *any* registry of firearms. What you register can be found and confiscated, the reasoning apparently goes. So, while all the rest of us engaged in every other activity - getting health care, buying homes, opening bank accounts, seeking employment - were being captured, collected, profiled, and targeted, the one group whose activities are made as difficult to trace as possible is...gun owners?

It is to boggle.

That said, the reasons why the American gun problem will likely never be solved include the already noted effect of lobbying money and, as E.J. Dionne Jr., Norman J. Ornstein and Thomas E. Mann discuss in the Washington Post, the non-majoritarian democracy the US has become. Even though majorities in both major parties favor universal background checks and most Americans want greater gun control, Congress "vastly overrepresents the interests of rural areas and small states". In the Senate that's by design to ensure nationwide balance: the smallest and most thinly populated states have the same number of senators - two - as the biggest, most populous states. In Congress, the story is more about gerrymandering and redistricting. Our institutions, they conclude, are not adapting to rising urbanization: 63% in 1960, 84% in 2010.

Besides those reasons, the identification of guns and personal safety endures, chiefly in states where at one time it was true.

A month and a half ago, one of my many conversations around Nashville went like this, after an opening exchange of mundane pleasantries:

"I live in London."

"Oh, I wouldn't want to live there."

"Why?"

"Too much terrorism." (When you recount this in London, people laugh.)

"If you live there, it actually feels like a very safe city." Then, deliberately provocative, "For one thing, there are practically no guns."

"Oh, that would make me feel *un"safe."

Illustrations: Las Vegas strip, featuring the Mandalay Bay; an ATF inspector checks up on a gun retailer.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 29, 2017

Ubersicht

If it keeps growing, every company eventually reaches a moment where this message arrives: it's time to grow up. For Microsoft, IBM, and Intel it was antitrust suits. Google's had the EU's €2.4 billion fine. For Facebook and Twitter, it may be abuse and fake news.

This week, it was Uber's turn, when Transport for London declined to renew Uber's license to operate. Uber's response was to apologize and promise to "do more" while urging customers to sign its change.org petition. At this writing, 824,000 have complied.

I can't see the company as a victim here. The "sharing economy" rhetoric of evil protectionist taxi regulators has taken knocks from the messy reality of the company's behavior and the Grade A jerkishness of its (now former) founding CEO, the controversial Travis Kalanick. The tone-deaf "Rides of Glory" blog post. The safety-related incidents that TfL complains the company failed to report because: PR. Finally, the clashes with myriad city regulators the company would prefer to bypass: currently, it's threatening to pull out of Quebec. Previously, both Uber and Lyft quit Austin, Texas for a year rather than comply with a law requiring driver fingerprinting. In a second London case, Uber is arguing that its drivers are not employees; SumOfUs begs to differ.

People who use Uber love Uber, and many speak highly of drivers they use regularly. In one part of their brains, Uber-loving friends advocate for social justice, privacy, and fair wages and working conditions; in the other, Uber is so cool, cheap, convenient, and clean, and the app tracks the cab in real time...and city transport is old, grubby, and slow. But we're not at the beginning of this internet thing any more, and we know a lot about what happens when a cute, cuddly company people love grows into a winner-takes-all behemoth the size of a nation-state.

A consideration beyond TfL's pay grade is that transport doesn't really scale, as Hubert Horan explains in his detailed analysis of the company's business model. As Horan explains, Uber can't achieve new levels of cost savings and efficiency (as Amazon and eBay did) because neither the fixed costs of providing the service nor network externalities create them. More simply, predatory competition - that is, venture capitalists providing the large sums that allow Uber to undercut and put out of business existing cab firms (and potentially public transport) - is not sustainable until all other options have been killed and Uber can raise its prices.

Earlier this year, at a conference on autonomous vehicles, TfL's representative explained the problems it faces. London will grow from 8.6 million to 10 million people by 2025. On the tube, central zone trains are already running at near the safe frequency limit and space prohibits both wider and longer trains. Congestion will increase: trucks, cars, cabs, buses, bicycles, and pedestrians. All these interests - plus the thousands of necessary staff - need to be balanced, something self-interested companies by definition do not do. In Silicon Valley, where public transport is relatively weak, it may not be clearly understood how deeply a city like London depends on it.

At Wired UK, Matt Burgess says Uber will be back. When Uber and Lyft exited Austin, Texas rather than submit to a new law requiring them to fingerprint drivers, within a year state legislators had intervened. But that was several scandals ago, which is why I think that, for once, SorryWatch has it wrong: Uber's apology may be adequately drafted (as they suggest, minus the first paragraph), but the company's behaviour has been egregious enough to require clear evidence of active change. Uber needs a plan, not a PR campaign - and urging its customers to lobby for it does not suggest it's understood that.

At London Reconnections, John Bull explains the ins and outs of London's taxi regulation in fascinating detail. Bull argues that in TfL Uber has met a tech-savvy and forward-thinking regulator that is its own boss and too big to bully. Given that almost the only cost the company can squeeze is its drivers' compensation, what protections need to be in place? How does increasing hail-by-app taxi use fit into overall traffic congestion?

Uber is one of the very first of the new hybrid breed of cyber-physical companies. Bypassing regulators - asking forgiveness rather than permission - may have flown when the consequences were purely economic, but it can't be tolerated in the new era of convergence, in which the risks are physical. My iPhone can't stab me in my bed, as Bill Smart has memorably observed, but that's not true of these hybrids.

TfL will presumably focus on rectifying the four areas in its announcement. Beyond that, though, I'd like to see Uber pressed for some additional concessions. In particular, I think the company - and others like it - should be required to share their aggregate ride pattern data (not individual user accounts) with TfL to aid the authority to make better decisions for the benefit of all Londoners. As Tom Slee, the author of What's Yours Is Mine: Against the Sharing Economy, has put it, "Uber is not 'the future', it's 'a future'".


Illustrations: London skyline (by Mewiki); London black cab (Jimmy Barrett); Travis Kalanick (Dan Taylor).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 29, 2014

Shared space

What difference does the Internet make? This is the modern policy maker's equivalent of "To be, or not to be?" This question has underlain so many net.wars as politicians and activists have wrangled over whether and how the same laws should apply online as offline. Transposing offline law to the cyberworld is fraught with approximately the same dilemmas as transposing a novel to film. What do you keep? What do you leave out? What whole chapter can be conveyed in a single shot? In some cases it's obvious: consumer protection for purchases looks about the same. But the impact of changing connections and the democratization of worldwide distribution? Frightened people whose formerly safe, familiar world is slipping out of control often fail to make rational decisions.

This week's inaugural VOX-Pol conference kept circling around this question. Funded under the EU's FP-7, the organizing group is meant to be an "academic research network focused on researching the prevalence, contours, functions, and impacts of Violent Online Political Extremism and responses to it". Attendees included researchers from a wide variety of disciplines, from computer science to social science. If there was any group that was lacking, I'd say it was computer security practitioners and researchers, many of whose on-the-ground experience studying cyberattacks and investigating the criminal underground could be helpfully emulated by this group.

Some help could also perhaps be provided by journalists with investigative experience. In considering SOCMINT, for example - social media intelligence - people wondered how far to go in interacting with the extremists being studied. Are fake profiles OK? And can you be sure whether you're studying them...or they're studying us? The most impressive presentation on this sort of topic came from Aaron Zelin who, among other things, runs a Web-based clearinghouse for jihadi primary source material.

It's not clear that what Zelin does would be legal, or even possible in the UK. The "lone wolf" theory holds that someone alone in his house can be radicalized simply by accessing Web-based material; if you believe that, the obvious response is to block the dangerous material. Which, TJ McIntyre explained, is exactly what the UK does, unknown to most of its population.

McIntyre knows because he spent three years filing freedom of information requests to find out. So now we know: approximately 1,000 full URLs are blocked under this program, based on criteria derived from Sections 57 and 58 of the 2000 Terrorism Act and Sections 1 and 2 of the 2006 Terrorism Act. The system is "voluntary" - or rather, voluntary for ISPs, not voluntary for their subscribers. McIntyre's FOI answers have found no impact assessment or study of liability for wrongful blocking, and no review of compliance with the 1998 Human Rights Act. It also seems to contradict the Council of Europe's clear statement that filtering must be necessary and transparent.

This is, as Michael Jablonski commented on Twitter yesterday, one of very few conferences that begins by explaining the etiquette for showing gruesome images. Probably more frightening, though, were the presentations laying out the spread - and even mainstreaming - of interlinked extremist groups across the world. Many among Hungary's and Italy's extremist networks host their domains in the US, where the First Amendment ensures their material is not illegal.

This is why the First Amendment can be hard to love: defending free speech inevitably means defending speech you despise. Repeating that "The best answer to bad speech is more, better speech" is not always consoling. Trying to change the minds of the already committed is frustrating and thankless. Jihadi Trending (PDF), a report produced by the Quilliam Foundation, which describes itself as "the world's first counter-extremism think tank", reminds us that's not the point. Released a few months ago, the report is a fount of good sense; in the foreword, Nick Cohen writes: "The true goal of debate, however, is not to change the minds of your opponents, but the minds of the watching audience."

Among the report's conclusions:
- The vast majority of radicalized individuals make contact first through offline socialization.
- Negative measures - censorship and filtering - are ineffective and potentially counter-productive.
- There are not enough positive measures - the "better speech" above - to challenge extremist ideologies.
- Better ideas are to improve digital literacy and critical consumption skills and debunk propaganda.

So: what difference does the Internet make? It lets extremists use Twitter to tell each other what they had for breakfast. It lets them use YouTube to post videos of their cats. It lets them connect to others with similar views on Facebook, on Web forums, in chat rooms, virtual worlds, and dating sites, and run tabloid news sites that draw in large audiences. Just like everyone else, in fact. And, like the rest of us, they do not own the infrastructure.

The best answer came late on the second day, when someone commented that in the physical world neo-Nazi groups do not hang out with street gangs; extreme right hate groups don't go to the same conferences as jihadis; and Guantanamo detainees don't share the same physical space with white supremacists or teach each other tactics. "But they will online."


Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


November 30, 2012

Robot wars

Who'd want to be a robot right now, branded a killer before you've even really been born? This week, Huw Price, a philosophy professor, Martin Rees, an emeritus professor of cosmology and astrophysics, and Jaan Tallinn, co-founder of Skype and a serial speaker at the Singularity Summit, announced the founding of the Cambridge Project for Existential Risk. I'm glad they're thinking about this stuff.

Their intention is to build a Centre for the Study of Existential Risk. There are many threats listed in the short introductory paragraph explaining the project - biotechnology, artificial life, nanotechnology, climate change - but the one everyone seems to be focusing on is: yep, you got it, KILLER ROBOTS - that is, artificial general intelligences so much smarter than we are that they may not only put us out of work but reshape the world for their own purposes, not caring what happens to us. Asimov would weep: his whole purpose in creating his Three Laws of Robotics was to provide a device that would allow him to tell some interesting speculative, what-if stories and get away from the then standard fictional assumption that robots were eeeevil.

The list of advisors to the Cambridge project has some interesting names: Hermann Hauser, now in charge of a venture capital fund, whose long history in the computer industry includes founding Acorn and an attempt to create the first mobile-connected tablet (it was the size of a 1990s phone book, and you had to write each letter in an individual box to get it to recognize handwriting - just way too far ahead of its time); and Nick Bostrom of the Future of Humanity Institute at Oxford. The other names are less familiar to me, but it looks like a really good mix of talents, everything from genetics to the public understanding of risk.

The killer robots thing goes quite a way back. A friend of mine grew up in the time before television when kids would pay a nickel for the Saturday show at a movie theatre, which would, besides the feature, include a cartoon or two and the next chapter of a serial. We indulge his nostalgia by buying him DVDs of old serials such as The Phantom Creeps, which features an eight-foot, menacing robot that scares the heck out of people by doing little more than wave his arms at them.

Actually, the really eeeevil guy in that movie is the mad scientist, Dr Zorka, who not only creates the robot but also a machine that makes him invisible and another that induces mass suspended animation. The robot is really just drawn that way. But, like CSER, what grabs your attention is the robot.

I have a theory about this, developed over the last couple of months while working on a paper on complex systems, automation, and other computing trends: it's all to do with biology. We - and other animals - are pretty fundamentally wired to see anything that moves autonomously as more intelligent than anything that doesn't. In survival terms, that makes sense: the most poisonous plant can't attack you if you're standing out of reach of its branches. Something that can move autonomously can kill you - yet is also more cuddly. Consider the Roomba versus a modern dishwasher. Counterintuitively, the Roomba is not the smarter of the two.

And so it was that on Wednesday, when Voice of Russia assembled a bunch of us for a half-hour radio discussion, the focus was on KILLER ROBOTs, not synthetic biology (which I think is a much more immediately dangerous field) or climate change (in which the scariest new development is the very sober, grown-up, businesslike this-is-getting-expensive report from the insurer Munich Re). The conversation was genuinely interesting, roaming from the mysteries of consciousness to the problems of automated trading and the 2010 flash crash. Pretty much everyone agreed that there really isn't sufficient evidence to predict a date at which machines might be intelligent enough to pose an existential risk to humans. You might be worried about self-driving cars, but they're likely to be safer than drunk humans.

There is a real threat from killer machines; it's just that it's not super-human intelligence or consciousness that's the threat here. Last week, Human Rights Watch and the International Human Rights Clinic published Losing Humanity: the Case Against Killer Robots, arguing that governments should act pre-emptively to ban the development of fully autonomous weapons. There is no way, that paper argues, for autonomous weapons (which the military wants so fewer of *our* guys have to risk getting killed) to distinguish reliably between combatants and civilians.

There were some good papers on this at this year's We Robot conference from Ian Kerr and Kate Szilagyi (PDF) and Markus Wagner.

From various discussions, it's clear that you don't need to wait for *fully* autonomous weapons to reach the danger point. In today's partially automated systems, the operator may be under pressure to make a decision in seconds, and "automation bias" means the human will most likely accept whatever the machine suggests, the military equivalent of clicking OK. The human in the loop isn't as much of a protection as we might hope against the humans designing these things. Dr Zorka, indeed.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series.

September 7, 2012

Robocops

Great, anguished howls were heard on Twitter last Sunday when Ustream silenced Neil Gaiman's acceptance speech at the Hugo awards, presented at the World Science Fiction Convention. On Tuesday, something similar happened when, Slate explains, YouTube blocked access to Michelle Obama's speech at the Democratic National Convention once the live broadcast had concluded. Yes, both one of our premier fantasy writers and the First Lady of the United States were silenced by over-eager, petty functionaries. Only, because said petty functionaries were automated copyright robots, there was no immediately available way for the organizers to point out that the content identified as copyrighted had been cleared for use.

TV can be smug here: this didn't happen when broadcasters were in charge. And no, it didn't: because a large broadcaster clears the rights and assumes the risks itself. By opening up broadcasting to the unwashed millions, intermediaries like Google (YouTube) and Ustream have to find a way to lay off the risk of copyright infringement. They cannot trust their users. And they cannot clear - or check - the rights manually for millions of uploads. Even rights holder organizations like the RIAA, MPAA, and FACT, who are the ones making most of the fuss, can't afford to do that. Frustration breeds market opportunity, and so we have automated software that crawls around looking for material it can identify as belonging to someone who would object. And then it spits out a complaint and down goes the material.

In this case, both the DNC and the Hugo Awards had permission to use the bit of copyrighted material the bots identified. But the bot did not know this; that's above its pay grade.
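
In rough pseudocode - my reconstruction for illustration, not Vobile's or Ustream's actual system - the bot's world looks something like this; note that nothing in it can represent "this use was cleared":

    # Rough sketch of automated copyright policing logic; illustrative
    # only, with invented identifiers. Note what is missing: any notion
    # of a licence, permission, or fair use. The bot matches fingerprints
    # and acts.
    def police_stream(stream_fingerprints, registered_works):
        for fingerprint in stream_fingerprints:
            if fingerprint in registered_works:
                return "BLOCK STREAM"  # no human review, no licence check
        return "ALLOW"

    registered = {"tv_clip"}                   # rights holder's catalogue
    hugo_awards = ["gaiman_speech", "tv_clip"]  # the clip was cleared!
    print(police_stream(hugo_awards, registered))  # -> BLOCK STREAM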

This is all happening at a key moment in Europe: early next week, the public consultation closes on the notice-and-takedown rules that govern, among other things, what ISPs and other hosts are supposed to do when users upload material that infringes copyright. There's a questionnaire for submitting your opinions; you have until Tuesday, September 11.

Today's notice and takedown rules date to about the mid-1990s and two particular cases. One, largely but not wholly played out in the US, was the several-years fight between the Church of Scientology and a group of activists who believed that the public interest was served by publishing as widely as possible the documents Scientology preserves from the view of all but its highest-level adherents, which I chronicled for Wired in 1995. This case - and other early cases of claimed copyright infringement - led to the passage in 1998 of the Digital Millennium Copyright Act, which is the law governing the way today's notice-and-takedown procedures operate in the US and therefore, since many of the Internet's biggest user-generated content sites are American, worldwide.

The other important case was the 1997 British case of Laurence Godfrey, who sued Demon Internet for libel over a series of Internet postings, spoofed to appear as though they came from him, which the service failed to take down despite his requests. At the time, a fair percentage of Internet users believed - or at least argued - that libel law did not apply online; Godfrey, through the Demon case and others, set out to prove them wrong, and succeeded. The Demon case was eventually settled in 2000, and set the precedent that ISPs could be sued for libel if they failed to have procedures in place for dealing with complaints like these. Result: everyone now has procedures and routinely operates notice-and-takedown, just as cyber rights lawyer Yaman Akdeniz predicted in 1999.

A different notice-and-takedown regime is operated, of course, by the Internet Watch Foundation, which was founded in 1996 and recommends that ISPs remove material that IWF staff have examined and believe is potentially illegal. This isn't what we're talking about here: the IWF responds to complaints from the public and at all stages humans are involved in making the decisions.

Granted, it's not unreasonable that there should be some mechanism enabling people to complain about material that infringes their copyrights or is libellous; what doesn't get sufficient attention is that there should also be a means of redress for those who are unjustly accused. Even without this week's incidents we have enough evidence - thanks to the detailed record of how DMCA notices have been used and abused in the years since the law's passage, continuously compiled at Chilling Effects - to see the damage that overbroad, knee-jerk deletion can do.

It's clear that balance needs to be restored. Users should be notified promptly when the content they have posted is removed; there should be a fast turnaround means of redress; and there clearly needs to be a mechanism by which users can say, "This content has been cleared for use".

By those standards, Ustream has actually behaved remarkably well. It has apologized and is planning to rebroadcast the Hugo Awards on Sunday, September 9. Meanwhile, it's pulled its automated copyright policing system to understand what went wrong. To be fair, the company that supplies the automated copyright policing software, Vobile, argues that its software wasn't at fault: it merely reports what it finds. It's up to the commissioning company to decide how to act on those reports. Like we said: above the bot's pay grade.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


August 24, 2012

Look and feel

Reading over the accounts of the deliberations in Apple vs Samsung, the voice I keep hearing in my head is that of Philippe Kahn, the former CEO of Borland, one of the very first personal computing software companies, founded in 1981. I hear younger folks scratching their heads at that and saying, "Who?" Until 1992 Borland was one of the top three PC software companies, dominant in areas like programming languages and compilers; it faltered when it tried to compete with Lotus (long since swallowed by IBM) and Microsoft in office suites. In 1995 Kahn was ousted, going on to found three other companies.

What Kahn's voice is saying is, "Yes, we copied."

The occasion was an interview I did with him in July 1994 for the now-defunct magazine Personal Computer World, then a monthly magazine the size of a phone book. (Oh - phone book. Let's call it two 12.1 inch laptops, stacked, OK?). Among the subjects we rambled through was the lawsuit between Borland and Lotus, one of the first to cover the question of whether and when reverse-engineering infringes copyright. After six years of litigation, the case was finally decided by the Supreme Court in 1996.

The issue was spreadsheet software; Lotus 1-2-3 was the first killer application that made people want - need - to buy PCs. When Borland released its competing Quattro Pro, the software included a mode that copied Lotus's menu structure and a function to run Lotus's macros (this was when you could still record a macro with a few easy keyboard strokes; it was only later that writing macros began to require programming skills). In the district court, Lotus successfully argued that this was copyright infringement. In contrast, Borland, which eventually won the case on appeal, argued that the menu structure constituted a system. Kahn felt so strongly about pursuing the case that he called it a crusade and the company spent tens of millions of dollars on it.

"We don't believe anyone ever organized menus because they were expressive, or because the looked good," Kahn said at the time. "Print is next to Load because of functional reasons." Expression can be copyrighted; functionality instead is patented. Secondly, he argued, "In software, innovation is driven fundamentally by compatibility and interoperability." And so companies reverse-engineer: someone goes in a room by themselves and deconstructs the software or hardware and from that produces a functional specification. The product developers then see only that specification and from it create their own implementation. I suppose a writer's equivalent might be if someone read a lot of books (or Joseph Campbell's Hero With a Thousand Faces), broke down the stories to their essential elements, and then handed out pieces of paper that specified, "Entertaining and successful story in English about an apparently ordinary guy who finds out he's special and is drawn into adventures that make him uncomfortable but change his life." Depending on whether the writer you hand that to is Neil Gaiman, JRR Tolkien, or JK Rowling, you get a completely different finished product.

The value to the public of the Lotus versus Borland decision is that it enabled standards. Imagine if every piece of software had to implement a different keystroke to summon online help, for example (or pay a license fee to use F1). Or think of the many identical commands shared among Internet Explorer, Firefox, Opera, and Chrome: would users really benefit if each browser had to be completely different, or if Mosaic had been able to copyright the lot and lock out all other comers? This was the argument the EFF made in its amicus brief: allowing the first developer of a new type of software to copyright its interface could lock up that technology and its market for 75 years or more.

In the mid-1990s, Apple - in a case that, as Harvard Business Review highlights, was very similar to this one - sued Microsoft over the "look and feel" of Windows. (That took a particular kind of hubris, given that everyone knows that Apple copied what it saw at Xerox to make that interface in the first place.) Unlike that case and Lotus versus Borland, Apple versus Samsung revolves around patents (functionality) rather than copyright (expression). But the fundamental questions in all three cases are the same: what is a unique innovation, what builds on prior art, and what is dictated by such externalities as human anatomy and psychology and the expectations we have developed over decades of phone and computer use?

What matters to Apple and Samsung is who gets to sell what in which markets. We, however, have more important skin in this game: what is the best way to foster innovation and serve consumers? In its presentation on Samsung's copying, Apple makes the same tired argument as the music industry: that if others can come along and copy its work it won't have any incentive to spend five years coming up with stuff like the iPad. Really? As James Allworth notes in the HBR piece linked above, is that what Apple did after losing the Microsoft case? If Apple had won then and owned the entire desktop market, do you think it would ever have had the incentive to develop the iPad? We have to hope that copying wins.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


August 17, 2012

Bottom dwellers

This week Google announced it would downgrade in its search results sites with an exceptionally high number of valid copyright notices filed against them. As the EFF points out, the details of exactly how this will work are scarce and there is likely to be a big, big problem with false positives - that is, sites that are downgraded unfairly. You have only to look at the recent authorial pile-on that took down the legitimate ebook lending site LendInk to see what can happen when someone gets hold of the wrong end of the copyright stick.

Until we know how the inclusion of Google's copyright notice stats will work, how can we know what will be affected, how, and for how long? There is no transparency to let a site know what's happening to it, and no appeals process. Given the many abuses of the Digital Millennium Copyright Act, under which such copyright notices are issued, it's hard to know how fair such a system will be. Though, granted: the company could have simply done it and not told us. How would we know?
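Since Google has published no details, any description of the mechanism is guesswork, but as a thought experiment a notice-based demotion signal might look something like the sketch below, in which every weight and threshold is invented. The unanswered questions above - what counts as a valid notice, how long a penalty lasts, how to appeal - are precisely the parameters this toy version has to make up.

```python
# Speculative sketch of a DMCA-notice ranking signal; Google has not
# said how its version works. All weights and thresholds are invented.

def demotion_factor(valid_notices: int, total_pages: int) -> float:
    """Scale a site's rank score down as its notice rate rises."""
    if total_pages == 0:
        return 1.0
    notice_rate = valid_notices / total_pages
    # A legitimate site swept up in a notice campaign (cf. LendInk)
    # is penalized exactly like a deliberate infringer.
    return max(0.1, 1.0 - 5.0 * notice_rate)

def adjusted_rank(base_score: float, valid_notices: int, total_pages: int) -> float:
    return base_score * demotion_factor(valid_notices, total_pages)
```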

The timing of this move is interesting because it comes only a few months after Google began advocating for the notion that search engine results are, like newspaper editorial matter, a form of free speech under the First Amendment. The company went as far as to commission the legal scholar Eugene Volokh to write a white paper outlining the legal arguments. These basically revolve around the idea that a search algorithm is merely a new form of editorial judgment; Google returns search results in the order in which, in its opinion, they will be most helpful to users.

In response, Tim Wu, author of The Master Switch, argued in the New York Times that conceding the right of free speech to computerized decisions brings serious problems with it in the long run. Suppose, for example, that antitrust authorities want to regulate Google to ensure that it doesn't use its dominance in search to unfairly advantage its other online properties - YouTube, Google Books, Google Maps, and so on. If search results are free speech, that type of regulation becomes unconstitutional. On BoingBoing, Cory Doctorow responded that one should regulate the bad speech without denying it is speech. Earlier, in the Guardian, Doctorow argued that Google's best gambit was making the argument about editorial integrity; publications make esthetic judgments, but Google famously loves to live by numbers.

This part of the argument is one that we're going to be seeing a lot of over the next few decades, because it boils down to this bit of Philip K. Dick territory: should machines programmed by humans have free speech rights? And if so, under what circumstances? If Google search results are free speech, is the same true of the output of credit-scoring algorithms or speed cameras? A magazine editor can, if asked, explain the reasoning process by which material was commissioned for, placed in, or rejected by her magazine; Google is notoriously secretive about the workings of its algorithms. We do not even know the criteria Google uses to judge the quality of its search results.

These are all questions we're going to have to answer as a society; and they are questions that may be answered very differently in countries without a First Amendment. My own first inclination is to require some kind of transparency in return: for every generation of separation between human and result, there must be an additional layer of explanation detailing how the system is supposed to work. The more people the results affect, the bigger the requirement for transparency. Something like that.

The more immediate question, of course, is whether Google's move will have an impact on curbing unauthorized file-sharing. My guess is not that much; few file-sharers of my acquaintance use Google for the purpose of finding files to download.

Yet, in an otherwise sensible piece about the sentencing of Surfthechannel.com owner Anton Vickerman to four years in prison in the Guardian, Dan Sabbagh winds up praising Google's decision with a bunch of errors. First of all, he blames the music industry's problems on mistakes "such as failing to introduce copy protection". As the rest of us know, the music industry only finally dropped copy protection in 2009 - because consumers hate it. Arguably, copy protection delayed the adoption of legal, paid services by years. He also calls the decision to sell all-you-can-eat subscriptions to music back catalogues a mistake; on what grounds is not made clear.

Finally, he argues, "Had Google [relegated pirate sites' results] a decade ago, it might not have been worthwhile for Vickerman to set up his site at all."

Ten years ago? In 2002, Napster had been gone for less than a year. Gnutella and BitTorrent were measuring their age in months. iTunes was a year old. The Pirate Bay wouldn't exist for some months more. Google was two years away from going public. The mistake then wasn't downgrading sites oft accused of copyright infringement. The mistake then was not building legal, paid downloading services and getting them up and running as fast as possible.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


July 6, 2012

The license, the judge, and the wardrobe

A lot of people have wondered for a long time whether the licensing conditions imposed by software publishers really would stand up in a court of law. And now we know: this week the Court of Justice of the European Union ruled (PDF) that people who buy downloaded software cannot be prohibited from selling on their used licenses.

The case: the German company UsedSoft advertises and sells, among other things, licenses to Oracle software. These it acquires from Oracle customers who either are no longer using them or bought group licenses (sold in blocks of 25) and don't need all of the seats. The customers then download the software from Oracle's Web site. The license you buy from UsedSoft includes the remaining portion of the maintenance contract with Oracle, which marks its licenses "non-transferable". Oracle sued to stop this; the German regional court upheld the complaint. UsedSoft appealed to the German Federal Court of Justice, which referred the case to the Court of Justice of the European Union.

With physical objects we take for granted the concept the US calls "first sale doctrine". That is, the person or company who manufactures the object only gets to sell it the first time. Thereafter, it's yours to do with what you like - trash it, recycle it, loan it out, sell it on to someone else, even burn it, all without owing anything to the person who made it and/or sold it to you. Software manufacturers, however, have emulated the publishers of books, music, film, and other media by unbundling the right to distribute the physical object and the right to make copies of the content embedded in it. When you buy a book, you gain the rights to that one copy of the book; but you don't gain the right to scan in the contents and give away or sell new copies of the contents. Or at least, if you do such a thing you would be wise to be Google Books rather than a 22-year-old college student with broadband and a personal Web site.

UsedSoft v Oracle revolves around the interactions of several pieces of EU law covering copyright and the distribution of goods, but ultimately the court's decision is clear enough. The purpose of the "exhaustion" of the manufacturer's distribution rights after the first sale was, in the ruling's argument, to ensure that the original manufacturer should not be responsible for damage to the physical object that takes place between the first and second sales. Digitally distributed copies (especially from the original site) don't have this problem. Hence the ECJ's decision: first sale doctrine applies to software. The one caveat in all this: the original license-holder must delete or render unusable his original licensed copy of the software, even though it's difficult to prove he's done it.

The conditions of software licenses have never seemed fair. For one thing, back when software was primarily distributed in shrink-wrapped packages, you couldn't read the license to agree to it until you'd rendered the software unreturnable by opening the package. "Clickwrap" more or less ended that issue.

For another thing, the terms are contrary to the way humans normally think about the objects they acquire. In England, as the retired solicitor and fellow Open Rights Group advisory council member Nicholas Bohm explained to me for the Guardian in 2008, this has always seemed particularly dubious; precedents have established that valid terms and conditions are a contract set at the point of sale. In his example, a notice in a hotel-room wardrobe warning that you leave items there at your own risk has no legal weight, because the contract was made at the reception desk.

Finally, with physical objects we take it for granted that we have the right to demand satisfaction - repair, replacement, or refund - if the item we buy is flawed. Obviously, this right has its limits. We can reasonably expect a refund or replacement for a piece of clothing that rips badly or discolors on first washing (assuming we haven't done something dumb). And we can reasonably expect the manufacturer to pay for repairs to a new car that turns left when you steer right, unstoppably leaks fluids, or whose battery overheats to the point of bursting into flames. With software, we are pretty much stuck with the bugs and security holes, and software licenses pretty much universally disclaim liability for anything that happens when you install and use the software. This was the subject of a failed attempt, around 2000, to modify the Uniform Commercial Code to hold software publishers liable for defects - but in return to allow them to impose any restrictions they wanted.

The impact of this week's judgment will be interesting. How will it affect music, ebooks, DRM, movies, games? That's a question for the lawyers and judges in future cases.

We can just say this: what an amazing week. First this ruling. Then the news that the Anti-Counterfeiting Trade Agreement was finally and truly rejected by the European Parliament. And a British man will play the Wimbledon final for the first time in 74 years. I don't know which of the three was least likely.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


April 28, 2012

Interview with Lawrence Lessig

This interview was originally intended for a different publication; I only discovered recently that it hadn't run. Lessig and I spoke in late January, while the fate of the Research Works Act was still unknown (it's since been killed).

"This will be the grossest money election we've seen since Nixon," says the law professor Lawrence Lessig, looking ahead to the US Presidential election in November. "As John McCain said, this kind of spending level is certain to inspire a kind of scandal. What's needed is scandals."

It's not that Lessig wants electoral disaster; it's that scandals are what he thinks it might take to wake Americans up to the co-option of the country's political system. The key is the vast, escalating sums of money politicians need to stay in the game. In his latest book, Republic, Lost, Lessig charts this: in 1982 aggregate campaign spending for all House and Senate candidates was $343 million; in 2008 it was $1.8 billion. Another big bump upward is expected this year: the McCain quote he references was in response to the 2010 Supreme Court decision in Citizens United legalising Super-PACs. These can raise unlimited campaign funds as long as they have no official contact with the candidates. But as Lessig details in Republic, Lost, money-hungry politicians don't need things spelled out.

Anyone campaigning against the seemingly endless stream of anti-open Internet, pro-copyright-tightening policies and legislation in the US, EU, and UK - think the recent protests against the US's Stop Online Piracy Act (SOPA) and Protect Intellectual Property Act (PIPA) and the controversy over the Digital Economy Act and the just-signed Anti-Counterfeiting Trade Agreement (ACTA) - has experienced the blinkered conviction among many politicians that there is only one point of view on these issues. Years of trying to teach them otherwise helped convince Lessig that it was vital to get at the root cause, at least in the US: the constant, relentless need to raise escalating sums of money to fund their election campaigns.

"The anti-open access bill is such a great example of the money story," he says, referring to the Research Works Act (H.R. 3699), which would bar government agencies from mandating that the results of publicly funded research be made accessible to the public. The target is the National Institutes of Health, which adopted such a policy in 2008; the backers are journal publishers.

"It was introduced by a Democrat from New York and a Republican from California and the single most important thing explaining what they're doing is the money. Forty percent of the contributions that Elsevier and its senior executives have made have gone to this one Democrat." There is also, he adds, "a lot to be done to document the way money is blocking community broadband projects".

Lessig, a constitutional scholar, came to public attention in 1998, when he briefly served as a special master in Microsoft's antitrust case. In 1999, he wrote the frequently cited book Code and Other Laws of Cyberspace, following up by founding Creative Commons to provide a simple way to licence work on the Internet. In 2002, he argued Eldred v. Ashcroft against copyright term extension in front of the Supreme Court, a loss that still haunts him. Several books later - The Future of Ideas, Free Culture, and Remix - in 2008, at the Emerging Technology conference, he changed course into his present direction, "coding against corruption". The discovery that he was writing a book about corruption led Harvard to invite him to run the Edmond J. Safra Foundation Center for Ethics, where he fosters RootStrikers, a network of activists.

Of the Harvard centre, he says, "It's a bigger project than just being focused on Congress. It's a pretty general frame for thinking about corruption and trying to think in many different contexts." Given the amount of energy and research, "I hope we will be able to demonstrate something useful for people trying to remedy it." And yet, as he admits, although corruption - and similar copyright policies - can be found everywhere, his book and research are resolutely limited to the US: "I don't know enough about different political environments."

Lessig sees his own role as a purveyor of ideas rather than an activist.

"A division of labour is sensible," he says. "Others are better at organising and creating a movement." For similar reasons, despite a brief flirtation with the notion in early 2008, he rules out running for office.

"It's very hard to be a reformer with idealistic ideas about how the system should change while trying to be part of the system," he says. "You have to raise money to be part of the system and engage in the behaviour you're trying to attack."

Getting others - distinguished non-politicians - to run on a platform of campaign finance reform is one of four strategies he proposes for reclaiming the republic for the people.

"I've had a bunch of people contact me about becoming super-candidates, but I don't have the infrastructure to support them. We're talking about how to build that infrastructure." Lessig is about to publish a short book mapping out strategy; later this year he will update incorporating contributions made on a related wiki.

The failure of Obama, a colleague at the University of Chicago in the mid-1990s, to fulfil his campaign promises in this area is a significant disappointment.

"I thought he had a chance to correct it and the fact that he seemed not to pay attention to it at all made me despair," he says.

Discussion is also growing around the most radical of the four proposals, a constitutional convention under Article V to force through an amendment; to make it happen 34 state legislatures would have to apply.

"The hard problem is how you motivate a political movement that could actually be strong enough to respond to this corruption," he says. "I'm doing everything I can to try to do that. We'll see if I can succeed. That's the objective."


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, an archive of all the earlier columns in this series, and one of other interviews.


April 24, 2012

A really fancy hammer with a gun

Is a robot more like a hammer, a monkey, or the Harley-Davidson on which he rode into town? Or try this one: what if the police program your really cute, funny robot butler (Tony Danza? Scarlett Johansson?) to ask you a question whose answer will incriminate you (and which it then relays)? Is that a violation of the Fourth Amendment (protection against search and seizure) or the Fifth Amendment (you cannot be required to incriminate yourself)? Is it more like flipping a drug dealer or tampering with property? Forget science fiction, philosophy, and your inner biological supremacist; this is the sort of legal question that will be defined in the coming decade.

Making a start on this was the goal of last weekend's We Robot conference at the University of Miami Law School, organized by respected cyberlaw thinker Michael Froomkin. Robots are set to be a transformative technology, he argued to open proceedings, and cyberlaw began too late. Perhaps robotlaw is still a green enough field that we can get it right from the beginning. Engineers! Lawyers! Cross the streams!

What's the difference between a robot and a disembodied artificial intelligence? William Smart (Washington University, St Louis) summed it up nicely: "My iPad can't stab me in my bed." No: and as intimate as you may become with your iPad you're unlikely to feel the same anthropomorphic betrayal you likely would if the knife is being brandished by that robot butler above, which runs your life while behaving impeccably like it's your best friend. Smart sounds unsusceptible. "They're always going to be tools," he said. "Even if they are sophisticated and autonomous, they are always going to be toasters. I'm wary of thinking in any terms other than a really, really fancy hammer."

Traditionally, we think of machines as predictable because they respond the same way to the same input, time after time. But Smart, working with Neil Richards (Washington University, St Louis), points out that sensors are sensitive to distinctions analog humans can't make. A half-degree difference in temperature or a tiny change in lighting is a different condition to a robot. To us, their behaviour will just look capricious, helping to foster that anthropomorphic response and wrongly attributing to them the moral agency necessary for guilt under the law: the "Android Fallacy".
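A toy example, with invented numbers, of what Smart and Richards are describing: a perfectly deterministic controller whose behaviour looks random to a human observer because it branches on a distinction we cannot perceive.

```python
# Deterministic code, capricious-looking behaviour. The 21.5-degree
# threshold is invented for the example.

def butler_action(temperature_c: float) -> str:
    """A hard threshold no human observer can feel."""
    return "open the window" if temperature_c >= 21.5 else "do nothing"

# Two rooms a person would call identically "about 21 degrees":
print(butler_action(21.4))  # -> do nothing
print(butler_action(21.6))  # -> open the window
```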

Smart and I may be outliers. The recent Big Bang Theory episode in which the can't-talk-to-women Rajesh, entranced with Siri, dates his iPhone is hilarious because in Raj's confusion we recognize our own ability to have "relationships" with almost anything by projecting human capacities such as cognition, intent, and emotions onto it. You could call it a design flaw (if humans had a designer), and a powerful one: people send real wedding presents to TV characters, name Liquid Robotics' Wave Gliders, and characterize sending a six-legged landmine-defusing robot that has lost a leg or two back out to continue work as "cruel" (Kate Darling, MIT Media Lab).

What if our rampant affection for these really fancy hammers leads us to want to give them rights? Darling asked. Or, asked Sinziana Gutiu (University of Ottawa), will sex robots like Roxxxy teach us wrong expectations of humans? (When the discussion briefly compared sex robots to pets, a Twitterer quipped, "If robots are pets is sex with them bestiality?")

Few are likely to fall in love with the avatars in the automated immigration kiosks proposed at the University of Arizona (Kristen Thomasen, University of Ottawa), with two screens, one with a robointerrogator and the other flashing images and measuring responses. Automated law enforcement, already with us in nascent form, raises a different set of issues (Lisa Shay). Historically, enforcement has never been perfect; laws only have to be "good enough" to achieve their objective, whether that's slowing traffic or preventing murder. These systems pose the same problem as electronic voting: how do we audit their decisions? In military applications, disclosure may tip off the enemy, as Woodrow Hartzog (Samford University) noted. Yet here - and especially in medicine, where liability will be a huge issue - our traditional legal structures decide whom to punish by retracing the reasoning that led to the eventual decision. But even today's systems are already too complex for that.

When Hartzog asks if anyone really knows how Google or a smartphone tracks us, it reminds me of a recent conversation with Ross Anderson, the Cambridge University security engineer. In 50 years, he said, we have gone from a world whose machines could all be understood by a bright ten-year-old with access to a good library to a world with far greater access to information but full of machines whose inner workings are beyond a single person's understanding. And so: what does due process look like when only seven people understand algorithms that have consequences for the fates of millions of people? Bad enough to have the equivalent of a portable airport scanner looking for guns in New York City; what about house arrest because your butler caught you admiring Timothy Olyphant's gun on Justified?

"We got privacy wrong the last 15 years." Froomkin exclaimed, putting that together. "Without a strong 'home as a fortress right' we risk a privacy future with an interrogator-avatar-kiosk from hell in every home."

The problem with robots isn't robots. The problem is us. As usual, Pogo had it right.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


March 16, 2012

The end of the beginning

The coming months could see significant boosts to freedom of expression in the UK. Last night, the Libel Reform Campaign launched its report on alternatives to libel litigation at an event filled with hope that the Defamation Bill will form part of the Queen's speech in May. A day or two earlier, Consumer Focus hosted an event at the House of Commons to discuss responses to the consultation on copyright following the Hargreaves Review, which are due March 21. Dare we hope that a year or two from now the twin chilling towers of libel law and copyright might be a little shorter?

It's actually a good sign, said the former judge Sir Stephen Sedley last night, that the draft defamation bill doesn't contain everything reform campaigners want: all bills change considerably in the process of Parliamentary scrutiny and passage. There are some other favorable signs: the defamation bill is not locked to any particular party. Instead, there's something of a consensus that libel law needs to be reformed for the 21st century - after all, the multiple publication rule that causes Internet users so much trouble was created by the 1849 court case Duke of Brunswick v Harmer, in which the Duke of Brunswick managed to sue over a 17-year-old article on the basis that his manservant, sent from Paris to London, was able to buy copies of the magazine he believed had defamed him. These new purchases, he argued successfully, constituted a new publication of the libel. Well, you know the Internet: nothing ever really completely dies, and so that rule, applied today, means liability in perpetuity. Ain't new technology grand?

The same is, of course, true in spades of copyright law, even though it's been updated much more recently; the Copyright, Designs, and Patents Act only dates to 1988 (and was then a revision of laws as recent as 1956). At the Consumer Focus event, Saskia Walzel argued that it's appropriate to expect to reform copyright law every ten to 15 years, but that the law should be based on principles, not technologies. The clauses that allow consumers to record TV programs on video recorders, for example, did not have to be updated for PVRs.

The two have something else in common: both are being brought into disrepute by the Internet because both were formulated in a time when publishers were relatively few in number and relatively powerful and needed to be kept in check. Libel law was intended to curb their power to damage the reputations of individuals with little ability to fight back. Copyright law kept them from stealing artists' and creators' work - and each other's.

Sedley's comment last night about libel reform could, with a little adaptation, apply equally well to copyright: "The law has to apply to both the wealthy bully and the small individual needing redress from a large media organization." Sedley went on to argue that it is in the procedures that the playing field can be leveled; hence the recommendation for options to speed up dispute resolutions and lower costs.

Of course, publishers are not what they were. Even as recently as 1988 the landscape of rightsholders was much more diverse. Many more independent record labels jostled for market share with somewhat larger ones; scores of independent book publishers and bookshops were thriving; and photographers, probably the creators being damaged the most in the present situation, still relied for their livelihood on the services of a large ecology of small agencies who understood them and cared about their work. Compare that to now, when cross-media ownership is the order of the day, and we may soon be down to just two giant music companies.

It is for this reason that I have long argued (as Walzel also said on Tuesday) that if you really want to help artists and other creators, they will be better served by improving contract law so they can't be bullied into unfair terms than by tightening and aggressively enforcing copyright law.

Libel law can't be so easily mitigated, but in both cases we can greatly improve matters by allowing exceptions that serve the public interest. In the case of libel law, that means scientific criticism: if someone claims abilities that are contrary to our best understanding of science, critique on that basis should be allowed to proceed. Similarly, there is clearly no economic loss to rightsholders from allowing exceptions for parody, disabled access, and archiving.

It was Lord McNally, the justice minister, who called this moment in the work on libel law reform the end of the beginning, reminding those present that now is the time to use whatever influence campaigners have with Parliamentarians to get through the changes that are needed. He probably wouldn't think of it this way, but his comment reminded me of the 1970s and 1980s tennis champion Chris Evert, who commented that many (lesser) players focused on reaching the finals of tournaments and forgot, once there, that there was a step further to go to win the title.

So enjoy that celebratory drink - and then get back to work!

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


February 24, 2012

Copyright U

"You cannot have democracy without a public domain," says Tracy Mitrano. She clarifies: "Where the issues that matter are part of what people think about every day and we express them to our representatives in a representative democracy."

As commentators, campaigners, and observers keep pointing out, copyright policy hasn't been like that. A key part of the street protests over the Anti-Counterfeiting Trade Agreement (ACTA) was the secrecy of the negotiations over its contents. Similarly, even if there had been widespread contentment with the provisions of the Digital Economy Act, the way it was passed would be disturbing: on the nod, revised at the last minute with no debate, in the wash-up before the election with many MPs already on the road to their constituencies. If these are such good policies, why do they need to be agreed and passed in such anti-democratic ways?

My conversation with Mitrano is partly an accident of geography: when you're in Ithaca, NY, and interested in the Internet and copyright she's the person you visit. Mitrano is the director of IT policy at Cornell University, one of the first academic institutions where the Internet took hold. As such, she has been on the front lines of the copyright battles of the last 15 years, trying to balance academic values and student privacy against the demands of copyright enforcement, much like a testbed for the wider population. She also convenes an annual computer policy and law conference on Internet culture in the academy.

"Higher education was the canary in the coal mine for the enforcement of copyright and intellectual property on the Internet," she says.

We don't generally think of universities as ISPs, but, particularly in the US where so many students live in dorms, that is one of their functions: to provide high-speed, campus-wide access for tens of thousands of users of all types, from students to staff to researchers, plus serving hundreds of thousands of alumni wanting those prestigious-sounding email addresses. In 2004, Cornell was one of the leaders of discussions with the music industry regarding student subscription fees.

"To have picked on us was to pick on an easy target in the sense that we're fish in a barrel given our dependence on federal funding," she says, "and we're an easily caricatured representation of the problem because of the demographic of students, who care about culture, don't have a lot of money, are interested in new technology, and it all seemed to be flowing to them so easily. And the last reason: we were a patsy, because given that we care about education and we're not competing with the content industry for profits or market share, we wanted to help."

The result: "The content industry paid for and got, through lobbying, legislation that places greater demands on higher education ISPs than on commercial ISPs." The relevant legislation is the Higher Education Act 2008. "They wanted filtering devices on all our networks," Mitrano says, "completely antithetical to all our values." Still, the industry got a clause whose language is very like what's being pushed for now in the UK, the EU, and, in fact, everywhere else.

"After they got what they wanted there, they started in Europe on "three strikes"," she says. "Not they've come back with SOPA, ACTA, and PIPA."

Higher education in the US is still paying the price for that early focus.

"Even under the least strict test of the equal protection clause, the rational basis test, there is no rational basis for why higher education as an ISP has to do anything more or less than a commercial ISP in terms of being a virtual agent of enforcement of the content industry. Their numbers prove to be wrong in every field - how much they're losing, how many alleged offenders, what percentage of offenders the students are alleged to be in the whole world in copyright infringement."

Every mid-career lawyer with an interest in Internet policy tells the story of how tiny and arcane a field intellectual property was 20 years ago. Mitrano's version: of the 15 students in her intellectual property class, most were engineers wishing to learn about patents; two were English students who wanted to know why J.D. Salinger's biography had been pulled before publication. By the time she finished law school in 1995, the Internet had been opened up to commercial traffic, though few yet saw the significance.

"Copyright, at that moment, went from backwater area to front and center in US politics, but you couldn't prove that," she says. "The day it became apparent to most people in American society was the day last month when Wikipedia went black."

Unusually for someone in the US, Mitrano thinks loosening the US's grip on Internet governance is a good idea.

"I'm not really willing to give up US control entirely," she admits, "it's in the US's interests to be thinking about Internet governance much more internationally and much more collaboratively than we do today. And there's nothing more representative than issues around copyright and its enforcement globally."


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


February 17, 2012

Foul play

You could have been excused for thinking you'd woken up in a foreign country on Wednesday, when the news broke about a new and deliberately terrifying notice replacing the front page of a previously little-known music site, RnBXclusive.

ZDNet has a nice screenshot of it; it's gone from the RnBXclusive site now, replaced by a more modest advisory.

It will be a while before the whole story is pieced together - and tested in court - but the gist so far seems to be that the takedown of this particular music site was under the fraud laws rather than the copyright laws. As far as I'm aware - and I don't say this often - this is the first time in the history of the Net that the owner of a music site has been arrested on suspicion of conspiracy to defraud (instead of copyright infringement). It seems to me this is a marked escalation of the copyright wars.

Bearing in mind that at this stage these are only allegations, it's still possible to do some thinking about the principles involved.

The site is accused of making available, without the permission of the artists or recording companies, pre-release versions of new music. I have argued for years that file-sharing is not the economic enemy of the music industry and that the proper answer to it is legal, fast, reliable download services. (And there is increasing evidence bearing this out.) But material that has not yet been officially released is a different matter.

The notion that artists and creators should control the first publication of new material is a long-held principle and intuitively correct (unlike much else in copyright law). This was the stated purpose of copyright: to grant artists and creators a period of exclusivity in which to exploit their ideas. Absolutely fundamental to that is time in which to complete those ideas and shape them into their final form. So if the site was in fact distributing unreleased music as claimed, especially if, as is also alleged, the site's copies of that music were acquired by illegally hacking into servers, no one is going to defend either the site or its owner.

That said, I still think artists are missing a good bet here. The kind of rabid fan who can't wait for the official release of new music is exactly the kind of rabid fan who would be interested in subscribing to a feed from the studio while that music is being recorded. They would also, as a friend commented a few years ago, be willing to subscribe to a live feed from the musicians' rehearsal studio. Imagine, for example, being able to listen to great guitarists practice. How do they learn to play with such confidence and authority? What do they find hard? How long does it take to work out and learn something like Dave van Ronk's rendition, on guitar, of Scott Joplin rags with the original piano scoring intact?

I know why this doesn't happen: an artist learning a piece is like a dog with a wound (or maybe a bone): you want to go off in a forest by yourself until it's fixed. (Plus, it drives everyone around you mad.) The whole point of practicing is that it isn't performance. But musicians aren't magicians, and I find it hard to believe that showing the nuts and bolts of how the trick of playing music is worked would ruin the effect. For other types of artists - well, writers with works in progress really don't do much worth watching, but sculptors and painters surely do, as do dance troupes and theatrical companies.

However, none of that excuses the site if the allegations are true: artists and creators control the first release.

But also clearly wrong was the notice SOCA placed on the site, which displayed visitors' IP addresses, warned that downloading music from the site was a crime bearing a maximum penalty of up to ten years in prison, and claimed that SOCA has the capacity to monitor and investigate you, with no mention of due process or court orders. Copyright infringement is a civil offense, not a criminal one; fraud is a criminal offense, but it's hard to see how the claim that downloading music is part of a conspiracy to commit fraud could be made to stick. (A day later, SOCA replaced the notice.) Someone browsing to The Pirate Bay and clicking on a magnet link is not conspiring to steal TV shows any more than someone buying a plane ticket is conspiring to destroy the ozone layer. That millions of people do both things is a contributing factor to the existence of the site and the airline, but if you accuse millions of people the term "organized crime" loses all meaning.

This was a bad, bad blunder on the part of authorities wishing to eliminate file-sharing. Today's unworkable laws against file-sharing are bringing the law into contempt already. Trying to scare people by misrepresenting what the law actually says at the behest of a single industry simply exacerbates the effect. First they're scared, then they're mad, and then they ignore you. Not a winning strategy - for anyone.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


January 27, 2012

Principle failure

The right to access, correct, and delete personal information held about you, and the right to bar data collected for one purpose from being reused for another, are basic principles of the data protection laws that have been the norm in Europe since the EU adopted the Privacy Directive in 1995. This is the Privacy Directive that is currently being updated; the European Commission's proposals seem, inevitably, to please no one. Businesses are already complaining compliance will be unworkable or too expensive (hey, fines of up to 2 percent of global income!). I'm not sure consumers should be all that happy either; I'd rather have the right to be anonymous than the right to be forgotten (which I believe will prove technically unworkable), and I'd rather have the jurisdiction for legal disputes with a company set to my country than to theirs. Much debate lies ahead.

In the meantime, the importance of the data protection laws has been enhanced by Google's announcement this week that it will revise and consolidate the more than 60 privacy policies covering its various services "to create one beautifully simple and intuitive experience across Google". It will, the press release continues, be "Tailored for you". Not the privacy policy, of course, which is a one-size-fits-all piece of corporate lawyer ass-covering, but the services you use, which, after the fragmented data Google holds about you has been pooled into one giant liquid metal Terminator, will be transformed into so-much-more personal helpfulness. Which would sound better if 2011 hadn't seen loud warnings about the danger that personalization will disappear stuff we really need to know: see Eli Pariser's filter bubble and Jeff Chester's worries about the future of democracy.

Google is right that streamlining and consolidating its myriad privacy policies is a user-friendly thing to do. Yes, let's have a single policy we can read once and understand. We hate reading even one privacy policy, let alone 60 of them.

But the furore isn't about that, it's about the single pool of data. People do not use Google Docs in order to improve their search results; they don't put up Google+ pages and join circles in order to improve the targeting of ads on YouTube. This is everything privacy advocates worried about when Gmail was launched.

Australian privacy campaigner Roger Clarke's discussion document sets out the principles that the decision violates: no consultation; retroactive application; no opt-out.

Are we evil yet?

In his 2011 book, In the Plex, Steven Levy traces the beginnings of a shift in Google's views on how and when it implements advertising to the company's controversial purchase of the DoubleClick advertising network, which relied on cookies and tracking to create targeted ads based on Net users' browsing history. This $3.1 billion purchase was huge enough to set off anti-trust alarms. Rightly so. Levy writes, "...sometime after the process began, people at the company realized that they were going to wind up with the Internet-tracking equivalent of the Hope Diamond: an omniscient cookie that no other company could match." Between DoubleClick's dominance in display advertising on large, commercial Web sites and Google AdSense's presence on millions of smaller sites, the company could track pretty much all Web users. "No law prevented it from combining all that information into one file," Levy writes, adding that Google imposed limits, in that it didn't use blog postings, email, or search behavior in building those cookies.
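In engineering terms, "combining all that information into one file" is nothing exotic: it is a join on a shared identifier. Below is a minimal sketch, with invented service names and events, of the kind of merge Levy describes Google as having chosen, at the time, not to perform.

```python
from collections import defaultdict

# Invented per-service logs, each keyed on the same cookie ID.
display_ads_log = [("cookie42", "visited sports site")]
search_log = [("cookie42", "searched 'knee surgery'")]
video_log = [("cookie42", "watched guitar lessons")]

profile = defaultdict(list)
for log in (display_ads_log, search_log, video_log):
    for cookie_id, event in log:
        profile[cookie_id].append(event)

# One "omniscient cookie" record per user, assembled from formerly
# separate pools:
print(profile["cookie42"])
```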

Levy notes that Google spends a lot of time thinking about privacy, but quotes founder Larry Page as saying that the particular issues the public chooses to get upset about seem randomly chosen, the reaction determined most often by the first published headline about a particular product. This could well be true - or it may be a sign that Page and Brin, like Facebook's Mark Zuckerberg and some other Silicon Valley technology company leaders, are simply out of step with the public. Maybe the reactions only seem random because Page and Brin can't identify the underlying principles.

In blending its services, the issue isn't solely privacy, but also the long-simmering complaint that Google is increasingly favoring its own services in its search results - which would be a clear anti-trust violation. There, the traditional principle is that dominance in one market (search engines) should not be leveraged to achieve dominance in another (social networking, video watching, cloud services, email).

SearchEngineLand has a great analysis of why Google's Search Plus is such a departure for the company and what it could have done had it chosen to be consistent with its historical approach to search results. Building on the "Don't Be Evil" tool built by Twitter, Facebook, and MySpace, among others, SEL demonstrates the gaps that result from Google's choices here, and also how the company could have vastly improved its service to its search customers.

What really strikes me in all this is that the answer to both the EU issues and the Google problem may be the same: the personal data store that William Heath has been proposing for three years. Data portability and interoperability, check; user control, check. But that is as far from the Web 2.0 business model as file-sharing is from that of the entertainment industry.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


January 13, 2012

Pot pourri

You have to think that 2012 so far has been orchestrated by someone with a truly strange sense of humor. To wit:

- EMI Records is suing the Irish government for failing to pass laws to block "pirate sites". The way PC Pro tells it, Ireland ought to have implemented site blocking laws to harmonize with European law and one of its own judges has agreed it failed to do so. I'm not surprised, personally: Ireland has a lot of other things on its mind, like the collapse of the Catholic church that dominated Irish politics, education, and health for so long, and the economic situation post-tech boom.

- The US House of Representatives and Senate are, respectively, about to vote on SOPA (Stop Online Piracy Act) and PIPA (Protect Intellectual Property Act), laws to give the US site blocking, search engine de-listing, and other goodies. (Who names these things? SOPA and PIPA sound like they escaped from Anna Russell's La Cantatrice Squelante.) Senator Ron Wyden (D-OR) and Representative Darrell Issa (R-CA) have proposed an alternative, the OPEN Act (PDF), which aims to treat copyright violations as a trade issue rather than a criminal one.

- Issa and Representative Carolyn Maloney (D-NY) have introduced the Research Works Act to give science journal publishers exclusive rights over the taxpayer-funded research they publish. The primary beneficiary would be Elsevier (which also publishes Infosecurity, which I write for), whose campaign contributions have been funding Maloney.

- Google is mixing Google+ with its search engine results because, see, when you're looking up impetigo, as previously noted, what you really want is to know which of your friends has it.

- Privacy International has accused Facebook of destroying someone's life through its automated targeted advertising, an accusation the company disputes.

- And finally, a British judge has ruled that Sheffield student Richard O'Dwyer can be extradited to the US to face charges of copyright infringement; he owned the now-removed TVShack.net site, which hosted links to unauthorized copies of US movies and TV shows.

So many net.wars, so little time...

The eek!-Facebook-knows-I'm-gay story seems overblown. I'm sure the situation is utterly horrible for the young man in question, who PI's now-removed blog posting said was instantly banished from his parents' home, but I still would like to observe that the ads were placed on his page by a robot (one without the Asimov Three Laws programmed into it). On this occasion the robot apparently guessed right, but that's not always true. Remember 2002, when several TiVos thought their owners were gay? These are emotive issues and, as Forbes concludes in the article linked above, the better targeting gets and the more online behavioral advertising spreads, the more you have to think about what someone looking over your shoulder will see. Perhaps that's a new-economy job for 2012: the digital image consultant who knows how to game the system so the ads appearing on your personalized pages will send the "right" messages about you. Except...

It was predicted - I forget by whom - that search generally would need to incorporate social networking to make its search results more "relevant" and "personal". I can see the appeal if I'm looking for a movie to see, a book to read, or a place to travel to: why wouldn't I want to see first the recommendations of my friends, whom I trust and who likely have tastes similar to mine? But if I'm looking to understand what campaigners are saying about American hate radio (PDF), I'm more interested in the National Hispanic Media Coalition's new report than in collectively condemning Rush Limbaugh. Google Plus Search makes sense in terms of competing with Facebook and Twitter, but mix it up with the story above, and you have a bigger mess in sight. By their search results shall ye know their innermost secrets.

Besides proving Larry Lessig's point about the way campaign funding destroys our trust in our elected representatives, the Research Works Act is a terrible violation of principle. It's taken years of campaigning - by the Guardian as well as individuals pushing open standards - to get the UK government to open up its data coffers. And just at the moment when they finally do it, the US, which until now has been the model of taxpayers-paid-for-it-they-own-the-data, is thinking about going all protectionist and proprietary?

The copyright wars were always kind of ridiculous (and, says Cory Doctorow, only an opening skirmish), but there's something that's just wrong - lopsided, disproportionate, arrogant, take your pick - about a company suing a national government over it. Similarly, there's something that seems disproportionate about extraditing a British student for running a Web site on the basis that it was registered in .net, which is controlled by a US-based registry (and has now been removed from same). Granted, I'm no expert on extradition law, and must wait for either Lilian Edwards or David Allen Green to explain the details of the 2003 law. That law was and remains controversial, that much I know.

And this is only the second week. Happy new year, indeed.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


October 14, 2011

Think of the children

Give me smut and nothing but! - Tom Lehrer

Sex always sells, which is presumably why this week's British headlines have been dominated by the news that the UK's ISPs are to operate an opt-in system for porn. The imaginary sales conversations alone are worth any amount of flawed reporting:

ISP Customer service: Would you like porn with that?

Customer: Supersize me!

Sadly, the reporting was indeed flawed. Cameron, it turns out, was merely saying that new customers signing up with the four major consumer ISPs would be asked if they want parental filtering. So much less embarrassing. So much less fun.

Even so, it gave reporters such as Violet Blue, at ZDNet UK, a chance to complain about the lack of transparency and accountability of filtering systems.

Still, the fact that so many people could imagine that it's technically possible to turn "Internet porn" on and off as if by flipping a switch is alarming. If it were that easy, someone would have a nice business by now selling strap-on subscriptions the way cable operators do for "adult" TV channels. Instead, filtering is just one of several options for which ISPs, Web sites, and mobile phone operators do not charge.

One of the great myths of our time is that it's easy to stumble accidentally upon porn on the Internet. That, again, is television, where idly changing channels on a set-top box can indeed land you on the kind of smut that pleased Tom Lehrer. On the Internet, even with safe search turned off, it's relatively difficult to find porn accidentally - though very easy to find on purpose. (Especially since the advent of the .xxx top-level domain.)

It is, however, very easy for filtering systems to remove non-porn sites from view, which is why I generally turn off filters like "Safe search" or anything else that will interfere with my unfettered access to the Internet. I need to know that legitimate sources of information aren't being hidden by overactive filters. Plus, if it really is easy to stumble over pornography accidentally, then as a journalist who writes about the Net and generally opposes censorship, I should know that. I am better than average at constraining my searches so that they will retrieve only the information I really want, which is a definite bias in this minuscule sample of one. But I can safely say that the only time I encounter unwanted anything-like-porn is in display ads on some sites that assume their primary audience is young men.

Eli Pariser, whose The Filter Bubble: What the Internet is Hiding From You I reviewed recently for ZDNet UK, does not talk in his book about filtering systems intended to block "inappropriate" material. But surely porn filtering is a broad-brush subcase of exactly what he's talking about: automated systems that personalize the Net based on your known preferences by displaying content they already "think" you like at the expense of content they think you don't want. If the technology companies were as good at this as the filtering people would like us to think, this weekend's Singularity Summit would be celebrating the success of artificial intelligence instead of still looking 20 to 40 years out.

If I had kids now, would I want "parental controls"? No, for a variety of reasons. For one thing, I don't really believe the controls keep them safe. What keeps them safe is knowing they can ask their parents about material and people's behavior that upsets them so they can learn how to deal with it. The real world they will inhabit someday will not obligingly hide everything that might disturb their equanimity.

But more important, our children's survival in the future will depend on being able to find the choices and information that are hidden from view. Just as the children of 25 years ago should have been taught touch typing, today's children should be learning the intricacies of using search to find the unknown. If today's filters have any usefulness at all, it's as a way of testing kids' ability to think ingeniously about how to bypass them.

Because: although it's very hard to filter out only *exactly* the material that matches your individual definition of "inappropriate", it's very easy to block indiscriminately according to an agenda that cares only about what doesn't appear. Pariser worries about the control that can be exercised over us as consumers, citizens, voters, and taxpayers if the Internet is the main source of news and personalization removes the less popular but more important stories of the day from view. I worry that as people read and access only the material they already agree with our societies will grow more and more polarized with little agreement even on basic facts. Northern Ireland, where for a long time children went to Catholic or Protestant-owned schools and were taught that the other group was inevitably going to Hell, is a good example of the consequences of this kind of intellectual segregation. Or, sadly, today's American political debates, where the right and left have so little common basis for reasoning that the nation seems too polarized to solve any of its very real problems.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

September 9, 2011

The final countdown

The we-thought-it-was-dead specter of copyright term extension in sound recordings has done a Diabolique maneuver and been voted alive by the European Council. In a few days, the Council of Ministers could make it EU law because, as can happen under the inscrutable government structures of the EU, opposition has melted away.

At stake is the extension of copyright in sound recordings from 50 years to 70, something the Open Rights Group has been fighting since it was born. The push to extend it above 50 years has been with us for at least five years; originally the proposal was to take it to 95 years. An extension from 50 to 70 years is modest by comparison, but given the way these things have been going over the last 50 years, that would buy the recording industry 20 years in which to lobby for the 95 years they originally wanted, and then 25 years to lobby for the line to be moved further. Why now? A great tranche of commercially popular recordings is up for entry into the public domain: Elvis Presley's earliest recordings date to 1956, and The Beatles' first album came out in 1963; their first singles turn 50 next year. Not long after that come all the great rock records of the 1970s.

My fellow Open Rights Group advisory council member Paul Sanders has up a concise little analysis of what's wrong here. Basically, it's never jam today for the artists, but jam yesterday, today, and tomorrow for the recording companies. I have commented frequently on the fact that the more record companies can make nearly pure profit on back catalogues whose sunk costs were paid off long ago, the more new, young artists must compete for their attention with an ever-expanding back catalogue. I like Sanders' language on this: "redistributive, from younger artists to older and dead ones".

In recent years, we've heard a lot of the mantra "evidence-based policy" from the UK government. So, in the interests of ensuring this evidence-based policy the UK government is so keen on, here is some. The good news is they commissioned it themselves, so it ought to carry a lot of weight with them. Right? Right.

There have been two major British government reports studying the future of copyright and intellectual property law generally in the last five years: the Gowers Review, published in 2006, and the Hargreaves report, commissioned in November 2010 and released in May 2011.

From Hargreaves:

Economic evidence is clear that the likely deadweight loss to the economy exceeds any additional incentivising effect which might result from the extension of copyright term beyond its present levels. This is doubly clear for retrospective extension to copyright term, given the impossibility of incentivising the creation of already existing works, or work from artists already dead.

Despite this, there are frequent proposals to increase term, such as the current proposal to extend protection for sound recordings in Europe from 50 to 70 or even 95 years. The UK Government assessment found it to be economically detrimental. An international study found term extension to have no impact on output.

And further:

Such an extension was opposed by the Gowers Review and by published studies commissioned by the European Commission.

Ah, yes, Gowers and its 54 recommendations, many or most of which have been largely ignored. (Government policy seems to have embraced "strengthening of IP rights, whether through clamping down on piracy" to the exclusion of things like "improving the balance and flexibility of IP rights to allow individuals, businesses, and institutions to use content in ways consistent with the digital age".)

To Gowers:

Recommendation 3: The European Commission should retain the length of protection on sound recordings and performers' rights at 50 years.

And:

Recommendation 4: Policy makers should adopt the principle that the term and scope of protection for IP rights should not be altered retrospectively.

I'd use the word "retroactive", myself, but the point is the same. Copyright is a contract with society: you get the right to exploit your intellectual property for some number of years, and in return after that number of years your work belongs to the society whose culture helped produce it. Trying to change an agreed contract retroactively usually requires you to show that the contract was not concluded in good faith, or that someone is in breach. Neither of those situations applies here, and I don't think these large companies with their in-house lawyers, many of whom participated in drafting prior copyright law, can realistically argue that they didn't understand the provisions. Of course, this recommendation cuts both ways: if we can't put Elvis's earliest recordings back into copyright, thereby robbing the public domain, we also can't shorten the copyright protection that applies to recordings created with the promise of 50 years' worth of protection.

This whole mess is a fine example of policy laundering: shopping the thing around until you either wear out the opposition or find sufficient champions. The EU, with its Hampton Court maze of interrelated institutions, could have been deliberately designed to facilitate this. You can write to your MP, or even your MEP - but the sad fact is that the shiny, new EU government is doing all this in old-style backroom deals.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

August 12, 2011

"Phony concerns about human rights"

Why can't you both condemn violent rioting and looting *and* care about civil liberties?

One comment of David Cameron's yesterday in the Commons hit a nerve: that "phony" (or "phoney", if you're British) human rights concerns would not get in the way of publishing CCTV images in the interests of bringing the looters and rioters to justice. Here's why it bothers me: even the most radical pro-privacy campaigner is not suggesting that using these images in this way is wrong. But in saying it, Cameron placed human rights on the side of lawlessness. One can oppose the privacy invasiveness of embedding crowdsourced facial recognition into Facebook and still support the use of the same techniques by law enforcement to identify criminals.

It may seem picky to focus on one phrase in a long speech in a crisis, but this kind of thinking is endemic - and, when it's coupled with bad things happening and a need for politicians to respond quickly and decisively, dangerous. Cameron shortly followed it with the suggestion that it might be appropriate to shut down access to social media sites when they are being used to plan "violence, disorder and criminality".

Consider the logic there: given the size of the population, there are probably people right now planning crimes over pints of beer in pubs, over the phone, and sitting in top-level corporate boardrooms. Fellow ORG advisory council member Kevin Marks blogs a neat comparison by Douglas Adams to cups of tea. But no, let's focus on social media.

Louise Mensch, MP and novelist, was impressive during the phone hacking hearings aside from her big gaffe about Piers Morgan. But she's made another mistake here in suggesting that taking Twitter and/or Facebook down for an hour during an emergency is about like shutting down a road or a railway station.

First of all, shutting down the tube in the affected areas has costs: innocent bystanders were left with no means to escape their violent surroundings. (This is the same thinking that wanted to shut down the tube on New Year's Eve 1999 to keep people out of central London.)

But more important, the comparison is wrong. Shutting down social networks is the modern equivalent of shutting down radio, TV, and telephones, not transport. The comparison suggests that Mensch is someone who uses social media for self-promotion rather than, like many of us, as a real-time news source and connector to friends and family. This is someone for whom social media are a late add-on to an already-structured life; in 1992 an Internet outage was regarded as a non-issue, too. The ability to use social media in an emergency surely takes pressure off the telephone network by helping people reassure friends and family, avoid trouble areas, find ways home, and so on. Are there rumors and misinformation? Sure. That's why journalists check stuff out before publishing it (we hope). But those are vastly overshadowed by the amount of useful and timely updates.

Is barring access even possible? As Ben Rooney writes in the Wall Street Journal Europe, it's hard enough to ground one teenager these days, let alone a countryful. But let's say they decide to try. What approaches can they take?

One: The 95 percent approach. Shut down access to the biggest social media sites and hope that the crimes aren't being planned on the ones you haven't touched. Like the network the Guardian finds was really used - BlackBerry Messenger.

Two: The Minority Report approach. Develop natural language processing and artificial intelligence technology to the point where it can interact on the social networks, spot prospective troublemakers, and turn them in before they commit crimes.

Three: The passive approach. Revive all the net.wars of the past two decades. Reinstate the real-world policing. One of the most important drawbacks to relying on mass surveillance technologies is that they encourage a reactive, almost passive, style of law enforcement. Knowing that the police can catch the crooks later is no comfort when your shop is being smashed up. It's a curious, schizophrenic mindset politicians have: blame social ills on new technology while imagining that other new technology can solve them.

The riots have ended - at least for now - but we will have to live for a long time with the decisions we make about what comes next. Let's not be hasty. Think of the PATRIOT Act, which will be ten years old soon.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

July 15, 2011

Dirty digging

The late, great Molly Ivins warns (in Molly Ivins Can't Say That, Can She?) about the risk to journalists of becoming "power groupies" who identify more with the people they cover than with their readers. In the culture being exposed by the escalating phone hacking scandals the opposite happened: politicians and police became "publicity groupies" who feared tabloid wrath to such an extent that they identified with the interests of press barons more than those of the constituents they are sworn to protect. I put the apparent inconsistency between politicians' former acquiescence and their current baying for blood down to Stockholm syndrome: this is what happens when you hold people hostage through fear and intimidation for a few decades. When they can break free, oh, do they want revenge.

The consequences are many and varied, and won't be entirely clear for a decade or two. But surely one casualty must have been the balanced view of copyright frequently argued for in this column. Murdoch's media interests are broad-ranging. What kind of copyright regime do you suppose he'd like?

But the desire for revenge is a really bad way to plan the future, as I said (briefly) on Monday at the Westminster Skeptics.

For one thing, it's clearly wrong to focus on News International as if Rupert Murdoch and his hired help were the only bad apples. In the 2006 report What price privacy now? the Information Commissioner listed 30 publications caught in the illegal trade in confidential information. News of the World was only fifth; number one, by a considerable way, was the Daily Mail (the Observer was number nine). The ICO wanted jail sentences for those convicted of trading in data illegally, and called on private investigators' professional bodies to revoke or refuse licenses to PIs who breach the rules. Five years later, these are still good proposals.

Changing the culture of the press is another matter.

When I first began visiting Britain in the late 1970s, I found the tabloid press absolutely staggering. I began asking the people I met how the papers could do it.

"That's because *we* have a free press," I was told in multiple locations around the country. "Unlike the US." This was only a few years after The Washington Post backed Bob Woodward and Carl Bernstein's investigation of Watergate, so it was doubly baffling.

Tom Stoppard's 1978 play Night and Day explained a lot. It dropped competing British journalists into an escalating conflict in a fictitious African country. Over the course of the play, Stoppard's characters both attack and defend the tabloid culture.

"Junk journalism is the evidence of a society that has got at least one thing right, that there should be nobody with power to dictate where responsible journalism begins," says the naïve and idealistic new journalist on the block.

"The populace and the popular press. What a grubby symbiosis it is," complains the play's only female character, whose second marriage - "sex, money, and a title, and the parrots didn't harm it, either" - had been tabloid fodder.

The standards of that time now seem almost quaint. In the movie Starsuckers, filmmaker Chris Atkins fed fabricated celebrity stories to a range of tabloids. All were published. That documentary also showed illegal methods of obtaining information in action - in 2009, right around the time the Press Complaints Commission was publishing a report concluding that "there is no evidence that the practice of phone message tapping is ongoing".

Someone on Monday asked why US newspapers are better behaved despite First Amendment protection and less constraint by onerous libel laws. My best guess is fear of lawsuits. Conversely, Time magazine argues that Britain's libel laws have encouraged illegal information gathering: publication requires indisputable evidence. I'm not completely convinced: the libel laws are not new, and economics and new media are forcing change on press culture.

A lot of dangers lurk in the calls for greater press regulation. Phone hacking is illegal. Breaking into other people's computers is illegal. Enforce those laws. Send those responsible to jail. That is likely to be a better deterrent than any regulator could manage.

It is extremely hard to devise press regulations that don't enable cover-ups. For example, on Wednesday's Newsnight, the MP Louise Mensch, a member of the DCMS committee conducting the hearings, called for a requirement that politicians disclose all meetings with the press. I get it: expose too-cosy relationships. But whistleblowers depend on confidentiality, and the last thing we want is for politicians to become as difficult to access as tennis stars and have their contact with the press limited to formal press conferences.

Two other lessons can be derived from the last couple of weeks. The first is that you cannot assume that confidential data can be protected simply by access rules. The second is the importance of alternatives to commercial, corporate journalism. Tom Watson has criticized the BBC for not taking the phone hacking allegations seriously. But it's no accident that the trust-owned Guardian was the organization willing to take on the tabloids. There's a lesson there for the US, as the FBI and others prepare to investigate Murdoch and News Corp: keep funding PBS.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

July 1, 2011

Free speech, not data

Congress shall make no law...abridging the freedom of speech...

Is data mining speech? This week, in issuing its ruling in the case of Sorrell v IMS Health, the Supreme Court of the United States took the view that it can be. The majority (6-3) opinion struck down a Vermont law that prohibited drug companies from mining physicians' prescription data for marketing purposes. While the ruling of course has no legal effect outside the US, the primary issue in the case - the use of aggregated patient data - is being considered in many countries, including the UK, and the key technical debate is relevant everywhere.

IMS Health is a new species of medical organization: it collects aggregated medical data and mines it for client pharmaceutical companies, who use the results to determine their strategies for marketing to doctors. Vermont's goal was to save money by encouraging doctors to prescribe lower-cost generic medications. The pharmaceutical companies know, however, that marketing to doctors is effective. IMS Health accordingly sued to get the law struck down, claiming that the law abridged the company's free speech rights. NGOs from the digital - EFF and EPIC - to the not-so-digital - AARP - along with a host of medical organizations, filed amicus briefs arguing that patient information is confidential data that has never before been considered to fall within "free speech". The medical groups were concerned about the threat to trust between doctors and patients; EPIC and EFF added the more technical objection that the deidentification measures taken by IMS Health are inadequate.

At first glance, the SCOTUS ruling is pretty shocking. Why can't a state protect its population's privacy by limiting access to prescription data? How do marketers have free speech?

The court's objection - or rather, the majority opinion - was that the Vermont law is selective: it prohibits the particular use of this data for marketing but not other uses. That, to the six-judge majority, made the law censorship. The three remaining judges dissented, partly on privacy grounds, but mostly on the well-established basis that commercial speech typically enjoys a lower level of First Amendment protection than non-commercial speech.

When you are talking about traditional speech, censorship means selectively banning a type or source of content. Let's take Usenet in the early 1990s as an example. When spam became a problem, a group of community-minded volunteers devised cancellation practices that took note of this principle and defined spam according to the behavior involved in posting it. Deciding that a particular posting is spam requires no subjective judgments about who posted the message or whether it is a commercial ad. Instead, postings are scored against a set of published, objective criteria: x number of copies, posted to y number of newsgroups, over z amount of time, or off-topic for that particular newsgroup, or a binary file posted to a text-only newsgroup. In the Vermont case, if you can accept the argument that data mining is speech, as SCOTUS did, then the various uses of the data are content, and therefore a law that bans only one of many possible uses or bans use by specified parties is censorship.
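
To make the Usenet example concrete, here is a toy sketch, in Python, of that kind of criterion-based scoring, loosely modeled on the Breidbart Index the Usenet cancellers used. The square-root weighting and the threshold of 20 follow my recollection of that index; treat the details as illustrative rather than canonical.

    import math
    from datetime import datetime, timedelta

    def breidbart_index(copies, window=timedelta(days=45)):
        # copies: one (timestamp, newsgroups) pair per posted copy of a
        # substantively identical message.
        if not copies:
            return 0.0
        start = min(ts for ts, _ in copies)
        return sum(math.sqrt(len(groups))
                   for ts, groups in copies
                   if ts - start <= window)

    CANCEL_THRESHOLD = 20  # scores above this made a posting cancelable

    copies = [
        (datetime(1996, 5, 1, 12, 0), ["misc.invest", "alt.make.money.fast"]),
        (datetime(1996, 5, 1, 12, 5), ["rec.sport.tennis"]),
    ]
    print(breidbart_index(copies))  # about 2.41 - nowhere near spam

Note that nothing in the calculation asks who the poster is or what the message says; only the posting behavior counts.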

The decision still seems intuitively wrong to me, as it apparently also did to the three remaining judges, who wrote a dissenting opinion that instead viewed the Vermont law as an attempt to regulate commercial activity, something that has never been covered by the First Amendment.

But note this: the concern for patient privacy that animated much of the interest in this case was only a bystander (which must surely have pleased the plaintiffs).

Obscured by this case, however, is the technical question that should be at the heart of such disputes (several other states have passed Vermont-style laws): how effectively can data be deidentified? If it can be easily reidentified and linked to specific patients, making it available for data mining ends medical privacy. If it can be effectively anonymized, then the objections go away.

At this year's Computers, Freedom, and Privacy there was some discussion of this issue; an IMS Health representative and several of the experts EPIC cited in its brief were present and disagreeing. Khaled El Emam, from the University of Ottawa, filed a brief (PDF) opposing EPIC's analysis; Latanya Sweeney, who did the seminal work in this area in the early 2000s, followed with a rebuttal. From these, my non-expert conclusion is that just as you cannot trust today's secure cryptographic systems to remain unbreakable as computing power continues to increase in speed and decrease in price, you cannot trust today's deidentification to remain robust against the increasing masses of data available for matching to it.
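
Sweeney's basic linkage attack is simple enough to sketch in a few lines of Python. The data below is invented for illustration, not drawn from the case: records stripped of names can still be reidentified by joining them to a public dataset on quasi-identifiers such as ZIP code, birth date, and sex.

    # "Deidentified" medical records: names removed, quasi-identifiers kept.
    medical = [
        {"zip": "02138", "dob": "1945-07-31", "sex": "F", "dx": "hypertension"},
    ]
    # A public dataset, such as a voter roll, with names attached.
    voters = [
        {"name": "J. Doe", "zip": "02138", "dob": "1945-07-31", "sex": "F"},
    ]

    QUASI_IDENTIFIERS = ("zip", "dob", "sex")

    for record in medical:
        matches = [v for v in voters
                   if all(v[k] == record[k] for k in QUASI_IDENTIFIERS)]
        if len(matches) == 1:  # a unique match reidentifies the record
            print(matches[0]["name"], "->", record["dx"])

The more auxiliary data there is to join against, the more records become unique - which is exactly the worry.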

But it seems the technical and privacy issues raised by the Vermont case are yet to be decided. Vermont is free to try again to frame a law that has the effect the state wants but takes a different approach. As for the future of free speech, it seems clear that it will encompass many technological artefacts still being invented - and that it will be quite a fight to keep it protecting individuals instead of, increasingly, commercial enterprises.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

June 14, 2011

Untrusted systems

Why does no one trust patients?

On the TV series House, the eponymous sort-of-hero has a simple answer: "Everybody lies." Because he believes this, and because no one appears able to stop him, he sends his minions to search his patients' homes hoping they will find clues to the obscure ailments he's trying to diagnose.

Today's Health Privacy Summit in Washington, DC, the zeroth day of this year's Computers, Freedom, and Privacy conference, pulled together, in the best Computers, Freedom, and Privacy tradition, speakers from all aspects of health care privacy. Yet many of them agreed on one thing: health data is complex, decisions about health data are complex, and it's demanding too much of patients to expect them to be able to navigate these complex waters. And this is in the US, where to a much larger extent than in Europe the patient is the customer. In the UK, by contrast, the customer is really the GP and the patient has far less direct control. (Just try looking up a specialist in the phone book.)

The reality is, however, as several speakers pointed out, that doctors are not going to surrender control of their data either. Both physicians and patients have an interest in medical records. Patients need to know about their care; doctors need records both for patient care and for billing and administrative purposes. But beyond these two parties are many other interests who would like access to the intimate information doctors and patients originate: insurers, researchers, marketers, governments, epidemiologists. Yet no one really trusts patients to agree to hand over their data; if they did, these decisions would be a lot simpler. But if patients can't trust their doctor's confidentiality, they will avoid seeking health care until they're in a crisis. In some situations - say, cancer - that can end their lives much sooner than is necessary.

The loss of trust, said lawyer Jim Pyles, could bring on an insurance crisis: the potential cost of electronic privacy breaches is effectively unlimited, while insurers' capacity to cover such breaches is not. "If you cannot get insurance for these systems you cannot use them."

If this all (except for the insurance concerns) sounds familiar to UK folk, it's not surprising. As Ross Anderson pointed out, greatly to the Americans' surprise, the UK is way ahead on this particular debate. Nationalized medicine meant that discussions began in the UK as long ago as 1992.

One of Anderson's repeated points is that the notion of the electronic patient record has little to do with the day-to-day reality of patient care. Clinicians, particularly in emergency situations, want to look at the patient. As you want them to do: they might have the wrong record, but you know they haven't got the wrong patient.

"The record is not the patient," said Westley Clarke, and he was so right that this statement was repeated by several subsequent speakers.

One thing that apparently hasn't helped much is the Health Insurance Portability and Accountability Act, which one of the breakout sessions considered scrapping. Is HIPAA a failure or, as long-time Canadian privacy activist Stephanie Perrin would prefer it, a first step? The distinction is important: if HIPAA is seen as an expensive failure it might be scrapped and not replaced. First steps can be succeeded by further, better steps.

Perhaps the first of those should be another of Perrin's suggestions: a map of where your data goes, much like Barbara Garson's book Money Makes the World Go Around, which followed her bank deposit as it was loaned out across the world. Most of us would like to believe that what we tell our doctors remains cosily tucked away in their files. These days, not so much.

For more detail see Andy Oram's blog.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

June 3, 2011

A forgotten man and a bowl of Japanese goldfish

"I'm the forgotten man," Godfrey (William Powell) explains in the 1936 film My Man Godfrey.

Godfrey was speaking during the Great Depression, when prosperity was just around the corner ("Yes, it's been there a long time," says one of Godfrey's fellow city dump dwellers) but the reality for many people was unemployment, poverty, and a general sense that they had ceased to exist except, perhaps, as curiosities to be collected by the rich in a scavenger hunt. Today the rich in question would record their visit to the city dump in an increasingly drunken stream of Tweets and Facebook postings, and people in Nepal would be viewing photographs and video clips even if Godfrey didn't use a library computer to create his own Facebook page.

The EU's push for a right to be forgotten is a logical outgrowth of today's data protection principles, which revolve around the idea that you have rights over your data even when someone else has paid to collect it. EU law grants the right to inspect and correct the data held about us and to prevent its use in unwanted marketing. The idea that we should also have the right to delete data we ourselves have posted seems simple and fair, especially given the widely reported difficulty of leaving social networks.

But reality is complicated. Godfrey was fictional; take a real case, from Pennsylvania. A radiology trainee, wanting a reality check on whether the radiologist she was shadowing was behaving inappropriately, sought advice from her sister, also a health care worker, before reporting the incident. The sister told a co-worker about the call, who told others, and someone in that widening ripple posted the story on Facebook, from where it was reported back to the student's program director. Result: the not-on-Facebook trainee was expelled on the grounds that she had discussed a confidential issue on a cell phone. Lawsuit.

So many things had to go wrong for that story to rebound and hit that trainee in the ass. No one - except presumably the radiologist under scrutiny - did anything actually wrong, though the incident illustrates the point that information travels farther and faster than people think. Preventing this kind of thing is hard. No contract can bar unrelated, third-hand gossipers from posting information that comes their way. There's nothing to invoke libel law. The worst you can say is that the sister was indiscreet and that the program administrator misunderstood and overreacted. But the key point for our purposes here is: which data belongs to whom?

Lilian Edwards has a nice analysis of the conflict between privacy and freedom of expression that is raised by the right to forget. The comments and photographs I post seem to me to belong to me, though they may be about a dozen other people. But on a social network your circle of friends are also stakeholders in what you post; you become part of their library. Howard Rheingold, writing in his 1992 book The Virtual Community, noted the ripped and gaping fabric of conversations on The Well when early member Blair Newman deleted all his messages. Photographs and today's far more pervasive, faster-paced technology make such holes deeper and multi-dimensional. How far do we need to go in granting deletion rights?

The short history of the Net suggests that complete withdrawal is roughly impossible. In the 1980s, Usenet was thought of as an ephemeral medium. People posted in the - they thought - safe assumption that anything they wrote would expire off the world's servers in a couple of weeks. And as long as everyone read live online that was probably true. But along came offline readers and people with large hard disks and Deja News, and Usenet messages written in 1981 with no thought of any future context are a few search terms away.

"It's a mistake to only have this conversation about absolutes," said Google's Alma Whitten at the Big Tent event two weeks ago, arguing that it's impossible to delete every scrap about anyone. Whitten favors a "reasonable effort" approach and a user dashboard to enable that so users can see and control the data that's being held. But we all know the problem with market forces: it is unlikely that any of the large corporations will come up with really effective tools unless forced. For one thing, there is a cultural clash here between the EU and the US, the home of many of these companies. But more important, it's just not in their interests to enable deletion: mining that data is how those companies make a living and in return we get free stuff.

Finding the right balance between freedom of expression (my right to post about my own life) and privacy, including the right to delete, will require a mix of answers as complex as the questions: technology (such as William Heath's Mydex), community standards, and, yes, law, applied carefully. We don't want to replace Britain's chilling libel laws with a DMCA-like deletion law.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

May 27, 2011

Mixed media

In a fight between technology and the law, who wins? This question has been debated since Net immemorial. Techies often seem to be sure that law can't win against practical action. And often this has been true: the release of PGP defeated the International Traffic in Arms Regulations that banned the export of strong cryptography; TOR lets people all over the world bypass local Net censorship rules; and, in the UK, over the last few weeks Twitter has been causing superinjunctions to collapse.

On the other hand, technology by itself is often not enough. The final defeat of the ITAR had at least as much to do with the expansion of ecommerce and the consequent need for secured connections as it did with PGP. TOR is a fine project, but it is not a mainstream technology. And Twitter is a commercial company that can be compelled to disclose what information it has about its users (though granted, this may be minimal) or close down accounts.

Last week, two events took complementary approaches to this question. The first, Big Tent UK, hosted by Google, Privacy International, and Index on Censorship, featured panels and discussions loosely focused on how law can control technology. The second, OpenTech, focused loosely on how technology can change our understanding of the world, if not up-end the law itself. At the latter event, projects like Lisa Evans' effort to understand government spending relied on government-published data, while others, such as OpenStreetMap and OpenCorporates, seek to create open-source alternatives to existing proprietary services.

There's no question that doing things - or, in my case, egging on people who are doing things - is more fun than purely intellectual debate. I particularly liked the open-source hardware projects presented at OpenTech, some of which are, as presenter Paul Downey said, trying to disrupt a closed market. See, for example, River Simple's effort to offer an open-source design for a hydrogen-powered car. Downey whipped through perhaps a dozen projects, all based on the notion that if something can be represented by lines on a PowerPoint slide you can send it to a laser cutter.

But here again I suspect the law will interfere at some point. Not only will open-source cars have to obey safety regulations, but all hardware designs will come up against the same intellectual property issues that have been dogging the Net from all directions. We've noted before Simon Bradshaw's work showing that copyright as applied to three-dimensional objects will be even more of a rat's nest than it has been when applied to "simple" things like books, music, and movies.

At BigTentUK, copyright was given a rest for once in favor of discussions of privacy, the limits of free speech, and revolution. As is so often the case with this type of discussion, it wasn't long before someone - British TV producer Peter Bazalgette - invoked George Orwell. Bizarrely, he aimed "Orwellian" at Privacy International executive director Simon Davies, who a minute before had proposed that the solution to at least some of the world's ongoing privacy woes would be for regulators internationally to collaborate on doing their jobs. Oddly, in an audience full of leading digital rights activists and entrepreneurs, no one admitted to representing the Information Commissioner's office.

Yet given these policy discussions as his prelude, the MP Jeremy Hunt (Con-South West Surrey), the secretary of state for Culture, Olympics, Media, and Sport, focused instead on technical progress. We need two things for the future, he said: speed and mobility. Here he cited Bazalgette's great-great-grandfather's contribution to building the sewer system as a helpful model for today. Tasked with deciding the size of pipes to specify for London's then-new sewer system, Joseph Bazalgette doubled the size of pipe necessary to serve the area of London with the biggest demand; we still use those same pipes. We should, said Hunt, build bandwidth in the same foresighted way.

The modern-day Bazalgette, instead, wants the right to be forgotten: people, he said, should have the right to delete any information that they voluntarily surrender. Much like Justine Roberts, the founder of Mumsnet, who participated in the free speech panel, he seemed not to understand the consequences of what he was asking for. Roberts complained that the "slightly hysterical response" to any suggestion of moderating free speech in the interests of child safety inhibits real discussion; the right to delete is not easily implemented when people are embedded in a three-dimensional web of information.

The Big Tent panels on revolution and conflict would have fit either event, including Wael Ghonim, who ran a Facebook page that fomented pro-democracy demonstrations in Egypt, and representatives of PAX and Unitar, projects to use the postings of "citizen journalists" and public image streams respectively to provide early warnings of developing conflict.

In the end, we need both technology and law, a viewpoint best encapsulated by Index on Censorship chief executive John Kampfner, who said he was worried by claims that the Internet is a force for good. "The Internet is a medium, a tool," he said. "You can choose to use it for moral good or moral ill."

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

May 20, 2011

The world we thought we lived in

If one thing is more annoying than another, it's the fantasy technology on display in so many TV shows. "Enhance that for me!" barks an investigator. And, obediently, his subordinate geek/squint/nerd pushes a button or few, a line washes over the blurry image on screen, and now he can read the maker's mark on a pill in the hand of the target subject that was captured by a distant CCTV camera. The show 24 ended for me 15 minutes into season one, episode one, when Kiefer Sutherland's Jack Bauer, trying to find his missing daughter, thrust a piece of paper at an underling and shouted, "Get me all the Internet passwords associated with that telephone number!" Um...

But time has moved on, and screenwriters are more likely to have spent their formative years online and playing computer games, and so we have arrived at The Good Wife, which gloriously wrapped up its second season on Tuesday night (in the US; in the UK the season is still winding to a close on Channel 4). The show is a lot of things: a character study of an archetypal humiliated politician's wife (Alicia Florrick, played by Julianna Margulies) who rebuilds her life after her husband's betrayal and corruption scandal; a legal drama full of moral murk and quirky judges (Carob chip?); a political drama; and, not least, a romantic comedy. The show is full of interesting, layered men and great, great women - some of them mature, powerful, sexy, brilliant women. It is also the smartest show on television when it comes to life in the time of rapid technological change.

When it was good, in its first season, Gossip Girl cleverly combined high school mean girls with the citizen reportage of TMZ to produce a world in which everyone spied on everyone else by sending tips, photos, and rumors to a Web site, which picked the most damaging moment to publish them and blast them to everyone's mobile phones.

The Good Wife goes further to exploit the fact that most of us, especially those old enough to remember life before CCTV, go about our lives forgetting that we leave a trail everywhere. Some of these trails are, of course, old staples of investigative dramas: phone records, voice messages, ballistics, and the results of a good, old-fashioned break-in-and-search. But some are myth-busting.

One case (S2e15, "Silver Bullet") hinges on the difference between the compressed, digitized video copy and the original analog video footage: dropped frames change everything. A much earlier case (S1e06, "Conjugal") hinges on eyewitness testimony; despite a slightly too-pat resolution (I suspect now, with more confidence, it might have been handled differently), the show does a textbook job of demonstrating the flaws in human memory and their application to police line-ups. In a third case (S1e17, "Heart"), a man faces the loss of his medical insurance because of a single photograph posted to Facebook showing him smoking a cigarette. And the disgraced husband's (Peter Florrick, played by Chris Noth) attempt to clear his own name comes down to a fancy bit of investigative work capped by camera footage from an ATM in the Cayman Islands that the litigator is barely technically able to display in court. As entertaining demonstrations and dramatizations of the stuff net.wars talks about every week and the way technology can be both good and bad - Alicia finds romance in a phone tap! - these could hardly be better. The stuffed lion speaker phone (S2e19, "Wrongful Termination") is just a very satisfying cherry topping of technically clever hilarity.

But there's yet another layer, surrounding the season two campaign mounted to get Florrick elected back into office as State's Attorney: the ways that technology undermines as well as assists today's candidates.

"Do you know what a tracker is?" Peter's campaign manager (Eli Gold, played by Alan Cumming) asks Alicia (S2e01, "Taking Control"). Answer: in this time of cellphones and YouTube, unpaid political operatives follow opposing candidates' family and friends to provoke and then publish anything that might hurt or embarrass the opponent. So now: Peter's daughter (Makenzie Vega) is captured praising his opponent and ham-fistedly trying to defend her father's transgressions ("One prostitute!"). His professor brother-in-law's (Dallas Roberts) in-class joke that the candidate hates gays is live-streamed over the Internet. Peter's son (Graham Phillips) and a manipulative girlfriend (Dreama Walker), unknown to Eli, create embarrassing, fake Facebook pages in the name of the opponent's son. Peter's biggest fan decides to (he thinks) help by posting lame YouTube videos apparently designed to alienate the very voters Eli's polls tell him to attract. (He's going to post one a week; isn't Eli lucky?) Polling is old hat, as are rumors leaked to newspaper reporters; but today's news cycle is 20 minutes and can we have a quote from the candidate? No wonder Eli spends so much time choking and throwing stuff.

All of this fits together because the underlying theme of all parts of the show is control: control of the campaign, the message, the case, the technology, the image, your life. At the beginning of season one, Alicia has lost all control over the life she had; by the end of season two, she's in charge of her new one. Was a camera watching in that elevator? I guess we'll find out next year.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

May 13, 2011

Lay down the cookie

British Web developers will be spending the next couple of weeks scrambling to meet the May 26 deadline, after which new legislation requires users to consent before a cookie can be placed on their computers. The Information Commissioner's guidelines allow a narrow exception for cookies that are "strictly necessary for a service requested by the user"; the example given is a cookie used to remember an item the user has chosen to buy so it's there when they go to check out. Won't this be fun?

Normally, net.wars comes down on the side of privacy even when it's inconvenient for companies, but in this case we're prepared to make at least a partial exception. It's always been a little difficult to understand the hatred and fear with which some people regard the cookie. Not the chocolate chip cookie, which of course we know is everything that is good, but the bits of code that reside on your computer to give Web pages the equivalent of memory. Cookies allow a server to assemble a page that remembers what you've looked at, where you've been, and which gewgaw you've put into your shopping basket. At least some of this can be done in other ways such as using a registration scheme. But it's arguably a greater invasion of privacy to require users to form a relationship with a Web site they may only use once.

The single-site use of cookies is, or ought to be, largely uncontroversial. The more contentious usage is third-party cookies, used by advertising agencies to track users from site to site with the goal of serving up targeted, rather than generic, ads. It's this aspect of cookies that has most exercised privacy advocates, and most browsers provide the ability to block cookies - all, third-party, or none, with a provision to make exceptions.
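
The distinction is visible in the cookies themselves. Here is a minimal sketch using Python's standard http.cookies module (the domain names are invented): a first-party cookie is scoped to the site you are visiting, while a third-party cookie is scoped to an ad server whose banners are embedded on many sites, so the same identifier comes back from every one of them.

    from http.cookies import SimpleCookie

    # First-party: set by the shop you are actually visiting; only
    # shop.example ever sees it again.
    first_party = SimpleCookie()
    first_party["basket"] = "item-42"
    first_party["basket"]["domain"] = "shop.example"
    first_party["basket"]["path"] = "/"

    # Third-party: set by an ad network whose banner appears on
    # shop.example, news.example, and so on; the same ID returns with
    # every page carrying the network's ads - hence cross-site tracking.
    third_party = SimpleCookie()
    third_party["uid"] = "user-8831"
    third_party["uid"]["domain"] = "ads.example"
    third_party["uid"]["path"] = "/"

    print(first_party.output())  # Set-Cookie: basket=item-42; Domain=shop.example; Path=/
    print(third_party.output())  # Set-Cookie: uid=user-8831; Domain=ads.example; Path=/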

The new rules, however, seem overly broad.

In the EU, the anti-cookie effort began in 2001 (the second-ever net.wars), seemed to go quiet, and then revived in 2009, when I called the legislation "masterfully stupid". That piece goes into some detail about the objections to the anti-cookie legislation, so we won't review that here. At the time, reader email suggested that perhaps making life unpleasant for advertisers would force browser manufacturers to design better privacy controls. 'Tis a consummation devoutly to be wished, but so far it hasn't happened, and in the meantime that legislation has become an EU directive and now UK law.

The chief difference is moving from opt-out to opt-in: users must give consent for cookies to be placed on their machines; the chief flaw is banning a technology instead of regulating undesirable actions and effects. Besides the guidelines above, the ICO refers people to All About Cookies for further information.

Pete Jordan, a Hull-based Web developer, notes that when you focus legislation on a particular technology, "People will find ways around it if they're ingenious enough, and if you ban cookies or make it awkward to use them, then other mechanisms will arise." Besides, he says, "A lot of day-to-day usage is to make users' experience of Web sites easier, more friendly, and more seamless. It's not life-threatening or vital, but from the user's perception it makes a difference if it disappears." Cookies, for example, are what provide the trail of "breadcrumbs" at the top of a Web page to show you the path by which you arrived at that page so you can easily go back to where you were.

"In theory, it should affect everything we do," he says of the legislation. A possible workaround may be to embed tokens in URLs, a strategy he says is difficult to manage and raises the technical barrier for Web developers.

The US, where competing anti-tracking bills are under consideration in both houses of Congress, seems to be taking a somewhat different tack in requiring Web sites to honor the choice if consumers set a "Do Not Track" flag. Expect much more public debate about the US bills than there has been in the EU or UK. See, for example, the strong insistence by What Would Google Do? author Jeff Jarvis that media sites in particular have a right to impose any terms they want in the interests of their own survival. He predicts paywalls everywhere and the collapse of media economics. I think he's wrong.
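
As a technical matter, the flag itself could hardly be simpler: Do Not Track is just an HTTP request header, DNT: 1, that the browser attaches to every request; all the difficulty is in making sites honor it. A minimal, framework-free sketch of the server side (assume the headers dictionary comes from whatever server stack is in use):

    def tracking_allowed(headers):
        # "DNT: 1" means the user has opted out of tracking.
        return headers.get("DNT") != "1"

    print(tracking_allowed({"DNT": "1"}))  # False - user opted out
    print(tracking_allowed({}))            # True - no preference expressed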

The thing is, it's not a fair contest between users and Web site owners. It's more or less impossible to browse the Web with all cookies turned off: the complaining pop-ups are just too frequent. But targeting the cookie is not the right approach. There are many other tracking technologies that are invisible to consumers which may have both good and bad effects - even Web bugs are used helpfully some of the time. (The irony is, of course, regulating the cookie but allowing increases in both offline and online surveillance by police and government agencies.)

Requiring companies to behave honestly and transparently toward their customers would have been a better approach for the EU; one hopes it will work better in the US.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

April 1, 2011

Equal access

It is very, very difficult to understand the reasoning behind the not-so-secret plan to institute Web blocking. In a letter to the Open Rights Group (http://www.openrightsgroup.org/blog/2011/minister-confirms-voluntary-site-blocking-discussions), Ed Vaizey, the minister for culture, communications, and creative industries, confirmed that such a proposal emerged from a workshop to discuss "developing new ways for people to access content online". (Orwell would be so proud.)

We fire up Yes, Minister once again to remind everyone of the four characteristics of proposals ministers like: quick, simple, popular, cheap. Providing the underpinnings of Web site blocking is not likely to be very quick, and it's debatable whether it will be cheap. But it certainly sounds simple, and although it's almost certainly not going to be popular among the 7 million people the government claims engage in illegal file-sharing - a number PC Pro has done a nice job of dissecting - it's likely to be popular with the people Vaizey seems to care most about, rights holders.

The four opposing kiss-of-death words are: lengthy, complicated, expensive, and either courageous or controversial, depending on how soon the election is. How to convince Vaizey that it's these four words that apply and not the other four?

Well, for one thing, it's not going to be simple, it's going to be complicated. Web site blocking is essentially a security measure. You have decided that you don't want people to have access to a particular source of data, and so you block their access. Security is, as we know, not easy to implement and not easy to maintain. Security, as Bruce Schneier keeps saying, is a process, not a product. It takes a whole organization to implement the much more narrowly defined IWF system. What kind of infrastructure will be required to support the maintenance and implementation of a block list to cover copyright infringement? Self-regulatory, you say? Where will the block list, currently thought to be about 100 sites, come from? Who will maintain it? Who will oversee it to ensure that it doesn't include "innocent" sites? ISPs have other things to do, and other than limiting or charging for the bandwidth consumption of their heaviest users (who are not all file sharers by any stretch) they don't have a dog in this race. Who bears the legal liability for mistakes?

The list is most likely to originate with rights holders, who, because they have shown over most of the last 20 years that they care relatively little if they scoop innocent users and sites into the net alongside infringing ones, no one trusts to be accurate. Don't the courts have better things to do than adjudicate what percentage of a given site's traffic is copyright-infringing and whether it should be on a block list? Is this what we should be spending money on in a time of austerity? Mightn't it be...expensive?

Making the whole thing even more complicated is the obvious (to anyone who knows the Internet) fact that such a block list will start a new arms race - and, according to Torrentfreak, already has.

And yet another wrinkle: among blocking targets are cyberlockers. Yet this is a service that, like search, is going mainstream: Amazon.com has just launched such a service, which it calls Cloud Drive and which it retains the right to police rather thoroughly. Encrypted files, here we come.

At least one ISP has already called the whole idea expensive, ineffective, and rife with unintended consequences.

There are other obvious arguments, of course. It opens the way to censorship. It penalizes innocent uses of technology as well as infringing ones; torrent search sites typically have a mass of varied material and there are legitimate reasons to use torrenting technology to distribute large files. It will tend to add to calls to spy on Internet users in more intrusive ways (as Web blocking fails to stop the next generation of file-sharing technologies). It will tend to favor large (often American) services and companies over smaller ones. Google, as IsoHunt told the US Court of Appeals two weeks ago, is the largest torrent search engine. (And, of course, Google has other copyright troubles of its own; last week the court rejected the Google Books settlement.)

But the sad fact is that although these arguments are important they're not a good fit if the main push behind Web blocking is an entrenched belief that the only way to secure economic growth is to extend and tighten copyright while restricting access to technologies and sites that might be used for infringement. Instead, we need to show that this entrenched belief is wrong.

We do not block the roads leading to car boot sales just because sometimes people sell things at them whose provenance is cloudy (at best). We do not place levies on the purchase of musical instruments because someone might play copyrighted music on them. We should not remake the Internet - a medium to benefit all of society - to serve the interests of one industrial group. It would make more sense to put the same energy and financial resources into supporting the games industry, which, as Tom Watson (Lab - West Bromwich East) has pointed out, has great potential to lift the British economy.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

March 18, 2011

Block party

When last seen in net.wars, the Internet Watch Foundation was going through the most embarrassing moment of its relatively short life: the time it blocked a Wikipedia page. It survived, of course, and on Tuesday this week it handed out copies of its latest annual report (PDF) and its strategic plan for the years 2011 to 2014 (PDF) in the Strangers Dining Room at the House of Commons.

The event was, more or less, the IWF's birthday party: in August it will be 15 years since the suspicious, even hostile first presentation, in 1996, of the first outline of the IWF. It was an uneasy compromise between an industry accused of facilitating child abuse, law enforcement threatening technically inept action, and politicians anxious to be seen to be doing something, all heightened by some of the worst mainstream media reporting I've ever seen.

Suspicious or not, the IWF has achieved traction. It has kept government out of the direct censorship business and politicians and law enforcement reasonably satisfied. Without - as was pointed out - cost to the taxpayer, since the IWF is funded from a mix of grants, donations, and ISPs' subscription fees.

And to be fair, it has been arguably successful at doing what it set out to do, which is to disrupt the online distribution of illegal pornographic images of children within the UK. The IWF has reported for some years now that the percentage of such images hosted within the UK is near zero. On Tuesday, it said the time it takes to get foreign-hosted content taken down has halved. Its forward plan includes more of the same, plus pushing more into international work by promoting the use of its URL list abroad and developing partnerships.

Over at The Register, Jane Fae Ozimek has done a good job of tallying up the numbers the IWF reported, and also of following up on remarks made by Culture Minister Ed Vaizey and Home Office Minister James Brokenshire that suggested the IWF or its methods might be expanded to cover other categories of material. So I won't rehash either topic here.

Instead, what struck me is the IWF's report that a significant percentage of its work now concerns sexual abuse images and videos that are commercially distributed. This news offered a brief glance into a shadowy world that none of us can legally study, since under UK law (and the laws of many other countries) it is illegal to access such material. If this is a correct assessment, it certainly follows the same pattern as the world of malware writing, which has progressed from the giggling, maladjusted teenager writing a bit of disruptive code in his bedroom to a highly organized, criminal, upside-down image of the commercial software world (complete, I'm told by experts from companies like Symantec and Sophos, with product trials, customer support, and update patches). Similarly, our, or at least my, image was always of like-minded amateurs exchanging copies of the things they managed to pick up, rather like twisted stamp collectors.

The IWF report says it has identified 715 such commercial sources, 321 of which were active in 2010. At least 47.7 percent of the commercially branded material is produced by the top ten, and the most prolific of these brands used 862 URLs. The IWF has attempted to analyze these brands, and believes that they are operated in clusters by criminals. To quote the report:

Each of the webpages or websites is a gateway to hundreds or even thousands of individual images or videos of children being sexually abused, supported by layers of payment mechanisms, content stores, membership systems, and advertising frames. Payment systems may include pre-pay cards, credit cards, "virtual money" or e-payment systems, and may be carried out across secure webpages, text, or email.

This is not what people predicted when they warned at the original meeting that blocking access to content would drive it underground into locations that were harder to police. I don't recall anyone saying: it will be like Prohibition and create a new Mafia. How big a problem this is and how it relates to events like yesterday's shutdown of boylovers.net remains to be seen. But there's logic to it: anything that's scarce attracts a high price and anything high-priced and illegal attracts dedicated criminals. So we have to ask: would our children be safer if the IWF were less successful?

The IWF will, I think, always be a compromise. Civil libertarians will always be rightly suspicious of any organization that has the authority and power to shut down access to content, online or off. Still, the IWF's ten-person board now includes, alongside the representatives of ISPs, top content sites, and academics, a consumer representative, and seems to be less dominated by repressive law enforcement interests. There's an independent audit in the offing, and while the IWF publishes no details of its block list for researchers to examine, it advocates transparency in the form of a splash screen that tells users that a site is blocked and why. They learned a lot from the Wikipedia incident, the IWF's departing head, Peter Robbins, said in conversation.

My summary: the organization will know it has its balance exactly right when everyone on all sides has something to complain about.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

July 30, 2010

Three-legged race

"If you are going to do this damn silly thing, don't do it in this damn silly way," Sir Humphrey Appleby tells Jim Hacker in a fit of unaccustomed straight talking.

We think of this often these days, largely because it seems as though lawmakers, having been belittled by impatient and malcontent geeks throughout the 1990s for being too slow to keep up with Internet time, are trying to speed through the process of creating legislation by eliminating thought, deliberation, and careful drafting. You can see why they'd want to get rid of so many civil servants, who might slow this process down.

In that particular episode of Yes, Minister, "The Writing on the Wall" (S1e05), Appleby and Hacker butt heads over who will get the final say over the wording of a draft proposal on phased Civil Service reductions (today's civil servants and ministers might want to watch episode S1e03, "The Economy Drive", for what their lives will soon be like). Hacker wins that part of the battle only to discover that his version, if implemented, will shut down his own department. Oops.

Much of the Digital Economy Act (2010) was like this: redrafted at the last minute in all sorts of unhelpful ways. But the devil is always in the details, and it was not unreasonable to hope that Ofcom, charged with defining and consulting on those details, would operate in a more measured fashion. But apparently not, and so we have a draft code of practice that's so incomplete it could be a teenager's homework.

Both Consumer Focus and the Open Rights Group have analyses of the code's non-compliance with the act, and there is a helpful online form (http://e-activist.com/ea-campaign/clientcampaign.do?ea.client.id=1422&ea.campaign.id=7268) should you wish to submit your opinions. The consultation closes today, so run, do not walk, to add your comments.

What's more notable is when it opened: May 28, only three days after the State Opening of the post-election parliamentary session, three weeks after the election, and six weeks after the day that Gordon Brown called the election. Granted, civil servants do not down pencils while the election is proceeding. But given that the act went through last-second changes and then was nodded through the House of Commons in the frantic dash to get home to start campaigning, the most time Ofcom can have had to draft this mish-mash was about six weeks. Which may explain the holes and inadequacies, but then you have to ask: why didn't they take their time and do it properly?

The Freedom bill, which is to repeal so many of the items on our wish list, is silent on the subject of the Digital Economy Act, despite a number of appearances on the Freedom bill's ideas site. (Big Brother Watch has some additional wish list items.)

The big difficulty for anyone who hates the copyright protectionist provisions in the act - the threat to open wi-fi, the disconnection or speed-limitation of Internet access ("technical measures") to be applied to anyone who is accused of copyright infringement three times ("three-strikes", or HADOPI, after the failed French law attempting to do the same) - is that what you really want is for the act to go away. Preferably back where it came from, some copyright industry lobbyist's brain. A carefully drafted code of practice that pays attention to ensuring that the evidentiary burden on copyright holders is strong enough to deter the kind of abuse seen in the US since the passage of the Digital Millennium Copyright Act (1998) is still not a good scenario, merely a least-worst one.

Still, ORG and Consumer Focus are not alone in their unhappiness. BT and TalkTalk have expressed their opposition, though for different reasons. TalkTalk is largely opposed to the whole letter-writing and copyright infringement elements; but both ISPs are unhappy about Ofcom's decision to limit the code to fixed-line ISPs with more than 400,000 customers. In the entire UK, there are only seven: TalkTalk, BT, Post Office, Virgin, Sky, Orange, and O2. Yet it makes sense to exclude mobile ISPs for now: at today's prices it's safe to guess that no one spends a lot of time downloading music over them. As for the rest: since these ISPs can only benefit if unauthorised downloading on their services decreases, don't all ISPs want the heaviest downloaders to leech off someone else's service?

LINX, the largest membership organisation for UK Internet service providers, has also objected (PDF) to the Act's apportionment of costs: ISPs, LINX's Malcolm Hutty argues, are innocent third parties, so rather than sharing the costs of writing letters and retaining the data necessary to create copyright infringement reports, ISPs should be reimbursed not only for the entire cost of implementing the necessary systems but also for opportunity costs. It's unclear, LINX points out, how much change Ofcom has time to make to the draft code and still meet its statutory timetable.

So this is law on Internet time: drafted for, if not by, special interests, undemocratically rushed through Parliament, hastily written, poorly thought-out, unfairly and inequitably implemented in direct opposition to the country's longstanding commitment to digital inclusion. Surely we can do better.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

June 4, 2010

Return to the hacker crackdown

Probably many people had forgotten about the Gary McKinnon case until the new government reversed its decision to intervene in his extradition. Legal analysis is beyond our expertise, but we can outline some of the historical factors at work.

By 2001, when McKinnon did his breaking and entering into US military computers, hacking had been illegal in the UK for just over ten years - the Computer Misuse Act was passed in 1990 after the overturned conviction of Robert Schifreen and Steve Gold for accessing Prince Philip's Prestel mailbox.

Early 1990s hacking (earlier, the word meant technological cleverness) was far more benign than today's flat-out crimes of identity fraud, money laundering, and raiding bank accounts. The hackers of the era - most famously Kevin Mitnick - were more the cyberspace equivalent of teenaged joyriders: they wandered around the Net rattling doorknobs and playing tricks to get passwords, and occasionally copied some bit of trophy software for bragging rights. Mitnick, despite spending four and a half years in jail awaiting trial, was not known to profit from his forays.

McKinnon's claim that he was looking for evidence that the US government was covering up information about alternative energy and alien visitations seems to me wholly credible. There was and is a definite streak of conspiracy theorizing - particularly about UFOs - in the hacker community.

People seemed more alarmed by those early-stage hackers than they are by today's cybercriminals: the fear of new technology was projected onto those who seemed to be its masters. The series of 1990 "Operation Sundevil" raids in the US, documented in Bruce Sterling's book The Hacker Crackdown, inspired the creation of the Electronic Frontier Foundation. Among other egregious confusions, law enforcement seized game manuals from Steve Jackson Games in Austin, Texas, calling them hacking instruction books.

The raids came alongside a controversial push to make hacking illegal around the world. It didn't help when police burst in at the crack of dawn to arrest bright teenagers and hold them and their families (including younger children) at gunpoint while their computers and notebooks were seized and their homes ransacked for evidence.

"I think that in the years to come this will be recognized as the time of a witch hunt approximately equivalent to McCarthyism - that some of our best and brightest were made to suffer this kind of persecution for the fact that they dared to be creative in a way that society didn't understand," 21-year-old convicted hacker Mark Abene ("Phiber Optik") told filmmaker Annaliza Savage for her 1994 documentary, Unauthorized Access (YouTube).

Phiber Optik was an early 1990s cause célèbre. A member of the hacker groups Legion of Doom and Masters of Deception, he had an exceptionally high media profile. In January 1990, he and other MoD members were raided on suspicion of having caused the AT&T crash of January 15, 1990, when more than half of the telephone network ceased functioning for nine hours. Abene and others were eventually charged in 1991, with law enforcement demanding $2.5 million in fines and 59 years in jail. Plea agreements reduced that to a year in prison and 600 hours of community service. The company eventually admitted the crash was due to its own flawed software upgrade.

There are many parallels between these early days of hacking and today's copyright wars. Entrenched large businesses (then AT&T; now the RIAA, MPAA, BPI, et al) perceive mostly young, smart Net users as dangerous enemies and pursue them with the full force of the law, claiming exaggeratedly large sums in damages. Isolated, often young, targets are threatened with jail and/or huge damages to make examples of them and deter others. The upshot in the 1990s was an entrenched distrust of and contempt for law enforcement on the part of the hacker community, exacerbated by the fact that back then so few law enforcement officers understood anything about the technology they were dealing with. The equivalent now may be a permanent contempt for copyright law.

In his 1990 essay Crime and Puzzlement examining the issues raised by hacking, EFF co-founder John Perry Barlow wrote of Phiber Optik, whom he met on the WELL: "His cracking impulses seemed purely exploratory, and I've begun to wonder if we wouldn't also regard spelunkers as desperate criminals if AT&T owned all the caves."

When McKinnon was first arrested in March 2002 and then indicted in a Virginia court in October 2002 for cracking into various US military computers - with damage estimated at $800,000 - all this history was still fresh. Meanwhile, the sympathy and good will toward the US engendered by the 9/11 attacks had been dissipated by the Bush administration's reaction: the PATRIOT Act (passed October 2001) expanded US government powers to detain and deport foreign citizens, and the first prisoners arrived at Guantanamo in January 2002. Since then, the US has begun fingerprinting all foreign visitors and has seen many erosions of civil liberties. The 2005 changes to British law that made hacking into an extraditable offense were controversial for precisely these reasons.

As McKinnon's case has dragged on through extradition appeals this emotional background has not changed. McKinnon's diagnosis with Asperger's Syndrome in 2008 made him into a more fragile and sympathetic figure. Meanwhile, the really dangerous cybercriminals continue committing fraud, theft, and real damage, apparently safe from prosecution.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

April 2, 2010

Not bogus!


"If I lose £1 million it's worth it for libel law reform," the science writer Simon Singh was widely reported as saying this week. That was even before yesterday's ruling in the libel case brought against him by the British Chiropractic Association.

Going through litigation, I was told once, is like having cancer. It is a grim, grueling, rollercoaster process that takes over your life and may leave you permanently damaged. In the first gleeful WE-WON! moments following yesterday's ruling it's easy to forget that. It's also easy to forget that this is only one stage in a complex series.

Yesterday's judgment was the ruling in Singh's appeal (heard on February 22) against the ruling of Justice David Eady last May, which itself was only a preliminary ruling on the meaning of the passage in dispute, with the dispute itself to be resolved in a later trial. In October Singh won leave to appeal Eady's ruling; February's hearing and today's judgment constituted that appeal and its results. It is now two years since the original article appeared, and the real case is yet to be tried. Are we at the beginning of Jarndyce and Jarndyce or SCO versus Everyone?

The time and costs of all this are why we need libel law reform. English libel cases, as Singh frequently reminds us, cost 144 times as much as similar cases in the rest of the EU.

But the most likely scenario is that Singh will lose more than that million pounds. It's not just that he will have to pay the costs of both sides if he loses whatever the final round of this case eventually turns out to be (even if he wins the costs awarded will not cover all his expenses). We must also count what businesses call "opportunity costs".

A couple of weeks ago, Singh resigned from his Guardian column because the libel case is consuming all his time. And, he says, he should have started writing his next book a year ago but can't develop a proposal and make commitments to publishers because of the uncertainty. These withdrawals are not just his loss; we all lose by not getting to read what he'd write next. At a time when politicians can be confused enough to worry that an island can tip over and capsize, we need our best popular science educators to be working. Today's adults can wait, perhaps; but I did some of my best science reading as a teenager: The Microbe Hunters; The Double Helix (despite its treatment of Rosalind Franklin); Isaac Asimov's The Human Body: Its Structure and Operation; and the pre-House true medical detection stories of Berton Roueché. If Singh v BCA takes five years that's an entire generation of teenagers.

Still, yesterday's ruling, in which three of the most powerful judicial figures in the land agreed - eloquently! - with what we all thought from the beginning, deserves to be celebrated, not least for its respect for scientific evidence.

Some favorite quotes from the judgment, which makes fine reading:

Accordingly this litigation has almost certainly had a chilling effect on public debate which might otherwise have assisted potential patients to make informed choices about the possible use of chiropractic.

A similar situation, of course, applies to two other recent cases that pitted libel law against the public interest in scientific criticism. First, Swedish academic Francisco Lacerda, who criticized the voice risk analysis principles embedded in lie detector systems (including one bought by the Department for Work and Pensions at a cost of £2.4 million). Second, British cardiologist Peter Wilmshurst, who is defending himself against charges of libel and slander over comments he made regarding a clinical trial in which he served as a principal investigator. In all three cases, the public interest is suffering. Ensuring that there is a public interest defense is accordingly a key element of the libel law reform campaign's platform.

The opinion may be mistaken, but to allow the party which has been denounced on the basis of it to compel its author to prove in court what he has asserted by way of argument is to invite the court to become an Orwellian ministry of truth.

This was in fact the gist of Eady's ruling: he categorized Singh's words as fact rather than comment and would have required Singh to defend a meaning his article went on to say explicitly was not what he was saying. We must leave it for someone more English than I am to say whether that is a judicial rebuke.

We would respectfully adopt what Judge Easterbrook, now Chief Judge of the US Seventh Circuit Court of Appeals, said in a libel action over a scientific controversy, Underwager v Salter: "[Plaintiffs] cannot, by simply filing suit and crying 'character assassination!', silence those who hold divergent views, no matter how adverse those views may be to plaintiffs' interests. Scientific controversies must be settled by the methods of science rather than by the methods of litigation."

What they said.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

March 19, 2010

Digital exclusion: the bill

The workings of British politics are nearly as clear to foreigners as cricket; and unlike in the US there's no user manual. (Although we can recommend Anthony Trollope's Palliser novels and the TV series Yes, Minister as good sources of enlightenment on the subject.) But what it all boils down to in the case of the Digital Economy Bill is that the rights of an entire nation of Internet users are about to get squeezed between a rock and an election unless something dramatic happens.

The deal is this: the bill has completed all the stages in the House of Lords, and is awaiting its second reading in the House of Commons. Best guesses are that this will happen on or about March 29 or 30. Everyone expects the election to be called around April 8, at which point Parliament disbands and everyone goes home to spend three weeks intensively disrupting the lives of their constituency's voters when they're just sitting down to dinner. Just before Parliament dissolves there's a mad dash to wind up whatever unfinished business there is, universally known as the "wash-up". The Digital Economy Bill is one of those pieces of unfinished business. The fun part: anyone who's actually standing for election is of course in a hurry to get home and start canvassing. So the people actually in the chamber during the wash-up, while the front benches are hastily agreeing to pass stuff through on the nod, are likely to be retiring MPs and others who don't have urgent election business.

"What we need," I was told last night, "is a huge, angry crowd." The Open Rights Group is trying to organize exactly that for this Wednesday, March 24.

The bill would enshrine three strikes and disconnection into law. Since the Lords' involvement, it also provides for Web censorship. It arguably up-ends at least 15 years of government policy promoting the Internet as an engine of economic growth - all to benefit one single economic sector. How would the disconnected vote, pay taxes, or engage in community politics? What happened to digital inclusion? More haste, less sense.

Last night's occasion was the 20th anniversary of Privacy International (Twitter: @privacyint), where most people were polite to speakers David Blunkett and Nick Clegg. Blunkett, who was such a front-runner for a second Lifetime Menace Big Brother Award that PI renamed the award after him, was an awfully good sport when razzed; you could tell that having his personal life hauled through the tabloid press in some detail had changed many of his views about privacy. Though the conversion is not quite complete: he's willing to dump the ID card, but only because it makes so much more sense just to make passports mandatory for everyone over 16.

But Blunkett's nearly deranged passion for the ID card was at least his own. The Digital Economy Bill, on the other hand, seems to be the result of expert lobbying by the entertainment industry, most especially the British Phonographic Industry. There's a new bit of it out this week in the form of the Building a Digital Economy report, which threatens the loss of 250,000 jobs in the UK alone (1.2 million in the EU, enough to scare any politician right before an election). Techdirt has a nice debunking summary.

A perennial problem, of course, is that bills are notoriously difficult to read. Anyone who's tried knows these days they're largely made up of amendments to previous bills, and therefore cannot be read on their own; and while they can be marked up in hypertext for intelligent Internet perusal this is not a service Parliament provides. You would almost think they don't really want us to read these things.

Speaking at the PI event, Clegg deplored the database state that has been built up over the last ten to 15 years, the resulting change in the relationship between citizen and state, and especially the fact that, as he put it, "No one ever asked people to vote on giant databases." Such a profound infrastructure change, he argued, should have been a matter for public debate and consideration - and wasn't. Even Blunkett, who attributed some of his change in views to his involvement in the movie Erasing David (opening on UK cinema screens April 29), while still mostly defending the DNA database, said that "We have to operate in a democratic framework and not believe we can do whatever we want."

And here we are again with the Digital Economy Bill. There is plenty of back and forth among industry representatives. ISPs estimate the cost of the DEB's Web censorship provisions at up to £500 million. The BPI disagrees. But where is the public discussion?

And the kind of thoughtful debate that's needed cannot take place in the present circumstances, with everyone gunning their car engines hoping for a quick getaway. So if you think the DEB is just about Internet freedoms, think again; the way it's been handled is an abrogation of much older, much broader freedoms. Are you angry yet?


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

February 26, 2010

The community delusion

The court clerk - if that's the right term - seemed slightly baffled by the number of people who showed up for Tuesday's hearing in Simon Singh v. British Chiropractic Association. There was much rearrangement, as the principals asked permission to move forward a row to make an extra row of public seating and then someone magically produced eight or ten folding chairs to line up along the side. Standing was not allowed. (I'm not sure why, but I guess it was something to do with keeping order and control.)

It was impossible to listen to the arguments without feeling a part of history. Someday - ten, 50, 150 years from now - a different group of litigants will be sitting in that same court room or one very like it in the same building and will cite "our" case, just as counsel cited precedents such as Reynolds and Branson v Bower. If Singh's books don't survive, his legal case will, as may the effects of the campaign to reform libel law (sign the petition!) it has inspired and the Culture, Media, and Sport report (Scribd) that was published on Wednesday. And the sheer stature of the three judges listening to the appeal - Lord Chief Justice Lord Judge (to Americans: I am not making this up!), Master of the Rolls Lord Neuberger, and Lord Justice Sedley - ensures it will be taken seriously.

There are plenty of write-ups of what happened in court and better-informed analyses than I can muster to explain what it means. The gist, however: it's too soon to tell which pieces of law will be the crucial bits on which the judges make their decision. They certainly seemed to me to be sympathetic to the arguments Singh's counsel, Adrienne Page QC, made and much less so to those of the BCA's counsel, Heather Rogers QC. But the case will not be decided on the basis of sympathy; it will be decided on the basis of legal analysis. "You can't read judges," David Allen Green (aka jackofkent) said to me over lunch. So we wait.

But the interesting thing is that this may be the first important British legal case to be socially networked: here is a libel case featuring no pop stars or movie idols, and yet they had to turn some 20 or 30 people away from the courtroom. Do judges read Twitter?

Beginning with Howard Rheingold's 1993 book The Virtual Community, it was clear that the Net's defining characteristic as a medium is its enablement of many-to-many communication. Television, publishing, and radio are all one-to-many (if you can consider a broadcaster/publisher a single gatekeeper voice). Telephones and letters are one-to-one, by and large. By 1997, business minds, most notably John Hagel III and Arthur Armstrong in net.gain, had begun saying that the networked future of businesses would require them to build communities around themselves. I doubt that Singh thinks of his libel case in that light, but today's social networks (which are a reworking of earlier systems such as Usenet and online conferencing systems) are enabling him to do just that. The leverage he's gained from that support is what is really behind both the challenge to English libel law and the increasing demand for chiropractors generally to provide better evidence or shut up.

Given the value everyone else, from businesses to cause organizations to individual writers and artists, places on building an energetic, dedicated, and active fan base, it's surprising to see Richard Dawkins, whose supporters have apparently spent thousands of unpaid hours curating his forums for him, toss away what by all accounts was an extraordinarily successful community supporting his ideas and his work. The more so because apparently Dawkins has managed to attract that community without ever noticing what it meant to the participants. He also apparently has failed to notice that some people on the Net, some of the time, are just the teeniest bit rude and abusive to each other. He must lead a very sheltered life, and, of course, never have moderated his own forums.

What anyone who builds, attracts, or aspires to such a community has to understand from the outset is that if you are successful your users will believe they own it. In some cases, they will be right. It sounds - without my having spent a lot of time poring over Dawkins' forums - as though in this case in fact the users, or at least the moderators, had every right to feel they owned the place because they did all the (unpaid) work. This situation is as old as the Net - in the days of per-minute connection charges CompuServe's most successful (and economically rewarding to their owners) forums were built on the backs of volunteers who traded their time for free access. And it's always tough when users rediscover the fact that in each individual virtual community, unlike real-world ones, there is always a god who can pull the plug without notice.

Fortunately for the causes of libel law reform and requiring better evidence, Singh's support base is not a single community; instead, it's a group of communities who share the same goals. And, thankfully, those goals are bigger than all of us.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. I would love to hear (net.wars@skeptic.demon.co.uk) from someone who could help me figure out why this blog vapes all non-spam comments without posting them.

January 22, 2010

Music night

Most corporate annual reports seek to paint a glowing picture of the business's doings for the previous year. By law they have to disclose anything really unfortunate - financial losses, management malfeasance, a change in the regulatory landscape. The International Federation of the Phonographic Industry was caught in a bind writing its Digital Music Report 2010 (PDF) (or see the press release). Paint too glowing a picture of the music business, and politicians might conclude no further legislation is needed to bolster the sector. Paint too gloomy a picture, and ministers might conclude theirs is a lost cause, and better to let dying business models die.

So IFPI's annual report veers between complaining about "competing in a rigged market" (by which they mean a market in which file-sharing exists) and stressing the popularity of music and the burgeoning success of legally sanctioned services. Yay, Spotify! Yay, Sky Songs! Yay, iTunes! You would have to be the most curmudgeonly of commentators to point out that none of these are services begun by music companies; they are services begun by others that music companies have been grudgingly persuaded to make deals with. (I say grudgingly; naturally, I was not present at contract negotiations. Perhaps the music companies were hopping up and down like Easter bunnies in their eagerness to have their product included. If they were, I'd argue that the existence of free file-sharing drove them to it. Without file-sharing there would very likely be no paid subscription services now; the music industry would still be selling everyone CDs and insisting that this was the consumer's choice.)

The basic numbers showed that song downloads increased by 10 percent - but total revenue including CDs fell by 12 percent in the first half of 2009. The top song download: Lady Gaga's "Poker Face".

All this is fair enough - an industry's gotta eat! - and it's just possible to read it without becoming unreasonable. And then you hit this gem:

Illegal file-sharing has also had a very significant, and sometimes disastrous, impact on investment in artists and local repertoire. With their revenues eroded by piracy, music companies have far less to plough back into local artist development. Much has been made of the idea that growing live music revenues can compensate for the fall-off in recorded music sales, but this is, in reality, a myth. Live performance earnings are generally more to the benefit of veteran, established acts, while it is the younger developing acts, without lucrative careers, who do not have the chance to develop their reputation through recorded music sales.

So: digital music is ramping up (mostly through the efforts of non-music industry companies and investors). Investment in local acts and new musicians is down. And overall sales are down. And we're blaming file-sharing? How about blaming at least the last year or so of declining revenues on the recession? How about blaming bean counters at record companies who see a higher profit margin in selling yet more copies of back catalogue tried-and-tested, pure-profit standards like Frank Sinatra and Elvis Presley than in taking risks on new music? At some point, won't everyone have all the copies of the Beatles albums they can possibly use? Er, excuse me, "consume". (The report has a disturbing tendency to talk about "consuming" music; I don't think people have the same relationship with music that they do with food. I'd also question IFPI's whine about live music revenues: all young artists start by playing live gigs; that's how they learn. *Radio play* gets audiences in; live gigs *and radio play* sell albums, which help sell live gigs in a virtuous circle. But that's a topic for another day.)

It is a truth rarely acknowledged that all new artists - and all old artists producing new work - are competing with the accumulated back catalogue of the past decades and centuries.

IFPI of course also warns that TV, book publishing, and all other media are about to suffer the same fate as music. The not-so-subtle underlying message: this is why we must implement ferocious anti-file-sharing measures in the Digital Economy Bill, amendments to which, I'm sure coincidentally, were discussed in committee this week, with more to come next Tuesday, January 26.

But this isn't true, or not exactly. As a Dutch report on file-sharing (original in Dutch) pointed out last year, file-sharing, which it noted goes hand-in-hand with buying, does not have the same impact on all sectors. People listen to music over and over again; they watch TV shows fewer but still multiple times; if they don't reread books they do at least often refer back to them; they see most movies only once. If you want to say that file-sharing displaces sales, which is debatable, then clearly music is the least under threat. If you want to say that file-sharing displaces traditional radio listening, well, I'm with you there. But IFPI does not make that argument.

Still, some progress has been made. Look what IFPI says here, on page 4 in the executive summary right up front: "Recent innovations in the à-la-carte sector include...the rollout of DRM-free downloads internationally." Wha-hey! That's what we told them people wanted five years ago. Maybe five years from now they'll be writing how file-sharing helps promote artists who, otherwise, would never find an audience because no one would ever hear their work.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

November 20, 2009

Thou shalt not steal

As we're so fond of saying, technology moves fast, and law moves slowly. What we say far less often is that law should move slowly. It is not a sign of weakness to deliberate carefully about laws that affect millions of people's lives and will stay on the books for a long, long time. It's always seemed to me that the Founding Fathers very deliberately devised the US system to slow things down - and to ensure that the further-reaching the change the more difficult it is to enact.

Cut to today's Britain. The Internet may perceive censorship as damage and route around it, but politicians seem increasingly to view due and accountable legal process as an unnecessary waste of time and try to avoid it. Preventing this is, of course, what we have constitutions for; democracy is a relatively mature technology.

Today's Digital Economy bill is loaded with provisions for enough statutory instruments to satisfy the most frustrated politician's desire to avoid all that fuss and bother of public debate and research. Where legislation requires draft bills, public consultations, and committee work, a statutory instrument can pass both houses of Parliament on the nod. For minor regulatory changes - such as, for example, the way money is paid to pensioners (1987) - limiting the process to expert discussion and a quick vote makes sense. But when it comes to allowing the Secretary of State to change something as profound and far-reaching in impact as copyright law with a minimum of public scrutiny, it's an outrageous hijack of the democratic process.

Here is the relevant quote from the bill, talking about the Copyright, Designs, and Patents Act 1988:

The Secretary of State may by order amend Part 1 or this Part for the purpose of preventing or reducing the infringement of copyright by means of the internet, if it appears to the Secretary of State appropriate to do so having regard to technological developments that have occurred or are likely to occur.

Lower down, the bill does add that:

Before making any order under this section the Secretary of State must consult such persons who the Secretary of State thinks likely to be affected by the order, or who represent any of those persons, as the Secretary of State thinks fit.

Does that say he (usually) has to consult the public? I don't think so; until very recently it was widely held that the only people affected by copyright law were creators and rights holders - these days rarely the same people even though rights holders like, for public consumption, to pretend otherwise (come contract time, it's a whole different story). We would say that everyone now has a stake in copyright law, given the enormously expanded access to the means to create and distribute all sorts of media, but it isn't at all clear that the Secretary of State would agree or what means would be available to force him to do so. What we do know is that the copyright policies being pushed in this bill come directly from the rights holders.

Stephen Timms, talking to the Guardian, attempted to defend this provision this way:

The way that this clause is formed there would be a clear requirement for full public consultation [before any change] followed by a vote in favour by both houses of Parliament.

This is, put politely, disingenuous: this government has, especially lately - see also ID cards - a terrible record of flatly ignoring what public consultations are telling them, even when the testimony submitted in response to such consultations comes from internationally recognized experts.

Timms' comments are a very bad joke to anyone who's followed the consultations on this particular bill's provisions on file-sharing and copyright, given that everyone from Gowers to Dutch economists is finding that loosening copyright restrictions has society-wide benefits, while Finland has made 1Mb broadband access a legal right and even France's courts see Internet access as a fundamental human right (especially ironic given that France was the first place three strikes actually made it into law).

In creating the Digital Economy bill, not only did this government ignore consultation testimony from everyone but rights holders, it even changed its own consultation mid-stream, bringing back such pernicious provisions as three-strikes-and-you're-disconnected even after agreeing they were gone. This government is, in fact, a perfect advertisement for the principle that laws that are enacted should be reviewed with an eye toward what their effect will be should a government hostile to its citizenry come to power.

Here is some relevant outrage from an appropriately native British lawyer specializing in Net issues, Lilian Edwards:

So clearly every time things happen fast and the law might struggle to keep up with them, in future, well we should just junk ordinary democratic safeguards before anyone notices, and bow instead to the partisan interests who pay lobbyists the most to shout the loudest?

Tell me to "go home if you don't like it here" because I wasn't born in the UK if you want to, but she's a native. And it's the natives who feel betrayed that you've got to watch out for.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

November 13, 2009

Cookie cutters

Sometimes laws sneak up on you while you're looking the other way. One of the best examples was the American Telecommunications Act of 1996: we were so busy obsessing about the freedom of speech-suppressing Communications Decency Act amendment that we failed to pay attention to the implications of the bill itself, which allowed the regional Baby Bells to enter the long distance market and changed a number of other rules regarding competition.

We now have a shiny, new example: we have spent so much time and electrons over the nasty three-strikes-and-you're-offline provisions that we, along with almost everyone else, utterly failed to notice that the package contains a cookie-killing provision last seen menacing online advertisers in 2001 (our very second net.wars).

The gist: Web sites cannot place cookies on users' computers unless said users have agreed to receive them - unless, that is, the cookies are strictly necessary, as, for example, when you select something to buy and then head for the shopping cart to check out.
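
As a sketch of what compliance might look like in practice - the function names and the consent cookie here are invented for illustration, not any real site's mechanism - a site's script would set strictly necessary cookies unconditionally and hold everything else back until the user has opted in:

```typescript
// The consent flag itself is arguably strictly necessary: without it
// the site cannot remember that you opted in (or out).
function hasConsent(): boolean {
  return document.cookie.includes("cookie_consent=yes");
}

function setCookie(name: string, value: string, strictlyNecessary = false): void {
  if (!strictlyNecessary && !hasConsent()) return; // no opt-in, no cookie
  document.cookie = `${name}=${encodeURIComponent(value)}; path=/; max-age=31536000`;
}

setCookie("cart_id", "abc123", true); // shopping cart works regardless
setCookie("analytics_id", "u-42");    // silently dropped until the user opts in
```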

As the Out-Law blog points out this proposal - now to become law unless the whole package is thrown out - is absurd. We said it was in 2001 - and made the stupid assumption that because nothing more had been heard about it the idea had been nixed by an outbreak of sanity at the EU level.

Apparently not. Apparently MEPs and others at EU level spend no more time on the Web than they did eight years ago. Apparently none of them have any idea what such a proposal would mean. Well, I've turned off cookies in my browser, and I know: without cookies, browsing the Web is as non-functional as a psychic being tested by James Randi.

But it's worse than that. Imagine browsing with every site asking you to opt in every - pop-up - time - pop-up - it - pop-up - wants - pop-up - to - pop-up - send - pop-up - you - a - cookie - pop-up. Now imagine the same thing, only you're blind and using the screen reader JAWS.

This soon-to-be-law is not just absurd, it's evil.

Here are some of the likely consequences.

As already noted, it will make Web use nearly impossible for the blind and visually impaired.

It will, because such is the human response to barriers, direct ever more traffic toward those sites - aggregators, ecommerce, Web bulletin boards, and social networks - that, like Facebook, can write a single privacy policy for the entire service, consent to which (including consent to accepting cookies) users give when they join and later at scattered intervals when the policy changes.

According to Out-Law, the law will trap everyone who uses Google Analytics, visitor counters, and the like. I assume it will also kill AdSense at a stroke: how many small DIY Web site owners would have any idea how to implement an opt-in form? Both econsultancy.com and BigMouthMedia think affiliate networks generally will bear the brunt of this legislation. BigMouthMedia goes on to note a couple of efforts - HTTP ETags and Flash cookies - intended to give affiliate networks more reliable tracking that may also fall afoul of the legislation. These, as those sources note, are difficult or impossible for users to delete.
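
Why an ETag can behave like an undeletable cookie is easy to sketch. Roughly - and this is a toy illustration, not any affiliate network's actual code - a server mints a unique tag for some cacheable resource; the browser's cache then echoes that tag back in the If-None-Match header on every revisit, identifying the visitor without any cookie to clear:

```typescript
import { createServer } from "http";
import { randomUUID } from "crypto";

createServer((req, res) => {
  // A returning browser revalidates its cached copy by echoing the
  // ETag we gave it last time; that echo is the tracking identifier.
  const returning = req.headers["if-none-match"] as string | undefined;
  const id = returning ?? `"${randomUUID()}"`;
  console.log(returning ? `repeat visit: ${id}` : `first visit: ${id}`);
  res.writeHead(200, { ETag: id, "Cache-Control": "private, max-age=0" });
  res.end("tracking pixel");
}).listen(8080);
```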

It will presumably also disproportionately catch EU businesses compared to non-EU sites. Most users probably won't understand why particular sites are so annoying; they will simply shift to sites that aren't annoying. The net effect will be to divert Web browsing to sites outside the EU - surely the exact opposite of what MEPs would like to see happen.

And, I suppose, inevitably, someone will write plug-ins for the popular browsers that can be set to respond automatically to cookie opt-in requests and that include provisions for users to include or exclude specific sites. Whether that will offer sites a safe harbour remains to be seen.

The people it will hurt most, of course, are the sites - like newspapers and other publications - that depend on online advertising to stay afloat. It's hard to understand how the publishers missed it; but one presumes they, too, were distracted by the need to defend music and video from evil pirates.

The sad thing is that the goal behind this masterfully stupid piece of legislation is a reasonably noble one: to protect Internet users from monitoring and behavioural targeting to which they have not consented. But regulating cookies is precisely the wrong way to go about achieving this goal, not just because it disables Web browsing but because technology is continuing to evolve. The EU would do better to regulate by specifying allowable actions and consequences rather than specifying technology. Cookies are not in and of themselves inherently evil; what matters is how they're used.

Eight years ago, when the cookie proposals first surfaced, they, logically enough, formed part of a consumer privacy bill. That they're now part of the telecoms package suggests they've been banging around inside Parliament looking for something to attach themselves to ever since.

I probably exaggerate slightly, since Out-Law also notes that in fact the EU did pass a law regarding cookies that required sites to offer visitors a way to opt out. This law is little-known, largely ignored, and unenforced. At this point the Net's best hope looks to be that the new version is treated the same way.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

October 23, 2009

The power of Twitter

It was the best of mobs, it was the worst of mobs.

The last couple of weeks have really seen the British side of Twitter flex its 140-character muscles. First, there was the next chapter of the British Chiropractic Association's ongoing legal action against science writer Simon Singh. Then there was the case of Jan Moir, who wrote a more than ordinarily Daily Mailish piece for the Daily Mail about the death of Boyzone's Stephen Gately. And finally, the shocking court injunction that briefly prevented the Guardian from reporting on a Parliamentary question for the first time in British history.

I am on record as supporting Singh, and I, too, cheered when, ten days ago, Singh was granted leave to appeal Justice Eady's ruling on the meaning of Singh's use of the word "bogus". Like everyone, I was agog when the BCA's press release called Singh "malicious". I can see the point in filing complaints with the Advertising Standards Authority over chiropractors' persistent claims, unsupported by the evidence, to be able to treat childhood illnesses like colic and ear infections.

What seemed to edge closer to a witch hunt was the gleeful take-up of George Monbiot's piece attacking the "hanging judge", Justice Eady. Disagree with Eady's ruling all you want, but it isn't hard to find libel lawyers who think his ruling was correct under the law. If you don't like his ruling, your correct target is the law. Attacking the judge won't help Singh.

The same is not true of Twitter's take-up of the available clues in the Guardian's original story about the gag, which were used to identify the Parliamentary Question concerned and unmask Carter-Ruck, the lawyers who served the injunction, and their client, Trafigura. Fueled by righteous and legitimate anger at the abrogation of a thousand years of democracy, Twitterers had the PQ found and published thousands of times practically within seconds. Yeah!

Of course, this phenomenon (as I'm so fond of saying) is not new. Every online social medium, going all the way back to early text-based conferencing systems like CIX, the WELL, and, of course, Usenet, when it was the Internet's town square (the function in fact that Twitter now occupies) has been able to mount this kind of challenge. Scientology versus the Net was probably the best and earliest example; for me it was the original net.war. The story was at heart pretty simple (and the skirmishes continue, in various translations into newer media, to this day). Scientology has a bunch of super-secrets that only the initiate, who have spent many hours in expensive Scientology training, are allowed to see. Scientology's attempts to keep those secrets off the Net resulted in their being published everywhere. The dust has never completely settled.

"Three may keep a secret, if two of them are dead," said Benjamin Franklin. That was before the Internet. Scientology was the first to learn - nearly 15 years ago - that the best way to ensure the maximum publicity for something is to try to suppress it. It should not have been any surprise to the BCA, Trafigura, or Trafigura's lawyers. Had the BCA ignored Singh's article, far fewer people would know now about science's dim view of chiropractic. Trafigura might have hoped that a written PQ would get lost in the vastness that is Hansard; but they probably wouldn't have succeeded in any case.

The Jan Moir case and the demonstration outside Carter-Ruck's offices are, however, rather different. These are simply not the right targets. As David Allen Green (Jack of Kent) explains, there's no point in blaming the lawyers; show your anger to the client (Trafigura) or to Parliament.

The enraged tweets and Facebook postings about Moir's article helped send a record number - over 25,000 - of complaints to the Press Complaints Commission, whose Web site melted down under the strain. Yes, the piece was badly reasoned and loathsome, but isn't that what the Daily Mail lives for? Tweets and links create hits and discussion. The paper can only benefit. In fact, it's reasonable to suppose that in the Trafigura and Moir cases both the Guardian and the Daily Mail manipulated the Net perfectly to get what they wanted.

But the stupid part about let's-get-Moir is that she does not *matter*. Leave aside emotional reactions, and what you're left with is someone's opinion, however distasteful.

This concerted force would be more usefully turned to opposing the truly dangerous. See, for example, the AIDS denialism on parade by Fraser Nelson at The Spectator. The "come-get-us" tone suggests that they saw the attention New Humanist got for Caspar Melville's mistaken - and quickly corrected - endorsement of the film House of Numbers and said, "Let's get us some of that." There is no more scientific dispute about whether HIV causes AIDS than there is about climate change or evolutionary theory.

If we're going to behave like a mob, let's stick to targets that matter. Jan Moir's column isn't going to kill anybody. AIDS denialism will. So: we'll call Trafigura a win, chiropractic a half-win, and Moir a loser.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

October 9, 2009

Phantom tollbooths

This was supposed to be the week that the future of Google Books became clear or at least started to; instead, the court ordered everyone to go away and come up with a new settlement (registration required). The revised settlement is due by November 9; the judge will hear objections probably around the turn of the year.

Instead this turned into the Week of the Postcode, after the Royal Mail issued cease-and-desist letters to the postcode API service Ernest Marples (built by Richard Pope and Open Rights Group advisory council member Harry Metcalfe). Marples' sin: giving away postcode data without a license (PDF).

At heart, the Postcode spat and the Google Books suit are the same issue: information that used to be expensive can now be made available on the Internet for free, and people who make money from the data object.

We all expect books to be copyrighted; but postcodes? When I wrote about it, astonished, in 1993 for Personal Computer World, the spokesperson explained that, as an invention of the Royal Mail, of course they were the Royal Mail's property (they've now just turned 50). There are two licensed services: the Postcode Address File (which automates filling in addresses) and PostZon, the geolocator database useful for Web mashups. The Royal Mail says it's currently reviewing its terms and licensing conditions for PostZon; based on the recent similar exercise for PAF (PDF) we'll guess that the biggest objections to giving it away will come from people who are already paying for it and want to lock out competitors.
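
The mashup appeal is easy to see. A sketch of the sort of thing PostZon-style data enables - the endpoint and response shape here are entirely hypothetical, invented for illustration - is just postcode in, coordinates out, coordinates onto a map:

```typescript
interface PostcodeResult {
  postcode: string;
  lat: number;
  lng: number;
}

// Look up a postcode against a hypothetical geolocation service.
async function locate(postcode: string): Promise<PostcodeResult> {
  const res = await fetch(
    `https://example-postcodes.test/lookup/${encodeURIComponent(postcode)}`
  );
  if (!res.ok) throw new Error(`lookup failed: ${res.status}`);
  return (await res.json()) as PostcodeResult;
}

locate("SW1A 1AA").then(({ lat, lng }) =>
  console.log(`centre the map on ${lat}, ${lng}`)
);
```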

There's just a faint hint that postcodes could become a separate business; the Royal Mail does not allow the postcode database and mail delivery to cross-subsidize (to mollify competitors who use the database). Still, Charles Arthur, in the Guardian, estimates that licensing the postcode database costs us more than it makes.

This is the other sense in which postcodes are like Google Books: it costs money to create and maintain the database. But where postcodes are an operational database for the Royal Mail, books may not be for Google. Wired UK has shown what happens when Google loses economic interest in a database, in this case Google Groups (aka the Usenet archive).

But in the analogy Google plays the parts of both the Royal Mail (investing in creating a database from which it hopes to profit) and the geeks seeking to liberate the data (locked-up, out-of-print books, now on the Web! Yeah!). The publishers are merely an intervening toll booth. This is one reason reactions to Google Books have been so mixed and so confusing: everyone's inner author says, "Google will make money. I want some," while their inner geek says, "Wow! That is so *cool*! I want that!".

The second reason everyone's so confused, of course, is that the settlement is 141 pages of dense legalese with 15 appendices, and nobody can stand to read it. (I'm reliably told that the entire basis for handling non-US authors' works is one single word: "If".) This situation is crying out for a wiki where intellectual property lawyers, when they have a moment, can annotate and explain. The American Library Association has bravely managed a two-page summary (PDF).

What's really at stake, as digital library expert Karen Coyle explained to me this week, is orphan works, which could have long ago been handled by legislation if everyone hadn't gotten all wrapped up in the Google Books settlement. Public domain works are public domain (and you will find many of those Google has scanned quietly available at the Internet Archive, where someone has been diligently uploading them). Works whose authorship is known have authors and publishers to take charge. But orphan works...the settlement would give a Book Rights Registry two-thirds of the money Google pays out to distribute to authors of orphan works. This would be run by the publishers, who I'm sure would put as much effort into finding authors to pay as, as, as...the MPAA. It was on this basis that the Department of Justice objected to the settlement.

The current situation with postcodes shows us something very important: when the Royal Mail invented them, 50 years ago, no one had any idea what use they might have outside of more efficiently delivering the mail. In the intervening time, postcodes have enabled the Royal Mail to automate sorting and slim down its work force (while mysteriously always raising postage); but they have also become key data points on which to hang services that have nothing to do with mail but everything to do with location: job seeking, political protest, property search, and quick access to local maps.

Similarly: we do not know what the future might hold for a giant database of books. But the postcode situation reminds us what happens when one or two stakeholders are allowed to own something that has broader uses than they ever imagined. Meanwhile, if you'd like to demand a change in the postcode situation, this petition is going like gangbusters.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

August 21, 2009

This means law

You probably aren't aware of this, but there's a consultation going on right now about what to do about illegal peer-to-peer file-sharing; send in comments by September 15. Tom Watson, the former minister for digital engagement, has made some sensible suggestions for how to respond in print and blog.

This topic has been covered pretty regularly in net.wars, but this is different and urgent: this means law.

Among the helpful background material provided with the consultation document are an impact assessment and a financial summary. The first of these explains that there were two policy options under consideration: 1) Do nothing. 2) (Preferred) legislate to reduce illegal downloading "by making it easier and cheaper for rightsholders to bring civil actions against suspected illegal file-sharers". Implementing that requires ISPs to cooperate by notifying their subscribers. There will be a code of practice (less harsh than this one, we trust) including options such as bandwidth capping and traffic shaping, which Ofcom will supervise, at least for now (there may yet be a digital rights agency).

The document is remarkably open about who it's meant to benefit - and it's not artists.

Government intervention is being proposed to address the rise in unlawful P2P file-sharing which can reduce the incentive for the creative industries to invest in the development, production and distribution of new content. Implementation of the proposed policy will allow right [sic] holders to better appropriate returns on their investment.

The included financial assessment, which in this case is the justification for the entire exercise (p 40), lays out the expected benefits: BERR expects rightsholders to pick up £1,700 million by "recovering displaced sales", at a cost to ISPs and mobile network operators of £250 to £500 million over ten years. Net benefit: £1.2 billion. Wha-hey!
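
The arithmetic, for those following along at home (a back-of-envelope sketch; the figures are the consultation's, the subtraction is mine):

    # The consultation's headline sum, using its own figures (in pounds, millions).
    recovered_sales = 1_700          # benefit to rightsholders over ten years
    cost_low, cost_high = 250, 500   # cost to ISPs and mobile operators

    print(recovered_sales - cost_high)  # 1200 - the quoted "1.2 billion"
    print(recovered_sales - cost_low)   # 1450 - the rosier end of the range

Note that the £1.2 billion headline takes the high cost estimate and still leaves out the welfare loss to consumers discussed below.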

My favorite justification for all this is the note that because there are an estimated 6.5 million file-sharers in the UK there are *too many* of us to take us all to court, rightsholders' preferred deterrence method up until now. Rightsholders have marketing experts working for them; shouldn't they be getting some message from these numbers?

There are some things that are legitimately classed as piracy and that definitely cost sales. Printing and selling counterfeit CDs and DVDs is one such. Another is posting unreleased material online without the artist's or rightsholder's permission; that is pre-empting their product launch, and whether you wind up having done them a favor or not, there's no question that it's simply wrong. The answer to the first of these is to shut down pirate pressing operations; the answer to the second is to get the industry to police its own personnel and raise the penalties for insider leaks. Neither can be solved by harassing file-sharers.

It's highly questionable whether file-sharing costs sales; the experience of most of us who have put our work online for free is that sales increase. However, there is no doubt in my mind that there are industries file-sharing hurts. Two good examples in film are the movie rental business and the pay TV broadcasters, especially the premium TV movie channels.

As against that, however, the consultation notes but dismisses the cost to consumers: it estimates that ISPs' costs, when passed on to consumers, will reduce the demand for broadband by 10,000 to 40,000 subscribers, representing lost revenue to ISPs of between £2 and £9 million a year (p50). The consultation goes on to note that some consumers will cease consuming content altogether and that therefore the policy will exacerbate existing inequality, since those on the lowest incomes will likely lose the most.

It is not possible to estimate such welfare loss with current data availability, but estimates for the US show that this welfare loss could be twice as large as the benefit derived from reducing the displacement effect to industry revenues.

Shouldn't this be incorporated into the financial analysis?

We must pause to admire the way the questions are phrased. Sir Bonar would be proud: ask if your proposals are implementing what you want to do in the right way. In other words, ask if three is the right number of warning letters to send infringers before taking stronger action (question 9), or whether it's a good idea to leave exactly how costs are to be shared between rightsholders and ISPs flexible rather than specifying (question 6). The question I'd ask, which has not figured in any of the consultations I've seen, would be: is this the best way to help artists navigate the new business models of the digital age?

Like Watson, my answer would be no.

Worse, the figures do not take into account the cost to the public, analyzed last year in the Netherlands.

And the assumptions seem wrong. The consultation document claims that research shows that approximately 70 percent of infringers stop when they receive a warning letter, at least in the short term. But do they actually stop? Or do they move their file-sharing to different technologies? Does it just become invisible to their ISP?

So far, file-sharers have responded to threats by developing new technologies better at obfuscating users' activities. Napster...Gnutella...eDonkey...BitTorrent. Next: encrypted traffic that looks just like a VPN connection.

I remain convinced that if the industry really wants to deter file-sharing it should spend its time and effort on creating legal, reliable alternatives. Nothing less will save it. Oh, yeah, and it would be a really good idea for them to be nice to artists, too. Without artists, rightsholders are nothing.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

May 29, 2009

Three blind governments

I spent my formative adult years as a musician. And even so, if I were forced to sacrifice one of my senses, as a practical matter I'd choose to keep sight over hearing: as awful and isolating as it would be to be deaf, it would be far, far worse to be blind.

Lack of access to information - and therefore to both employment and entertainment - is the key reason. How can anyone participate in the "knowledge economy" if they can't read?

Years ago, when I was writing a piece about disabled access to the Net, the Royal National Institute for the Blind put me in touch with Peter Brasher, a consultant who was particularly articulate on the subject of disabled access to computing.

People tend to make the assumption - as I did - that the existence of Braille editions and talking books meant that blind and partially sighted people were catered for reasonably well. In fact, he said, only 8 percent of the blind population can read Braille; its use is generally confined to those who are blind from childhood (although see here for a counterexample), while far and away the majority of vision loss comes later in life. It's entirely possible that the percentage of Braille readers is now considerably less; today's kids are more likely to be taught to rely on technology - text-to-speech readers, audio books, and so on. From 50 percent in the 1950s, the percentage of blind American children learning Braille has dropped to 10 percent.

There's a lot of concern about this which can be summed up by this question: if text-to-speech technology and audio books are so great, why aren't sighted kids told to use them instead of bothering to learn to read?

But the bigger issue Brasher raised was one of independence. Typically, he said, the availability of books in Braille depends on someone with an agenda, often a church. The result for an inquisitive reader is a constant sense of limits. Then computers arrived, and it became possible to read anything you wanted of your own choice. And then graphical interfaces arrived and threatened to take it all away again; I wrote here about what it's like to surf the Web using the leading text-to-speech reader, JAWS. It's deeply unpleasant, difficult, tiring, and time-consuming.

When we talk about people with limited ability to access books - blind, partially sighted; in other cases fully sighted but physically disabled - we are talking about an already deeply marginalized and underserved population. Some of the links above cite studies that show that unemployment among the Braille-reading blind population is 44 percent - and 77 percent among blind non-Braille readers. Others make the point that inability to access printed information interferes with every aspect of education and employment.

And this is the group that this week's meeting of the Standing Committee on Copyright and Related Rights at the World Intellectual Property Organization has convened to consider. Should there be a blanket exception to allow the production of alternative formats of books for the visually impaired and disabled?

The proposal, introduced by Brazil, Paraguay, and Ecuador, seems simple enough, and the cause unarguable. The World Blind Union estimates that 95 percent of books never become available in alternative formats, and when they do it's after some delay. As Brasher said nearly 15 years ago, such arrangements depend on the agendas of charitable organizations.

The culprit, as in so many net.wars, is copyright law. The WBU published arguments for copyright reform (DOC) in 2004. Amazon's Kindle is a perfect example of the problem: bowing to the demands of publishers, Amazon lets text-to-speech be turned off in the Kindle - and it is being turned off. The Kindle - any ebook reader with speech capabilities - ought to have been a huge step forward for disabled access to books.

And now, according to Twits present at WIPO, the US, Canada, and the EU are arguing against the idea of this exemption. (They're not the only ones; elsewhere, the Authors Guild has argued that exemptions should be granted by special license and registration, something I'd certainly be unhappy about if I were blind.)

Governments, particularly democratic ones, are supposed to be about ensuring equal opportunities for all. They are supposed to be about ensuring fair play. What about the Americans with Disabilities Act, the EU's Charter of Fundamental Rights, and Canada's human rights act? Can any of these countries seriously argue that the rights of publishers and copyright holders trump the needs of a seriously disadvantaged group of people that every single one of us is at risk of joining?

While it's clear that text-to-speech and audio books don't solve every problem, and while the US is correct to argue that copyright is only one of a number of problems confronting the blind, when the WBU argues that copyright poses a significant barrier to access shouldn't everyone listen? Or are publishers confused by the stereotypical image of the pirate with the patch over one eye?

If governments and rightsholders want us to listen to them about other aspects of copyright law, they need to be on the right side of this issue. Maybe they should listen to their own marketing departments about the way it looks when rich folks kick people who are already disadvantaged - and then charge for the privilege.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or email netwars@skeptic.demon.co.uk (but please turn off HTML).

May 15, 2009

"Bogus"

There is a basic principle that ought to go like this: if someone is making a claim that a treatment has an impact on someone's health it should be possible to critique the treatment and the claim without being sued for libel. The efficacy of treatments that can cost people their lives - even if only by omission rather than commission - should be a case where the only thing that matters is the scientific evidence.

I refer, of course, to the terrible, terrible judgement in the case of British Chiropractic Association v. Simon Singh. In brief: the judge ruled that Singh's use of the word "bogus" in commentary that appeared in the Guardian (on its comments pages) and which he went on to explain in the following paragraph 1) was a statement of fact rather than opinion and 2) meant that the BCA's members engaged in deliberately deceiving their patients. The excellent legal blogger Jack of Kent (in real life, the London solicitor specialising in technology, communications, and media law David Allen Green) wrote up the day in court and also an assessment of the judgement and Singh's options for discussion.

None of it is good news for anyone who works in this area. Singh could settle; he could proceed to trial to prove something he didn't say and for which under the English system his lawyers may not be allowed to make a case for anyway; or he could appeal this ruling on meaning, with very little likelihood of success. Singh will announce his decision on Monday evening at a public support meeting (Facebook link).

A little about the judge, David Eady (b. 1943). Wikipedia has him called to the bar in 1966 and specializing in media law until 1997, when he was appointed a High Court judge. Eady has presided over a number of libel cases and also high-profile media privacy cases.

Speaking as a foreigner, this whole case has seemed to me bizarre. For one thing, there's the instinctive American reaction: English libel law reverses the burden of proof so that it rests on the defendant. Surely this is wrong. But more than that, I don't understand how it is possible to libel an organisation. The BCA isn't a person, even if its members supply personal services, and Singh named no specific members or officers. I note that it's sufficiently bizarre to British commenters that publications that normally would never reprint the text of a libel - like The Economist - are doing so in this case and analysing every word. Particularly, of course, the word "bogus", on which so much of the judgement depends. The fact that Singh explained what he meant by bogus in the paragraph after the one in dispute apparently did not matter in court.

We talk about the chilling effects of the Digital Millennium Copyright Act, but the chilling effects of English libel law are far older and much more deeply entrenched. Discussions about changing it are as perennial and unproductive as the annual discussions about how it would be a really good idea to add another week between the French Open and Wimbledon. And this should be of concern throughout the English-publishing world: in the age of the Internet English courts seem to recognise no geographical boundaries. The New York author Rachel Ehrenfeld was successfully sued in Britain over allegations made in her book on funding terrorism despite the fact that neither she, the person who sued, nor the publisher were based in the UK. The judge was...David Eady.

Ehrenfeld asked the New York courts to promise not to enforce the judgement against her. When they couldn't (because no suit had been filed in New York), the state passed a law barring courts from enforcing foreign libel judgements if the speech in question would not be libellous under US law. Other states and the federal government are following to stop "libel tourism".

None of that, however, will help Simon Singh or anyone else who wants to critically examine the claims of pseudoscientists. The Skeptic, which I founded and edited for some years (look for our Best Of book, soon), routinely censors itself, as does every other publication in this country. There are certain individuals and organisations who are known to be extremely litigious, and they get discussed as little as possible. Libel law is supposed to encourage responsible reporting and provide redress to wronged individuals, but at this virulent a level it actually prevents responsible reporting of contentious matters of science, and the individuals who are wronged are the public, who are at risk of being deprived of the knowledge they need to make informed decisions. David Allen Green, writing in New Scientist, provides an excellent summary of cases in point.

It will be understandable if Singh decides to settle. I've seen an estimate that doing so now could cost him £100,000 - and continuing will be vastly more expensive. Lawsuits are, I'm told, like having cancer: miserable, roller-coaster affairs that consume your waking life and that of everyone around you. I have no idea what decision he will or should make. But he has my sympathy and my support.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to follow on Twitter, post here, or reply by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

April 3, 2009

Copyright encounters of the third dimension

Somewhere around 2002, it occurred to me that the copyright wars we're seeing over digitised intellectual property - music, movies, books, photographs - might, in the not-unimaginable future, be repeated, this time with physical goods. Even if you don't believe that molecular manufacturing will ever happen, 3D printing and rapid prototyping machines offer the possibility of making large numbers of identical copies of physical goods that until now were difficult to replicate without investing in and opening a large manufacturing facility.

Lots of people see this as a good thing. Although: Chris Phoenix, co-founder of the Center for Responsible Nanotechnology, likes to ask, "Will we be retired or unemployed?"

In any case, I spent some years writing a book proposal that never went anywhere, and then let the idea hang around uselessly, like a human in a world where robots have all the jobs.

Last week, at the University of Edinburgh's conference on governance of new technologies (which I am very unhappy to have missed), RAF engineer turned law student Simon Bradshaw presented a paper on the intellectual property consequences of "low-cost rapid prototyping". If only I'd been a legal scholar...

It turns out that as a legal question rapid prototyping has barely been examined. Bradshaw found nary a reference in a literature search. Probably most lawyers think this stuff is all still just science fiction. But, as Bradshaw does, make some modest assumptions, and you find that perhaps three to five years from now we could well be having discussions about whether Obama was within the intellectual property laws to give the Queen a printed-out, personalized iPod case designed to look like Elvis, whose likeness and name are trademarked in the US. Today's copyright wars are going to seem so *simple*.

Bradshaw makes some fairly reasonable assumptions about this timeframe. Until recently, you could pay anywhere from $20,000 to $1.5 million for a fabricator/3D printer/rapid prototyping machine. But prices and sizes are dropping and functionality is going up. Bradshaw puts today's situation on a par with the state of personal computers in the late 1970s, the days of the Commodore PET and the Apple II and home kits like the Sinclair MK14. Let's imagine, he says, the world of the second-generation fabricator: the size of a color laser printer, costing $1,000 or less, fed with readily available plastic, better than 0.1mm resolution (and in color), 20cm cube maximum size, and programmable by enthusiasts.

As the UK Intellectual Property Office will gladly tell you, there are four kinds of IP law: copyright, patent, trademark, and design. Of these, design is by far the least known; it's used to protect what the US likes to call "trade dress", that is, the physical look and feel of a particular item. Apple, for example, which rarely misses a trick when it comes to design, applied for a trademark on the iPhone's design in the US, and most likely registered it under the UK's design right as well. Why not? Registration is cheap (around £200), and the iPhone design was genuinely innovative.

As Bradshaw analyzes it, all four of these types of IP law could apply to objects created using 3D printing, rapid prototyping, fabricating...whatever you want to call it. And those types of law will interact in bizarre and unexpected ways - and, of course, differently in different countries.

For example: in the UK, a registered design can be copied if it's done privately and for non-commercial use. So you could, in the privacy of your home, print out copies of a test-tube stand (in Bradshaw's example) whose design is registered. You could not do it in a school to avoid purchasing them.

Parts of the design right are drafted so as to prevent manufacturers from using the right to block third parties from making spare parts. So using your RepRap to make a case for your iPod is legal as long as you don't copy any copyrighted material that might be floating around on the surface of the original. Make the case without Elvis.

But when is an object just an object and when is it a "work of artistic merit"? Because if what you just copied is a sculpture, you're in violation of copyright law. And here, Bradshaw says, copyright law is unhelpfully unclear. Some help has come from the recent ruling in Lucasfilm v Ainsworth, the case about the stormtrooper helmets copied from the first Star Wars movie. Is a 3D replica of a 2D image a derivative work?

Unsurprisingly, it looks like US law is less forgiving. In the helmet case, US courts ruled in favor of Lucasfilm; UK courts drew a distinction between objects that had been created for artistic purposes in their own right and those that hadn't.

And that's all without even getting into the fact that if everyone has a fabricator there are whole classes of items that might no longer be worth selling. In that world, what's going to be worth paying for is the designs that drive the fabricators. Think knitted Dr Who puppets, only in 3D.

It's all going to be so much fun, dontcha think?

Update (1/26/2012): Simon Bradshaw's paper is now published here.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

February 6, 2009

Forty-five years

This week the EU's legal affairs committee, JURI, may vote - again - on term extension in sound recordings. As of today, copyright is still listed on the agenda.

Opposing term extension was a lot simpler at the national level in the UK; the path from proposal to legislation is well-known, well trodden, and well-watched by the national media. At the EU level, JURI is only one of four committees involved in proposing and amending term extension on behalf of the European Parliament - and then even after the Parliament votes it's the Commission who makes the final decision. The whole thing drags on for something close to forever, which pretty much guarantees that only the most obsessed stay in touch through the whole process. If you had designed a system to ensure apathy except among lobbyists who like good food, you'd have done exactly this.

There are many reasons to oppose term extension, most of which we've covered before. Unfortunately, these seem invisible to some politicians. As William Patry blogs, the harm done by term extension is diffuse and hard to quantify while easily calculable benefits accrue to a small but wealthy and vocal set of players.

What's noticeable is how many independent economic reviews agree with what NGOs like the Electronic Frontier Foundation and the Open Rights Group have said all along.

According to a joint report from several European intellectual property law centers (PDF), the Commission itself estimates that 45 extra years of copyright protection will hand the European music industry between €44 million and €843 million - uncertain by a factor of 20! The same report also notes that term extension will not net performers additional broadcast revenue; rather, the same pot will be spread among a larger pool of musicians, benefiting older musicians at the expense of young incomers. The report also notes that performers don't lose control over their music when the term of copyright ends; they lose it when they sign recording contracts (so true).

Other reports are even less favorable. In 2005, for example, the Dutch Institute for Information Law concluded that copyright in sound recordings has more in common with design rights and patents than with other areas of copyright, and it would be more consistent to reduce the term rather than extend it. More recently, an open letter from Bournemouth University's Centre for Intellectual Property Policy Management questioned exactly where those estimated revenues were going to come from, and pointed out the absurdity of the claim that extension would help performers.

And therein is the nub. Estimates are that the average session musician will benefit from term extension in the amount of €4 to €58 (there's that guess-the-number-within-a-factor-of-20 trick again). JURI's draft opinion puts the number of affected musicians at 7,000 per large EU member state, less in the rest. Call it 7,000 in all 27 and give each musician €20; that's €3.78 million, hardly enough for a banker's bonus. We could easily hand that out in cash, if handouts to aging performers are the purpose of the exercise.
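
For the record, the arithmetic behind that €3.78 million (a sketch using the column's own round numbers):

    # Term extension's estimated value to session musicians, EU-wide.
    musicians_per_state = 7_000   # JURI's figure for large member states
    member_states = 27
    payout_each = 20              # euros; mid-range of the estimated 4-58

    print(musicians_per_state * member_states * payout_each)  # 3,780,000

Even taking the generous assumption of 7,000 musicians in every member state, the total is rounding error next to the sums headed for the record producers.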

Benefiting performers is a lobbyists' red herring that cynically plays on our affection for our favorite music and musicians; what term extension will do, as the Bournemouth letter points out, is benefit recording companies. Of that wackily wide range of estimated revenues in the last paragraph, 90 percent - between €39 million and €758 million - will go to record producers, even according to the EU's own impact assessment (PDF), based on a study carried out by PricewaterhouseCoopers.

If you want to help musicians, the first and most important thing you should do is improve the industry's standard contracts and employment practices. We protect workers in other industries from exploitation; why should we make an exception for musicians? No one is saying - not even Courtney Love - that musicians deserve charity. But we could reform UK bankruptcy law so that companies acquiring defunct labels are required to shoulder ongoing royalty payment obligations as well as the exploitable assets of the back catalogue. We could put limits on what kind of clauses a recording company is allowed to impose on first-time recording artists. We could set minimums for what is owed to session musicians. And we could require the return of rights to the performers in the event of a recording's going out of print. Any or all of those things would make far more difference to the average musician's lifetime income than an extra 45 years of copyright.

Current proposals seem to focus on this last idea as a "use it or lose it" clause that somehow makes the rest of term extension all right. Don Foster, the Liberal Democrat MP who speaks for his party on culture, media, and sport, for example, has argued for it repeatedly. But by itself it's not enough of a concession to balance the effect of term extension and the freezing of the public domain.

If you want to try to stop term extension, this is a key moment. Lobby your MEP and the members of the relevant committees. Remind them of the evidence. And remind them that it's not just the record companies and the world's musicians who have an interest in copyright; it's the rest of us, too.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

January 30, 2009

Looking backward

Governments move slowly; technology moves fast. That's not a universal truth - witness Obama's first whirlwind week in office - but in the early days of the Net it was the kind of thing people said smugly when they wanted to claim that cyberspace was impervious to regulation. It worked well enough for, say, setting free strong cryptography over the objections of the State Department and ITAR.

This week had two perfect examples. First: Microsoft noted in its 10-Q that the EU may force it to do something about tying Internet Explorer to Windows - remove it, make it one of only several browsers consumers can choose from at setup, or randomly provide different browsers. Still fighting the browser wars? How 1995.

Second: the release of the interim Digital Britain report by the Department for Culture, Media, and Sport. Still proposing Digital Rights Management as a way of protecting rightsholders' interest in content? How 2005.

It probably says something about technology cycles that the DRM of 2005 is currently more quaint and dated than the browser wars of 1995-1998. The advent of cloud computing and Google's release of Chrome last year have reinvigorated the browser "market". After years of apparent stagnation it suddenly matters again that we should have choices and standards to keep the Internet from turning into a series of walled gardens (instead of a series of tubes).

DRM, of course, turns content into a series of walled gardens and causes a load of other problems we've all written about extensively. But the most alarming problem about its inclusion in the government's list of action items is that even the music industry that most wanted it is abandoning it. What year was this written in? Why is a report that isn't even finished proposing to adopt a technological approach that's already a market failure? What's next, a set of taxation rules designed for CompuServe?

The one bit of good, forward-thinking news - which came as a separate announcement from Intellectual Property Minister David Lammy - is that apparently the UK government is ready to abandon the "three strikes" idea for punishing file-sharers: it's too complicated (Yes, Minister rules!) to legislate. And it's sort of icky, arresting teenagers in their bedrooms, even if the EU doesn't see anything wrong with that and the Irish have decided to go ahead with it.

The interim report bundles together issues concerning digital networks (broadband, wireless, infrastructure), digital television and radio, and digital content. It's the latter that's most contentious: the report proposes creating a Rights Agency intended to encourage good use (buying content) and discourage bad use (whatever infringes copyright law). The report seems to turn a blind eye to the many discussions of how copyright law should change. And then there's a bunch of stuff about whether Britain should have a second public service broadcaster to compete "for quality" with the BBC. How all these things cohere is muddy.

For a really scathing review of the interim report, see The Guardian, where Charles Arthur attacks not only the report's inclusion of DRM and a "rights agency" to collaborate on developing it, but its dirt-path approach to broadband speed and its proposed approach to network neutrality (which it calls "net neutrality", should you want to search the report to find out what it says).

The interim report favors allowing the kind of thing Virgin has talked about: making deals with content providers in which they're paid for guaranteed service levels. That turns the problem of who will pay for high-speed fiber into a game of pass-the-parcel. Most likely, consumers will end up paying, whether that money goes to content providers or ISPs. If the BBC pays for the iPlayer, so do we, through the TV license. If ISPs pay, we pay in higher bandwidth charges. If we're going to pay for it anyway, why shouldn't we have the freedom of the Internet in return?

This is especially true because we do not know what's going to come next or how people will use it. When YouTube became the Next Big Thing, oh, say, three or four years ago, it was logical to assume that all subsequent Next Big Things were going to be bandwidth hogs. The next NBT turned out to be Twitter, which is pretty much your diametrical opposite. Now, everything is social media - but if there's one thing we know about the party on the Internet it's that it keeps on moving on.

There's plenty that's left out of this interim report. There's a discussion of spectrum licensing that doesn't encompass newer ideas about spectrum allocation. It talks about finding new business models for rightsholders without supporting obsolete ones and the "sea of unlawful activity in which they have to swim" and mentions ISPs - but leaves out consumers except as "customers" or illegal copiers. It nods at the notion that almost anyone can be a creator and find distribution, but still persists in talking of customers and rightsholders as if they were never the same people.

No one ever said predicting the future was easy, least of all Niels Bohr, but it does help if you start by noticing the present.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

December 19, 2008

Backbone

There's a sense in which you haven't really arrived as a skeptic until someone's sued you. I've never had more than a threat, so as founder of The Skeptic, I'm almost a nobody. But by that standard Simon Singh, author with alternative medicine professor Edzard Ernst of the really excellent Trick or Treatment: The Undeniable Facts about Alternative Medicine, has arrived.

I think of Singh as one of the smarter, cooler generation of skeptics, who combine science backgrounds, good writing, and the ability to make their case in the mass media. Along with Ben Goldacre, Singh has proved that I was wrong when I thought, ten years ago, that getting skepticism into the national press on a regular basis was just too unlikely.

It's probably no coincidence that both cover complementary and alternative medicine, one of the biggest consumer issues of our time. We have a government that wants to save money on the health service. We have consumers who believe, after a decade or more of media insistence, that medicine is bad (BSE, childhood vaccinations, mercury fillings) and alternative treatments that defy science (homeopathy, faith healing) are good. We have overworked doctors who barely know their patients and whose understanding of the scientific process is limited. We have patients who expect miraculous cures like the ones they see on the increasingly absurd House. Doctors recommend acupuncture, and Prince Charles, possessed of the finest living standards and medical treatment money can buy, promotes everything *else*. And we have medical treatments whose costs spiral ever upwards, and constant reports of new medicines that fail their promise in one way or another.

But the trouble with writing for major media in this area is that you run across the litigious, and so has Singh: as Private Eye has apparently reported, he is being sued for libel by the British Chiropractic Association. The original article was published by the Guardian in April; it's been pulled from the site, but the BCA's suit has made reposting it a cause celebre. (Have they learned *nothing* about the Net?) This annotated version details the evidence to back Singh's rather critical assessment of chiropractic. And there are many other critiques, including from New Zealand. And people complain about Big Pharma - the people alternative-medicine folks are supposed to be saving us from.

I'm not even sure how much sense it makes as a legal strategy. As the "gimpy" blog's comments point out, most of Singh's criticisms were based on evidence; a few were personal opinion. He mentioned no specific practitioners. Where exactly is the libel? (Non-UK readers may like to glance at the trouble with UK libel laws, recently criticized by the UN as operating against the public interest.)

All science requires a certain openness to criticism. The whole basis of the scientific method is that independent researchers should be able to replicate each other's results. You accept a claim on that basis and only that basis - not because someone says it on their Web site and then sues anyone who calls it lacking in evidence. If the BCA has evidence that Singh is wrong, why not publish it? The answer to bad speech, as Mike Godwin, now working at Wikimedia, is so fond of saying, is more speech. Better speech. Or (for people less fond of talking) a dignified silence in the confidence that the evidence you have to offer is beyond argument. But suing people - especially individual authors rather than major media such as national newspapers - smacks of attempted intimidation. Though I couldn't possibly comment.

Ever since science became a big prestige, big money game we've seen angry fights and accusations - consider, for example, the ungracious and inelegant race to the Nobel prize on the part of some early HIV researchers. Scientists are humans, too, with all the ignoble motives that implies.

But many alternative remedies are not backed by scientific evidence, partly because often they are not studied by scientists in any great depth. The question of whether to allocate precious research money and resources to these treatments is controversial. Large pharmaceutical companies are unlikely to do it, for similar reasons to those that led them to research pills to reverse male impotence instead of new antibiotics. Scientists in research areas may prefer to study bigger problems. Medical organizations are cautious. The British Medical Association has long called for complementary therapies to be regulated to the same standards as orthodox medicine or denied NHS funding. As the General Chiropractic Council notes, NHS funding is so far not widespread for chiropractic.

If chiropractors want to play with the big boys - the funded treatments, the important cures - they're going to have to take their lumps with the rest of them. And that means subluxing a little backbone and stumping up the evidence, not filing suit.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

December 12, 2008

Watching the Internet

It is more than ten years since it was possible to express dissent about the rights and wrongs of controlling the material available on the Net without being identified as either protecting child abusers or being one. Even the most radical of civil liberties organisations flinch at the thought of raising a challenge to the Internet Watch Foundation. Last weekend's discovery that the IWF had added a page from Wikipedia to its filtering list was accordingly the best possible thing that could have happened. It is our first chance since 1995 to have a rational debate about whether the IWF is successfully fulfilling the purpose for which it was set up, and about the near-nationwide coverage of BT's Cleanfeed, despite the problems Cambridge researcher Richard Clayton has highlighted (PDF).

The background: the early 1990s was full of media scare stories about the Internet. In 1996, the police circulated a list of 133 Usenet newsgroups they claimed hosted child pornography, and threatened seizures of equipment. The government threatened regulation. And in that very tense climate, Peter Dawe, the founder of Pipex, called a meeting to announce an initiative he had sketched out on the back of an envelope called SafetyNet, aimed at hindering the spread of child pornography over the Internet. He was willing to stump up £500,000 to get it off the ground.

Renamed the IWF, the system still operates largely as he envisioned it would: it runs a hotline to which the public can report the objectionable material they find. If the IWF believes the material is illegal under UK law and it's hosted in the UK, the ISP is advised to remove it and the police are notified. If it's hosted elsewhere, the IWF adds it to the list of addresses that it recommends for blocking. ISPs must pay to join the IWF to subscribe to the list, and the six biggest ISPs, who have 90 to 95 percent of the UK's consumer accounts, are all members. Cleanfeed is BT's implementation of the list. Of course, despite its availability via Google Groups, Usenet hardly matters any more, and ISPs are beginning to drop it quietly from their offerings as a cost with little return.
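
Clayton's paper describes Cleanfeed as a two-stage hybrid: routers divert only traffic bound for suspect IP addresses to a web proxy, which then blocks exact URL matches. A much-simplified sketch of that logic - hypothetical data, and nothing like BT's actual code:

    # Simplified two-stage blocklist check in the spirit of Cleanfeed.
    from urllib.parse import urlsplit

    BLOCKED_URLS = {"http://example.org/banned/page.html"}  # hypothetical entry
    SUSPECT_HOSTS = {urlsplit(u).hostname for u in BLOCKED_URLS}

    def is_blocked(url):
        # Stage 1 (cheap): most traffic isn't for a suspect host at all.
        if urlsplit(url).hostname not in SUSPECT_HOSTS:
            return False
        # Stage 2 (precise): only exact matches are blocked at the proxy.
        return url in BLOCKED_URLS

    print(is_blocked("http://example.org/other.html"))        # False
    print(is_blocked("http://example.org/banned/page.html"))  # True

The two-stage design keeps the system cheap, but, as the Wikipedia incident showed, funneling all of a popular site's traffic through a handful of proxies has side effects of its own.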

The IWF's statement when it eventually removed the block is rather entertaining: it says, essentially, "We were right, but we'll remove the block anyway." In other words, the IWF still believes the image is "potentially illegal" - which provides a helpful, previously unavailable, window into their thinking - but it recognises the foolishness of banning a page on the world's fourth biggest Web site, especially given that the same image can be purchased in large, British record shops in situ on the cover of the 32-year-old album for which it was commissioned.

We've also learned that the most thoughtful debate on these issues is actually available on Wikipedia itself, where the presence of the image had been discussed at length from a variety of angles.

At the free speech end of the spectrum, the IWF is an unconscionable form of censorship. It operates a secret blocklist, it does not notify non-UK sites that they are being blocked, and it operates an equally secret appeals process. Some of this criticism is silly. If it's going to exist, the blocklist has to be confidential: a list of Internet links is actions, not words; the list can be emailed across the world in seconds, and the link targets downloaded in minutes. Plus, publishing it might be committing a crime: under UK law, it is illegal to take, make, distribute, show, or possess indecent images of children, and that includes accessing such images.

At the control end of the spectrum, the IWF is probably too limited. There have been calls for it to add hate speech and racial abuse to its mandate, calls that as far as we know it has so far largely resisted. Pornography involving children - or, in the IWF's preferred terminology, "child sexual abuse images" - is the one thing that most people can agree on.

When the furor dies down and people can consider the matter rationally, I think there's no chance that the IWF will be disbanded. The compromise is too convenient for politicians, ISPs, and law enforcement. But some things could usefully change. Here's my laundry list.

First, this is the first mistake that's come to light in the 12 years of the IWF's existence. The way it was caught should concern us: it took Wikipedia's popularity plus technical incompatibilities between the way Wikipedia protects itself from spam edits and the way UK ISPs have implemented the block list. Other false positives may not be so lucky. The IWF has been audited twice in 12 years; this should be done more frequently and the results published.

The IWF board should be rebalanced to include at least one more free speech advocate and a representative of consumer interests. Currently, it is heavily overbalanced in the direction of law enforcement and child protection representatives.

There should be judicial review and/or oversight of the IWF. In other areas of censorship, it's judges who make the call.

The IWF's personnel should have an infusion of common sense.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

December 5, 2008

Saving seeds

The 17 judges of the European Court of Human Rights ruled unanimously yesterday that the UK's DNA database, which contains more than 3 million DNA samples, violates Article 8 of the European Convention on Human Rights. The key factor: retaining, indefinitely, the DNA samples of people who have committed no crime.

It's not a complete win for objectors to the database, since the ruling doesn't say the database shouldn't exist, merely that DNA samples should be removed once their owners have been acquitted in court or the charges have been dropped. England, the court said, should copy Scotland, which operates such a policy.

The UK comes in for particular censure, in the form of the note that "any State claiming a pioneer role in the development of new technologies bears special responsibility for striking the right balance..." In other words, before you decide to be the first on your block to use a new technology and show the rest of the world how it's done, you should think about the consequences.

Because it's true: this is the kind of technology that makes surveillance and control-happy governments the envy of other governments. For example: lacking clues to lead them to a serial killer, the Los Angeles Police Department wants to copy Britain and use California's DNA database to search for genetic profiles similar enough to belong to a close relative. The French DNA database, FNAEG, was proposed in 1996, created in 1998 for sex offenders, implemented in 2001, and broadened to other criminal offenses after 9/11 and again in 2003: a perfect example of function creep. But the French DNA database is a fiftieth the size of the UK's, and Austria's, the next on the list, is even smaller.

There are some wonderful statistics about the UK database. DNA samples from more than 4 million people are included on it. Probably 850,000 of them are innocent of any crime. Some 40,000 are children between the ages of 10 and 17. The government (according to the Telegraph) has spent £182 million on it between April 1995 and March 2004. And there have been suggestions that it's too small. When privacy and human rights campaigners pointed out that people of color are disproportionately represented in the database, one of England's most experienced appeals court judges, Lord Justice Sedley, argued that every UK resident and visitor should be included on it. Yes, that's definitely the way to bring the tourists in: demand a DNA sample. Just look how they're flocking to the US to give fingerprints, and how many more flooded in when they upped the number to ten earlier this year. (And how little we're getting for it: in the first two years of the program, fingerprinting 44 million visitors netted 1,000 people with criminal or immigration violations.)

At last week's A Fine Balance conference on privacy-enhancing technologies, there was a lot of discussion of the key technique of data minimization. That is the principle that you should not collect or share more data than is actually needed to do the job. Someone checking whether you have the right to drive, for example, doesn't need to know who you are or where you live; someone checking you have the right to borrow books from the local library needs to know where you live and who you are but not your age or your health records; someone checking you're the right age to enter a bar doesn't need to care if your driver's license has expired.
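
The principle is easier to sketch than the cryptography that enforces it. A toy illustration, in which the verifier asks a yes/no question and learns nothing else (the credential record is hypothetical; real anonymous-credential systems do even this check blind, so the verifier never sees the birth date at all):

    # Toy data minimization: answer the predicate, reveal nothing else.
    from datetime import date

    credential = {"name": "A. Nonymous",             # hypothetical record
                  "date_of_birth": date(1990, 6, 1)}

    def over_18(cred, today):
        """Reveal only the yes/no answer, not the date of birth."""
        dob = cred["date_of_birth"]
        return today >= date(dob.year + 18, dob.month, dob.day)

    print(over_18(credential, date(2008, 11, 28)))  # True - and that's all

The bar learns you're old enough; it does not learn your age, your address, or whether your driver's license has expired.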

This is an idea that's been around a long time - I think I heard my first presentation on it in about 1994 - but whose progress towards a usable product has been agonizingly slow. IBM's PRIME project, which Jan Camenisch presented, and Microsoft's purchase of Credentica (which wasn't shown at the conference) suggest that the mainstream technology products may finally be getting there. If only we can convince politicians that these principles are a necessary adjunct to storing all the data they're collecting.

What makes the DNA database more than just a high-tech fingerprint database is that over time the DNA stored in it will become increasingly revealing of intimate secrets. As Ray Kurzweil kept saying at the Singularity Summit, Moore's Law is hitting DNA sequencing right now; the cost is accordingly plummeting by factors of ten. When the database was set up, it was fair to characterize DNA as a high-tech version of fingerprints or iris scans. Five - or 15, or 25, we can't be sure - years from now, we will have learned far more about interpreting genetic sequences. The coded, unreadable messages we're storing now will be cleartext one day, and anyone allowed to consult the database will be privy to far more intimate information about our bodies, ourselves than we think we're giving them now.

Unfortunately, the people in charge of these things typically think it's not going to affect them. If the "little people" have no privacy, well, so what? It's only when the powers they've granted are turned on them that they begin to get it. If a conservative is a liberal who's been mugged, and a liberal is a conservative whose daughter has needed an abortion, and a civil liberties advocate is a politician who's been arrested...maybe we need to arrest more of them.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

November 28, 2008

Mother love

It will be very easy for people to take away the wrong lessons from the story of Lori Drew, who this week was found guilty of several counts of computer fraud in a case of cyberbullying that drove 13-year-old Missouri native Megan Meier to suicide.

The gist: in 2006, 49-year-old Lori Drew, a neighbor of Meier's, came to believe that Meier had spread gossip about her own 13-year-old daughter, a former friend of Meier's. With help from her daughter and her 18-year-old assistant, Drew created a MySpace page belonging to a fictitious 16-year-old boy named Josh Evans. For some weeks Evans sent Meier flirtatious messages, then abruptly dumped her with a stream of messages and bulletins, ending with the message, "The world would be a better place without you." Meier, who had for five years been taking prescription medication for attention deficit disorder and depression, who was overweight and lacked self-esteem, hanged herself.

The story is a horror movie for parents. This is a teen who was, her mother said in court, almost always supervised in her Internet use. In fact, Meier and Drew's daughter had, some months earlier, created a fake MySpace page to talk to boys online, an escapade that caused Meier's mother to close down her MySpace access for some months. On the day of Meier's suicide, her mother was on her way to the orthodontist with her younger daughter when Meier, distraught, reported the stream of unpleasant messages. Her mother told her to sign off. She didn't; when her mother came home there was a brief altercation; they found her 20 minutes later.

The basic elements of the story are not, of course, new. Identity deception is as old as online services; the best-known early case was that of Joan, a CompuServe forum regular who for more than two years in the early 1980s claimed to be a badly disabled former neuropsychologist whose condition made her reluctant to meet people, especially her many online friends. Joan was in fact a fictional character, the increasingly elaborate creation of a male New York psychiatrist named Alex.

Cyberbullying is, of course, also not new. You can go back to the war between alt.tasteless and rec.pets.cats in 1992, if you like, but organized playground behavior seems to flourish in every online medium. Gail Williams, the conference manager at the WELL, said about ten years ago that a lot of online behavior seems to be people working out their high school angst, and nothing has changed in the interim except that a lot of people online now are actually still in high school. And unfortunately for them, the people they're working out their high school angst with are bigger, older, more experienced, and a lot savvier about where to stick in the virtual knife. People can be damned unpleasant sometimes.

But let's look at the morals people are finding. EfluxMedia:
The case of Megan Meier calls for boundaries when it comes to cyberbullying and the use of social networking sites in general, but also calls for reason. Social networking sites and the Internet in general have become more than just virtual realities, they are now part of our everyday lives, and they influence us in ways that we cannot ignore. What we must learn from this is that our actions may have unimaginable consequences on other people, even when it comes to the Internet, so think twice before you act.

Boundaries? Meier was far more rigorously supervised online than the average teen. Who's going to supervise the behavior of a 49-year-old woman to make sure she doesn't cross the line?

More to the point, the court's verdict found that Drew had broken federal laws concerning computer fraud. Is it hacking to set up a pseudonymous MySpace page and send fraudulent postings? MySpace's 2006 terms and conditions required registration information to be truthful and banned harassment and sexual exploitation. Have MySpace's terms become federal law?

The answer is probably that there was no properly applicable law. We've seen that situation before, too - Robert Schifreen and Steve Gold were prosecuted under the Forgery and Counterfeiting Act, a law never designed for computer break-ins. The eventual failure of the case on appeal proved the need for the Computer Misuse Act and comparable laws against hacking elsewhere in the world. Ironically, these laws are now showing their limits, too, as the Drew case proves. We can now, I suppose, expect to see a lot of proposals for laws banning cyberbullying under which people like Drew could be more correctly prosecuted.

But the horror movie is only partly about online; online, in this case MySpace, allowed the hoaxers to post "Josh Evans'" bare-chested photo. The same kind of hoax, with hardly less impact, could have been carried out by letter and poster. Wanda Holloway didn't need online to try to contract the murder of the mother of her daughter's more successful cheerleading rival.

Ultimately, the lesson we should be learning is the same one we heard at this year's Computers, Freedom, and Privacy conference: just like rape and incest, you are more at risk for harassment and cyberbullying from people you know. Unfortunately, most such law seems to be written with the idea that it's strangers who are dangerous.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

October 3, 2008

Deprave and corrupt

It's one of the curiosities of being a free speech advocate that you find yourself defending people for saying things you'd never say yourself.

I noticed this last week when a friend delivered an impassioned defense of the rights of bloggers to blog about the world around them - say, recounting the Nazi costumes people were wearing to the across-the-street neighbor's party last weekend or detailing the purchases your friend made in the drugstore - and then turned around and said she didn't know why she was defending it, because she wouldn't actually put things like that in her blog. (Unless, I suppose, her neighbor was John McCain.)

Probably most bloggers have struggled at one point or another with the collision these tell-the-world-your-private-thoughts technologies create between freedom of speech and privacy. Usually, though, invading your own privacy is reasonably safe, even if that invasion takes the form of revealing your innermost fantasies. Yes, there's a lot of personal information in them thar hills, and the enterprising data miner could certainly find out a lot about me by going through my 17-year online history via Google searches and intelligent matching. But that's nothing compared to the situation Newcastle civil servant Darryn Walker finds himself in after allegedly posting a 12-page kidnap, torture, and murder fantasy about the pop group Girls Aloud.

As unwise postings go, this one sounds like a real winner. It was (reports say) on a porn site; it named a real pop group (making it likely to pop up in searches by the group's fans); and it identified as its author a real, findable person - a civil servant, no less. A member of the public reported the story to the Internet Watch Foundation, which reported it to the police, who arrested Walker under the Obscene Publications Act.

The IWF's mission in life is to get illegal content off the Net. To this end, it operates a public hotline to which anyone can report any material they think might be illegal. The IWF's staff sift through the reports - 31,776 in 2006, the last year their Web site shows statistics for - and determine whether the material is "potentially illegal". If it is, the IWF reports it to the police and also recommends to the many ISPs who subscribe to its service that the material be removed from their servers. The IWF so far has focused on clearly illegal material, largely pornographic images, both photographic and composited, of children. Since 2003, less than 1 percent of illegal images involving children has been hosted in the UK.

As a cloistered folksinger I had never heard of the very successful group Girls Aloud; apparently they were created like synthetic gemstones in 2002 by the TV show Popstars: the Rivals. According to their Wikipedia entries, they're aged 22 to 26 - hardly children, no matter how unpleasant it is to be the heroines of such a violent fantasy.

So the case poses the question: is posting such a story illegal? That is, in the words of the Obscene Publications Act, is it likely to "deprave and corrupt"? And does it matter that the site to which it was posted is not based in the UK?

It is now several decades since any text work was prosecuted under the Obscene Publications Act, and much longer since any such prosecution succeeded. The last such court case, the 1976 prosecution against the publishers of Inside Linda Lovelace, apparently left the Metropolitan Police believing they couldn't win. In 1977, a committee recommended excluding novels from the Act. Novels, not blog postings.

Succeeding in this case would therefore potentially extend the IWF's - and the Obscene Publications Unit's - remit by creating a new and extremely large class of illegal material. The IWF prefers to use the term "child abuse images" rather than "child pornography"; in the case of actual photographs of real incidents this is clearly correct. The argument for outlawing composited or wholly created images as well as photographs of actual children is that pedophiles can use them to "groom" their targets - that is, to encourage their participation in child abuse by convincing them that these are activities other children have engaged in and showing them how. Outlawing text descriptions of real events could block child abuse victims from publishing their own personal stories; outlawing fiction, however disgusting, seems a wholly ineffectual way of preventing child abuse. Bad things happen to good fictional characters all the time.

So, as a human being I have to say that I not only wouldn't write this piece, I don't even want to have to read it. But as a free speech advocate I also have to say that the money spent tracking down and prosecuting its writer would have been more effectively spent on...well, almost anything. The one thing the situation has done is widely publicize a story that otherwise hardly anyone knew existed. Suppressing material just isn't as easy as it used to be when all you had to do was tell the publisher to get it off the shelves.

Of course, for Walker none of this matters. The most likely outcome for him in today's environment is a ruined life.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

September 26, 2008

Wimsey's whimsy

One of the things about living in a foreign country is this: every so often the actual England I live in collides unexpectedly with the fictional England I grew up with. Fictional England had small, friendly villages with murders in them. It had lowering, thick fogs and grim, fantastical crimes solvable by observation and thought. It had mathematical puzzles before breakfast in a chess game. The England I live in has Sir Arthur Conan Doyle's vehement support for spiritualism, traffic jams, overcrowding, and four million people who read The Sun.

This week, at the GikIII Workshop, in a break between Internet futures, I wandered out onto a quadrangle of grass so brilliantly and perfectly green that it could have been an animated background in a virtual world. Overlooking it were beautiful, stolid, very old buildings. It had a sign: Balliol College. I was standing on the quad where, "One never failed to find Wimsey of Balliol planted in the center of the quad and laying down the law with exquisite insolence to somebody." I know now that many real people came out of Balliol (three kings, three British prime ministers, Aldous Huxley, Robertson Davies, Richard Dawkins, and Graham Greene) and that those old buildings date to 1263. Impressive. But much more startling to be standing in a place I first read about at 12 in a Dorothy Sayers novel. It's as if I spent my teenaged years fighting alongside Angel avatars and then met David Boreanaz.

Organised jointly by Ian Brown at the Oxford Internet Institute and the University of Edinburgh's Script-ed folks, GikIII (pronounced "geeky") is a small, quirky gathering that studies serious issues by approaching them with a screw loose. For example: could we control intelligent agents with the legal structure the Ancient Romans used for slaves (Andrew Katz)? How sentient is a robot sex toy? Should it be legal to marry one? And if my sexbot rapes someone, are we talking lawsuit, deactivation, or prison sentence (Fernando Barrio)? Are RoadRunner cartoons all patent applications for devices thought up by Wile E. Coyote (Caroline Wilson)? Why is The Hound of the Baskervilles a metaphor for cloud computing (Miranda Mowbray)?

It's one of the characteristics of modern life that although questions like these sound as practically irrelevant as "how many angels, infinitely large, can fit on the head of a pin, infinitely small?", which may (or may not) have been debated here seven and a half centuries ago, they matter. Understanding the issues they raise matters in trying to prepare for the net.wars of the future.

In fact, Sherlock Holmes's pursuit of the beast is metaphorical; Mowbray was pointing out the miasma of legal issues for cloud computing. So far, two very different legal directions seem likely as models: the increasingly restrictive EULAs common to the software industry, and the service-level agreements common to network outsourcing. What happens if the cloud computing company you buy from doesn't pay its subcontractors and your data gets locked up in a legal battle between them? The terms and conditions in effect for Salesforce.com warn that the service has 30 days to hand back your data if you terminate, a long time in business. Mowbray suggests that the most likely outcome is EULAs for the masses and SLAs at greater expense for those willing to pay for them.

On social networks, of course, there are only EULAs, and the question is whether interoperability is a good thing or not. If the data people put on social networks ("shouldn't there be a separate disability category for stupid people?" someone asked) can be easily transferred from service to service, won't that make malicious gossip even more global and permanent? A lot of the issues Judith Rauhofer raised in discussing the impact of global gossip are not new to Facebook: we have a generation of 35-year-olds coping with the globally searchable history of their youthful indiscretions on Usenet. (And WELL users saw the newly appointed CEO of a large tech company delete every posting he made in his younger, more drug-addled 1980s.) The most likely solution to that particular problem is time. People arrested as protesters and marijuana smokers in the 1960s can be bank presidents now; in a few years the work force will be full of people with Facebook/MySpace/Bebo misdeeds and no one will care except as something to laugh at drunkenly late in the pub.

But what Lilian Edwards wants to know is this: if we have or can gradually create the technology to make "every ad a wanted ad" - well, why not? Should we stop it? Online marketing is at £2.5 billion a year according to Ofcom, and a quarter of the UK's children spend 22 hours a week playing computer games, where there is no regulation of industry ads and where Web 2.0 is funded entirely by advertising. When TV and the Internet roll together, when in-game is in-TV and your social network merges with megamedia, and MTV is fully immersive, every detail can be personalized product placement. If I grew up five years from now, my fictional Balliol might feature Angel driving across the quad in a Nissan Prairie past a billboard advertising airline tickets.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

August 15, 2008

License to kill


Yesterday, a US federal appeals court reversed a lower court ruling that might have invalidated open-source licenses. The case, Jacobsen v. Katzer, began more than two years ago with a patent claim.

Open-source software developer Robert Jacobsen manages the collective effort that produced Java Model Railroad Interface, which allows enthusiasts to reprogram the controller chips in their trains. JMRI is distributed under the Artistic License, an older and less well-known free license (it isn't one of the Free Software Foundation's approved licenses, though its successor, Artistic License 2.0, is). Matthew Katzer and Kamind, aka KAM Industries, sell a functionally similar commercial product that, crucially, Jacobsen claims is based on downloaded portions of JMRI. The Artistic License requires attribution, copyright notices, references to the file listing copyright terms, identification of the source of the downloaded files, and a description of the changes made by the new distributor. None of these conditions was met, and accordingly Jacobsen moved for a preliminary injunction on the basis of copyright infringement. The District Court denied the motion on the grounds that the license is "intentionally broad", and argued that violating the conditions "does not create liability for copyright infringement where it would not otherwise exist". It is this decision that has been reversed.

This win for Jacobsen doesn't get him anything much yet: the case is simply remanded back to the California District Court for further consideration. But it gets the rest of the open-source movement quite a lot. The judgement affirms the insight that led Richard Stallman to create the General Public License in the first place: that copyright could be used to set works free as well as to close them down.

The decision hinges on the question of whether the licensing terms are conditions or covenants, a distinction that's clear as glass to a copyright lawyer and clear as mud to everyone else. According to the Electronic Frontier Foundation's helpful explanation (and they have lots of copyright lawyers to explain this sort of thing), it's the difference between contract law and copyright law. Violating conditions means you don't have a copyright license; violating covenants means you've broken the contract but you still have a license. In the US, it's also the difference between federal and state law. When you violate the license's conditions, therefore, as Lawrence Lessig explains, what you have is a copyright infringement.

It's hard to understand how the district court could have taken the view it did. It is very clear from both the licenses themselves and from the copious documentation of the thinking that went into their creation that their very purpose was to ensure that work created collectively and intended to be free for use, modification, and redistribution could not be turned into a closed commercial product that benefited only the company or individual that sells it. To be sure, it's not what the creators of copyright - intended as a way to give authors control over publishers - originally had in mind.

But once you grant the idea of a limited monopoly and say that creators should have the right to control how their work is used, it makes no sense to honor that right only if it's used restrictively. Either creators have the legal right to determine licensing conditions or they have not. (The practical right is of course a different story; economics and the size of publishing businesses give publishers sufficient clout to impose terms on creators that those creators wouldn't choose.) Seems to me that a creator could specify as a licensing condition that the work could only be published on the side of a cow, and any publisher fool enough to agree to that would be bound by it or be guilty of infringement.

But therein lies the dark side of copyright licensing conditions. The Jacobsen decision might also give commercial software publishers ideas about the breadth of conditions they can attach to their end-user license agreements - as if these weren't already filled with screeds of impenetrable legalese, much of which could be charitably described as unreasonable. EFF points this out and provides a prime example: the licensing terms imposed by World of Warcraft owner Blizzard Entertainment have been upheld in court.

Blizzard's terms ban automated playing software such as Glider, whose developer, Michael Donnelly, was the target of the suit. EFF isn't arguing that Blizzard doesn't have the right to ban bots from its servers; EFF just doesn't think accusing Glider users of copyright infringement for doing so is a good legal precedent. Public Knowledge, which filed an amicus brief in the case, has a fuller explanation of the implications. Briefly, PK argues that upholding these terms as copyright conditions could open the way for software publishers to block software that interoperates with theirs. (Interestingly, Blizzard's argument seems to rely on the notion that software copied into RAM is a copyright infringement, an approach I recall Europe rejecting a few years ago.)

You'd think no company would want to sue its own customers. But keeping the balance copyright law was created to achieve - between providing incentives for artists and creators and preserving public access to ideas - continues to require more than common sense.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

May 2, 2008

Bet and sue

Most net.wars are not new. Today's debates about free speech and censorship, copyright and control, nationality and disappearing borders were all presaged by the same discussions in the 1980s even as the Internet protocols were being invented. The rare exception: online gambling. Certainly, there were debates about whether states should regulate gambling, but a quick Usenet search does not seem to throw up any discussions about the impact the Internet was going to have on this particular pastime. Just sex, drugs, and rock 'n' roll.

The story started in March, when the French Tennis Federation (FFT - Fédération Française de Tennis) filed suit in Belgium against Betfair, Bwin, and Ladbrokes to prevent them from accepting bets on matches played at the upcoming French Open tennis championships, which start on May 25. The FFT's arguments are rather peculiar: that online betting stains the French Open's reputation; that only the FFT has the right to exploit the French Open; that the online betting companies are parasites using the French Open to make money; and that online betting corrupts the sport. Bwin countersued for slander.

On Tuesday of this week, the Liège court ruled comprehensively against the FFT and awarded the betting companies costs.

The FFT will still, of course, control the things it can: fans will be banned from using laptops and mobile phones in the stands. The convergence of wireless telephony, smart phones, and online sites means that in the second or two between the end of a point and the electronic scoreboard updating, there's a tiny window in which people could bet on a sure thing. Why this slightly improbable scenario concerns the FFT isn't clear; that's a problem for the betting companies. What should concern the FFT is ensuring a lack of corruption within the sport. That means the players and their entourages.

The latter issue has been a touchy subject in the tennis world ever since last August, when Russian player Nikolay Davydenko, currently fourth in the world rankings, retired in the third and final set of a match in Poland against 87th ranked Marin Vassallo Arguello, citing a foot injury. Davydenko was accused of match-fixing; the investigation still drags on. In the resulting publicity, several other players admitted being approached to fix matches. As part of subsequent rule-tightening by the Association of Tennis Professionals, the governing body of men's professional tennis, three Italian players were suspended briefly late last year for betting on other players' matches.

Probably the most surprising thing is that tennis, along with soccer and horse racing, is actually among the most popular sports for betting. A minority sport like tennis? Yet according to USA Today, the 2007 Paris Masters event saw $750 million to $1.5 billion in bets. I can only assume that the inverted pyramid of matches every week involving individual players fits well with what bettors like to do.

Fixing matches seems even more unlikely. The best payouts come from correctly picking upsets, the bigger the better. But top players are highly unlikely to throw matches to order. Most of them play a relatively modest number of events (Davydenko is admittedly the exception) and need all the match wins and points from those events to sustain their rankings. Plus, they're just too damn rich.

In 2007, Roger Federer, the ultra-dominant number one player since the end of 2003, earned upwards of $10 million in prize money alone; Davydenko picked up over $2 million (and has already won another $1 million in 2008). All of the top 12 earned over $1 million. Add in endorsements, and even after you subtract agents' fees, tax, and travel costs for self and entourage, you're still looking at wealthy guys. They might tank matches at events where they're being paid appearance fees (which are legal on the men's tour at all but the top 14 events), but proving they've done so is exceptionally difficult. Fixing matches, which could cost them in lost endorsements on top of the tour's own sanctions, surely can't be worth it.

There are several ironies about the FFT's action. First of all (something most of the journalists covering this story don't mention, probably because they don't spend a lot of time watching tennis on TV), Bwin has been an important advertiser sponsoring tennis on Eurosport. It's absolutely typical of the counter-productive and intricately incestuous politics that characterize the tennis world that one part of the sport would sue someone who pays money into another part of the sport.

Second of all, as Betfair and Bwin pointed out, all three of these companies are highly regulated European licensed operations. Ruling them out of action would mean shifting online betting to less well regulated offshore companies. They also pointed out the absurdity of the parasites claim: how could they accept bets on an event without using its name? Betfair in particular documented its careful agreements with tennis's many governing bodies.

Third of all, the only reason match-fixing is an issue in the tennis world right now is that Betfair spotted some unusual betting patterns during that Polish Davydenko match, cancelled all the bets, and went public with the news. Without that, Davydenko would have avoided the fight over his family's phone records. Come to think of it, making the issue public probably explains the FFT's behavior: it's revenge.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

April 11, 2008

My IP address, my self

Some years back when I was writing about the data protection directive, Simon Davies, director of Privacy International, predicted a trade war between the US and Europe over privacy laws. It didn't happen, or at least it hasn't happened yet.

The key element to this prediction was the rule in the EU's data protection laws that prohibited sending data on for processing to countries whose legal regimes aren't as protective as those of the EU. Of course, since then we've seen the EU sell out on supplying airline passenger data to the US. Even so, this week the Article 29 Data Protection Working Party made recommendations about how search engines save and process personal data that could drive another wedge between the US and Europe.

The Article 29 group is one of those arcane EU phenomena that you probably don't know much about unless you're a privacy advocate or paid to find out. The short version: it's a sort of think tank of data protection commissioners from all over Europe. The UK's Information Commissioner, Richard Thomas, is a member, as are his equivalents in countries from France to Lithuania.

The Working Party (as it calls itself) advises and recommends policies based on the data protection principles enshrined in the EU Data Protection Directive. It cannot make law, but both its advice to the European Commission and the Commission's action (or lack thereof) are publicly reported. It's arguable that in a country like the UK, where the Information Commissioner operates with few legal teeth to bite with, the existence of such a group may help strengthen the Commissioner's hand.

(Few legal teeth, at least in respect of government activities: the Information Commissioner has issued an opinion about Phorm indicating that the service must be opt-in only. As Phorm and the ISPs involved are private companies, if they persisted with a service that contravened data protection law, the Information Commissioner could issue legal sanctions. But while the Information Commissioner can, for example, rule that for an ISP to retain users' traffic data for seven years is disproportionate, if the government passes a law saying the ISP must do so then within the UK's legal system the Information Commissioner can do nothing about it. Similarly, the Information Commissioner can say, as he has, that he is "concerned" about the extent of the information the government proposes to collect and keep on every British resident, but he can't actually stop the system from being built.)

The group's key recommendation: search engines should not keep personally identifiable search histories for longer than six months, and it specifically includes search engines whose headquarters are based outside the EU. The group does not say which search engines it studied, but it was reported to be studying Google as long ago as last May. The report doesn't look at requirements to keep traffic data under the Data Retention Directive, as it does not apply to search engines.

Google's shortening the life of its cookies and anonymizing its search history logs after 18 months turns out to have a significance I didn't appreciate when, at the time, I dismissed it as insultingly trivial (which it was): it showed the Article 29 working group that the company doesn't really need to keep all that data for so long.

One of the key items the Article 29 group had to decide in writing its report on data protection issues related to search engines (PDF) is this: are IP addresses personal information? It sounds like one of those bits of medieval sophistry, like asking how many angels can dance on the head of a pin. In the dial-up days, it might not have mattered, at least in Britain, where local phone charges forced limited usage, so users were assigned a different IP address every time they logged in. But in the world of broadband, even the supposedly dynamic IP addresses issued by cable suppliers may remain with a single subscriber for years on end. Being able to track your IP address's activities is increasingly like being able to track your library card, your credit card, and your mobile phone all at the same time. Fortunately, the average ISP doesn't have the time to be that interested in most of its users.
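
To make that concrete, here is a minimal sketch - in Python, with an invented log format; nothing here describes Google's actual pipeline - of the kind of last-octet "anonymization" search engines have offered, applied once records pass a retention limit:

    from datetime import datetime, timedelta

    RETENTION = timedelta(days=180)  # roughly the Working Party's six-month limit

    def truncate_ip(ip):
        """Zero the last octet: '198.51.100.57' -> '198.51.100.0'."""
        octets = ip.split(".")
        return ".".join(octets[:3] + ["0"])

    def age_out(entries, now=None):
        """entries: an iterable of (timestamp, ip, query) search-log tuples."""
        now = now or datetime.utcnow()
        for ts, ip, query in entries:
            if now - ts > RETENTION:
                ip = truncate_ip(ip)  # degrade, rather than delete, old records
            yield ts, ip, query

Note what truncation leaves behind: 256 candidate addresses, often a single neighbourhood's worth of subscribers, still attached to an intact query history. It is easy to see why a regulator might doubt that this turns an IP-keyed log into anything other than personal data.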

The fact is that any single piece of information that identifies your activities over a long period and can be mapped to your real-life identity has to be considered personal information or the data protection laws make no sense. The libertarian view, of course, would be that there are other search engines. You do not actually have to use Google, Gmail, or even YouTube. But if all search engines adopted Google's habits the choice would be more apparent than real. Time was when the US was the world's policeman. With respect to data, it seems that the EU has taken on this role. It will be interesting to see whether this decision has any impact on Google's business model and practices. If it does, that trade war could finally be upon us. If not, then Google was building up a vast data store just because it can.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

March 21, 2008

Copywrongs

This is a shortened version of a talk I gave at Musicians, Fans, and Copyright at the LSE on Wednesday, March 19, 2008.

Most discussions about copyright with respect to music do not include musicians. The notable exception is the record companies' trophy musicians who appear at government hearings. Because these tend to be the most famous and well-rewarded musicians they can find, their primary contribution to the debate seems to be to try to make politicians think, "We love you, we can't bear that you should starve, the record company must be right." It's a long time since I made a living playing, so I can't pretend to represent them. But I can make a few observations. Folk musicians in particular stand at the nexus of all the copyright arguments: they are contemporary artists and songwriters, but they mine their material from the public domain.

Every musician, at every level of the business, has been ripped off (PDF), usually when they can least afford it. The result is that they tend to be deeply suspicious of any attempt to limit their rights. The music business has such a long history of signing the powerless - young, inexperienced musicians, the black blues musicians of the Mississippi Delta, and many others - to exploitive contracts that it's hard to understand why they're still allowed to get away with it. Surely it ought to be possible to limit what rights and terms the industry can dictate to the inexperienced and desperate with stars in their eyes?

Steve Gillette, author with Tom Campbell of the popular 1966 song "Darcy Farrow", says that when Ian & Sylvia wanted to record the song, they were told to hire someone to collect royalties on their behalf. That person did little to collect royalties for many years. Gillette and Campbell eventually won a court judgement with a standard six-month waiting period - during which time John Denver recorded the song and put it on his best-selling album, Rocky Mountain High, giving the publisher a motive to fight back. They were finally able to wrest back control of the song in about 1990.

In book publishing it is commonplace for the rights to revert to authors if and when the publisher decides to withdraw their work from sale. There is no comparable practice in the music business. And so, people I know on the folk scene whose work has gone out of commercial release find themselves in the situation where their fans want to buy their music but they can't sell it. As one musician said, "I didn't work all those years to have my music stuck in a vault."

Pete Coe, a traditional performer and songwriter, tells me that the common scenario is that a young musician signs a recording contract early on, and then the company goes out of business and the recordings are bought by others. The purchasing company buys the assets - the recordings - but not the burden, the obligation to pass on royalties to the original artists. Coe himself, along with many others, is in this situation; some of his early recordings have been through two such bankruptcies. The company that owns them now owns many other folk releases of the period and either refuses to re-release the recordings or refuses to provide sales figures or pay royalties, and is not a member of MCPS. Coe points out that this company would certainly refuse to cooperate with any effort to claim the reversion of rights.

In a similar case, Nic Jones, a fine and widely admired folk guitarist who played almost exclusively traditional music, was in a terrible car accident in about 1981 that left him unable to play. Over the following years his recordings were bought up but not rereleased, so that an artist now unable to work could not benefit from his back catalogue. It is only in the last few years, with the cost of making and distributing music falling, that he and his wife have managed to release old live recordings on their own label. Term extension would, if anything, hurt Jones's ability to regain control over and exploit his own work. (Note: I have not canvassed Jones's opinion.)

The artists in these cases, like any group of cats, have reacted in different ways. Gillette, who comments also that in general it's the smaller operators who are the biggest problem, says that term extension "only benefits the corporate media, and in my experience only serves to lend energy to turning the public trust into company assets".

Coe, on the other hand, favors term extension. "We determined," he said by email in 2006, "that once we'd regained our rights, publishing and recording, that they were never again to pass out of our control."

Coe's reaction is understandable. But I think many problems could be solved by forcing the industry to treat musicians and artists more fairly. It's notable that folk artists, through necessity, pioneered what's becoming commonplace now: releasing their own albums to sell direct to audiences at their gigs and via mail - now Web - order.

What the musicians of the future want and need, in my opinion, is the same thing that the musicians of the present and past wanted: control. In my view, there is no expansion of copyright that will give it to them.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

March 7, 2008

Techitics

This year, 2008, may go down in history as the year geeks got politics. At etech this week I caught a few disparaging references to hippies' efforts to change politics. Which, you know, seemed kind of unfair, for two reasons. First: the 1960s generation did change an awful lot of things, though not nearly as many as they hoped. Second: a lot of those hippies are geeks now.

But still. Give a geek something that's broken and he'll itch to fix it. And one thing leads to another. Which is why on Wednesday night Lawrence Lessig explained in an hour-long keynote that got a standing ovation how he plans to fix what's wrong with Congress.

No, he's not going to run. Some 4,500 people on Facebook were trying to push him into it, and he thought about it, but preliminary research showed that his chances of beating popular Silicon Valley favorite, Jackie Speier, were approximately zero.

"I wasn't afraid of losing," he said, noting ruefully that in ten years of copyfighting he's gotten good at it. Instead, the problem was that Silicon Valley insiders would have known that no one was going to beat Jackie Speier. But outsiders would have pointed, laughed, and said, "See? The idea of Congressional reform has no legs." And on to business as usual. So, he said, counterproductive to run.

Instead, he's launching Change Congress. "Obama has taught us that it's possible to imagine many people contributing to real change."

The point, he said, will be to provide a "signalling function". Like Creative Commons, Change Congress will give candidates an easy way to show what level of reform they're willing to commit to. The system will start with three options: 1) refuse money from lobbyists and political action committees (private funding groups); 2) ban earmarks (money allocated to special projects in politicians' home states); 3) commit to public financing for campaigns. Candidates can then display the badge generated from those choices on their campaign materials.

From there, said Lessig, layer something like Emily's List on top, to help people identify candidates they're willing to support with monthly donations, thereby subsidizing reform.

Money, he admitted, isn't the entire problem. But, like drinking for an alcoholic, it's the first problem you must solve to be able to tackle any of the others with any hope of success.

In a related but not entirely similar vein, the guys who brought us They Work For You nearly four years ago are back with UNdemocracy, an attempt to provide a signalling function for the United Nations by making it easy to find out how your national representatives are voting in UN meetings. The driving force behind UNdemocracy.com is Liverpool's Julian Todd, who took the UN's URL obscurantism as a personal challenge. Since he doesn't fly, presenting the new service were Tom Loosemore, Stefan Mogdalinski, and Danny O'Brien, who pointed out that when you start looking at the decisions and debates you start to see strange patterns: what do the US and Israel have in common with Palau and Micronesia?

The US Congress and the British Parliament are both, they said, now well accustomed to being televised, and their behaviour has adapted to the cameras. At the UN, "They don't think they're being watched at all, so you see horse trading in a fairly raw form."

The meta-version they believe can be usefully and widely applied: 1) identify a broken civic institution; 2) liberate the data from said institution. There were three more ingredients, but the slide vanished too quickly. Mogdalinski noted that where in the past they said "Ask forgiveness, not permission" - alluding to the fact that most institutions, if approached, will behave as though they own the data - he's less inclined to apologise now. After all, isn't it *our* data that's being released in the public interest?

Data isn't everything. But the Net community has come a long way since the early days, when the prevailing attitude was that technological superiority would wash away politics-as-usual by simply making an end run around any laws governments tried to pass. Yes, technology can change the equation a whole lot. For example, once PGP escaped, laws limiting the availability of strong encryption were pretty much doomed to fail (though not without a lot of back-and-forth before it became official). Similarly, in the copyright wars it's clear that copyrighted material will continue to leak out no matter how hard its owners try to protect it.

But those are pretty limited bits of politics. Technology can't make such an easy end run around laws that keep shrinking the public domain. Nor can it by itself fix policies that deny the reality of global climate change or that, in one of Lessig's examples, backed government recommendations off from a daily caloric intake of 10 percent sugar to one of 25 percent. Or that, in another of his examples, kept then-Vice-President Al Gore from succeeding with a seventh part to the 1996 Communications Act deregulating ADSL and cable - because with nothing left to regulate, what would Congressmen do without the funds those lobbyists were sending their way? Hence, the new approach.

"Technology," Lessig said, "doesn't solve any problems. But it is the only tool we have to leverage power to effect change."

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

February 22, 2008

Strikeout

There is a certain kind of mentality that is actually proud of not understanding computers, as if there were something honorable about saying grandly, "Oh, I leave all that to my children."

Outside of computing, only television gets so many people boasting of their ignorance. Do we boast how few books we read? Do we trumpet our ignorance of other practical skills, like balancing a cheque book, cooking, or choosing wine? When someone suggests we get dressed in the morning do we say proudly, "I don't know how"?

There is so much insanity coming out of the British government on the Internet/computing front at the moment that the only possible conclusion is that the government is made up entirely of people who are engaged in a sort of reverse pissing contest with each other: I can compute less than you can, and see? here's a really dumb proposal to prove it.

How else can we explain yesterday's news that the government is determined to proceed with Contactpoint even though the report it commissioned and paid for from Deloitte warns that the risk of storing the personal details of every British child under 16 can only be managed, not eliminated? Lately, it seems that there's news of a major data breach every week. But the present government is like a batch of 20-year-olds who think that mortality can't happen to them.

Or today's news that the Department of Culture, Media, and Sport has launched its proposals for "Creative Britain", and among them is a very clear diktat to ISPs: deal with file-sharing voluntarily or we'll make you do it. By April 2009. This bit of extortion nestles in the middle of a bunch of other stuff about educating schoolchildren about the value of intellectual property. Dare we say: if there were one thing you could possibly do to ensure that kids sneer at IP, it would be to teach them about it in school.

The proposals are vague in the extreme about what kind of regulation the DCMS would accept as sufficient. Despite the leaks of last week, culture secretary Andy Burnham has told the Financial Times that the "three strikes" idea was never in the paper. As outlined by Open Rights Group executive director Becky Hogge in New Statesman, "three strikes" would mean that all Internet users would be tracked by IP address and warned by letter if they are caught uploading copyrighted content. After three letters, they would be disconnected. As Hogge says (disclosure: I am on the ORG advisory board), the punishment will fall equally on innocent bystanders who happen to share the same house. Worse, it turns ISPs into a squad of private police for a historically rapacious industry.
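
The bookkeeping such a scheme implies is trivial, and a toy sketch (hypothetical Python; no real ISP's system is described here) makes Hogge's objection plain - the counter is keyed to an address, not a person:

    from collections import defaultdict

    strikes = defaultdict(int)  # IP address -> warning letters sent so far
    disconnected = set()

    def report_infringement(ip):
        """What 'three strikes' asks an ISP to do with each accusation."""
        if ip in disconnected:
            return "already disconnected"
        strikes[ip] += 1
        if strikes[ip] < 3:
            return "send warning letter #%d" % strikes[ip]
        disconnected.add(ip)
        return "disconnect subscriber"  # and everyone else behind that address

Nothing in it can tell one housemate from another, or the account holder from a neighbour borrowing the Wi-Fi.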

Charles Arthur, writing in yesterday's Guardian, presented the British Phonographic Industry's case about why the three strikes idea isn't necessarily completely awful: it's better than being sued. (These are our choices?) ISPs, of course, hate the idea: this is an industry with nanoscale margins. Who bears the liability if someone is disconnected and starts to complain? What if they sue?

We'll say it again: if the entertainment industries really want to stop file-sharing, they need to negotiate changed business models and create a legitimate market. Many people would be willing to pay a reasonable price to download TV shows and music if they could get in return reliable, fast, advertising-free, DRM-free downloads at or soon after the time of the initial release. The longer the present situation continues the more entrenched the habit of unauthorized file-sharing will become and the harder it will be to divert people to the legitimate market that eventually must be established.

But the key damning bit in Arthur's article (disclosure: he is my editor at the paper) is the BPI's admission that they cannot actually say that ending file-sharing would make sales grow. The best the BPI spokesman could come up with is, "It would send out the message that copyright is to be respected, that creative industries are to be respected and paid for."

Actually, what would really do that is a more balanced copyright law. Right now, the law is so far from what most people expect it to be - or rationally think it should be - that it is breeding contempt for itself. And it is about to get worse: term extension is back on the agenda. The 2006 Gowers Review recommended against it, but on February 14 Irish EU Commissioner Charlie McCreevy (previously: champion of software patents) announced his intention to propose extending performers' copyright in sound recordings from the current 50-year term to 95 years. The plan seems to go something like this: whisk it past the Commission in the next two months. Then the French presidency starts and whee! new law! The UK can then say its hands are tied.

That change makes no difference to British ISPs, however, who are now under the gun to come up with some scheme to keep the government from clomping all over them. Or to the kids who are going to be tracked from cradle to alcopop by unique identity number. Maybe the first target of the government computing literacy programs should be...the government.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

January 18, 2008

Harmony, where is thy sting?

On the Net, John Perry Barlow observed long ago, everything is local and everything is global, but nothing is national. It's one of those pat summations that sometimes is actually right. The EU, in the interests of competing successfully with the very large market that is the US, wants to harmonize the national laws that apply to content online.

They have a point. Today's market practices were created while the intangible products of human ingenuity still had to be fixed in a physical medium. It was logical for the publishers and distributors of said media to carve up the world into national territories. But today anyone trying to, say, put a song in an online store, or create a legal TV download service has to deal with a thicket of national collection societies and licensing authorities.

Where there's a problem there's a consultation document, and so there is in this case: the EU is giving us until February 29 (leap year!) to tell them what we think (PDF).

The biggest flaw in the consultation document is that the authors (who needed a good copy editor) seem to have bought wholesale the 2005 thinking of rightsholders (whom they call "right holders"). Fully a third of the consultation is on digital rights management: should it be interoperable, should there be a dispute resolution process, should SMEs have non-discriminatory access to these systems, should EULAs be easier to read?

Well, sure. But the consultation seems to assume that DRM is a) desirable and b) an endemic practice. We have long argued that it's not desirable; DRM is profoundly anti-consumer. Meanwhile, the industry is clearly fulfilling Naxos founder Klaus Heymann's April 2007 prophecy that DRM would be gone from online music within two years. DRM is far less of an issue now than it was in 2006, when the original consultation was launched. In fact, though, these questions seem to have been written less to aid consumers than to limit the monopoly power of iTunes.

That said, DRM will continue to be embedded in some hardware devices, most especially in the form of HDCP, a form of copy protection being built, invisibly to consumers until it gets in their way, into TV sets and other home video equipment. Unfortunately, because the consultation is focused on "Creative Content Online", such broader uses of DRM aren't included.

However, because of this and because some live streaming services similarly use DRM to prevent consumers from keeping copies of their broadcasts (and probably more will in future as Internet broadcasting becomes more widespread), public interest limitations on how DRM can be used seem like a wise idea. The problem with both DRM and EULAs is that the user has no ability to negotiate terms. The consultation leaves out an important consumer consideration: what should happen to content a consumer pays for and downloads that's protected with DRM if the service that sold it closes down? So far, subscribers lose it all; this is clearly unfair.

The questions regarding multi-territory licensing are far more complicated, and I suspect answers to those depend largely on whether you're someone trying to clear rights for reuse, someone trying to protect your control over your latest blockbuster's markets, or someone trying to make a living as a creative person. The first of those clearly wants to buy one license rather than dozens. The second wants to sell dozens of licenses rather than one (unless it's for a really BIG sum of money). The third, who is probably part of the "Long Tail" mentioned in the question, may be very suspicious of any regime that turns everything he created before 2005 into "back catalogue works" that are subject to a single multi-territory license. Science fiction authors, for example, have long made significant parts of their income by selling their out-of-print back titles for reprint. An old shot in a photographer's long tail may be of no value for 30 years – until suddenly the subject emerges as a Presidential candidate. Any regime that is adopted must be flexible enough to recognize that copyrighted works have values that fluctuate unpredictably over time.

The final set of questions has to do with the law and piracy. Should we all follow France's lead and require ISPs to throw users offline if they're caught file-sharing more than three times? We have said all along that the best antidote to unauthorized copying is to make it easy for people to engage in authorized copying. If you knew, for example, that you could reliably watch the latest episode of The Big Bang Theory (if there ever is one) 24 hours after the US broadcast, would you bother chasing around torrent sites looking for a download that might or might not be complete? Technically, it's nonsense to think that ISPs can reliably distinguish an unauthorized download of copyrighted material from an authorized one; filtering cannot be the answer, no matter how much AT&T wants to kill itself trying. We would also remind the EU of the famed comment of another Old Netizen, John Gilmore: "The Internet perceives censorship as damage, and routes around it."

But of course no consultation can address the real problem, which isn't how to protect copyright online: it's how to encourage creators.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

November 9, 2007

Watching you watching me

A few months ago, a neighbour phoned me and asked if I'd be willing to position a camera on my windowsill. I live at the end of a small dead-end street (or cul-de-sac) that ends in a wall about shoulder height. The railway runs along the far side of the wall; parallel to it and further away is a long street with a row of houses facing the railway. The owners of those houses get upset because graffiti keeps appearing alongside the railway where they can see it and covers flat surfaces such as the side wall of my house. The theory is that kids jump over the wall at the end of my street, just below my office window, either to access the railway and spray paint or to escape after having done so. Therefore, the camera: point it at the wall and watch to see what happens.

The often-quoted number of times the average Londoner is caught on camera per day is scary: 200. (And that was a few years ago; it's probably gone up.) My street is actually one of those few that doesn't have cameras on it. I don't really care about the graffiti; I do, however, prefer to be on good terms with neighbours, even if they're all the way across the tracks. I also do see that it makes sense at least to try to establish whether the wall downstairs is being used as a hurdle in the getaway process. What is the right, privacy-conscious response to make?

I was reminded of this a few days ago when I was handed a copy of Privacy in Camera Networks: A Technical Perspective, a paper published at the end of July. (We at net.wars are nothing if not up-to-date.)

Given the amount of money being spent on CCTV systems, it's absurd how little research there is covering their efficacy, their social impact, or the privacy issues they raise. In this paper, the quartet of authors – Marci Lenore Meingast (UC Berkeley), Sameer Pai (Cornell), Stephen Wicker (Cornell), and Shankar Sastry (UC Berkeley) – are primarily concerned with privacy. They ask a question every democratic government deploying these things should have asked in the first place: how can the camera networks be designed to preserve privacy? For the purposes of preventing crime or terrorism, you don't need to know the identity of the person in the picture. All you want to know is whether that person is pulling out a gun or planting a bomb. For solving crimes after the fact, of course, you want to be able to identify people – but most people would vastly prefer that crimes were prevented, not solved.

The paper cites model legislation (PDF) drawn up by the Constitution Project. Reading it is depressing: so many of the principles in it are such logical, even obvious, derivatives of the principles that democratic governments are supposed to espouse. And yet I can't remember any public discussion of the idea that, for example, all CCTV systems should be accompanied by identification of and contact information for the owner. "These premises are protected by CCTV" signs are everywhere; but they are all anonymous.

Even more depressing is the suggestion that the proposals for all public video surveillance systems should specify what legitimate law enforcement purpose they are intended to achieve and provide a privacy impact assessment. I can't ever remember seeing any of those either. In my own local area, installing CCTV is something politicians boast about when they're seeking (re)election. Look! More cameras! The assumption is that more cameras equals more safety, but evidence to support this presumption is never provided and no one, neither opposing politicians nor local journalists, ever mounts a challenge. I guess we're supposed to think that they care about us because they're spending the money.

The main intention of Meingast, Pai, et al, however, is to look at the technical ways such networks can be built to preserve privacy. They suggest, for example, collecting public input via the Internet (using codes to identify the respondents on whom the cameras will have the greatest impact). They propose an auditing system whereby these systems and their usage are reviewed. As the video streams become digital, they suggest using layers of abstraction of the resulting data to limit what can be identified in a given image. "Information not pertinent to the task in hand," they write hopefully, "can be abstracted out leaving only the necessary information in the image." They go on into more detail about this, along with a lengthy discussion of facial recognition.
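
The paper stays at the architectural level, but as a loose illustration of what abstracting identity out of a frame might mean in practice - this sketch is mine, not the authors' - pixelating a detected region coarsely enough hides who is doing something while leaving visible that something is being done:

    def pixelate_region(frame, top, left, height, width, block=8):
        """Replace each block-sized tile of a grayscale frame (a list of
        rows of ints) with its average value, inside the given region."""
        for by in range(top, top + height, block):
            for bx in range(left, left + width, block):
                ys = range(by, min(by + block, top + height))
                xs = range(bx, min(bx + block, left + width))
                tile = [frame[y][x] for y in ys for x in xs]
                avg = sum(tile) // len(tile)
                for y in ys:
                    for x in xs:
                        frame[y][x] = avg
        return frame

In a real deployment the region would come from a person detector, and the unabstracted stream would be retained, if at all, only under the kind of audited access the authors propose.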

The most depressing thing of all: none of this will ever happen, and for two reasons. First, no government seems to have the slightest qualm of conscience about installing surveillance systems. Second, the mass populace don't seem to care enough to demand these sorts of protections. If these protections are to be put in place at all, it must be done by technologists. They must design these systems so that it's easier to use them in privacy-protecting ways than to use them in privacy-invasive ways. What are the odds?

As for the camera on my windowsill, I told my neighbour after some thought that they could have it there for a maximum of a couple of weeks to establish whether the end of my street was actually being used as an escape route. She said something about getting back to me when something or other happened. Never heard any more about it. As far as I am aware, my street is still unsurveilled.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

August 24, 2007

Game gods

Virtual worlds have been with us for a long time. Depending on who you listen to, they began in 1979, or 1982, or it may have been the shadows on the walls of Plato's cave. We'll go with the University of Essex MUD, on the grounds that its co-writer Richard Bartle can trace its direct influence on today's worlds.

At State of Play this week, it was clear that just as the issues surrounding the Internet in general have changed very little since about 1988, neither have the issues surrounding virtual worlds.

True, the stakes are higher now and, as Professor Yee Fen Lim noted, when real money starts to be involved people become protective.

Level 70 warrior accounts on World of Warcraft go for as little as $10 (though your level number cannot disguise your complete newbieness), but the unique magic sword you won in a quest may go for much more. The best-known pending case is Bragg versus Second Life over virtual property the world's owners confiscated when they realized that Bragg was taking advantage of a loophole in their system to buy "land" at exceptionally cheap prices. Lim had an interesting take on the Bragg case: as a legal concept, she argued, property is a right of control, even though Linden Labs itself defines its virtual property as rental of a processor. As computer science that's fine, but it's not law. Otherwise, she said, "Property is mere illusion."

Ultimately, the issues all come down to this: who owns the user experience? In subscription gaming worlds, the owners tend to keep very tight control of everything – they claim ownership in all intellectual property in the world, limit users' ability to create their own content, and block the sale of cheats as much as possible. In a free-form world like Second Life which may host games but is itself a platform rather than a game, users are much freer to do what they want but the EULAs or Terms of Service may be just as unfair.

Ultimately, no matter what the agreement says, today's privately owned virtual worlds all function under the same reality: the game gods can pull the plug at any time. They own and control the servers. Possession is nine-tenths of the law, and all that. Until someone implements open source world software on a P2P platform, this will always be the way. Linden Labs says, for what it's worth, that its long-term intention is to open-source its platform so that anyone may set up a world. This, too, has been done before, with The Palace.

One consequence of this is that there is no such thing as virtual privacy, a topic that everyone is aware of but no one's talking about. The piecemeal nature of the Net means that your friend's IRC channel doesn't know anything about your Web use, and Amazon.com doesn't track what you do on eBay. But virtual worlds log everything. If you buy a new shirt at a shop and then fly to a distant island to have sex in it, all that is logged. (Just try to ensure the shirt doesn't look like a child's shirt and you don't get into litigation over who owns the island…)

There are, as scholars say, legitimate reasons. Logging everything that happens is important in helping game developers pinpoint the source of crashes and eliminate bugs. Logs help settle disputes over who did what to whose magic sword. And in a court case, they may be important evidence (although how you can ensure that the logs haven't been adjusted to suit the virtual world provider, who is usually one of the parties to the litigation, I don't know).

As long as you think of virtual worlds as games, maybe this isn't that big a problem. After all, no one is forced to spend half their waking hours killing enough monsters in World of Warcraft to join a guild for a six-hour quest.

But something like Second Life aspires to be a lot more than that. The world is adding voice communication, which will be interesting: if you have to use your real voice, the relative anonymity conferred by the synthetic world is gone. Quite apart from bandwidth demands (lag is the bane of every SLer's existence), exploring what virtual life is like in the opposite gender isn't going to work. They're going to need voice synthesizers.

Much of the law in this area is coming out of Asia, where massively multi-player online games took off so early and with such ferocity that, according to Judge Unggi Yoon, in a recent case a member of a losing team in one such game ran to the café where the winning team was playing and physically battered one of its members. Yoon, who explained some of the new laws, is an experienced online gamer, all the way back to playing Ultima Online in middle school. In his country, South Korea, a law has recently come into force taxing virtual world transactions (it works like a VAT threshold – under $100 a month you don't owe anything). For Westerners, who are used to the idea that we make laws and export them rather than the other way around, this is quite a reality shift.

August 10, 2007

Wall of sheep

Last week at Defcon, my IM ID – and just enough of the password to show they knew what it was – appeared on the Wall of Sheep, the screen projection of user IDs, partial passwords, and activities captured by the sniffer that inevitably runs throughout the conference.

It's not that I forgot the sniffer was there, or that there's a risk in logging onto an IM client unencrypted over a Wi-Fi hot spot (at a hacker conference!), but that I had forgotten my client was set to log in automatically whenever it could. Easily done.

It's strange to remember now that once upon a time this crowd – or at least, type of crowd – was considered the last word in electronic evil. In 1995 the capture of Kevin Mitnick made headlines everywhere because he was supposed to be the baddest hacker ever. Yet other than gaining online access and free phone calls, Mitnick is not known to have ever profited from his crimes – he didn't sell copied source code to its owners' competitors, and he didn't rob bank accounts. We would be grateful – really grateful – if Mitnick were the worst thing we had to deal with online now.

Last night, the House of Lords Science and Technology Committee released its report on Personal Internet Security. It makes grim reading even for someone who's just been to Defcon and Black Hat. The various figures the report quotes, assembled after what seems to have been an excellent information-gathering process (meaning they name-check a lot of people I know and would have picked for them to talk to), are pretty depressing. Phishing has cost US banks around $2 billion, and although the UK lags well behind – £33.5 million in bank fraud in 2006 – here, too, it's on the rise. Team Cymru found (PDF) that on IRC channels dedicated to the underground you could buy credit card account information for between $1 (basic information on a US account) and $50 (full information for a UK account); $1,599,335.80 worth of accounts was for sale on a single IRC channel in one day. Those are among the few things that can be accurately measured: the police don't keep figures breaking out crimes committed electronically; there are no good figures on the scale of identity theft (interesting, since this is one of the things the government has claimed the ID card will guard against); and no one's really sure how many personal computers are infected with some form of botnet software – and available for control at four cents each.

The House of Lords recommendations could be summed up as "the government needs to do more". Most of them are unexceptional: fund more research into IT security, keep better statistics. Some measures will be welcomed by a lot of us: make banks responsible for losses resulting from electronic fraud (instead of allowing them to shift the liability onto consumers and merchants); criminalize the sale or purchase of botnet "services" and require notification of data breaches. (Now I know someone is going to want to say, "If you outlaw botnets, only outlaws will have botnets", but honestly, what legitimate uses are there for botnets? The trick is in defining them to include zombie PCs generating spam and exclude PCs intentionally joined to grids folding proteins.)

Streamlined Web-based reporting for "e-crime" could only be a good thing. Since the National High-Tech Crime Unit was folded into the Serious Organised Crime Agency there is no easy way for a member of the public to report online crime. Bringing in a central police e-crime unit would also help. The various kite mark schemes – for secure Internet services and so on – seem harmless but irrelevant.

The more contentious recommendations revolve around the idea that we the people need to be protected, and that it's no longer realistic to lay the burden of Internet security on individual computer users. I've said for years that ISPs should do more to stop spam (or "bad traffic") from exiting their systems; this report agrees with that idea. There will likely be a lot of industry ink spilled over the idea of making hardware and software vendors liable if "negligence can be demonstrated". What does "vendor" mean in the context of the Internet, where people decide to download software on a whim? What does it mean for open source? If I buy a copy of Red Hat Linux with a year's software updates, that company's position as a vendor is clear enough. But if I download Ubuntu and install it myself?

Finally, you have to twitch a bit when you read, "This may well require reduced adherence to the 'end-to-end' principle." That is the principle that holds that the network should carry only traffic, and that services and applications sit at the end points. The Internet's many experiments and innovations are due to that principle.

The report's basic claim is this: criminals are increasingly rampant and increasingly rapacious on the Internet, and if this continues, people will catastrophically lose confidence in it. So we must make the Internet safer. Couldn't we just make people safer by letting them stop using it? That's what people tell you to do when you're going to Defcon.

June 15, 2007

Six degrees of defamation

We used to speculate about the future of free speech on the Internet if every country got to impose its own set of cultural quirks and censorship dreams on it. The lowest common denominator would win – probably Singapore.

We forgot Canada. Michael Geist, the Canada Research Chair of Internet and E-Commerce Law at the University of Ottawa, is being sued for defamation by Wayne Crookes, a Vancouver businessman (it says here). You might think that Geist, who doubles as a columnist for the Toronto Star (so enlightened, a newspaper with a technology law column!), had slipped up and said something unfortunate in one of his public pronouncements. But no. Geist is part of an apparently unlimited number of targets that have linked to other sites that have linked to sites that allegedly contained defamatory postings.

In Geist's words on his blog at the end of May, "I'm reportedly being sued for maintaining a blogroll that links to a site that links to a site that contains some allegedly defamatory third party comments." (Geist has since been served.)

Crookes is also suing Yahoo!, MySpace, and Wikipedia. (If you followed the link to the Wikipedia stub identifying Wayne Crookes, now you know why it's so short. Wikipedia's own logs, searchable via Google, show that it replaced the previous entry.) Plus P2Pnet, OpenPolitics.ca, DomainsByProxy, and Google. In fact, it's arguable that if Crookes isn't suing you, your Net presence is so insignificant that you should put your head in a bucket.

One of the things about a very young medium – as the Net still is – is that the legal precedents about how it operates may be set by otherwise obscure individuals. In Britain, one of the key cases determining the liability of ISPs for material they distribute was 1999's Laurence Godfrey vs Demon Internet. Godfrey was, or is, an otherwise unremarkable British physics lecturer who was working in Canada when he discovered Usenet; his claim to fame (see for example the Net.Legends FAQ) is a series of libel suits he launched to protect his reputation after a public dispute whose details few now remember or understand. In 2000 Demon settled the case, paying Godfrey £15,000 plus legal costs. And thus were today's notice and takedown rules forged.

The truly notable thing about Godfrey's case against Demon was that Demon was not Godfrey's ISP, nor was it the ISP used by the poster whose 1997 contributions to soc.culture.thai were at issue. Demon was merely the largest ISP in Britain that carried the posting, along with the rest of the newsgroup, on its servers. The case therefore is one of a string of cases that loosely circled a single issue: the liability of service providers for the material they host. US courts decided in 1991, in Cubby vs CompuServe, that an online service provider was more like a bookstore than a publisher. But under the Digital Millennium Copyright Act it has become alarmingly easy to frighten individuals and service providers into taking down material based on an official-looking lawyer's letter. (The latest target, apparently, is guitar tablature, which, speaking as a musician myself, I think is shameful.)

But the more important underlying thread is the attempt to keep widening the circle of liability. In Cubby, at least the material at issue appeared on the Journalism Forum which, though independently operated, was part of CompuServe's service. That particular judgement would not have helped any British service provider: in Britain, bookstores, as well as publishers, can be held responsible for libels that appear in the books they sell, a fact that didn't help Demon in the Godfrey case.

In the US, the next step was the 2600 DeCSS case (formally known as Universal City vs Reimerdes), which covered not only posting copies of the DVD-decrypting software but also linking to sites that had it available. This, of course, was a copyright infringement case, not a libel case; with respect to libel the relevant law seems to be, of all things, the 1996 Communications Decency Act, which allocated sole responsibility to the original author. Google itself has already won at least one lawsuit over including allegedly defamatory material in its search results.

But legally Canada is more like Britain than like the US, so the notion of making service providers responsible may be a more comfortable one. In his column on the subject, Geist argues that if Crookes's suits are successful, Canadian free speech will be severely curtailed. Who would dare run a wiki or allow comments on their blog if they are to be held to a standard that makes them liable for everything posted there? Who would even dare put a link to a third-party site on a Web site or in a blogroll if they are to be held liable for all the content not only on that site but on all the sites it links to? Especially since Crookes's claim against Wikimedia is not that the site failed to remove the offending articles when asked, but that it failed to monitor itself proactively to ensure that the statements did not reappear.

The entire country may have to emigrate virtually. Are you now, or have you ever been, Canadian?

October 20, 2006

Spam, spam, spam, and spam

Illinois is a fine state. It is the Land of Lincoln. It is the birthplace of such well-known Americans as Roger Ebert and Ronald Reagan, and the adopted home of Oprah Winfrey. It has a baseball team so famous that even I know it's called the Chicago Cubs. The philosopher John Dewey taught in Illinois; so did the nuclear physicist Enrico Fermi. The famous pro-evolution lawyer Clarence Darrow practiced there, Mormon church founder Joseph Smith led his flock there, and Frank Lloyd Wright built there.

I say all this because I don't want anyone to think I don't like or respect Illinois or the intelligence and honor of its judges, including those of Charles Kocoras, who awarded $11.7 million in damages to e360Insight, a company branded a spammer by the Spamhaus Project.

The story has been percolating for a while now, but is reasonably simple. e360Insight says it's not a bad spammer guy but a good opt-in marketing guy; Spamhaus first said the Illinois court didn't have jurisdiction over a British company with no offices, staff, or operations in the US, then decided to appeal against the court's $11.7 million judgement. e360Insight filed a motion asking the court to have ICANN and/or Spamhaus's domain registrar, the Canadian company Tucows, remove Spamhaus's domain from the Net. The judge refused to grant this request, partly because doing so would cut off Spamhaus's lawful activities, not just those in contravention of the order he issued against Spamhaus. And a good time is being had by all the lawyers.

The case raises so many problems you almost don't know where to start. For one thing, there's the arms race that is spam and anti-spam. This lawsuit escalates it, in that if you can't get rid of an anti-spammer through DDoS attacks, well, hey, bankrupt them through lawsuits.

Spam, as we know, is a terrible, intractable problem that has broken email, and is trying to break blogs, instant messaging, online chat, and, soon, VOIP. (The net.wars blog, this week, has had hundreds of spam comments, all appearing to come from various Gmail addresses, all landing in my inbox – breaking both blogs and email in one easy, low-cost plan.) The breakage takes two forms. One is the spam itself – up to 90 percent of all email. The second is the steps people take to stop it. No one can use email with any certainty now.

Some have argued that real-time blacklists are censorship. I don't think it's fair to invoke the specter of Joseph McCarthy. For one thing, using these blacklists is voluntary. No one is forced to subscribe, not even free Webmail users. That single fact ought to be the biggest protection against abuse. For another, spam in the volumes now being sent is effectively censorship in itself: it fills email boxes, often obscuring and sometimes entirely blocking wanted email. The fact that most of it either is a scam or advertises something illegal is irrelevant; what defines spam, I have long argued, is the behavior that produces it. I have also argued that the most effective way to put spammers out of business is to lean on the credit card companies to pull their authorisations.
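
For readers who have never watched one in action, the mechanics are simple: a mail server consults a DNS-based blacklist by reversing the octets of the sending IP address and looking the result up in the list's DNS zone; an answer means "listed", no answer means "not listed". Here is a minimal sketch in Python – hypothetical code, and note that Spamhaus's own terms govern who may query its servers:

```python
# Sketch of a DNSBL lookup, the mechanism mail servers use to consult
# a blacklist such as Spamhaus's SBL. To ask about 192.0.2.99, reverse
# the octets and resolve 99.2.0.192.sbl.spamhaus.org: an A record
# means "listed"; NXDOMAIN means "not listed".
import socket

def is_listed(ip: str, zone: str = "sbl.spamhaus.org") -> bool:
    query = ".".join(reversed(ip.split("."))) + "." + zone
    try:
        socket.gethostbyname(query)
        return True   # any answer: the address is on the list
    except socket.gaierror:
        return False  # no record: the address is not listed

if __name__ == "__main__":
    # 127.0.0.2 is the conventional test entry most DNSBLs include.
    print(is_listed("127.0.0.2"))
```

The detail worth noticing is where the decision sits: Spamhaus publishes data, and each receiving mail server chooses whether to look it up and what to do with the answer.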

Mail servers are private property; no one has the automatic right to expect mine to receive unwanted email, just as I am not obliged to speak to a telemarketer who phones during dinner.

That does not mean all spambusters are perfect. Spamhaus provides a valuable public service. But not all anti-spammers are sane; in 2004 journalist Brian McWilliams made a reasonable case in his book Spam Kings that some anti-spammers can be as obsessive as the spammers they chase.

The question that's dominated a lot of the Spamhaus coverage is whether an Illinois court has jurisdiction over a UK-based company with no offices or staff in the US. In the increasingly connected world we live in, there are going to be a lot of these jurisdictional questions. The first one I remember – the 1996 case United States vs. Thomas – came down in favor of the notion that Tennessee could impose its community decency standards on a bulletin board system in California. It may be regrettable – but consumers are eager enough for their courts to have jurisdiction in case of fraud. Spamhaus is arguably as much in business in the US as any foreign organisation whose products are bought or used in the US. Ultimately, "Come here and say that" just isn't much of a legal case.

The really tricky and disturbing question is: how should blacklists operate in future? Publicly listing the spammers whose mail is being blocked is an important – even vital – way of keeping blacklists honest. If you know what's being blocked and can take steps to correct it, it's not censorship. But publishing those lists makes legal action against spam blockers of all types – blacklists, filtering software, you name it – easier.

Spammers themselves, however, should not rejoice if Spamhaus goes down. Spam has broken email; that's not news. But if Spamhaus goes and we actually receive all the spam it's been weeding out for us, the flood will be so great that spam will finally break spam itself.

November 9, 2001

Save the cookie

You would think that by this time in the Internet's history we would have reached the point where the politicians making laws would have learned a thing or two about how it works, and would therefore not be proposing (and passing) quite such stupid laws as they used to. Apparently not.

Somehow, tacked onto an otherwise sensible bill aimed at protecting consumer privacy are provisions requiring Web sites to use cookies only on an opt-in basis. Consultation to remove this bit of idiocy closes in mid-November.

The offending bit appears in the second report on the "proposal for a European Parliament and Council directive concerning the processing of personal data and the protection of privacy in the electronic communications sector" (PDF), and is labelled "amendment 26 to article 5, paragraph 2a". What seems to be upsetting the EC is that cookies may enter a user's computer without that user's specific permission.

Well, that's true. On the other hand, it's pretty easy to set any browser to alert you whenever a site wants to send you a cookie - and have fun browsing like that, because you'll be interrupted about every two and a half seconds. Microsoft's Internet Explorer 6 lets you opt out of cookies entirely.

A lot of people are oddly paranoid about cookies, which are, like the Earth in the Hitchhiker's Guide to the Galaxy, mostly harmless. At heart, what cookies do is give Web sites persistent memory. Contrary to what many people think, a connection to a Web site is not continuous: you request a page, and then you request another page, and without cookies the Web site has no way to connect the two transactions.

Cookies are what make it possible to build up an order in a shopping cart or personalize a site so it remembers your ID and password or knows you're interested in technology news and not farming. These uses do not invade privacy.
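
To make the statelessness concrete, here is a minimal sketch in Python – hypothetical code, with invented names and port number, not any real site's implementation – of the whole mechanism: the server tags a browser's first request with a Set-Cookie header, and the cookie coming back is the only thing connecting later requests to it.

```python
# Minimal sketch of cookie-based "persistent memory" (hypothetical).
# Each HTTP request arrives on its own; the Set-Cookie header is what
# lets the server recognize the same browser next time.
import uuid
from http.server import BaseHTTPRequestHandler, HTTPServer

visits = {}  # session id -> number of requests seen from that browser

class CookieDemo(BaseHTTPRequestHandler):
    def do_GET(self):
        # Recover the "session" cookie, if the browser sent one back.
        session = None
        for part in (self.headers.get("Cookie") or "").split(";"):
            name, _, value = part.strip().partition("=")
            if name == "session":
                session = value

        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        if session not in visits:
            # First request we've seen: tag the browser with an ID.
            session = uuid.uuid4().hex
            visits[session] = 0
            self.send_header("Set-Cookie", f"session={session}")
        visits[session] += 1
        self.end_headers()
        self.wfile.write(
            f"Requests from this browser: {visits[session]}\n".encode())

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), CookieDemo).serve_forever()
```

Delete the Set-Cookie line and every request looks like a first visit – which is exactly why the shopping cart empties itself when cookies are refused.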

There are, of course, plenty of things you can do with cookies that are not harmless. Take Web bugs. These hidden graphics, usually 1x1 pixels, enable third parties to track what you do on the Web and harvest all sorts of information about you, your computer, and the browser you use. Yet even privacy-protecting sites like the Anonymizer depend on cookies.
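
The Web bug puts the same stateless dance to a different use. Here is an equally hypothetical sketch – invented host, port, and cookie value – of the tracker's side of the bargain: serve a transparent 1x1 GIF, and log whatever the browser volunteers along with the request.

```python
# Hypothetical sketch of a "web bug" tracker. Any page embedding
# <img src="http://tracker.example:8080/bug.gif"> makes the browser
# report in here, carrying headers that reveal which page was viewed
# and with which browser.
from http.server import BaseHTTPRequestHandler, HTTPServer

# The smallest transparent 1x1 GIF, byte for byte.
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00"
         b"!\xf9\x04\x01\x00\x00\x00\x00"
         b",\x00\x00\x00\x00\x01\x00\x01\x00\x00\x02\x02D\x01\x00;")

class TrackingPixel(BaseHTTPRequestHandler):
    def do_GET(self):
        # The harvest: the page that embedded the image, the browser,
        # and any identifying cookie set on an earlier visit.
        print("page:", self.headers.get("Referer"),
              "| browser:", self.headers.get("User-Agent"),
              "| cookie:", self.headers.get("Cookie"))
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        if "visitor=" not in (self.headers.get("Cookie") or ""):
            # Tag the visitor; a real tracker would generate a unique
            # ID per browser rather than this fixed placeholder.
            self.send_header("Set-Cookie", "visitor=abc123")
        self.end_headers()
        self.wfile.write(PIXEL)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), TrackingPixel).serve_forever()
```

Because the same pixel can be embedded on any number of sites, that one cookie lets a single company stitch together a browsing history across all of them – behavior worth regulating, not an inherent property of the cookie.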

Similarly, the advertising agency DoubleClick has been under severe fire for the way it tracks users from site to site, even though it says that the data are anonymized and the purpose is merely to ensure that the ads you see are targeted to your interests rather than random.

MEPs who want to protect consumer privacy, therefore, should not be looking at the technology itself but at how it is used; they should be attempting to regulate the behavior that invades privacy. To be fair, the report mentions all these abuses. The problem is simply that the clause is overbroad and needs revision. Something along the lines of requiring sites to explain in their privacy policies how they use cookies, plus a prohibition on actually spying on users, would do nicely.

The point is to get at what people do with technology, not outlaw the technology itself.

We've had similar problems in the US, most recently and notably with the Digital Millennium Copyright Act, which also tends to criminalize technology rather than behaviour. This is the crevasse that Dmitry Sklyarov fell into. For those who haven't been following the story, Sklyarov, on behalf of his Russian employer, Elcomsoft, wrote a routine that takes Adobe eBooks and converts them into standard PDFs. Yes, that makes them copiable. But it also makes it possible for people who have bought eBooks to back them up, run them through text-to-speech software (indispensable for the blind), or read them on a laptop or PDA after downloading them onto their desktop machine.

In the world of physical books, we would consider these perfectly reasonable things to do. But in the world of digital media these actions are what rightsholders most fear. Accordingly, the DMCA criminalizes creating and distributing circumvention tools. As opponents of the act pointed out at the time, that could include anything from scissors and a bottle of Tippex to sophisticated encryption-cracking software. The fuss over DeCSS, which removes regional coding from DVDs, is another case in point. While the movie studios argue that DeCSS is wholly intended to enable people to copy DVDs illegally, the original purpose was to let Linux users play the DVDs they'd paid for on their computers, for which no one provides a working commercial software player.

The Internet Advertising Bureau has, of course, gone all out to save the cookie. It is certainly true, as the IAB says, that the opt-in requirement would impair electronic commerce in Europe, the more so because it would be impossible to impose the same restrictions on non-EU businesses.

If MEPs really want to protect consumer privacy, here's what they should do. First of all, learn something about what they are doing. Second of all, focus on behaviour.