"Would you rather have privacy or the right to privacy?" Mireille Hildebrandt asked on Wednesday morning. She coupled it with this analogy: "Would you rather have food or the right to food?"
The second one's easy: how hungry am I?
Her questions were posed on the first day of the Computers, Privacy, and Data Protection conference, the Brussels event that attracts privacy theorists and regulators from all over Europe (mostly) and the rest of the world (somewhat). It's often described as the European equivalent of Computers, Freedom, and Privacy - but the difference between "data protection" and "freedom" seems to be the difference between activism as expressed by NGO activists and campaigners and activism as expressed by earnestly serious people trying to construct an international regulatory system within government.
Americans-who-are-not-privacy-advocates have a tendency to claim that the US has just as strong - no! stronger! because everything in America is bigger! - privacy protections as Europe does. We can blame this misconception, Bob Gellman noted in a question, on President Reagan. And it cannot possibly be true: if it were, why would big, data-driven US companies be pushing so hard to derail data protection reform?
Two key themes dominated the week: the details of and prospects for getting data protection reform through on schedule, and considering the appropriate response to pervasive global mass surveillance. Peter Hustinx, the departing European data protection supervisor, wrapped up the conference (for the last time) with a burst of optimism: "I strongly believe that change and the outcome of this reform is unavoidable," he said, "and that the review is far from dead even though some commentators wish it were."
A panel on Wednesday morning was much more divided. The Polish regulator, Wojciech Wiewiórowski, outlined worse and worst scenarios of non-passage; others remained optimistic. So the best conclusion one can draw is that they don't know. In itself, that's deeply disturbing because of the week's other main theme, on which everyone was in agreement. If seven months of revelations of mass surveillance via the giant spying-as-a-service platform much of the Internet has become can't get the EU solidly behind increased privacy protection, what can?
Traditionally, the US approach is driven by consumer protection; the EU's is driven by data protection. But the underlying point these days is broader than Warren and Brandeis's 1890 citation of Judge Cooley's right to be let alone: it is about creating a fair balance of power between citizens/consumers and the governments/large companies we all have to deal with. A government or a company like Google does not have to become evil to squash someone's rights like a bug; it may fail at individual rights simply because each individual is too small for its in-house microscope.
Except in its absence, privacy is so slippery to grasp that everyone tries to find workable analogies. Christophe Lazaro tried driving a car, since both that and consent to data use operate in a particular environment, have external social consequences, and become routine enough to take place in only partial awareness. You can extend the analogy as he did, and ask whether insurance to distribute the risk of poor choices should be mandatory. And there it fails: the consequences of driving badly are relatively predictable and bounded by physics. The consequences of mistaken consent - or violated consent - are unknowable and may not even be finite.
This is especially true because of the findings of Meg Leta Ambrose, who studies digital decay. The Internet's memory is like William Gibson's future: unevenly distributed. She finds that only 10 to 15 percent of Web content lasts a year. The half-life of sites is 556 days, of URLs two months, of content two days. Put that in your right to be forgotten.
That's why Hildebrandt's question perfectly captured the zeitgeist. The general rule is that technology is fast, but law is slow. In this business, Phil Zimmermann is the archetype: working on a PC in 1991, he wrote and released PGP, pre-empting the possibility of a legal ban on domestic use of strong encryption. Two generations (counting idiosyncratically) of privacy-enhancing technologies later, and what do we have? SSL (hacked by the NSA). Encrypted email (shut down rather than grant the back-door access law enforcement demanded). Tor (hacked by the NSA, albeit with difficulty). And so on.
The 1995 data protection principles have held up remarkably well considering how visibly and how profoundly "data" has changed in the interval. Yet the big change snuck up on us unawares - or so it occurred to me during a panel on social networks. The line that was easily drawn between data controllers and data subjects since 1995 has blurred exactly like the copyright industries' imaginary line between consumers and creators.
Granted, Zimmermann's PGP (and its GPG counterpart) still function as well as they ever did. But they protect content, not today's bigger threat, metadata. Technology can enable us to claim our privacy, but only temporarily and partially, until the technology is cracked or bypassed. Technology wins battles. To win the war, we need law as well.
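The content-versus-metadata point can be made concrete with a small sketch. The addresses and subject line below are made up, and `fake_encrypt` is a stand-in for real PGP/GPG encryption - the point is only to show which fields of a message stay readable to every relay even when the body is protected:

```python
# Sketch: even with an end-to-end-encrypted body, message metadata stays
# visible to any server along the route. fake_encrypt is NOT real crypto;
# it merely marks where PGP/GPG would operate.
import base64
from email.message import EmailMessage

def fake_encrypt(plaintext: str) -> str:
    # Stand-in for PGP/GPG: real public-key encryption would go here.
    return base64.b64encode(plaintext.encode()).decode()

msg = EmailMessage()
msg["From"] = "alice@example.org"     # hypothetical sender
msg["To"] = "bob@example.net"         # hypothetical recipient
msg["Subject"] = "Meeting"            # subject lines are metadata, not content
msg.set_content(fake_encrypt("See you at the usual place at 9."))

# The body is now opaque, but every header - who, to whom, about what -
# still travels in plaintext.
for header, value in msg.items():
    print(header, "->", value)
```

Who talked to whom, when, and how often is exactly the data the NSA revelations concerned, and no amount of body encryption hides it.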
Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.