Main

January 13, 2023

The ad delusion

The fundamental lie underlying the advertising industry is that people can be made to like ads. People inside the industry sometimes believe this to a delusional degree - at an event some years ago, for example, I remember a Facebook representative suggesting that correctly targeted ads could be even more compelling to the site's users than *pictures of their grandchildren*. As if.

Apple's design change last year to bar apps from tracking its users unless said users specifically opted in has shown the reality of this. As of April 2022, only 25% of users had opted in. Meanwhile, Meta estimates that this change cost it $10 billion in revenues in 2022.

Fair to remember, though, that Apple itself still appears to track users: the company is facing two class action suits after Gizmodo showed that Apple goes on tracking users even when their privacy settings are set to disable tracking completely.

This week, Ireland's Data Protection Commissioner issued Meta with a fine of €390 million and a ruling, forced on it by the European Data Protection Board, to the effect that the company cannot claim that requiring users to agree to its lengthy terms and conditions - which include a clause allowing it to serve ads based on their personal data - constitutes a "contract". The DPC, which wanted to rule in Meta's favor, is apparently appealing this ruling, but it's consistent with what most of us perceive to be a core principle of the General Data Protection Regulation - that is, that companies can't claim consent as a legal basis for using personal data if users haven't actively and specifically opted in.

This principle matters because of the crucial importance of defaults. As research has repeatedly shown, as many as 95% of users never change the default settings in the software and devices they use. Tech companies know and exploit this.

Meta has three months to bring its data processing operations into compliance. Its "data processing operations" are, of course, better known as Facebook, Instagram, and (presumably) WhatsApp. As a friend has often observed, how much less appealing they would sound if Meta called them that rather than using their names, and accurately described "adding a friend" as "adding a link in the database".

At the Guardian, Dan Milmo reports that the advertising revenue at stake amounts to 25% of Meta's total, or $19 billion in 2021. Meta says it will appeal against the decision, that in any case noyb's interpretation is wrong, and that the decision relates "only to which legal basis" Meta uses for "certain advertising". And, it said, carefully, "Advertisers can continue to use our platforms to reach potential customers, grow their business and create new markets." In other words, like the repeatedly failing efforts to stretch GDPR to enable data transfers between the EU and US, Meta thinks it can make a deal.

At the International Association of Privacy Professionals blog, Jennifer Bryant highlights the disagreement between the EDPB and the Irish DPC, which argued that Meta was not relying on user consent as the legal basis for processing personal data - the DPC was willing to accept advertising as part of the "personalized" service Instagram promises. The key question: can Meta find a different legal basis that will pass muster not only with GDPR but with the Digital Markets Act, which comes into force on May 2? Meta itself, in a blog post, describes personalized ads as a "necessary and essential part" of the personalized services Facebook and Instagram provide - and complains about regulatory uncertainty. Certainty, if Meta really wanted it, isn't so hard to achieve: comply with the most restrictive ruling and the most conservative interpretation of the law, and be done with it.

At Wired, Morgan Meaker argues that the threat to Meta's business model posed by the EDPB's ruling may be existential for more than just that one company. *Every* Silicon Valley company depends on the "contract" we all "sign" (that is, the terms and conditions we don't read) when we open our accounts as a legal basis for whatever they want to do with our data. If the business model is illegal for Meta, it's illegal for all of them. The death of surveillance capitalism has begun, the headline suggests optimistically.

The reality is that most people's tolerance for ads is directly proportional to their ability to ignore them. We've all learned to accept some level of advertising as the price of "free" content. The question here is whether we have to accept being exploited as well. No amount of "relevance" makes ads any less intrusive for me. But that's a separate issue from the data exploitation none of us intentionally sign up for.

The "1984" Apple Super Bowl ad (YouTube) encapsulates the irony of our present situation: the price of viewing football at the time, it promised a new age in which information technology empowered us. Now we're in the ad's future, and what we got was an age in which information technology has become something that is done to us. This ruling is the next step in the battle to reverse that. It won't be enough by itself.

Illustrations: Image of Facebook logo.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Mastodon or Twitter.

January 5, 2023

Resolution

For the last five years a laptop has been whining loudly in my living room. It hosts my mail server.

I know: who has their own mail server any more? Even major universities famed for their technological leadership now outsource to Google and Microsoft.

In 2003, when I originally set it up, lots of geeky friends had them. I wanted my email to come to the same domain as my website, which by then was already eight years old. I wanted better control of spam than I was getting with the email addresses I was using at the time. I wanted to consolidate the many email addresses I had accrued through years of technology reporting. And I wanted to be able to create multiple mailboxes at that domain for different purposes, so I could segregate the unreadable volume of press releases from personal email (and use a hidden, unknown address for sensitive stuff, like banking). At the time, I had that functionality via an address on the now-defunct Demon Internet, but Demon had become a large company in its ten years of existence, and you never knew...

In 2015, when Hillary Clinton came under fire for running her own mail server, I explained all this for Scientific American. The major benefit of doing it yourself, I seem to recall concluding at the time, was one Clinton's position barred her from gaining: the knowledge that if someone wants your complete historical archive they can't get it by cutting a secret deal with your technology supplier.

For about the first ten years, running my own mail server was a reasonably delightful experience. Being able to use IMAP to synchronize mail across multiple machines or log into webmail on my machine hanging at the end of my home broadband made me feel geekishly powerful, like I owned at least this tiny piece of the world. The price seemed relatively modest: two days of pain every couple of years to update and upgrade it. And the days of pain weren't that bad; I at least felt I was gaining useful experience in the process.
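For anyone who has never run one: here is a minimal sketch of what that IMAP synchronization looks like from a client's side, using Python's standard imaplib. The host, account, and password are placeholders rather than my real setup; the point is that any machine pointed at the same server sees the same mailboxes and message state.

    # Minimal IMAP client sketch. Host, user, and password are placeholders;
    # every client that logs in like this sees the same server-side mailboxes,
    # which is what makes multi-machine synchronization work.
    import imaplib

    HOST = "mail.example.com"      # hypothetical server name
    USER = "wendyg"                # hypothetical account
    PASSWORD = "not-a-real-password"

    with imaplib.IMAP4_SSL(HOST) as imap:
        imap.login(USER, PASSWORD)
        status, mailboxes = imap.list()        # folders live on the server
        print(status, mailboxes[:3])
        imap.select("INBOX", readonly=True)    # so every client sees the same INBOX
        status, data = imap.search(None, "UNSEEN")
        print("unread messages:", len(data[0].split()))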

Around me, the technological world changed. Gmail and other services got really good at spam control. The same friends with mail servers first began using Gmail for mailing lists, and then, eventually, for most things.

And then somehow, probably around six or seven years ago, the manageable two days of pain crossed into "I don' wanna" territory. Part of the problem was deciding whether to stick with Windows as the operating system or shift to Linux. Shifting to Linux required a more complicated and less familiar installation process as well as some extra difficulty in transferring the old data files. Staying with Windows, however, meant either sticking with an old version heading for obsolescence or paying to upgrade to a new version I didn't really want and seemed likely to bring its own problems. I dithered.

I dithered for a long time.

Meanwhile, dictionary attacks on that server became increasingly relentless. This is why the laptop is whining: its limited processing power can't keep up with each new barrage of some hacker script trying endless user names to find the valid ones.
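For the curious, this is roughly what a barrage looks like from the server's side, and the kind of crude countermeasure a one-person operation ends up improvising. The log lines and threshold below are invented for illustration - real mail-server logs differ - and a serious setup would reach for something like fail2ban rather than a hand-rolled script.

    # Toy dictionary-attack detector: count failed logins per source address
    # and flag the ones worth blocking. The log format is invented.
    from collections import Counter
    import re

    SAMPLE_LOG = """\
    auth failed for user=admin from 203.0.113.7
    auth failed for user=test from 203.0.113.7
    auth failed for user=info from 203.0.113.7
    auth failed for user=sales from 203.0.113.7
    auth ok for user=wendyg from 198.51.100.20
    auth failed for user=postmaster from 203.0.113.7
    """

    FAILED = re.compile(r"auth failed for user=\S+ from (\d+\.\d+\.\d+\.\d+)")
    THRESHOLD = 5   # arbitrary: five failures and the address gets barred

    failures = Counter(m.group(1) for m in FAILED.finditer(SAMPLE_LOG))
    for ip, count in failures.items():
        if count >= THRESHOLD:
            print(f"block {ip}: {count} failed logins")   # feed this to a firewall rule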

There have been weirder attacks. One, whose details I have mercifully repressed, overwhelmed the server entirely; I was only able to stop it by barring a succession of Internet addresses.

Things broke and didn't get repaired, awaiting the upgrade that never happened. At some point, I lost the ability to log in remotely via the web. I'm fairly sure the cause was that I changed a setting and not some hacker attack, but I've never been able to locate and fix it. This added to the dither of upgrading, as did the discovery that my server software appeared to have been bought by a Russian company.

Through all this, the outside world became more hostile to small servers, as part of efforts to improve spam blocking and security against attacks. Delaying upgrading the server has also meant not keeping up well enough with new protocols and preventions as they've developed. Administrators I deal with began warning me about resulting incompatibilities. Gmail routinely dropped my email to friends into spam folders. I suspect this kind of concentration will be the future of the Mastodon Fediverse if it reaches mainstream use.

The warnings this fall that Britain might face power outages this winter broke the deadlock. I was going to have to switch to hosted email like everyone else. Another bit of unwiring.

I can see already that it will be a great relief not to worry about the increasingly fragile server any more. I can reformat and give away that old laptop and the less old one that was supposed to replace it. I will miss the sense of technological power that having it gave me, but if I'm honest I haven't had that in a long time now. In fact, the server itself seems to want to be put out of its misery: it stopped working a few days before Christmas, and I'm running on a hosted system as a failover. Call it my transitional server.

If I *really* miss it, I suppose I can always set up my own Mastodon instance. How hard can it be, right?


Illustrations: A still from Fritz Lang's 1927 classic, Metropolis, in celebration of its accession into the public domain.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Mastodon or Twitter.


October 28, 2022

MAGICAL, Part 1

"What's that for?" I asked. The question referred to a large screen in front of me, with my newly-captured photograph in the bottom corner. Where was the camera? In the picture, I was trying to spot it.

The British Airways gate attendant at Chicago's O'Hare airport tapped the screen and a big green checkmark appeared.

"Customs." That was all the explanation she offered. It had all happened so fast there was no opportunity to object.

Behind me was an unforgiving line of people waiting to board. Was this a good time to stop to ask:

- What is the specific purpose of collecting my image?

- What legal basis do you have for collecting it?

- Who will be storing the data?

- How long will they keep it?

- Who will they share it with?

- Who is the vendor that makes this system and what are its capabilities?

It was not.

I boarded, tamely, rather than argue with a gate attendant who certainly didn't make the decision to install the system and was unlikely to know much about its details. Plus, we were in the US, where the principles of the data protection law don't really apply - and even if they did, they wouldn't apply at the border - even, it appears, in Illinois, the only US state to have a biometric privacy law.

I *did* know that US Customs and Border Patrol had begun trialing facial recognition in selected airports, beginning in 2017. Long-time readers may remember a net.wars report from the 2013 Biometrics Conference about the MAGICAL [sic] airport, circa 2020, through which passengers flow unimpeded because their face unlocks all. Unless, of course, they're "bad people" who need to be kept out.

I think I even knew - because of Edward Hasbrouck's indefatigable reporting on travel privacy - that at various airports airlines are experimenting with biometric boarding. This process does away entirely with boarding cards; the airline captures biometrics at check-in and uses them to entirely automate the "boarding process" (a favorite bit of airline-speak of the late comedian George Carlin). The linked explanation claims this will be faster because you can have four! automated lanes instead of one human-operated lane. (Presumably then the four lanes merge into a giant pile-up in the single-lane jetway.)

It was nonetheless startling to be confronted with it in person - and with no warning. CBP proposed taking non-US citizens' images in 2020, when none of us were flying, and Hasbrouck wrote earlier this year about the system's use in Seattle. There was, he complained, no signage to explain the system despite the legal requirement to do so, and the airport's website incorrectly claimed that Congress mandated capturing biometrics to identify all arriving and departing international travelers.

According to Biometric Update, as of last February, 32 airports were using facial recognition on departure, and 199 airports were using facial recognition on arrival. In total, 48 million people had their biometrics taken and processed in this way in fiscal 2021. Since the program began in 2018, the number of alleged impostors caught: 46.

"Protecting our nation, one face at a time," CBP calls it.

On its website, British Airways says passengers always have the ability to opt out except where biometrics are required by law. As noted, it all happened too fast. I saw no indication on the ground that opting out was possible, even though notice is required under the Paperwork Reduction Act (1980).

As Hasbrouck says, though, travelers, especially international travelers and even more so international travelers outside their home countries, go through so many procedures at airports that they have little way to know which are required by law and which are optional, and arguing may get you grounded.

He also warns that the system I encountered is only the beginning. "There is an explicit intention worldwide that's already decided that this is the new normal. All new airports will be designed and built with facial recognition built into them for all airlines. It means that those who opt out will find it more and more difficult and more and more delaying."

Hasbrouck, who is probably the world's leading expert on travel privacy, sees this development as dangerous. Largely, he says, it's happening unopposed because the government's desire for increased surveillance serves the airlines' own desire to cut costs through automating their business processes - which include herding travelers onto planes.

"The integration of government and business is the under-noticed aspect of this. US airports are public entities but operate with the thinking of for-profit entities - state power merged with the profit motive. State *monopoly* power merged with the profit motive. Automation is the really problematic piece of this. Once the infrastructure is built it's hard for airline to decide to do the right thing." That would be the "right thing" in the sense of resisting the trend toward "pre-crime" prediction.

"The airline has an interest in implying to you that it's required by government because it pressures people into a business process automation that the airline wants to save them money and implicitly put the blame on the government for that," he says. "They don't want to say 'we're forcing you into this privacy-invasive surveillance technology'."


Illustrations: Edward Hasbrouck in 2017.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 7, 2022

Recycle

Bad ideas never die.

In particular, bad ideas in Internet policy never die. Partly, it's a newcomer problem. In the 1990s, one manifestation of this was that every newly-connected media outlet would soon run the story warning readers not to open an email with a particular subject line - for example, Join the Crew - because it would instantly infect your computer. These were virus hoaxes. At the time, emails were all plain text, and infection on opening an email was a technical impossibility. (Would that it still were.) This did end because the technology changed.

Still with us, though, are repeated calls to end online anonymity. It doesn't matter who it was this week, but there was a professorial tweet: social media should require proof of identity. This despite decades of experience and research that show that often the worst online behavior comes from people operating under their own well-known, real-world identity, and that many people who use anonymity really need it. And I do mean decades: it's 30 years since Lee Sproull and Sara Kiesler published their study of human behavior on corporate mailing lists.

This week, Konstantinos Komaitis, a senior director at the Internet Society, and 28 other Internet experts and academics sent a letter to the European Commission urging it to abandon possibly imminent proposals to require content providers such as Google and Facebook to pay "infrastructure fees" to telecommunications companies. The letter warns, as you'd expect, that bringing in such fees upends the network neutrality rules in place in many parts of the world, including the EU, where they became law in the 2015 Open Internet Regulation.

Among prior attempts, Komaitis highlights similar proposals from 2012, but he could have as easily pointed to 2005, when the then CEO of AT&T, Ed Whitacre, said he was tired of big Internet sites using "my pipes" "for free". At the time, network neutrality was being hotly disputed.

The Internet community has long distrusted telcos. First, because the pioneers still remember their hostility to the nascent Internet and, as they will remind you at any mention of the International Telecommunications Union, because the telcos' decades of monopoly were also decades of stagnation. A small sample of the workarounds and rule-breaking Internet founders had to adopt in Britain alone was presented at an event in 2013 that featured notable contributors Peter Kirstein, Roger Scantlebury, and Vint Cerf.

Of course, we all know what's happened since then: scrappy little Internet startups became Big Tech, and now everyone wants a piece of their wealth - governments, through taxation and telcos through changing the entire business model.

Until the EU's proposals surfaced last year, it was possible to think that this particular bad idea had finally died of old age. AT&T has changed CEOs a couple of times, and for a while in there it was the owner of Time-Warner, which has its own streaming products. The fundamental issue is that the Internet infrastructure has grown up as a sort-of cooperative, in which everyone pays for their own connections and freely exchanges data with peers. In the world the telcos - and the postal services - live in, senders pay for carriage and intermediate carriers get a slice ("settlement"). Small wonder the telcos want to see that world return. (They shouldn't have been so dismissive at the beginning.)

EU telcos have been tilting at this particular wind turbine for a long time; in 2012, the European Telecommunications Network Operators Association (ETNO) called for settlement as part of a larger proposal to turn Internet governance over to the International Telecommunications Union. A contemporaneous 2012 presentation by analyst Falk von Bornstaedt argued that "sending party network pays" is the necessary future in order to provide quality-of-service guarantees.

The current EU call for this change is backed by Deutsche Telekom, Orange, Telefonica, and 13 other telcos. They have a new excuse: the energy crisis and plans for combating climate change mean they need Big Tech to share the costs of rolling out 5G and fiber optic cabling. More than half of global network traffic, they argue, is attributable to just six companies: Google, Facebook/Meta, Netflix, Apple, Amazon, and Microsoft.

It is certainly true that the all-you-can-eat model of Internet connection encourages some wastefulness such as ubiquitous Facebook trackers or constantly-connected subscription office software. Moving to "the metaverse", as Meta has $70 billion worth of hope that you will, will make this exponentially worse.

On the other hand, consider the truly undesirable consequences of changing the business model. The companies paying the telcos extra for carriage will expect in return to have their traffic prioritized. That in turn will disadvantage their competitors who don't have either that financial burden or that privileged access. Soon, what's left of the open Internet would be even more of an oligopoly, particularly with respect to high-bandwidth applications like video or virtual worlds, where network lag is the enemy of tolerable quality.

A column (PDF) lays out the issues quite clearly and warns: 1) we may not have the tools to understand the consequences of such a change; and 2) we might not be able to unwind it if we regret it later, particularly if these companies continue to merge into even bigger and more predatory giants.

Tl;dr: Please don't do this.

Illustrations: Recycling symbol.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 4, 2022

Sovereign stack

But first, a note: RIP Cliff Stanford, who died this week; in 1993 he founded the first Internet Service Provider to offer access to consumers in the UK. Stanford's Demon Internet was my first ISP, and I well remember having to visit their office so they could personally debug my connection, which required users to precisely configure a bit of code designed for packet radio (imagine getting that sort of service now!). Simon Rockman has a far better-informed obit than I could ever write.

***

On Monday, four days after Russia invaded Ukraine, the Ukrainian minister for digital transformation, Mykhailo Fedorov, sent a letter (PDF) to the Internet Corporation for Assigned Names and Numbers asking it to shut down Russian country code domains such as .ru, .рф, and .su. Quick background: ICANN manages the Internet's domain name system, the infrastructure that turns the human-readable name you type in for a website or email address into the routing numbers computers actually use to get your communications to where you want them to go. Fedorov also asked ICANN to shut down the DNS root servers located in Russia, and plans a separate letter to request the revocation of all numbered Internet addresses in use by Russian members of RIPE-NCC, the registry that allocates Internet numbers in Europe and West Asia.
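To make the DNS's job concrete - and to show what withdrawing a domain would mean in practice - here is a minimal sketch using Python's standard library to ask the system resolver for the numbers behind a name. The domain is only a placeholder, and the output depends on the network the script runs from.

    # What the DNS does: turn a human-readable name into the numeric
    # addresses computers actually route by. Placeholder domain; results vary.
    import socket

    name = "example.ru"   # stand-in domain, not a real target
    try:
        addresses = sorted({info[4][0] for info in socket.getaddrinfo(name, None)})
        print(name, "->", addresses)
    except socket.gaierror as err:
        # To software, a withdrawn or blocked domain looks like this:
        # the name simply stops resolving.
        print(name, "-> no answer:", err)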

Shorn of the alphabet soup, what Fedorov is asking ICANN to do is sanction Russia by using technical means to block both incoming (we can't get to their domains) and outgoing (they can't get to ours) Internet access, on the basis that Russia uses the Internet to spread propaganda, disinformation, hate speech and the promotion of violence.

ICANN's refusal (PDF) came quickly. For numerous reasons, ICANN is right to refuse, as the Internet Society, Access Now, and others have all said.

Internet old-timers would say that ICANN's job is management, not governance. This is a long-running argument going all the way back to 1998, when ICANN was created to take over from the previous management, the University of Southern California computer scientist Jon Postel. Among other things, Postel set up much of the domain name system, selecting among submitted proposals to run registries for both generic top-level domains (.com and .net, for example), and country code domains (such as .uk and .ru). Especially in its early years, digital rights groups watched ICANN with distrust, concerned that it would stray into censorship at the behest of one or another government instead of focusing on its actual job, ensuring the stability and security of the network's operation.

For much of its history ICANN was accountable to the US National Telecommunications and Information Administration, part of the Department of Commerce. It became formally independent as a multistakeholder organization in 2016, after much wrangling over how to construct the new model.

This history matters because the alternative to ICANN was transitioning its functions to the International Telecommunications Union, an agency of the United Nations, a solution the Internet community generally opposed, then and now. Just a couple of weeks ago, Russia and China began a joint push towards greater state control, which they intended to present this week to the ITU's World Telecommunication Standardization Assembly. Their goal is to redesign the Internet to make it more amenable to government control, exactly the outcome everyone from Internet pioneers to modern human rights activists seeks to avoid.

So, now. Shutting down the DNS at the request of one country would put ICANN exactly where it shouldn't be: making value judgments about who should have access.

More to the specific situation, shutting off Russian access would be counterproductive. The state shut down the last remaining opposition TV outlet on Thursday, along with the last independent radio station. Many of the remaining independent journalists are leaving the country. Recognizing this, the BBC is turning its short-wave radio service back on. But other than that, the Internet is the only remaining possibility most Russians have of accessing independent news sources - and Russia's censorship bureau is already threatening to block Wikipedia if it doesn't cover the Ukraine invasion to its satisfaction.

In fact, Russia has long been working towards a totally-controlled national network that can function independently of the rest of the Internet, like the one China already has. As The Economist writes, China is way ahead; it has 25 years of investment in its Great Firewall, and owns its entire national "stack". That is, it has domestic companies that make chips, write software, and provide services. Russia is far more dependent on foreign companies to provide many of the pieces necessary to fill out the "sovereign stack" it mandated in 2019 legislation. In July 2021, Russia tested disconnecting its nascent "Runet" from the Internet, though little is known about the results.

There are other, more appropriate channels for achieving Fedorov's goal. The most obvious are the usual social media suspects and their ability to delete fake accounts and bots and label or remove misinformation. Facebook, Google, and Twitter all moved quickly to block Russian state media from running ads on their platforms or, in Facebook's case, monetizing content. Since then, Google has paused all ad sales in Russia. The economic sanctions enacted by many countries and the crash in the ruble should shut down Russians' access to most Western ecommerce. Many countries are kicking Russia's state-media channels off the air.

This war is a week old. It will end - sometime. It will not pay in the long term (assuming we have one) to lock Russian citizens, many of whom oppose the war, into a state media-controlled echo chamber. Our best hope is to stay connected and find ways to remediate the damage, as painful as that is.


Illustrations: Sunflowers under a blue sky (by Inna Radetskaya at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

December 24, 2021

Scale

Web3, the push to decentralize the net, got a lot more attention this week after the venture capital firm Andreessen Horowitz published guidance for policy makers - while British software engineer Stephen Diehl blogged calling web3 "bullshit", a "vapid marketing campaign", and a "rhetorical trick" (thanks to Mike Nelson for the pointer).

Here, a month ago, we tried to tease out some of the hard problems web3 is up against. Diehl attacks the technical basis, citing the costs of the computation and bandwidth necessary to run a censorship-proof blockchain network, plus the difficulty of storage, as in "who owns the data?". In other words, web3, as he understands it, won't scale.

Meanwhile, on Twitter, commenters have highlighted Andreessen Horowitz's introductory words, "We are radically optimistic about the potential of web3 to restore trust in institutions and expand access to opportunity." If, the argument goes, venture capitalists are excited about web3 that's a clear indicator that they expect to reap the spoils. Which implies an eventual outcome favoring giant corporate interests.

The thing that modern venture capitalists always seek with (due) diligence is scale. Scale means you can make more of something without incurring (much) additional cost. Scale meant Instagram could build a business Facebook would buy for $1 billion with only 13 employees. Venture capitalists want the hockey stick.

Unsurprisingly, given the venture capital appeal, the Internet is full of things that scale - social media sites, streaming services, software, other forms of digital content distribution, and so on. Yet many of the hard problems we struggle to solve are conflicts between scale and all the things on the Internet that *don't* scale. Easy non-Internet example: viruses scale, nurses don't. Or, more nettishly, facial recognition scales; makeup artists don't. And so on.

An obvious and contentious Internet example: content moderation. Even after AI has automatically removed the obvious abuses, edge cases rapidly escalate beyond the resources most companies are willing to throw at it. In his book Social Warming, Charles Arthur suggests capping the size of social networks, an idea echoed recently by Lawfare editor Ben Wittes in an episode of In Lieu of Fun, who commented that sites shouldn't be allowed to grow larger than they can "moderate well". It's hard to think of a social media site that hasn't. It's also hard to understand how such a cap would work without frustrating everyone. If you're user number cap+1, do you have to persuade all your friends to join a less-populated network so you can be together?

More broadly - a recurrent theme - community on the Internet does not scale. In every form of online community back to bulletin board systems and Usenet, increasing size always brings abuse. In addition, over and over online forums show the power law distribution of posters: a small handful do most of the talking, followed by a long tail of occasional contributors and a vast majority of lurkers. The loudest and most persistent voices set the tone, get the attention, and reap the profits, if there are any to be had.
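As a toy illustration of that pattern - with every number invented purely for the simulation - give each of a thousand users a posting propensity proportional to 1/rank and see how lopsided the resulting conversation is.

    # Toy simulation of the "loudest few" effect in online forums.
    # All figures are invented for illustration only.
    from collections import Counter
    import random

    random.seed(1)
    USERS, POSTS = 1000, 10_000

    # Posting propensity proportional to 1/rank: a Zipf-like distribution.
    weights = [1 / rank for rank in range(1, USERS + 1)]
    authors = random.choices(range(USERS), weights=weights, k=POSTS)

    tally = Counter(authors)
    per_user = sorted((tally.get(u, 0) for u in range(USERS)), reverse=True)
    top_ten = sum(per_user[:10]) / POSTS
    quiet_half = sum(per_user[USERS // 2:]) / POSTS
    print(f"top 10 users wrote {top_ten:.0%} of posts; "
          f"the quietest 500 wrote {quiet_half:.0%}")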

The problem of scaling content moderation applies more generally to online governance. As societies grow, become more complex, and struggle with abuse, turning governance over to paid professionals seems to be the near-universal solution.

Another thing that doesn't scale: discovery, as Benedict Evans recently pointed out in a discussion of email newsletters and Substack.

One of the marvels of 2021 has been the reinvention of emailed newsletters as a paying proposition. Of course, plenty of people were making *some* money from such things way back even before email. But this year has taken it to a new level. People are signing six-figure deals with Substack and giving up ordinary journalism gigs and book deals to do it.

Evans points out that in newsletters, as in previous Internet phenomena - podcasts, web pages (hence search engines), and ecommerce (hence aggregation) - the first people who show up in an empty space with good stuff people want do really well. We don't hear so much any more about first-mover advantage, but it often still applies.

Non-fungible tokens (NFTs) may be the latest example. A few very big paydays are drawing all sorts of people into the field. Some will profit, but many more will not. Meanwhile, scams and copyright and other issues are proliferating. Even if regulation eventually makes participation safer, the problem will remain: people have limited resources to spend on such things, and the field will be increasingly crowded.

So, too, Substacks and newsletters: there are not only limits to how many subscriptions people can afford, but also to how many things they have time to read. In a crowded field, discovery is everything.

Individuals' attention spans and financial resources do not scale. The latter is one reason the pay-with-data model has been so successful on the web; the former is part of why people will sacrifice privacy and participatory governance in favor of convenience.

So, our partial list of things that do not scale: content moderation, community, discovery, governance. Maybe also security to some extent. In general: anything that requires human labor to be added proportionately to its expansion. Solving these problems of scale will matter if we're going to get a different outcome from web3 than from previous iterations.


Illustrations: A hockey stick.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

December 17, 2021

Dependencies at scale

It's the complexity that's going to get us. (We're talking cyber system failures, not covid!)

In the 1990s and early 2000s Internet pundits used to have a fun game: what was going to kill the Internet? Or, what was going to kill the Internet *next*? The arrival of the web, which brought a much larger user base and comparatively data-hungry graphics (comparative to plain text, that is; obviously much worse was to come), nearly did it for a bit, which is why a lot of us called it the "World Wide Wait".

Here's one example, a net.wars from 2002, based on a panel from the 1998 Computers, Freedom, and Privacy conference: 50 ways to crash the net. The then-recent crisis that had suggested the panel was a denial-of-service attack on the 13 root servers that form the heart of the domain name system. But also: the idea was partly suggested by a Wired article by Simson Garfinkel about how to crash the Internet, based on both the root server incident and another in which a construction crew in Virginia sliced through a crucial fiber optic cable. As early as that, Garfinkel blamed centralization and corporatization; the "Internet" that was built to withstand a bomb outage was the old military Internet, not the commercial one built on its bones.

But that's not what's going to get us. People learn! People fix things! In fact, experts tell me, the engineering that underlies the Internet is nothing like it was even ten years ago. "The Internet" as an engineer would talk about it is remarkably solid and robust. When the rest of us sloppily complain about "the Internet" what we mean is buggy software, underfunded open source projects that depend on one or a few overworked people but underpin software used by billions, human error, database leaks, sloppy security policies, corporate malfeasance, criminal attacks, failures of content moderation on Facebook, and power outages. When these factors come into play and connections break, "the Internet" is actually still fine. The average user, however, unable to reach Netflix and finding many other sites also unreachable, interprets the situation as "the Internet is out". It's a mental model issue.

A few months ago, we noted the fragile brittleness of today's "Internet" after an incident in which one person made a perfectly ordinary configuration change that should have done nothing more than alter the settings on their account and instead set off a cascade of effects that knocked out a load of other Internet services. Also right around then, a ransomware attack using a leaked password and a disused VPN account led to corporate anxiety that shut down the Colonial pipeline, leading to gas shortages up and down the US east coast. These were not outages of "the Internet", but without the Internet they would not have happened.

This year is ending with more such issues. Last week, Amazon Web Services had a service outage in which "unexpected behavior" created a feedback loop of increasing congestion that might as well have been a denial-of-service attack. What followed was an eight-hour lesson in service dependence. Blocked during that time: parts of Amazon's own retail and delivery operations, including Whole Foods; Disney+; Netflix; Internet of Things devices including Amazon Ring doorbells, Roomba vacuum cleaners, and connected cat litter boxes; and the teaching platform Canvas.

Separately but almost simultaneously, a vulnerability now dubbed Log4Shell was reported to the Apache Foundation, which notified the world at large on December 9. The vulnerability is one of a classic type in which a program - in this case popular logging software Log4j - interprets an input data string as an instruction to execute. In this case, as Dan Goodin explains at Ars Technica, the upshot is that attackers can execute any Java code they like on the affected computer. The vulnerability, which has been present since 2013, is all over the place, embedded in systems that run...everything. Within a few days 44% of corporate networks had been probed and more than 60 exploit variants had been developed, with some attacks coming from state actors and criminal hacking groups. As Goodin explains, your best hope is that your bank, brokerage, and favorite online shops are patching their systems right now.
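Not Log4j's actual code, but a toy sketch of the bug class it belongs to: a logger that helpfully expands ${...} lookups found inside whatever it is asked to log, so that logging attacker-supplied text becomes an instruction to go and fetch something. The lookup here is deliberately tame (environment variables); in Log4Shell the equivalent lookup could be pointed at an attacker-controlled server.

    # Toy illustration of the Log4Shell bug *class*, not Log4j itself:
    # a logger that expands ${...} lookups inside the text it is given.
    import os
    import re

    LOOKUP = re.compile(r"\$\{(\w+):([^}]*)\}")

    def resolve(match):
        kind, arg = match.group(1), match.group(2)
        if kind == "env":                    # e.g. ${env:HOME}
            return os.environ.get(arg, "")
        return match.group(0)                # unknown lookups left untouched

    def log(message):
        # The mistake: treating logged *data* as a template to evaluate.
        print("[app]", LOOKUP.sub(resolve, message))

    log("user alice logged in")                 # innocent use
    log("login failed for user ${env:HOME}")    # attacker-chosen "username" leaks data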

The point about all this is that greater complexity breeds more, and more difficult to find and fix, errors. Even many technical experts had never heard of Log4j until this bug appeared. Few would expect a bug in a logging utility to be so broadly dangerous, just as few could predict which major businesses would be taken out by an AWS outage. As Kurt Marko writes at Diginomica, the two incidents show the hidden and unexpected dependencies lurking on today's "Internet". The same permissionlessness that allowed large businesses to start with nothing and scale up means dependencies no one has found (yet). In 2014, shortly after Heartbleed reminded everyone of the dangers of infrastructure dependence on software maintained by one or two volunteers, Farhad Manjoo warned at the New York Times about the risks of just this complexity.

Complexity and size bring dependencies at scale - harder to predict than the weather, in part because software is forever. Humans are not good at understanding scale.


Illustrations: XKCD's classic cartoon, "Dependency".

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 5, 2021

The vanishing Post Office (part II)

Back in 2014, a big red van rolled up outside the post office around the corner, packed it up, and drove it away. After months of uncertainty, that local post office (or, more correctly, *sub* post office) was reinstalled in a centrally located newsagent in Kew village.

Peace has mostly ensued.

Over the years, however, the subpostmaster in charge of it has become visibly and increasingly frustrated as its income continued to drop. Several years ago, he began talking about selling up, if only he could find a buyer. He was closed for some months early in the pandemic, and, although he did reopen, it was with shorter hours. Now, I hear he'll be gone come New Year's, and it will all be the new buyer's problem. I don't know what the buyer will put in its place, but it sounds like it won't be a post office. It may not even be a newsagent, in which case the village's only surviving place to buy a newspaper will be the shop in the station, down from three just a few years ago.

Now, if you want to look at this little story as a pure question of efficiency and available service, you will probably point out that there is a perfectly good, larger, and fuller-service post office barely a mile away in Richmond, reachable by foot, bike, or frequent bus. (You can drive, but you can't park.) However, the main point is that 30 years ago Kew had a full-fledged Post Office in its own solid building (which has long since been remodeled into a pizza restaurant) and now it won't have one at all. And while Kew will survive as a community, the same is not true for many other places that are less favored. In April 2019, the National Federation of SubPostmasters predicted that 22% of post offices around the UK would close or downsize over the next 12 months; our retiring guy is one dot in this expanding nationwide pattern.

The even larger point is that the loss of our post office isn't due to a carefully thought-out plan for reorganization or changed ideas about what communities need in order to remain worthy of the name, but the result of terminal frustration for the subpostmaster. It is alienation and attrition.

Some other statistics from that 2019 survey. The NFSP found that 76% of subpostmasters were earning less than minimum wage per hour from their post office work; 61% reported their income had dropped; and 19% needed an outside job for themselves or their spouse/partner in order to survive. Can't-wait-to-retire showed me the survey, which is currently being rerun.

All of that is without the recent scandal in which hundreds of subpostmasters were prosecuted for fraud based on the output of buggy software; 39 convictions were quashed.

No wonder they're quitting.

It's easy to blame the Internet and email, but it's not that simple. Yes, the Internet cut deeply into personal correspondence, but so did government decisions such as the drive to switch to direct electronic benefits payments - still ongoing - and the digitisation of services like passport and car registration renewals that local post offices used to provide. In addition, since 2006 the postal market has been opened to competition, the Royal Mail was privatized and, in 2013, floated on the stock exchange while the nation's post offices were segregated into the subsidiary Post Office Limited. Competition has enabled cheap, convenient services to flourish, but has also creamed off the most profitable parts of package delivery.

Ultimately, the problem is that today's communities were built around services like banks and post offices that at one time were community hubs but are now outposts of national or even international businesses. In this version of globalization, local communities hollow out because the social infrastructure that underpins them vanishes or loses its local face. It's the difference between living in a real place and picking a convenientish bedroom you can afford.

Some time ago, the Scottish government began studying the country's towns and came up with three main types: independent, dependent, and interdependent. An independent town has enough services that residents don't need to go elsewhere for daily needs such as jobs, doctors and dentists, retail shopping, and public sector services. A dependent town's residents can't function without traveling elsewhere to meet their basic needs. An interdependent town is somewhere in between. It's not all about population or location: St Andrews, Fife, population 16,870, is an interdependent town; the remote northern town of Thurso, population 7,933, is independent (it has to be!); and Houston, west of Glasgow in Renfrewshire, population 6,396, is interdependent to dependent.

As I understand it, the idea of looking at towns this way is to work out how to ensure that as many locations as possible remain viable and help boost those that are struggling. Maybe study will show that post offices, like many churches, don't matter any more and what communities need in today's world is something else - broadband-supplied virtual reality hubs, or communal kitchens. But as all the traditional community hubs disappear or are severely cut back - post offices, libraries, youth clubs, leisure centers - we need that kind of study. It's really not enough to just say, "Oh, there's another one down the road a piece - and there's an app!"


Illustrations: The soon-to-be-gone post office sign.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 8, 2021

The inside view

So many lessons, so little time.

We have learned a lot about Facebook in the last ten days, at least some of it new. Much of it is from a single source, the documents exfiltrated and published by Frances Haugen.

We knew - because Haugen is not the first to say so - that the company is driven by profits and a tendency to view its systemic problems as PR issues. We knew less about the math. One of the more novel points in Haugen's Senate testimony on Tuesday was her explanation of why Facebook will always be poorly moderated outside the US: safety does not scale. Safety costs the same for each new country Facebook adds - but each new country is also a progressively smaller market than the last. Consequence: the cost-benefit analysis fails. Currently, Haugen said, Facebook only covers 50 of the world's approximately 500 languages, and even in some of those cases the company does not have local experts to help understand the culture. What hope for the rest?

Additional data: at the New York Times, Kate Klonick checks Facebook's SEC filings to find that average revenue per North American user per *quarter* was $53.56 in the last quarter of 2020, compared to $16.87 for Europe, $4.05 for Asia, and $2.77 for the rest of the world. Therefore, Klonick said at In Lieu of Fun, most of its content moderation money is spent in the US, which has less than 10% of the service's users. All those revenue numbers dropped slightly in Q1 2021.

We knew that in some countries Facebook is the only Internet people can afford to access. We *thought* that it only represented a single point of failure in those countries. Now we know that when Facebook's routing goes down - its DNS and BGP routing were knocked out by a "maintenance error" - the damage can spread to other parts of the Internet. The whole point of the Internet was to provide communications in case of a bomb outage. This is bad.

As a corollary, the panic over losing connections to friends and customers even in countries where social pressure, not data plans, ties people to Facebook is a sign of monopoly. Haugen, like Kevin Roose in the New York Times, sees signs of desperation in the documents she leaked. This company knows its most profitable audiences are aging; Facebook is now for "old people". The tweens are over at Snapchat, TikTok, and even Telegram, which added 70 million signups in the six hours Facebook was out.

We already knew Facebook's business model was toxic, a problem it shares with numerous other data-driven companies not currently in the spotlight. A key difference: Zuckerberg's unassailable control of his company's voting shares. The eight SEC complaints Haugen has filed are the first potential dent in that.

Like Matt Stoller, I appreciate a lot of Haugen's ideas for remediation: pushing people to open links before sharing, and modifying Section 230 to make platforms responsible for their algorithmic amplification, an idea also suggested by fellow data scientist Roddy Lindsay and British technology journalist Charles Arthur in his new book, Social Warming. For Stoller, these are just tweaks to how Facebook works. Haugen says she wants to "save" Facebook, not harm it. Neither her changes nor Zuckerberg's call for government regulation touch its concentrated power. Stoller wants "radical decentralization". Arthur wants to cap social network size.

One fundamental mistake may be to think of Facebook as *a* monopoly rather than several at once. As an economic monopoly, businesses all over the world depend on Facebook and subsidiaries to reach their customers, and advertisers have nowhere else to go. Despite last year's pledged advertising boycott over hate speech on Facebook, since Haugen's revelations began, advertisers have been notably silent. As a social monopoly, Facebook's outage was disastrous in regions where both humanitarians and vulnerable people rely on it for lifesaving connections; in richer countries, the inertia of established connections leaves Facebook in control of large swaths of our social and community networks. This week taught us that its size also threatens infrastructure. Each of these calls for a different approach.

Stoller has several suggestions for crashing Facebook's monopoly power, one of which is to ban surveillance advertising. But he rejects regulation and downplays the crucial element of interoperability; create a standard so that messaging can flow between platforms, and you've dismantled customer lock-in. The result would be much more like the decentralized Internet of the 1990s.

Greater transparency would help; just two months ago Facebook shut down independent research into content interactions and its political advertising - and tried to blame the Federal Trade Commission.

This is *not* a lesson. Whatever we have learned, Mark Zuckerberg has not. At CNN, Donie O'Sullivan fact-checks Zuckerberg's response.

A day after Haugen's testimony, Zuckerberg wrote (on Facebook, requiring a login): "I think most of us just don't recognize the false picture of the company that is being painted." Cue Robert Burns: "O wad some Pow'r the giftie gie us | To see oursels as ithers see us!" But really, how blinkered do you have to be not to recognize that if your motto is "Move fast and break things", people are going to blame you for the broken stuff everywhere?


Illustrations: Slide showing revenue by Facebook user geography from its Q1 2021 SEC filing.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 13, 2021

Legacy

The first months of the pandemic saw a burst of energetic discussion about how to make it an opportunity to invest in redressing inequalities and rebuilding decaying systems - public health, education, workers' rights. This always reminded me of the great French film director François Truffaut, who, in his role as the director of the movie-within-the-movie in Day for Night, said, "Before starting to shoot, I hope to make a fine film. After the problems begin, I lower my ambition and just hope to finish it." It seemed more likely that if the pandemic went on long enough - back then the journalist Laurie Garrett was predicting a best case of three years - early enthusiasm for profound change would drain away to leave most people just wishing for something they could recognize as "normal". Drinks at the pub!

We forget what "normal" was like. London today seems busy. But with still no tourists, it's probably a tenth as crowded as in August 2019.

Eighteen months (so far) has been long enough to make new habits driven by pandemic-related fears, if not necessity, begin to stick. As it turns out, the pandemic's new normal is really not the abrupt but temporary severance of lockdown, which brought with it fears of top-down government-driven damage to social equity and privacy: covid legislation, immunity passports, and access to vaccines. Instead, the dangerous "new normal" is the new habits building up from the bottom. If Garrett was right, and we are at best halfway through this, these are likely to become entrenched. Some are healthy: a friend has abruptly realized that his grandmother's fanaticism about opening windows stemmed from living through the 1918 Spanish flu pandemic. Others...not so much.

One of the first non-human casualties of the pandemic has been cash, though the loss is unevenly spread. This week, a friend needed more than five minutes to painfully single-finger-type masses of detail into a pub's app, the only available option for ordering and paying for a drink. I see the convenience for the pub's owner, who can eliminate the costs of cash (while assuming the costs of credit cards and technological intermediation) and maybe thin the staff, but it's no benefit to a customer who'd rather enjoy the unaccustomed sunshine and chat with a friend. "They're all like this now," my friend said gloomily. Not where I live, fortunately.

Anti-cash campaigners have long insisted that cash is dirty and spreads disease; but, as we've known for a year, covid rarely spreads through surfaces, and (as Dave Birch has been generous enough to note) a recent paper finds that cash is sometimes cleaner. But still: try to dislodge the apps.

A couple of weeks ago, Erin Woo at the New York Times highlighted cash-free moves. In New York City, QR codes have taken over in restaurants and stores as contact-free menus and ordering systems. In the UK, QR codes mostly appear as part of the Test and Trace contact tracing app; the idea is you check in when you enter any space, be it restaurant, cinema, or (ludicrously) botanic garden, and you'll be notified if it turns out it was filled with covid-infected people when you were there.
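The codes themselves are mundane. As a rough illustration - the URL and venue ID below are made up, and real check-in posters embed an official payload rather than a bare web address - a few lines of Python with the qrcode package are enough to generate one; whatever tracking happens is done by the server the scanning app reports to, not by the printed square.

```python
# pip install qrcode[pil]
# Minimal sketch of generating a venue check-in code. The payload URL and
# venue ID are made up; the point is that the printed square carries only a
# short string - the data linking venue, time, and person is collected by
# whatever server the scanning app reports to.
import qrcode

payload = "https://checkin.example/poster?venue=demo-001"
img = qrcode.make(payload)            # returns a PIL image of the code
img.save("venue-checkin-demo.png")    # ready to print and tape to the door
print("encoded", len(payload), "characters")
```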

Whatever the purpose, the result is tight links between offline and online behavior. Pre-pandemic, these were growing slowly and insidiously; now they're growing like an invasive weed at a time when few of us can object. The UK ones may fall into disuse alongside the app itself. But Woo cites Bloomberg: half of all US full-service restaurant operators have adopted QR-code menus since the pandemic began.

The pandemic has also helped entrench workplace monitoring. By September 2020, Alex Hern was reporting at the Guardian that companies were ramping up their surveillance of workers in their homes, using daily mandatory videoconferences, digital timecards in the form of cloud logins, and forced participation on Slack and other channels.

Meanwhile at NBC News, Olivia Solon reports that Teleperformance, one of the world's largest call center companies, to which companies like Uber, Apple, and Amazon outsource customer service, has inserted clauses in its employment contracts requiring workers to accept in-home cameras that surveil them, their surroundings, and family members under 18. Solon reports that the anger over this is enough to get these workers thinking about unionizing. Teleperformance is global; it's trying this same gambit in other countries.

Nearer to home, all along, there's been a lot of speculation about whether anyone would ever again accept commuting daily. This week, the Guardian reports that only 18% of workers have gone back to their offices since UK prime minister Boris Johnson ended all official restrictions on July 19. Granted, it won't be clear for some time whether this is new habit or simply caution in the face of the fact that Britain's daily covid case numbers are still 25 times what they were a year ago. In the US, Google is suggesting it will cut pay for staff who resist returning to the office, on the basis that their cost of living is less. Without knowing the full financial position, doesn't it sound like Google is saving money twice?

All these examples suggest that what were temporary accommodations are hardening into "the way things are". Undoing them is a whole new set of items for last year's post-pandemic to-do list.


Illustrations: Graphic showing the structure of QR codes (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 23, 2021

Internet fragmentation as a service

"You spend most of your day telling a robot that you're not a robot. Think about that for two minutes and tell me you don't want to walk into the ocean," the comedian John Mulaney said in his 2018 comedy special, Kid Gorgeous. He was talking about captchas.

I was reminded of this during a recent panel at the US Internet Governance Forum hosted by Mike Nelson. Nelson's challenge to his panelists: imagine alternative approaches to governments' policy goals that won't damage the Internet. They talked about unintended consequences (and the exploitation thereof) of laws passed with good intentions, governments' demands for access to data, ransomware, content blocking, multiplying regional rulebooks, technical standards and interoperability, transparency, and rising geopolitical tensions, which cyberspace policy expert Melissa Hathaway suggested should be thought about by playing a mash-up of the games Risk and Settlers of Catan. The main topic: is the Internet at risk of fragmentation?

So much depends on what you mean by "fragmentation". No one mentioned the physical damage achievable by ten backhoes. Nor the domain name system that allows humans and computers to find each other; "splitting the root" (that is, the heart of the DNS) used to dominate such discussions. Nor captchas, but the reason Mulaney sprang to mind was that every day (in every way) captchas frustrate access. Saying that makes me privileged; in countries where Facebook is zero-rated but the rest of the Internet costs money people can't afford on their data plans, the Internet is as cloven as it can possibly be.

Along those lines, Steve DelBianco raised the idea of splintering-by-local-law, the most obvious example being the demand in many countries for data localization. DelBianco, however, cited Illinois' Biometric Information Privacy Act (2008), which has been used to sue platforms on behalf of unnamed users for automatically tagging their photos online. Result: autotagging is not available to Illinois users on the major platforms, and neither is the Google Nest and Amazon Ring doorbells' facility for recognizing and admitting friends and family. See also GDPR, noted above, which three and a half years after taking force still has US media sites blocking access by insisting that our European visitors are important to us.

You could also say that the social Internet is splintering along ideological lines as the extreme right continue to build their own media and channels. In traditional media, this was Roger Ailes' strategy. Online, the medium designed to connect people doesn't care who it connects or for what purpose. Commercial social media engagement algorithms have exacerbated this, as many current books make plain.

Nelson, whose Internet policy experience goes back to the Clinton administration, suggested that policy change is generally driven by a big event: 9/11, for example, which led promptly to the passage of the PATRIOT Act (US) and the Anti-Terrorism, Crime, and Security Act (UK), or the Colonial Pipeline hack that has made ransomware an urgent mainstream concern. So, he asked: what kind of short, sharp shock would cause the Internet to fracture? If you see data protection law as a vector, the 2013 Snowden revelations were that sort of event; a year earlier, GDPR looked like fading away.

You may be thinking, as I was, that we're literally soaking in global catastrophes: the COVID-19 pandemic, and climate change. Both are slow-burning issues, unlike the high-profile drivers of legislative panic Nelson was looking for, but both generate dozens of interim shocks.

I'm always amazed so little is said about climate change and the future of the Internet; the IT industry's emissions just keep growing. China's ban on cryptocurrency mining, which it attributes to environmental concerns, may be the first of many such limits on the use of computing power. Disruptions to electricity supplies - just yesterday, the UK's National Grid warned there may be blackouts this winter - don't "break" the Internet, but they do make access precarious.

So far, the pandemic's effect has mostly been to exacerbate ideological splits and accelerate efforts to curb the spread of misinformation via social media. It's also led to increased censorship in some places; early on, China banned virus-related keywords on WeChat, and this week the Indian authorities raided a newspaper that criticized the government's pandemic response. In addition, the exposure and exacerbation of social inequalities brought by the pandemic may, David Bray suggested in the panel, be contributing to the increase in cybercrime, as "failed states" struggle to rescue their economies. This week's revelations of the database of numbers of interest to NSO Group clients since 2016 don't fragment the Internet as a global communications system, but they might in the sense that some people may not be able to afford the risk of being on it.

This is where Mulaney comes in. Today, robots gatekeep web pages. Three trends seem likely to expand their role: age verification and online safety laws; covid passports, which are beginning to determine access to physical-world events; and the Internet of Things, which is bridging what's left of the divide between cyberspace and the real world. In the Internet subsumed into everything of our future, "splitting the Internet" may no longer be meaningful as the purely virtual construct Nelson's panel was considering. In the cyber-physical world, Internet fragmentation must also be hybrid.


Illustrations: The IGF-USA panel in action.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

Immune response

The slight reopening of international travel - at least inbound to the UK - is reupping discussions of vaccination passports, which we last discussed here three months ago. In many ways, the discussion recapitulates not only the ID card battles of 2006-2010 but also last year's concerns about contact tracing apps.

We revisit so soon for two reasons. First, the UK government has been sending out conflicting messages for the last month or more. Vaccination passports may - or may not - be required for university attendance and residence; they may be required for domestic venues - and football games! - in September. One minister - foreign secretary Dominic Raab - says the purpose would be to entice young people to get vaccinated, an approach that apparently worked in France, where proposing to require vaccination passports in order to visit cafes caused an Eiffel Tower-shaped spike in people presenting for shots. Others seem to think that certificates of either vaccination or negative tests will entice people to go out more and spend money. Or maybe the UK won't do them at all; if enough people are vaccinated, why would we need proof of any one individual's status? Little has been said about whatever the government may have learned from the test events that were supposed to show if it was safe to resume mass entertainment gatherings.

Second, a panel discussion last month hosted by Allyson Pollock raised some new points. Many of us have thought of covid passports for international travel as roughly equivalent to proof of vaccination for yellow fever. However, Linnet Taylor argues that the only time someone in a high-income country needs one is if they're visiting a country where the disease is endemic. By contrast, every country has covid, and large numbers - children, especially - either can't access or do not qualify for covid vaccinations. The problems that disparity caused for families led Israel to rethink its Green Pass, which expired in June and was not renewed. Therefore, Taylor said, it's more relevant to think about lowering the prevalence of the disease than to try to distinguish between vaccinated and unvaccinated. The chief result of requiring vaccination passports for international travel, she said, will be to add extra barriers for those traveling from low-income countries to high-income countries and cement into place global health inequality and unequal access to vaccines. She concluded that giving the responsibility to technology companies merely shows we have "no plan to solve them any other way".

It also brings other risks. Michael Veale and Seda F. Gürses explain why the computational infrastructure required to support online vaccination verification undercuts public health objectives. Ellen Ullman wrote about this in 1997: computer logic eliminates fuzzy human accommodations, and its affordances foster administrative change from help to surveillance and from inclusion to exclusion. No one using the system - that is, people going to pubs and concerts - will have any control over what it's doing.

Last year, Westerners were appalled at the passport-like controls China put in place. This year, New York state is offering the Excelsior Pass. Once you load the necessary details into the pass, a mobile phone app, scanning it gains you admission to a variety of venues. IBM, which built the system, is supposedly already investigating how it can be expanded.

As Veale pointed out, a real-time system to check vaccination certificates will also know everywhere each individual certificate has been checked, adding inevitable intrusion far beyond the vaccinated-yes/no binary. Two stories this week bear Veale out. The first is the New York Times story that highlighted the privacy risks of the QR codes that are proliferating in the name of covid safety. Again, the average individual has no way to tell what data is incorporated into the QR code or what's being saved.

The second story is the outing of Monsignor Jeffrey Burrill by The Pillar, a Medium newsletter that covers the Catholic Church. The Pillar says its writers legally obtained 24 months' worth of supposedly anonymized, aggregated app signal data. Out of that aggregated mass they used known locations Burrill frequents to pick out a phone ID with matching history, and used that to track the phone's use of the LGBTQ dating app Grindr and visits to gay nightclubs. Burrill resigned shortly after being informed of the story.

More important is the conclusion Bruce Schneier draws: location data cannot be successfully anonymized. So checking vaccination passports in fact means building the framework of a comprehensive tracking system, whether or not that's the intention.

Like contact tracing apps before them, vaccination passports are a mirage that seems to offer the prospect of living - in this case, to people who've been vaccinated against covid - as if the pandemic does not exist. Whether it "works" depends on what your goal is. If it's to create an airport-style fast track through everyday life, well, maybe. If it's to promote public health, then safety measures such as improved ventilation, moving events outdoors, masks, and so on are likely a better bet. If we've learned anything from the last year and a half, it should be that no one can successfully create an individual bubble in which they can pretend the pandemic is over even while it rages in the rest of the world.


Illustrations: China's Alipay Health Code in March, 2020 (press photo).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 16, 2021

When software eats the world

One part of our brains knows that software can be fragile. Another part of our brains, when faced with the choice of trusting the human or trusting the machine...trusts the machine. It may have been easier to pry trust away from the machine twenty years ago, when systems crashed more often, sometimes ruining months of work, and the mantra, "Have you tried turning it off and back on again?" didn't yet work as a reliable way of restoring function. Perhaps more important, we didn't *have* to trust software because we had canonical hard copies. Then, as predicted, the copies became "backups". Now, often, they don't exist at all, with the result that much of what we think we know is becoming less well-attested. How many of us even print out our bank statements any more? Three recent stories highlight this.

First is the biggest UK computer-related scandal for many years, the outrageous Post Office prosecution of hundreds of subpostmasters for theft and accounting fraud, all while insisting that their protests of innocence must be lies because its software, sourced from Fujitsu, could not possibly be wrong. Eventually, the Court of Appeal quashed 39 convictions and excoriated both the Post Office and Fujitsu for denying the existence of two known bugs that led to accounting discrepancies. They should never have been able to get away with their claim of infallibility - first, because generations of software engineers could have told the court that all software has bugs, and second, because Ross Anderson's work had already proved that software vulnerabilities were the cause of phantom ATM withdrawals, overriding the UK banking industry's insistence that its software, too, was infallible.

At Lawfare, Susan Landau, discussing work she did in collaboration with Steve Bellovin, Matt Blaze, and Brian Owsley, uses the Post Office fiasco as a jumping-off point to discuss the increasing problem of bugs in software used to produce evidence presented in court. Much of what we think of as "truth" - Breathalyzer readings, forensic tools, Hawkeye line calls in tennis matches - is not direct measurement but software-derived interpretation of measurements. Hawkeye at least publishes its margin for error, even though tennis has decided to pretend it doesn't exist. Manufacturers of evidence-producing software, however, claim commercial protection, leaving defendants unable to challenge the claims being made about them. Landau and her co-authors conclude that courts must recognize that they can't assume the reliability of evidence produced by software and that defendants must be able to conduct "adversarial audits".

Second story. At The Atlantic, Jonathan Zittrain complains that the Internet is "rotting". Link rot - broken links when pages get deleted or reorganized - and content drift, which sees the contents of a linked page change over time, are familiar problems for anyone who posts anything online. Gabriel Weinberg, the founder of search engine DuckDuckGo, has talked about API rot, which breaks dependent functionality. Zittrain's particular concern is legal judgments, which increasingly may incorporate disappeared or changed online references like TikTok videos and ebooks. Ebooks in particular can be altered on the fly, leaving no trace of that thing you distinctly remember seeing.

Zittrain's response has been to help create sites to track these alterations and provide permanent links. It probably doesn't matter much that the net.wars archive has (probably) thousands of broken links. As long as the Internet Archive's Wayback Machine continues to exist as a source for vaped web pages, most of the ends of those links can be recovered. The Archive is inevitably incomplete, and only covers the open web. But it *does* matter if the basis for a nation's legal reasoning and precedents - what Zittrain calls "long-term writing" - can't be established with any certainty. Hence the enormous effort put in by the UK's National Archives to convert millions of pages of EU legislation so all could understand the legitimacy of post-Brexit UK law.
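Checking whether a dead link survives in the Archive is straightforward; the sketch below uses the Internet Archive's public "availability" endpoint, and the URLs being checked are placeholders - a real link-rot audit would add rate limiting and better error handling.

```python
# Minimal link-rot triage: ask the Internet Archive's public availability
# endpoint whether a dead URL has a Wayback Machine snapshot. The URLs checked
# below are placeholders; a real audit would add rate limiting and error handling.
import json
import urllib.parse
import urllib.request

def closest_snapshot(url):
    query = urllib.parse.urlencode({"url": url})
    with urllib.request.urlopen("https://archive.org/wayback/available?" + query) as resp:
        data = json.load(resp)
    snapshot = data.get("archived_snapshots", {}).get("closest")
    return snapshot["url"] if snapshot and snapshot.get("available") else None

for dead_link in ["http://example.com/vanished-page", "http://example.org/"]:
    print(dead_link, "->", closest_snapshot(dead_link) or "no snapshot found")
```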

Third story. It turns out the same is true for the brick-by-brick enterprise we call science. In the 2020 study Open is not forever, authors Mikael Laakso, Lisa Matthias, and Najko Jahn find journal rot. Print publications are carefully curated and preserved by librarians and archivists, as well as the (admittedly well-funded) companies that publish them. Open access journals, however, have had a patchy record of success, and the study finds that between 2000 and 2019, 174 open access journals from all major research disciplines and from all geographical regions vanished from the web. In science, as in law, it's not enough to retain the end result; you must be able to show your work and replicate your reasoning.

It's more than 20 years since I heard experts begin to fret about the uncertain durability of digital media; the Foundation for Information Policy Research included the need for reliable archives in its 1998 founding statement. The authors of the journal study note that the journals themselves are responsible for maintaining their archives and preserving their portion of the scholarly record; they conclude that solving this problem will require the participation of the entire scholarly community.

What isn't clear, at least to me, is how we assure the durability of the solutions. It seemed a lot easier when it was all on paper in a reassuringly solid building.

Illustrations: The UK National Archives, in Kew (photo by Erian Evans via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 9, 2021

The border-industrial complex*

Most people do not realize how few rights they have at the border of any country.

I thought I did know: not much. EFF has campaigned for years against unwarranted US border searches of mobile phones, where "border" legally extends 100 miles into the country. If you think, well, it's a big country, it turns out that two-thirds of the US population lives within that 100 miles.

No one ever knows what the border of their own country is like for non-citizens. This is one reason it's easy for countries to make their borders hostile: non-citizens have no vote and the people who do have a vote assume hostile immigration guards only exist in the countries they visit. British people have no idea what it's like to grapple with the Home Office, just as most Americans have no experience of ICE. Datafication, however, seems likely to eventually make the surveillance aspect of modern border passage universal. At Papers, Please, Edward Hasbrouck charts the transformation of travel from right to privilege.

In the UK, the Open Rights Group and the3million have jointly taken the government to court over provisions in the post-Brexit GDPR-enacting Data Protection Act (2018) that exempted the Home Office from subject access rights. The Home Office invoked the exemption in more than 70% of the 19,305 data access requests made to its office in 2020, while losing 75% of the appeals against its rulings. In May, ORG and the3million won on appeal.

The Nationality and Borders Bill, announced this week, proposes to make it harder for refugees to enter the country and, according to analyses by the Refugee Council and Statewatch, to make many of them - and anyone who assists them - into criminals.

Refugees have long had to verify their identity in the UK by providing biometrics. On top of that, the cash support they're given comes in the form of prepaid "Aspen" cards, which means the Home Office can closely monitor both their spending and their location, and cut off assistance at will, as Privacy International finds. Scotland-based Positive Action calls the results "bureaucratic slow violence".

That's the stuff I knew. I learned a lot more at this week's workshop run by Security Flows, which studies how datafication is transforming borders. The short version: refugees are extensively dataveilled by both the national authorities making life-changing decisions about them and the aid agencies supposed to be helping them, like the UN High Commissioner for Refugees (UNHCR). Recently, Human Rights Watch reported that UNHCR had broken its own policy guidelines by passing data to Myanmar that had been submitted by more than 830,000 ethnic Rohingya refugees who registered in Bangladeshi camps for the "smart" ID cards necessary to access aid and essential services.

In a 2020 study of the flow of iris scans submitted by Syrian refugees in Jordan, Aalborg associate professor Martin Lemberg-Pedersen found that private companies are increasingly involved in providing humanitarian agencies with expertise, funding, and new ideas - but that those partnerships risk turning their work into an experimental lab. He also finds that UN agencies' legal immunity coupled with the absence of common standards for data protection among NGOs and states in the global South leave gaps he dubs "loopholes of externalization" that allow the technology companies to evade accountability.

At the 2020 Computers, Privacy, and Data Protection conference a small group huddled to brainstorm about researching the "creepy" AI-related technologies the EU was funding. Border security represents a rare opportunity, invisible to most people and justified by "national security". Home Secretary Priti Patel's proposal to penalize the use of illegal routes to the UK is an example, making desperate people into criminals. People like many of the parents I knew growing up in 1960s New York.

The EU's immigration agencies are particularly obscure. I had encountered Warsaw-based Frontex, the European Border and Coast Guard Agency, which manages operational control of the Schengen Area, but not EU-LISA, which since 2012 has managed the relevant large-scale IT systems SIS II, VIS, EURODAC, and ETIAS (like the US's ESTA). Unappetizing alphabet soup whose errors few know how to challenge.

The behind-the-scenes process the workshop described sees the largest suppliers of ICT, biometrics, aerospace, and defense provide consultants who help define work plans and formulate the calls to which their companies then respond. Javier Sánchez-Monedero's 2018 paper for the Data Justice Lab begins to trace those vendors, a mix of well-known and unknown. A forthcoming follow-up focuses on the economics and lobbying behind all these databases.

In the recent paper on financing border wars, Mark Akkerman analyzes the economic interests behind border security expansion, and observes "Migration will be one of the defining human rights issues of the 21st century." We know it will increase, increasingly driven by climate change; the fires that engulfed the Canadian village of Lytton, BC on July 1 made 1,000 people homeless, and that's just the beginning.

It's easy to ignore the surveillance and control directed at refugees in the belief that they are not us. But take the UK's push to create a hostile environment by pushing border checks into schools, workplaces, and health services as your guide, and it's obvious: their surveillance will be your surveillance.

*Credit the phrase "border-industrial complex" to Luisa Izuzquiza.

Illustrations: Rohingya refugee camp in Bangladesh, 2020 (by Rocky Masum, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 18, 2021

Libera me

A man walks into his bar and finds...no one there.

OK, so the "man" was me, and the "bar" was a Reddit-descended IRC channel devoted to tennis...but the shock of emptiness was the same. Because tennis is a global sport, this channel hosts people from Syracuse NY, Britain, Indonesia, the Netherlands. There is always someone commenting, checking the weather wherever tennis is playing, checking scores, or shooting (or befriending) the channel's frequent flying ducks.

Not now: blank, empty void, like John Oliver's background for the last, no-audience year. Those eight listed users are there in nickname only.

A year ago at this time, this channel's users were comparing pandemic restrictions. In our lockdowns, I liked knowing there was always someone in another time zone to type to in real time. So: slight panic. Where *are* they?

IRC dates to the old cooperative Internet. It's a protocol, not a service, so anyone can run an IRC server, and many people do, even though the mainstream, especially the younger mainstream, long since moved on through instant messaging and on to Twitter, WhatsApp groups, Telegram channels, Slack, and Discord. All of these undoubtedly look prettier and are easier to use, but the base functionality hasn't changed all that much.

IRC's enduring appeal is that it's all plain text and therefore bandwidth-light, it can host any size of conversation from a two-person secret channel to a public channel of thousands, multiple clients are available on every platform, and it's free. Genuinely free, not pay-with-data free - no ads! Accordingly, it's still widely used in the open source community. Individual channels largely set their own standards and community norms...and their own games. Circa 2003, I played silly trivia quizzes on a TV-related channel. On this one...ducks. A sample:

゜゜・。 ​ 。・゜゜\_o​< FLAP​ FLAP!
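That plain-text simplicity is the whole protocol. By way of illustration only - the nickname and channel below are placeholders, and a real client would add TLS, error handling, and a usable interface - a few lines of Python are enough to connect to Libera Chat, answer the server's keepalive PINGs, and join a channel.

```python
# A minimal IRC client sketch: plain text over a socket, nothing more.
# The nickname and channel are placeholders; real clients add TLS and a UI.
import socket

HOST, PORT = "irc.libera.chat", 6667
NICK, CHANNEL = "netwars_demo", "#netwars-demo"

sock = socket.create_connection((HOST, PORT))
sock.sendall(f"NICK {NICK}\r\nUSER {NICK} 0 * :net.wars reader\r\n".encode())

buffer = b""
while True:
    data = sock.recv(4096)
    if not data:
        break                                     # server closed the connection
    buffer += data
    *lines, buffer = buffer.split(b"\r\n")
    for raw in lines:
        line = raw.decode(errors="replace")
        print(line)
        if line.startswith("PING"):               # keepalive: answer or get dropped
            sock.sendall(("PONG" + line[4:] + "\r\n").encode())
        elif " 001 " in line:                     # welcome message: safe to join
            sock.sendall(f"JOIN {CHANNEL}\r\n".encode())
```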

However, the fact that anyone *can* run their own server doesn't mean that everyone *does*, and like other Internet services (see also: open web, email), IRC gravitated towards larger networks that enable discovery. If you host your own server, strangers can only find it if you let them; on a large network users can search for channels, find topics they're interested in, and connect to the nearest server. While many IRC networks still survive, in recent years by far the biggest, according to Netsplit, is Freenode, largely because of its importance in providing connections and support for the open source community. Freenode is also where the missing tennis channel was hosted until about Tuesday, three days before I noticed it was silent. As you'll see in the Netsplit image above, that was when Freenode traffic plummeted, countered by a near-vertical rise in traffic on Libera Chat. That is where my channel turned out to be restored to its usual bustling self.

What happened is both complicated and pretty simple: ownership changed hands without anyone's quite realizing what it was going to mean. To say that IRC is free to use does not mean there are no costs: besides computers and bandwidth, the owners of IRC servers must defend their networks against attacks. Freenode, Wikipedia explains, began as a Linux support channel on another network run by four people, who went on to set up their own network, which eventually became the largest support network for the open source community. A series of ownership changes led from a California charity through a couple of steps to today's owner, the UK-based private company Freenode Ltd, which is owned by Andrew Lee, a technology entrepreneur and founder of the Private Internet Access VPN. No one appears to have thought much about this until last month, when 20 to 30 of the volunteers who run Freenode ("staff") resigned accusing Lee of executing a hostile takeover. Some of them promptly set up Libera as an alternative.

What makes this story about a somewhat arcane piece of the old Internet interesting - aside from the book that demands to be written about IRC's rich history, culture, and significance - is that this is the second time in the last 18 months that a significant piece of the non-profit infrastructure has been targeted for private ownership. The other was the .org top-level domain. These underpinnings need better protection.

On the day traffic plummeted, Lee made deciding to move really easy: as part of changing the network's underlying software, he decided to remove the entire database of registered names and channels - committing suicide, some called it. Because, really: if you're going to have to reregister and reconstruct everything anyway, the barrier to moving to that identical new network over there with all the familiar staff and none of the new owner mishegoss is gone. Hence the mass exodus.

This is why IRC never spawned a technology giant: no lock-in. Normally when you move a conversation it dies. In this case, the entire channel, with its scripts and games and familiar interface, could be recreated at speed and resume as if nothing had happened. All they had to do was tell people. Five minutes after I posted a plaintive query on Reddit, someone came to retrieve me.

So, now: a woman logs into an IRC channel and finds all the old regulars. A duck flaps past. I have forgotten the ".bang" command. I type ".bef" instead. The duck is saved.

Illustrations: Netsplit's graph of IRC network traffic from June 2021.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 11, 2021

The fragility of strangers

This week, someone you've never met changed the configuration settings on their individual account with a company you've never heard of and knocked out 85% of that company's network. Dumb stuff like this probably happens all the time without attracting attention, but in this case the company, Fastly, is a cloud provider that also runs an intermediary content delivery network intended to speed up Internet connections. Result: people all over the world were unable to reach myriad major Internet sites such as Amazon, Twitter, Reddit, and the Guardian for about an hour.

The proximate cause of these outages, Fastly has now told the world, was a bug that was introduced (note lack of agency) into its software code in mid-May, which lay dormant until someone did something completely normal to trigger it.

In the early days, we all assumed that as more companies came onstream and admins built experience and expertise, this sort of thing would happen less and less. But as the mad complexity of our computer systems and networks continues to increase - Internet of Things! AI! - now it's more likely that stuff like this will also increase, will be harder to debug, and will cause far more ancillary damage - and that damage will not be limited to the virtual world. A single random human, accidentally or intentionally, is now capable of creating physical-world damage at scale.

Ransomware attacks earlier this month illustrate this. Attackers' use of a single leaked password linked to a disused VPN account in the systems that run the Colonial Pipeline compromised gasoline supplies down a large swathe of the US east coast. Near-simultaneously, a ransomware attack on the world's largest meatpacker, JBS, briefly halted production, threatening food security in North America and Australia. In December, an attack on network management software supplied by the previously little-known SolarWinds compromised more than 18,000 companies and government agencies. In all these cases, random strangers reached out across the world and affected millions of personal lives by leveraging a vulnerability inside a company that is not widely known but that provides crucial services to companies we do know and use every day.

An ordinary person just trying to live their life has no defense except to have backups of everything - not just data, but service providers and suppliers. Most people either can't afford that or don't have access to alternatives, which means that precarious lives are made even more so by hidden vulnerabilities they can't assess.

An earlier example: in 2012, journalist Matt Honan's data was entirely wiped out through an attack that leveraged quirks of two unrelated services - Apple and Amazon - against each other to seize control of his email address and delete all his data. Moral: data "in the cloud" is not a backup, even if the hosting company says they keep backups. Second moral: if there is a vulnerability, someone will find it, sometimes for motives you would never guess.

If memory serves, Akamai, founded in 1998, was the first CDN. The idea was that even though the Internet means the death of distance, physics matters. Michael Lewis captured this principle in detail in his book Flash Boys, in which a handful of Wall Street types pay extraordinary amounts to shave a few split-seconds off the time it takes to make a trade by using a ruler and map to send fiber optic cables along the shortest possible route between exchanges. Just so, CDNs cache frequently accessed content on mirror servers around the world. When you call up one of those pages, it, or frequently-used parts of it in the case of dynamically assembled pages, is served up from the nearest of those servers, rather than from the distant originator. By now, there are dozens of these networks and what they do has vastly increased in sophistication, just as the web itself has. A really major outlet like Amazon will have contracts with more than one, but apparently switching from one to the other isn't always easy, and because so many outages are very short it's often easier to wait it out. Not in this case!
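The underlying idea is simple enough to sketch in a few lines: serve from a nearby copy when you have one, and pay the long-haul round trip to the origin only when you don't. The class and origin URL below are purely illustrative; real CDNs add cache purging, geographic routing, and far more careful expiry rules.

```python
# The CDN idea in miniature: serve from a nearby cache when possible, fetch
# from the distant origin only on a miss. The class and origin URL are purely
# illustrative; real CDNs add purging, geographic routing, and smarter expiry.
import time
import urllib.request

class EdgeCache:
    def __init__(self, origin_base, ttl_seconds=60):
        self.origin_base = origin_base
        self.ttl = ttl_seconds
        self.store = {}                       # path -> (expiry time, cached body)

    def get(self, path):
        expiry, body = self.store.get(path, (0.0, None))
        if time.time() < expiry:
            return body, "HIT"                # served locally, no long-haul trip
        with urllib.request.urlopen(self.origin_base + path) as resp:
            body = resp.read()
        self.store[path] = (time.time() + self.ttl, body)
        return body, "MISS"                   # first request pays the origin's latency

edge = EdgeCache("https://example.com")
for _ in range(2):
    body, status = edge.get("/")
    print(status, len(body), "bytes")
```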

At The Conversation, criminology professor David Wall also sees this outage as a sign of the future for the same reason I do: centralization and consolidation have concentrated, and continue to concentrate, the Internet onto a shrinking number of providers, each of them a potential single point of widespread failure. Yes, it's true that the Internet was built to withstand a bomb outage - but as we have been writing for 20 years now, this Internet is not that Internet. The path to today's Internet has led from the decentralized era of Usenet, IRC, and own-your-own mail server to web hosting farms to the walled gardens of Facebook, Google, and Apple, and the AI-dominating Big Nine. In 2013, Edward Snowden's revelations made plain how well that suits surveillance-hungry governments, and it's only gotten worse since, as companies seek to insert themselves into every aspect of our lives - intermediaries that bring us a raft of new insecurities that we have no time or ability to audit.

Increasing complexity, hidden intermediation, increasing numbers of interferers, and increasing scale all add up to a brittle and fragile Internet, onto which we continue to pile all our most critical services and activities. What could possibly go wrong?


Illustrations: Map of the Colonial Pipeline.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 26, 2021

Curating the curators

One of the longest-running conflicts on the Internet surrounds whether and what restrictions should be applied to the content people post. These days, those rules are known as "platform governance", and this week saw the first conference by that name. In the background, three of the big four CEOs returned to Congress for more questioning; the EU is planning the Digital Services Act; the US looks serious about antitrust action, and debate about revising Section 230 of the Communications Decency Act continues even though few understand what it does; and the UK continues to push "online harms".

The most interesting thing about the Platform Governance conference is how narrow it makes those debates look. The second-most interesting thing: it was not a law conference!

For one thing, which platforms? Twitter may be the most-studied, partly because journalists and academics use it themselves and data is more available; YouTube, Facebook, and subsidiaries WhatsApp and Instagram are the most complained-about. The discussion here included not only those three but less "platformy" things like Reddit, Tumblr, Amazon's livestreaming subsidiary Twitch, games, Roblox, India's ShareChat, labor platforms UpWork and Fiverr, edX, and even VPN apps. It's unlikely that the problems of Facebook, YouTube, and Twitter that governments obsess over are limited to them; they're just the most visible and, especially, the most *here*. Granting differences in local culture, business model, purpose, and platform design, human behavior doesn't vary that much.

For example, Jenny Domino reminded us - again - that the behaviors now sparking debates in the West are not new or unique to this part of the world. What most agree *almost* happened in the US on January 6 *actually* happened in Myanmar with far less scrutiny, despite a 2018 UN fact-finding mission that highlighted Facebook's role in spreading hate. We've heard this sort of story before, regarding Cambridge Analytica. In Myanmar and, as Sandeep Mertia said, India, the Internet of the 1990s never existed. Facebook is the only "Internet". Mertia's "next billion users" won't use email or the web; they'll go straight to WhatsApp or a local or newer equivalent, and stay there.

Mehitabel Glenhaber, whose focus was Twitch, used it to illustrate another way our usual discussions are too limited: "Moderation can escape all up and down the stack," she said. Near the bottom of the "stack" of layers of service, after the January 6 Capitol invasion Amazon denied hosting services to the right-wing chat app Parler; higher up the stack, Apple and Google removed Parler's app from their app stores. On Twitch, Glenhaber found a conflict between the site's moderation decision and the handling of that decision by two browser extensions that replace text with graphics, one of which honored the site's ruling and one of which overturned it. I had never thought of ad blockers as content moderators before, but of course they are, and few of us examine them in detail.

Separately, in a recent lecture on the impact of low-cost technical infrastructure, Cambridge security engineer Ross Anderson also brought up the importance of the power to exclude. Most often, he said, social exclusion matters more than technical exclusion; taking out a scammer's email address and disrupting all of their social network is more effective than taking down their more easily-replaced website. If we look at misinformation as a form of cybersecurity challenge - as we should - that's an important principle.

One recurring frustration is our general lack of access to the insider view of what's actually happening. Alice Marwick is finding from interviews that members of Trust and Safety teams at various companies have a better and broader view of online abuse than even those who experience it. Their data suggests that rather than being gender-specific, harassment affects all groups of people; in niche groups the forms disagreements take can be obscure to outsiders. Most important, each platform's affordances are different; you cannot generalize from a peer-to-peer site like Facebook or Twitter to Twitch or YouTube, where the site's relationships are less equal and more creator-fan.

A final limitation in how we think about platforms and abuse is that the options are so limited: a user is banned or not, content stays up or is taken down. We never think, Sarita Schoenebeck said, about other mechanisms or alternatives to criminal justice such as reparative or restorative justice. "Who has been harmed?" she asked. "What do they need? Whose obligation is it to meet that need?" And, she added later, who is in power in platform governance, and what harms have they overlooked and how?

In considering that sort of issue, Bharath Ganesh found three separate logics in his tour through platform racism and the governance of extremism: platform, social media, and free speech. Mark Zuckerberg offers a prime example of the last, the Silicon Valley libertarian insistence that the marketplace of ideas will solve any problems and that sees the First Amendment freedom of expression as an absolute right, not one that must be balanced against others - such as "freedom from fear". Following the end of the conference by watching the end of yesterday's Congressional hearings, you couldn't help thinking about that as Mark Zuckerberg embarked on yet another pile of self-serving "Congressman..." rather than the simple "yes or no" he was asked to deliver.


Illustrations: Mark Zuckerberg, testifying in Congress on March 25, 2021.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 9, 2020

Incoming

This week saw the Antitrust Subcommittee of the (US) House Judiciary Committee release the 449-page report (PDF) on its 16-month investigation into Google, Apple, Facebook, and Amazon - GAFA, as we may know them. Or, if some of the recommendations in this report get implemented, *knew* them. The committee has yet to vote on the report, and the Republican members have yet to endorse it. So this is very much a Democrats' report...but depending how things go over the next month, come January that may be sufficient to ensure action.

At BIG, Matt Stoller has posted a useful and thorough summary. As he writes, the subcommittee focused on a relatively new idea of "gatekeeper power", which each of the four exercises in its own way (app stores, maps, search, phone operating systems, personal connections), and each of which is aided by its ability to surveil the entirety of the market and undermine current and potential rivals. It also attacks the agencies tasked with enforcing the antitrust laws for permitting the companies to make some 500 acquisitions. The resulting recommendations fall into three main categories: restoring competition in the digital economy, strengthening the antitrust laws, and reviving antitrust enforcement.

In a discussion while the report was still just a rumor, a group of industry old-timers seemed dismayed at the thought of breaking up these companies. A major concern was the impact on research. The three great American corporate labs of the 1950s to 1980s were AT&T's Bell Labs, Xerox PARC, and IBM's Watson. All did basic research, developing foundational ideas for decades to come but that might never provide profits for the company itself. The 1984 AT&T breakup effectively killed Bell Labs. Xerox famously lost out on the computer market. IBM redirected its research priorities toward product development. GAFA and Microsoft operate substantial research labs today, but they are more focused on the technologies, such as AI and robotics, that they envision as their own future.

The AT&T case is especially interesting. Would the Internet have disrupted AT&T's business even without the antitrust case, or would AT&T, kept whole, have been able to use its monopoly power to block the growth of the Internet? Around the same time, European countries were deliberately encouraging competition by ending the monopolies of their legacy state telcos. Without that - or with AT&T left intact - anyone wanting to use the arriving Internet would have been paying a small fortune to the telcos just to buy a modem to access it with. Even as it was, the telcos saw Voice over IP as a threat to their lucrative long distance business, and it was only network neutrality that kept them from suppressing it. Today, Zoom-like technology might be available, but likely out of reach for most of us.

The subcommittee's enlistment of Lina Khan as counsel suggests GAFA had this date from the beginning. Khan made waves while still a law student by writing a lengthy treatise on Amazon's monopoly power and its lessons for reforming antitrust law, back when most of us still thought Amazon was largely benign. One of her major points was that much opposition to antitrust enforcement in the technology industry is based on the idea that every large company is always precariously balanced because at any time, a couple of guys in a garage could be inventing the technology that will make them obsolete. Khan argued that this was no longer true, partly because those two garage guys were enabled by antitrust enforcement that largely ceased after the 1980s, and partly because GAFA are so powerful that few start-ups can find funding to compete with them directly - and GAFA are rich enough to buy and absorb, or shut down, anyone who tries. The report, like the hearings, notes the fear of reprisal among business owners asked for their experiences, as well as the disdain with which these companies - particularly Facebook - have treated regulators. All four companies have been repeat offenders, apparently not inspired to change their behavior by even the largest fines.

Stoller thinks that we may now see real action because our norms have shifted. In 2011, admiration for monopolists was so widespread, he writes, that Occupy Wall Street honored Steve Jobs' death, whereas today US and EU politicians of all stripes are targeting monopoly power and intermediary liability. Stoller doesn't speculate about causes, but we can think of several: the rapid post-2010 escalation of social media and smartphones; Snowden's 2013 revelations; the 2016 Cambridge Analytica scandal; and the widespread recognition that, as Kashmir Hill found, it's incredibly difficult to extricate yourself from these systems once you are embedded in them. Other small things have added up, too, such as Mark Zuckerberg's refusal to appear in front of a grand committee assembled by nine nations.

Put more simply, ten years ago GAFA and other platforms and monopolists made the economy look good. Today, the costs they impose on the rest of society - precarious employment, lost privacy, a badly damaged media ecosystem, and the difficulty of containing not just misinformation but anti-science - are clearly visible. This, too, is a trend that the pandemic has accelerated and exposed. When the cost of your doing business is measured in human deaths, people start paying attention pretty quickly. You should have paid your taxes, guys.


Illustrations: The fifth element breaks up the approaching evil in The Fifth Element.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 4, 2020

The Internet as we know it

It's another of those moments when people to whom the Internet is still a distinctive and beloved medium fret that it's about to be violently changed into everything they were glad it wasn't when it began. That this group is a minority is in itself a sign. Circa 1994, almost every Internet user was its defender. Today, for most people, the Internet just *is* and ever has been - until someone comes along and wants to delete their favorite service.

Fears of splintering the Internet are as old as the network itself. Different people have focused on different mechanisms: TV and radio-style corporate takeover (see for example Robert McChesney's work); incompatible censorship and data protection regimes; technical incompatibilities born of corporate overreach, and so on. In 2013, five seemed significant: copyright, localizing data storage (data protection), censorship, losing network neutrality, and splitting the addressing system.

Then, the biggest threats appeared to be structural censorship and losing network neutrality. Both are still growing. In 2019, Access Now says, 213 Internet shutdowns in 33 countries collectively disrupted 1,706 days of Internet access. No one imagined this in the 1990s, when all countries vied to reap the benefits of getting their citizens online. More conceivable were government regulation, shifting technological standards, corporate ownership, copyright laws, and unequal access...but we never expected the impact of the eventual convergence with the mobile world, a clash of cultures that got serious after 2010, when social media and smartphones began mutually supercharging.

A couple of weeks ago, James Ball introduced a new threat, writing disapprovingly about US president Donald Trump's executive order declaring the video-sharing app TikTok a national emergency. Ball rightly calls this ban "generational vandalism", but then writes that banning an app solely because of the nationality of its owner "could be an existential threat to the Internet as we know it".

If that's true, then the Internet is already not "the Internet as we know it". So much depends on when your ideas of "the Internet" were formed and where you live. As Ball himself acknowledges in his new book, The System: Who Owns the Internet and How It Owns Us, in some countries Facebook is synonymous with the Internet because of the zero-rating deals the company has struck with mobile phone operators. In China, "the Internet", contrary to what most people believed was possible in the 1990s, is a giant, firewalled, nationally controlled space. TikTok, as primarily a mobile phone app, lives in a highly curated "the Internet" of app stores. Finally, even though "the Internet" in the 1990s sense is still with us in that people can still build their new ideas, most people's "the Internet" is now confined to the same few sites that exercise extraordinary control over what is read, seen, and heard.

The Australian Competition and Consumer Commission's new draft News Media Bargaining Code provides an example. It requires Google and Facebook (and, eventually, others) to negotiate in good faith to pay news media companies for use of their content when users share links and snippets. Unlike Spain's previous similar attempt, Google can't escape by shutting down its news service because it also serves up news through its search engine and YouTube. Facebook has said it will block Australian users from sharing local or international news on Facebook and Instagram if the code becomes mandatory. But, as Alex Hern writes, the problem is that "One of the big ways that Facebook and Google have been bad for the news industry has been by becoming indispensable to the news industry". Australia can push this code into force, but when it does Google won't pay publishers *and* publishers will lose most of their traffic, exactly as happened in Spain and Germany. But misinformation will flourish.

This is still an upper network layer problem, albeit simplified by corporate monopoly. On the 1995-2010 web, there would have been too many site owners to contend with, just as banning apps (see also India) is vastly simplified by needing to negotiate with just two app store owners. Censoring the open Internet required China to build a national firewall and hire maintainers while millions of new sites and services arrived every day. When they started, no one believed it could even be done.

The mobile world is not and never has been "the Internet as we know it", built to facilitate openness for scientists. Telephone companies have always been happiest with controlled systems and walled gardens, and before 2006, manufacturers like Nokia, Motorola, and Psion had to tailor their offerings to telco specifications. The iPhone didn't just change the design and capabilities of the slab in your hand; it also changed the makeup and power structures of the industry as profoundly as the PC had changed computing before it.

But these are still upper layers. Far more alarming, as Milton Mueller writes at the Internet Governance Project, is Trump's policy of excluding Chinese businesses from Internet infrastructure - and China's ideas for "new IP". This is a crucial threat to the interoperable bedrock of "the network of all networks". As the Internet Society explains, it is that cooperative architecture "with no central authority" that made the Internet so successful. This is the first principle that built the Internet as we know it.


Illustrations: Map of the Internet circa 2005 (via The Opte Project at Wikimedia Commons).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 21, 2020

The end of choice

At the Congressional hearings a few weeks ago, all four CEOs who appeared - Mark Zuckerberg (Facebook), Jeff Bezos (Amazon), Sundar Pichai (Google), and Tim Cook (Apple) - said essentially the same thing in their opening statements: they have lots of competitors, they have enabled millions of people to build small businesses on their platforms, and they do not have monopoly power. The first of these is partly true, the second is true, and the third...well, it depends which country you're talking about, how you look at it, and what you think they're competing for. In some countries outside the US, for example, Facebook *is* the Internet because of its Free Basics program.

In the weeks since: Google still intends to buy Fitbit, which for $2.1 billion would give it access to a huge pile of health-data-that's-not-categorized-as-health data; both the US and the EU are investigating.

In California, an appeals court has found that Amazon can be liable for defective products sold by third-party sellers.

Meanwhile, Apple, which this week became the first company in history to hit a $2 trillion market cap, deleted Epic's hugely popular game Fortnite from the App Store because its latest version breaks Apple's rules by allowing players to bypass the Apple payment system (and 30% commission) to pay Epic directly for in-game purchases. In response, Epic has filed suit - and, writes Matt Stoller, if a company with Epic's clout can't force Apple to negotiate terms, who can? Stoller describes the Apple-Epic suit as certainly about money but even more about "the right way to run an economy". Stoller goes on to find this thread running through other current disputes, and believes this kind of debate leads to real change.

At Stratechery Ben Thompson argues that the Democrats didn't prove their case. Most interesting of the responses to the hearings, though, is an essay by Benedict Evans, who argues that breaking up the platforms will achieve nothing. Instead, he says, citing relevant efforts by the EU and UK competition authorities, better to dig into how the platforms operate and write rules to limit the potential for abuse. I like this idea, in part because it is genuinely difficult to see how break-ups would work. However, the key issue is enforcement; the EU made not merging databases a condition of Facebook's acquisition of WhatsApp - and three years later Facebook decided to do it anyway. The resulting fine of €110 million was less than 1% of the $19 billion purchase price.

In 1998, when the Evil Borg of Tech was Microsoft, it, too, was the subject of antitrust actions. Echoing the 1984 breakup of AT&T, people speculated about creating "Baby Bills", either by splitting the company between operating systems and productivity software or by splitting it into clones and letting them compete with each other. Instead, in 2004 the EU ordered Microsoft to unbundle its media player and, in 2009, Internet Explorer to avoid new fines. The company changed, but so did the world around it: the web, online services, free software, smartphones, and social media all made Microsoft less significant. Since 2010, the landscape has changed again. As the legal scholar Lina Khan wrote in 2017, two guys in a garage can no longer knock off the current crop by creating the next new big technology.

Today's expanding hybrid cyber-physical systems will entrench choices none of us made into infrastructure none of us can avoid. In 2017, for example, San Diego began installing "smart" streetlights intended to do all sorts of good things: drive down energy costs, monitor air pollution, point out empty parking spaces, and so on. The city also thought it might derive some extra income from allowing third parties to run apps on its streetlight network. Instead, as Tekla S. Perry reported at IEEE Spectrum in January, to date the system's sole use has been to provide video footage to law enforcement, which has taken advantage of it to solve serious crimes but also to investigate vandalism and illegal dumping.

In the UK, private developers and police have been rolling out automated facial recognition without notifying the public; this week, in a case brought by Liberty, the UK Court of Appeal ruled that its use breaches privacy rights and data protection and equality laws. This morning, I see that, undeterred, Lincolnshire Police will trial a facial recognition system that is supposed to be able to detect people's moods.

The issue of monopoly power is important. But even if we find a way to ensure fair competition we won't have solved a bigger problem that is taking shape: individuals increasingly have no choice about whether to participate in the world these companies are building. For decades we have had no choice about being credit-scored. Three years ago, despite the fatuous comments of senior politicians, it was obvious that the only people who can opt out of using the Internet are those who are economically inactive or highly privileged; last year journalist Kashmir Hill proved the difficulty of doing without GAFA. The pandemic response is making opting out either antisocial, a health risk, or both. And increasingly, going out of your house means being captured on video and analyzed whether you like it or not. No amount of controlling individual technology companies will solve this loss of agency. That is up to us.

Illustrations: Orwell's house at 22 Portobello Road, London, complete with CCTV camera.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 14, 2020

Revenge of the browser wars

This week, the Mozilla Foundation announced major changes. As is the new norm these days, Mozilla is responding to a problem that existed BCV (before coronavirus) but has been exposed, accelerated, and compounded by the pandemic. But the response sounds grim: approximately a quarter of the workforce to be laid off and a warning that the company needs to find new business models. Just a couple of numbers explain the backdrop: according to Statcounter, Firefox's second-position share of desktop/laptop browser usage has dropped to 8.61% behind Chrome at 69.55%. On mobile and tablets, where the iPhone's Safari takes a large bite out of Chrome's share, Firefox doesn't even crack 1%. You might try to trumpify those percentages by suggesting it's a smaller share but a larger user population, but unfortunately no; at CNet, Stephen Shankland reports that usage is shrinking in raw numbers, too, down to 210 million monthly users from 300 million in 2017.

Yes, I am one of those users.

In its 2018 annual report and 2018 financial statement (PDF), Mozilla explains that most of its annual income - $430 million - comes from royalty deals with search engines, which pay Firefox to make them the default (users can change this at will). The default varies across countries: Baidu (China), Yandex (Russia, Belarus, Kazakhstan, Turkey, and Ukraine), and Google everywhere else, including the US and Canada. It derives a relatively small amount - $20 million or so in total - of additional income from subscriptions, advertising, donations and dividends and interest on the investments where it's parked its capital.

The pandemic has of course messed up everyone's financial projections. In the end, though, the underlying problem is that long-term drop in users; fewer users must eventually generate fewer search queries on which to collect royalties. Presumably this lies behind Mozilla's acknowledgment that it needs to find new ways to support itself - which, the announcement also makes clear, it has so far struggled to do.

The problem for the rest of us is that the Internet needs Firefox - or if not Firefox itself, another open source browser with sufficiently significant clout to keep the commercial browsers and their owners honest. At the moment, Mozilla and Firefox are the only ones in a position to lead that effort, and it's hard to imagine a viable replacement.

As so often, the roots of the present situation go back to 1995, when - no Google then and Apple in its pre-Jobs-return state - the browser kings were Microsoft's Internet Explorer and Netscape Navigator, both seeking world wide web domination. Netscape's 1995 IPO is widely considered the kickoff for the dot-com boom. By 1999, Microsoft was winning and then high-flying AOL was buying Netscape. It was all too easy to imagine both building out proprietary protocols that only their browsers could read, dividing the net up into incompatible walled gardens. The first versions of what became Firefox were, literally, built out of a fork of Netscape whose source code was released before the AOL acquisition.

The players have changed and the commercial web has grown explosively, but the danger of slowly turning the web into a proprietary system has not. Statcounter has Google (Chrome) and Apple (Safari) as the two most significant players, followed by Samsung Internet (on mobile) and Microsoft's Edge (on desktop), with a long tail of others including Opera (which pioneered many now-common features), Vivaldi (built by the Opera team after Telenor sold it to a Chinese consortium), and Brave, which markets itself as a privacy browser. All these browsers have their devoted fans, but they are only viable because websites observe open standards. If Mozilla can't find a way to reverse Firefox's user base shrinkage, web access will be dominated by two of the giant companies that two weeks ago were called in to the US Congress to answer questions about monopoly power. Browsers are a chokepoint they can control. I'd love to say the hearings might have given them pause, but two weeks later Google is still buying Fitbit, Apple and Google have removed Fortnite from their app stores for violating their in-app payment rules, and Facebook has launched TikTok clone Instagram Reels.

There is, at the moment, no suggestion that either Google or Apple wants to abuse its dominance in browser usage. If they're smart, they'll remember the many benefits of the standards-based approach that built the web. They may also remember that in 2009 the threat of EU fines led Microsoft to unbundle its Internet Explorer browser from Windows.

The difficulty of finding a viable business model for a piece of software that millions of people use is one of the hidden costs of the Internet as we know it. No one has ever been able to persuade large numbers of users to pay for a web browser; Opera tried in the late 1990s, and wound up switching first to advertising sponsorship and then, like Mozilla, to a contract with Google.

Today, Catalin Cimpanu reports at ZDNet that Google and Mozilla will extend their deal until 2023, providing Mozilla with perhaps $400 million to $500 million a year. Assuming it goes through as planned, it's a reprieve - but it's not a solution - as Mozilla, fortunately, seems to know.

Illustrations: Netscape 1.0, in 1994 (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 31, 2020

Driving while invisible

The point is not whether it's ludicrous but whether it breaks the law.

Until Hannah Smethurst began speaking at this week's gikii event - the year's chance to mix law, digital rights, and popular culture - I had not realized just how many invisible vehicles there are in our books and films. A brief trawl turns up: Wonder Woman's invisible jet, Harry Potter's invisibility cloak and other invisibility devices, and James Bond's invisible Aston Martin. Do not trouble me with your petty complaints about physics. This is about the law.

Every gikii (see here for links to writeups of previous years) ranges from deeply serious-with-a-twist to silly-with-an-insightful undercurrent. This year's papers included the need for a fundamental rethink of how we regulate power (Michael Veale), the English* "bubble" law that effectively granted flatmates permanent veto power over each other's choice of sex partner (gikii founder Lilian Edwards), and the mistaken-identity frustrations of having early on used your very common name as your Gmail address (Jat Singh).

In this context, Smethurst's paper is therefore business as usual. As she explained, there is nothing in highway legislation that requires your car to be visible. The same is not true of number plates, which the law says must be visible at all times. But can you enforce it? If you can't see the car, how do you know you can't see the number plate? More uncertain is the highway code's requirement to indicate braking and turns when people don't know you're there; Smethurst suggested that a good lawyer could argue successfully that turning on the lights unexpectedly would dazzle someone. No, she said, the main difficulty is the dangerous driving laws. Well, that and the difficulty of getting insurance to cover the many accidents when people - pedestrians, cyclists, other cars - collide with it.

This raised the possibility of "invisibility lanes", an idea that seems like it should be the premise for a sequel to Death Race 2000. My overall conclusion: invisibility is like online anonymity. People want it for themselves, but not for other people - at least, not for other people they don't trust to behave well. If you want an invisible car so you can drive 100 miles an hour with impunity, I suggest a) you probably aren't safe to have one, and b) try driving across Kansas.

We then segued into the really important question: if you're riding an invisible bike, are *you* visible? (General consensus: yes, because you're not enclosed.)

On a more serious note, people have a tendency to laugh nervously when you mention that numerous jurisdictions are beginning to analyze sewage for traces of coronavirus. Actually, wastewater epidemiology, as this particular public health measure is known, is not a new surveillance idea born of just this pandemic, though it does not go all the way back to John Snow and the Broadwick Street pump. Instead, Snow plotted known cases on a map, and spotted the pump as the source of contagion when they formed a circle around it. Still, epidemiology did start with sewage.

In the decades since wastewater epidemiology was developed, some of its uses have definitely had an adversarial edge, such as establishing the level of abuse of various drugs and doping agents or the prevalence of particular diseases in a given area. The goal, however, is not supposed to be trapping individuals; instead it's to provide population-wide data. Because samples are processed at the treatment plant along with everyone else's, there's a reasonable case to be made that the system is privacy-preserving; even though you could analyze samples for an individual's DNA and exact microbiome, matching any particular sample to its owner seems unlikely.

However, Reuben Binns argued, that doesn't mean there are no privacy implications. Like anything segmented by postcode, the catchment areas defined for such systems are likely to vary substantially in the number of households and individuals they contain, and a lot may depend on where you put the collection points. This isn't so much an issue for the present purpose, which is providing an early-warning system for coronavirus outbreaks, but will be later, when the system is in place and people want to use it for other things. A small neighborhood with a noticeable concentration of illegal drugs - or a small section of an Olympic athletes village with traces of doping agents above a particular threshold - could easily find itself a frequent target of more invasive searches and investigations. Also, unless you have your own septic field, there is no opt-out.

Binns added this unpleasant prospect: even if this system is well-intentioned and mostly harmless, it becomes part of a larger "surveillant assemblage" whose purpose is fundamentally discriminatory: "to create distinctions and hierarchies in populations to treat them differently," as he put it. The direction we're going, eventually every part of our infrastructure will be a data source, for our own good.

This was also the point of Veale's paper: we need to stop focusing primarily on protecting privacy by regulating the use and collection of data, and start paying attention to the infrastructure. A large platform can throw away the data and still have the models and insights that data created - and the exceptional computational power to make use of it. All that infrastructure - there's your invisible car.

Illustrations: James Bond's invisible car (from Die Another Day).

*Correction: I had incorrectly identified this law as Scottish.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 17, 2020

Software inside

In 2011, Netscape creator-turned-venture capitalist Marc Andreessen argued that software is eating the world. Andreessen focused on a rather narrow meaning of "world" - financial value. Amazon ate Borders' lunch; software fuels the success of Wal-Mart, FedEx, airlines, and financial services. Like that.

There is, however, a more interesting sense in which software is eating the world, and that's its takeover of what we think of as "hardware". A friend tells me, for example, that part of the pleasure he gets from driving a Tesla is that its periodic software updates keep the car feeling new, so he never looks enviously at the features on later models. Still, these updates do at least sound like traditional software. The last update of 2019, for example, included improved driver visualization, a "Camp Mode" to make the car more comfortable to spend the night in, and other interface improvements. I assume something as ordinarily useful as map updates is too trivial to mention.

This means a car is now really a fancy interconnected series of dozens of computer networks whose output happens to be making a large, heavy object move on wheels. Even so, I don't have trouble grasping the whole thing, not really. It's a control system.

Much more confounding was the time, in late 1993, when I visited Demon Internet, then a startup founded to offer Internet access to UK consumers. Like quite a few others, I was having trouble getting connected via Demon's adapted version of KA9Q, connection software written for packet radio. This was my first puzzlement: how could software for "packet radio" (whatever that was) do anything on a computer? That was nothing to my confusion when Demon staffer Mark Turner explained to me that the computer could parse the stream of information coming into it and direct the results to different applications simultaneously. At that point, I'd only ever used online services where you could only do one thing at a time, just as you could only make one phone call at a time. I remember finding the idea of one data stream servicing many applications at once really difficult to grasp. How did it know what went where?

That is software, and it's what happened in the shift from legacy phone networks' circuit switching to Internet-style packet switching.
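The answer, for the record, is mostly port numbers. Here is a minimal sketch - my own illustration, not anything Demon supplied - of the demultiplexing idea: the TCP/IP stack uses the destination port on each incoming connection to decide which socket, and therefore which program, gets the data, so one wire can feed many applications at once. The port numbers and service labels below are hypothetical.

import selectors
import socket

sel = selectors.DefaultSelector()

def make_listener(port, label):
    # One socket per "application"; the operating system routes incoming
    # connections to the right one by destination port number.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("127.0.0.1", port))
    s.listen()
    s.setblocking(False)
    sel.register(s, selectors.EVENT_READ, data=label)

make_listener(8001, "mail-like service")  # hypothetical port choices
make_listener(8002, "web-like service")

print("Listening on 127.0.0.1 ports 8001 and 8002; Ctrl-C to stop")
while True:
    for key, _ in sel.select():
        conn, addr = key.fileobj.accept()
        # By the time we see this, the demultiplexing has already happened.
        print(addr, "reached the", key.data)
        conn.close()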

I had a similar moment of surreality when first told about software-defined radio. A radio was a *thing*. How could it be software? By then I knew about spread spectrum, invented by the actress Hedy Lamarr and pianist George Antheil to protect wartime military conversations from eavesdropping, so it shouldn't have seemed as weird as it did.

And so to this week, when, at the first PhD Cyber Security Winter School, I discovered programmable - that is, software-defined - networks. Of course networks are controlled by software already, but at the physical layer it's cables, switches, and routers. If one of those specialized devices needs to be reconfigured you have to do it locally, device by device. Now, the idea is more generic hardware that can be reprogrammed on the fly, enabling remote - and more centralized and larger-scale - control. Security people like the idea that a network can both spot and harden itself against malicious traffic much faster. I can't help being suspicious that this new world will help attackers, too, first by providing a central target to attack, and second because it will be vastly more complex. Authentication and encryption will be crucial in an environment where a malformed or malicious data packet doesn't just pose a threat to the end user who receives it but can reprogram the network. Helpfully, the NSA has thought about this in more depth and greater detail. They do see centralization as a risk, and recommend a series of measures for protecting the controller; they also highlight the problems increased complexity brings.

As the workshop leader said, this is enough of a trend for Cisco and Intel to embrace it; six months ago, Intel paid $5 billion for Barefoot Networks, the creator of P4, the language I saw demonstrated for programming these things.
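For the flavor of it, here is a toy match-action table - my own sketch in Python, not P4 and not anything shown at the workshop - illustrating the split that software-defined networking relies on: a central controller installs rules, and the switch does nothing per packet except look it up against them. All the rule contents are invented for the example.

from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Match:
    dst_ip: Optional[str] = None    # None acts as a wildcard
    dst_port: Optional[int] = None

def install_rule(table, match, action):
    # What a centralized controller would push to every switch it manages.
    table.append((match, action))

def forward(table, dst_ip, dst_port):
    # What the switch does per packet: first matching rule wins.
    for match, action in table:
        if match.dst_ip in (None, dst_ip) and match.dst_port in (None, dst_port):
            return action
    return "drop"  # default action for unmatched traffic

table = []
install_rule(table, Match(dst_port=25), "drop")              # block SMTP everywhere
install_rule(table, Match(dst_ip="10.0.0.9"), "out-port 3")  # steer one host

print(forward(table, "10.0.0.9", 80))   # -> "out-port 3"
print(forward(table, "10.0.0.7", 25))   # -> "drop"

Reprogramming the network then amounts to pushing new rules to the table from afar - which is exactly why a compromised controller, or a packet that can alter the rules, is so worrying.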

At this point I began wondering if this doesn't up-end the entire design philosophy of the Internet, which was to push all the intelligence out to the edges. The beginnings of this new paradigm, active networking, appeared around the early 2000s. The computer science literature - for example, Activating Networks (PDF), by Jonathan M. Smith, Kenneth L. Calvert, Sandra L. Murphy, Hilarie K. Orman, and Larry L. Peterson, and Active Networking: One View of the Past, Present, and Future (PDF), by Smith and Scott M. Nettles - plots out the problems of security and complexity in detail, and considers the Internet and interoperability issues. The Road to SDN: An Intellectual History of Programmable Networks, by Nick Feamster, Jennifer Rexford, and Ellen Zegura, recapitulates the history to date.

My real question, however, is one I suspect has received less consideration: will these software-defined networks make surveillance and censorship easier or harder? Will they have an effect on the accessibility of Internet freedoms? Are there design considerations we should know about? These seem like reasonable questions to ask as this future hurtles toward us.

Illustrations: Hedy Lamarr, in The Conspirators, 1944.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

December 6, 2019

The dot-org of our discontents

On so many sides, in so many ways, the sale of .org is a tangled thicket of stakeholder unhappiness. There seem to be six major categories of complaints: sentimental, financial, practical, structural, ethical, and paranoid.

First, some background. The domain name system of which .org is a part was devised in the mid-1980s by Paul Mockapetris. Even though no rules limit registrations in .org, the idea that it is used by non-commercial organizations persists. In 1997, as the Internet was being commercialized, the original one-man manager, Jon Postel, was replaced by the Internet Corporation for Assigned Names and Numbers, which manages the network of registries (each manages one top-level domain) and registrars (which sell domain names inside those TLDs). At the same time, the gTLDs were opened up to competition; the change led in 2002 to .org's being handed off to the Internet Society-created, non-profit Public Interest Registry. Six months ago, the Internet Society announced that PIR would drop its non-profit status; two weeks ago came the sale to newly-formed Ethos Capital. Suspicious minds shouted betrayal, and the Nonprofit Technology Enterprise Network promptly set up SaveDotOrg. Its nearly 13,000 signatures include hundreds of NGOs and Internet organizations.

At The Register, Kieren McCarthy - the leading journalistic expert on the DNS and its governance - laid out what little was known about the sale and the resulting conflict. He followed up with the silence that met the complaints; the discontent on view at the Internet Governance Forum; an interview with ISOC CEO Andrew Sullivan, who said "Most people don't care one way or another" and that only a court order would stop the sale; and finally the news that the opportunity to sell PIR came out of the blue. Late on, the price emerged: $1.14 billion. Internet Society says it will use the money to further its mission to promote the good of the Internet community at large.

So, to the six categories. The Old Net remains sentimentally attached to the idea of .org as a home for non-profits such as Internet infrastructure managers IETF and ISOC, human rights NGOs the ACLU and Amnesty International, and worldwide resources Wikipedia, the Internet Archive, and the Gutenberg Project. But, as the New York Times comments, this image of .org is deceptive; it's also the home of the commercial entity Craigslist and dozens of astroturf fronts for corporate interests.

At Techdirt, Mike Masnick traces the ethics, finding insider connections. The slightly paranoid concerns surround the potential for the registry owner to engage in censorship. The practical issue is the removal of the price cap; an organization with a long-time Internet presence can in theory simply register a new domain name if the existing one becomes too expensive, but in practice the switching costs are substantial, and we all pay them as links break all over the web. A site like Public Domain Review could be held to ransom by rapidly rising prices. Finally, the structural concern is that yet another piece of the Internet infrastructure is being sold off to centralized private interests who will conveniently forget the promises they make at the time of the sale.

The most interesting is financial: New Zealand fund manager Lance Wiggs thinks ISOC is undercharging by $1 billion; he also thinks they should publish far more detail.

Most of Wiggs' questions remained unanswered after yesterday evening's community call organized by SaveDotOrg and NTEN. The question-answering session included Sullivan, Ethos CEO Erik Brooks, Ethos Chief Purpose Officer Nora Abusitta-Ouri, EFF attorneys Mitch Stoltz and Cara Gagliano, PIR CEO John Nevin, and ISOC Ireland head Brandt Dainow.

Sullivan said both that there were other suitors for PIR and that because speed was of the essence to close the deal it was necessary to negotiate under non-disclosure agreements. The EFFers were skeptical. The Ethos Capital folks sprayed reassurances and promises of community outreach, but were evasive about nailing these down with binding, legally enforceable contracts. Least impressed was Dainow, which shows there's disagreement within ISOC itself. I'm no financier, but I do know this: the more you're pressured to close a deal quickly the more you should distrust the terms.

To some extent, all of this is a consequence of the fundamental problem with the DNS: we have no consensus on what it's for. This was already obvious in 1997, when I first wrote about it. Then, as now, insiders were enraging the broader community by making deals among themselves without wider consultation - a habit agreed principles and purposes could constrain. In 2004 I asked, "Does it follow geography, trademarks and company names, or types of users? Is it a directory or a marketing construct? Should names be automatically guessable? What about, instead, dividing up the Net by language? Or registered company names, like .plc.uk and .ltd.uk? Or content, like .xxx or .kids? Why not have an electronic commerce space where retailers register according to the areas they deliver to?" None of these is more obviously right than another, but as long as there are no agreed answers, disputes like these will keep emerging.


Illustrations: Copernican map of the universe (from the Stanford collection, via Public Domain Review).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 4, 2019

Digital London

Anyone studying my travel patterns on the London Underground will encounter a conundrum: what makes a person undertake, once or twice a week, a one-way journey into the center of town? How do they get home?

For most people, it would remain, like Sudoku, a pointless puzzle. For Transport for London, the question is of greater importance: what does it plan for? On Monday, at an event run by the Greater London Authority intelligence unit to showcase its digital tools, a TfL data analyst expressed just this sort of conundrum. I had asked, "What's the hardest problem you're working on?" And he said, "Understanding human behavior." Data shows what happened. It gives no clue as to *why* unless you can map the data to other clue-bearing streams. If you can match the dates, times, and weather reports, the flood onto and into buses, trains, tubes, and taxis may be clearly understood as: it was raining. But beyond that...people are weird.

And they're numerous. As London's chief digital officer, Theo Blackwell, said, London is now the largest it's ever been, only recently passing the peak it reached in 1939. Seventy-odd years of peace and improving public health have enabled uninterrupted growth to 9 million; five or six years hence it's expected to reach 11 million, a 20+% rise that will challenge the capacity of housing and transport, and exacerbate the impact of climate change. Think water: London is drier than you'd expect.

A fellow attendee summed this up this way: "London has put on an entire Birmingham in size in the last ten years." Two million more is approaching the size of greater Manchester. London, in a term used by Greenwood Strategic Advisors' Craig Stephens, is an "attractor city". People don't need a reason to come here, as they do when moving to smaller places. As a result, tracking and predicting migration is one of the thornier problems.

TfL's planning problems are, therefore, a subset of the greater range of conundrums facing London, some of them fueled by the length of the city's history. David Christie, TfL's demand forecasting and analytics manager, commented, for example, that land use was a challenge because there hasn't been an integrated system to track it. Mike Bracken, one of the founders of the Government Digital Service, reminded the audience that legacy systems and vendor lock-in are keeping the UK lagging well behind countries like Peru and Madagascar, which can deliver new services in 12 weeks. "We need to hurry up," he said, "because our mental model of where we stand in relationship to other nations is not going to stand for much longer." He had a tip for making things work: "Don't talk about blockchain. Just fix your website."

Christie's group does the technical work of modeling for TfL. In the 1970s, he said, his department would prepare an input file and send it off to the Driver and Vehicle Licensing Agency's computer and they'd get back results two months later. He still complains that run times for the department's models are an issue, but the existing model has been the basis for current schemes such as Crossrail 1 and the Northern Line extension. What made this model - the London Simulator - sound particularly interesting was Christie's answer to the question of how they validate the data. "The first requirement of the model is to independently recreate the history." Instead of validating the data, they validate the model by looking to see what it gets wrong about the last 25 years.

Major disruptors TfL expects include increasingly flexible working patterns, autonomous vehicles, and more homes. Christie didn't mention it, but I imagine Uber's arrival was an unpredictable external black swan event, abruptly increasing congestion and disrupting modal share. But is it any part of why car journeys per day have dropped 8% since 2000?

Refreshingly, the discussion focused on using technology in effective ways to achieve widely-held public goals, rather than biased black-box algorithms and automated surveillance, or the empty solutionist landscapes Ben Green objects to in The Smart-Enough City. Instead, they were talking about things like utilities sharing information about which roads they need to dig up when, intended to be a win for residents, who welcome less disruption, and for companies, which appreciate saving some of the expense. When, in a final panel, speakers were asked to name significant challenges they'd like to solve, they didn't talk about technology. Instead, Erika Lewis, the deputy director for data policy and strategy at the Department for Culture, Media, and Sport, said she wanted to improve how the local and city governments interface with central government and design services from the ground up around the potential uses for the data. "We missed the boat on smart meters," she said, "but we could do it with self-driving cars."

Similarly, Sarah Mulley, GLA's executive director for communities and intelligence, said engaging with civil society and the informal voluntary sector was a challenge she wanted to solve. "[They have] a lot to say, but there aren't ways to connect into it." Blackwell had the last word. "In certain areas, data has been used in a quite brutal way," he said. "How to gain trust is a difficult leadership challenge for cities."


Illustrations: London in 2017, looking south past London Bridge toward Southwark Cathedral and the Shard from the top of the Walkie-Talkie building.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 24, 2019

Name change

In 2014, six months after the Snowden revelations, engineers began discussing how to harden the Internet against passive pervasive surveillance. Among the results have been efforts like Let's Encrypt, EFF's Privacy Badger, and HTTPS Everywhere. Real inroads have been made into closing some of the Internet's affordances for surveillance and improving security for everyone.

Arguably the biggest remaining serious hole is the domain name system, which was created in 1983. The DNS's historical importance is widely underrated; it was essential in making email and the web usable enough for mass adoption before search engines. Then it stagnated. Today, this crucial piece of Internet infrastructure still behaves as if everyone on the Internet can trust each other. We know the Internet doesn't live there any more; in February the Internet Corporation for Assigned Names and Numbers, which manages the DNS, warned of large-scale spoofing and hijacking attacks. The NSA is known to have exploited it, too.

The problem is the unprotected channel between the computer into which we type humanly-readable names such as pelicancrossing.net and the computers that translate those names into numbered addresses the Internet's routers understand, such as 216.92.220.214. The fact that routers all trust each other is routinely exploited for the captive portals we often see when we connect to public wi-fi systems. These are the pages that universities, cafes, and hotels set up to redirect Internet-bound traffic to their own page so they can force us to log in, pay for access, or accept terms and conditions. Most of us barely think about it, but old-timers and security people see it as a technical abuse of the system.

Several hijacking incidents raised awareness of DNS's vulnerability as long ago as 1998, when security researchers Matt Blaze and Steve Bellovin discussed it at length at Computers, Freedom, and Privacy. Twenty-one years on, there have been numerous proposals for securing the DNS, most notably DNSSEC, which offers an upwards chain of authentication. However, while DNSSEC solves validation, it still leaves the connection open to logging and passive surveillance, and the difficulty of implementing it has meant that since 2010, when ICANN signed the global DNS root, uptake has barely reached 14% worldwide.

In 2018, the IETF adopted DNS-over-HTTPS as a standard. Essentially, this sends DNS requests over the same secure channel browsers use to visit websites. Adoption is expected to proceed rapidly because it's being backed by Mozilla, Google, and Cloudflare, who jointly intend to turn it on by default in Chrome and Firefox. In a public discussion at this week's Internet Service Providers Association conference, a fellow panelist suggested that moving DNS queries to the application level opens up the possibility that two different apps on the same device might use different DNS resolvers - and get different responses to the same domain name.
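For the curious, this is roughly what a DoH lookup looks like at the application level - a minimal sketch using Cloudflare's public JSON endpoint as documented at the time of writing (the URL and JSON response format are that resolver's choices rather than part of the standard, and may change):

import json
import urllib.request

def doh_lookup(name, rrtype="A"):
    # The query travels as ordinary HTTPS, so a network in the middle can
    # neither read it nor selectively redirect it the way plain DNS allows.
    url = f"https://cloudflare-dns.com/dns-query?name={name}&type={rrtype}"
    req = urllib.request.Request(url, headers={"Accept": "application/dns-json"})
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
    return [record["data"] for record in reply.get("Answer", [])]

print(doh_lookup("pelicancrossing.net"))  # e.g. ['216.92.220.214']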

Britain's first public notice of DoH came a couple of weeks ago in the Sunday Times, which billed it as Warning over Google Chrome's new threat to children. This is a wild overstatement, but it's not entirely false: DoH will allow users to bypass the parts of Britain's filtering system that depend on hijacking DNS requests to divert visitors to blank pages or warnings. An engineer would probably argue that if Britain's many-faceted filtering system is affected it's because the system relies on workarounds that shouldn't have existed in the first place. In addition, because DoH sends DNS requests over web connections, the traffic can't be logged or distinguished from the mass of web traffic, so it will also render moot some of the UK's (and EU's) data retention rules.

For similar reasons, DoH will break captive portals in unfriendly ways. A browser with DoH turned on by default will ignore the hotel/cafe/university settings and instead direct DNS queries via an encrypted channel to whatever resolver it's been set to use. If the network requires authentication via a portal, the connection will fail - a usability problem that will have to be solved.

There are other legitimate concerns. Bypassing the DNS resolvers run by local ISPs in favor of those belonging to, say, Google, Cloudflare, and Cisco, which bought OpenDNS in 2015, will weaken local ISPs' control over the connections they supply. This is both good and bad: ISPs will be unable to insert their own ads - but they also can't use DNS data to identify and block malware as many do now. The move to DoH risks further centralizing the Internet's core infrastructure and strengthening the power of companies most of us already feel have too much control.

The general consensus, however, is that like it or not, this thing is coming. Everyone is still scrambling to work out exactly what to think about it and what needs to be done to mitigate accompanying risks, as well as find solutions to the resulting problems. It was clear from the ISPA conference panel that everyone has mixed feelings, though the exact mix of those feelings - and which aspects are identified as problems - differs among ISPs, rights activists, and security practitioners. But it comes down to this: whether you like this particular proposal or not, the DNS cannot be allowed to remain in its present insecure state. If you don't want DoH, come up with a better proposal.


Illustrations: DNS diagram (via Б.Өлзий at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 22, 2019

Layer nine

Is it possible to regulate the Internet without killing it?

Before you can answer that you have to answer this: what constitutes killing the Internet? The Internet Society has a sort of answer, which is a list of what it calls Internet invariants, a useful phrase that is less attackable as "solutionism" by Evgeny Morozov than alternatives that portray the Internet as if it were a force of nature instead of human-designed and human-made.

Few people watching video on their phones on the Underground care about this, but networking specialists view the Internet as a set of layers. I don't know the whole story, but in the 1980s researchers, particularly in Europe, put a lot of work into conceptualizing a seven-layer networking model, Open Systems Interconnection. By 1991, however, a company CEO told me, "I don't know why we need it. TCP/IP is here now. Why can't we just use that?" TCP/IP are the Internet protocols, so that conversation showed the future. However, people still use the concepts OSI built. The bottom, physical layers, are the province of ISPs and telcos. The ones the Internet Society is concerned about are the ones concerning infrastructure and protocols - the middle layers. Layer 7, "Application", is all the things users see - and politicians fight over.

We are at a layer the OSI model failed to recognize, identified by the engineer Evi Nemeth. We - digital and human rights activists, regulators, policy makers, social scientists, net.wars readers - are at layer 9.

So the question we started with might also be phrased, "Is it possible to regulate the application layer while leaving the underlying infrastructure undamaged?" Put like that, it feels like it ought to be. Yet aspects of Internet regulation definitely entangle downwards. Most are surveillance-related, such as the US requirement that ISPs enable interception and data retention. Emerging demands for localized data storage and the General Data Protection Regulation also may penetrate more deeply while raising issues of extraterritorial jurisdiction. GDPR seeds itself into other countries like the stowaway recursive clause of the GNU General Public License for software: both require their application to onward derivatives. Localized data storage demands blocks and firewalls instead of openness.

Twenty years ago, you could make this pitch to policy makers: if you break the openness of the Internet by requiring a license to start an online business, or implementing a firewall, or limiting what people can say and do, you will be excluded from the Internet's economic and social benefits. Since then, China has proved that a national intranet can still fuel big businesses. Meanwhile, as the retail sector craters and a new Facebook malfeasance surfaces near-daily, the policy maker might respond that the FAANG Fab Five pay far less in tax than the companies they've put out of business, employment precarity is increasing, and the FAANGs wield disproportionate power while enabling abusive behavior and the spread of extremism and violence. We had open innovation and this is what it brought us.

To old-timers this is all kinds of confusion. As I said recently on Twitter, it's subsets all the way down: Facebook is a site on the web, and the web is an application that runs on the Internet. They are not equivalents - here, at least. In countries where Facebook's Free Basics is zero-rated, the two are functionally equivalent.

Somewhere in the midst of a discussion yesterday about all this, it was interesting to consider airline safety. That industry understood very early that safety was crucial to its success. Within 20 years of the Wright Brothers' first flight in 1903, the nascent industry was lobbying the US Congress for regulation; the first airline safety bill passed in 1926. If the airline industry had instead been founded by the sort of libertarians who have dominated large parts of Internet development...well, the old joke about the exchange between General Motors and Bill Gates applies. The computer industry has gotten away with refusing responsibility for 40 years because it does not believe we'll ever stop buying its products, and we have let it.

There's a lot to say about the threat of regulatory capture even in two highly regulated industries, medicine and air travel, and maybe we'll say it here one week soon, but the overall point is that outside of the open source community, most stakeholders in today's Internet lack the kind of overarching common goal that continues to lead airlines and airplane manufacturers to collaborate on safety despite also being fierce competitors. The computer industry, by contrast, has spent the last 50 years mocking government for being too slow to keep up with technological change while actively refusing to accept any product liability for software.

In our present context, the "Internet invariants" seem almost quaint. Yet I hope the Internet Society succeeds in protecting the Internet's openness because I don't believe our present situation means that the open Internet has failed. Instead, the toxic combination of neoliberalism, techno-arrogance, and the refusal of responsibility (by many industries - just today, see pharma and oil) has undermined the social compact the open Internet reflected. Regulation is not the enemy. *Badly-conceived* regulation is. So the question of what good regulation looks like is crucial.


Illustrations: Evi Nemeth's adapted OSI model, seen here on a T-shirt historically sold by the Internet Systems Consortium.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

February 22, 2019

Metropolis

"As a citizen, how will I know I live in a smarter city, and how will life be different?" This was probably the smartest question asked at yesterday's Westminster Forum seminar on smart cities (PDF); it was asked by Tony Sceales, acting as moderator.

"If I feel safe and there's less disruption," said Peter van Manen. "You won't necessarily know. Things will happen as they should. You won't wake up and say, 'I'm in the city of the future'," said Sam Ibbott. "Services become more personalized but less visible," said Theo Blackwell, the Chief Digital Officer for London.

"Frictionless," said Jacqui Taylor, offering it as the one common factor she sees in the wildly different smart city projects she has encountered. I am dubious that this can ever be achieved: one person's frictionless is another's desperate frustration. Streets cannot be frictionless for *both* cars and cyclists, just as a city that is predicted to add 2 million people over the next ten years can't simultaneously eliminate congestion. "Working as intended" was also heard. Isn't that what we all wish computers would do?

Blackwell had earlier mentioned the "legacy" of contactless payments for public transport. To Londoners smushed into stuffed Victoria Line carriages in rush hour, the city seems no smarter than it ever was. No amount of technological intelligence can change the fact that millions of people all want to go home at the same time or the housing prices that force them to travel away from the center to do so. We do get through the ticket barriers faster.

"It's just another set of tools," said Jennifer Schooling. "It should feel no different."

The notion of not knowing as the city you live in smartens up should sound alarm bells. The fair reason for that hiddenness is the reality that, as Sara Degli Esposti pointed out at this year's Computers, Privacy, and Data Protection, this whole area is a business-to-business market. "People forget that, especially at the European level. Users are not part of the picture, and that's why we don't see citizens engaged in smart city projects. Citizens are not the market. This isn't social media."

She was speaking at CPDP's panel on smart cities and governance, convened by the University of Stirling's William Webster, who has been leading a research project, CRISP, to study these technologies. CRISP asked a helpfully different question: how can we use smart city technologies to foster citizen engagement, coproduction of services, development of urban infrastructure, and governance structures?

The interesting connection is this: it's no surprise when CPDP's activists, regulators, and academics talk about citizen engagement and participation, or deplore a model in which smart cities are a business-led excuse for corporate and government surveillance. The surprise comes when two weeks later the same themes arise among Westminster Forum's more private and public sector speakers and audience. These are the people who are going to build these new programs and services, and they, too, are saying they're less interested in technology and more interested in solving the problems that keep citizens awake at night: health, especially.

There appears to be a paradigm shift beginning to happen as municipalities begin to seriously consider where and on what to spend their funds.

However, the shift may be solely European. At CPDP, Canadian surveillance studies researcher David Murakami Wood told the story of Toronto, where (Google owner) Alphabet subsidiary Sidewalk Labs swooped in circa 2014 with proposals to redevelop the Quayside area of Toronto in partnership with Waterfront Toronto. The project has been hugely controversial - there were hearings this week in Ottawa, the national capital.

As Murakami Wood tells it, for Sidewalk Labs the area is a real-world experiment using real people's lives as input to create products the company can later sell elsewhere. The company has made clear it intends to keep all the data the infrastructure generates on its servers in the US as well as all the intellectual property rights. This, Murakami Wood argued, is the real cost of the "free" infrastructure. It is also, as we're beginning to see elsewhere, the extension of online tracking - or, as Murakami Wood put it, surveillance capitalism - into the physical world: cultural appropriation at municipal scale from a company that has no track record in building buildings, or even publishing detailed development plans. Small wonder that Murakami Wood laughed when he heard Sidewalk Labs CEO Dan Doctoroff impress a group of enthusiastic young Canadian bankers with the news that the company had been studying cities for *two years*.

Putting these things together, we have, as Andrew Adams suggested, three paradigms, which we might call US corporate, Chinese authoritarian, and, emerging, European participatory and cooperative. Is this the choice?

Yes and no. Companies obviously want to develop systems once, sell them everywhere. Yet the biggest markets are one-off outliers. "Croydon," said Blackwell, "is the size of New Orleans." In addition, approaches vary widely. Some places - Webster mentioned Glasgow - are centralized command and control; others - Brazil - are more bottom-up. Rick Robinson finds that these do not meet in the middle.

The clear takeaway overall is that local context is crucial in shaping smart city projects and despite some common factors each one is different. We should build on that.


Illustrations: Fritz Lang's Metropolis (1927).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

February 8, 2019

Doing without

Over at Gizmodo, Kashmir Hill has conducted a fascinating experiment: cutting, in turn, Amazon, Facebook, Google, Microsoft, and Apple, culminating with a week without all of them. Unlike the many articles in which privileged folks fatuously boast about disconnecting, Hill is investigating a serious question: how deeply have these companies penetrated into our lives? As we'll see, this question encompasses the entire modern world.

For that reason, it's important. Besides, as Hill writes, it's wrong to answer objections to GAFAM's business practices - or their privacy policies - with, "Well, don't use them, then." It may be possible to buy from smaller sites and local suppliers, delete Facebook, run Linux, switch to AskJeeves and OpenStreetMap, and dump the iPhone, but doing so requires a substantial rethink of many tasks. As regulators consider curbing GAFAM's power, Hill's experiment shows where to direct our attention.

Online, Amazon is the hardest to avoid. As Lina M. Khan documented last year, Amazon underpins an ever-increasing amount of Internet infrastructure. Netflix, Signal, the WELL, and Gizmodo itself all run on top of Amazon's cloud services, AWS. To ensure she blocked all of them, Hill got a technical expert to set up a VPN that blocked all IP addresses owned by each company and monitored attempted connections. Even that, however, was complicated by the use of content delivery networks, which mask the origin of network traffic.
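Exactly how that expert built the blocklists isn't spelled out, but one ingredient is straightforward enough to sketch: Amazon publishes its address ranges at a well-known URL, so - assuming that published list is what you want to block - you can check whether any given IP sits inside AWS. This is my own sketch, not Hill's setup; her version also needed the other companies' ranges and a VPN to enforce the block.

import ipaddress
import json
import urllib.request

AWS_RANGES_URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"

def load_aws_networks():
    # Amazon's published list of IPv4 prefixes, updated as its ranges change.
    with urllib.request.urlopen(AWS_RANGES_URL) as resp:
        data = json.load(resp)
    return [ipaddress.ip_network(p["ip_prefix"]) for p in data["prefixes"]]

def is_aws(address, networks):
    ip = ipaddress.ip_address(address)
    return any(ip in net for net in networks)

networks = load_aws_networks()
print(is_aws("8.8.8.8", networks))  # False - that's Google's public resolver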

Barring Facebook also means dumping Instagram and WhatsApp, and, as Hill notes, changing the signin procedure for any website where you've used your Facebook ID. Even if you are a privacy-conscious net.wars reader who would never grant Facebook that pole position, the social media buttons on most websites and ubiquitous trackers also have to go.

For Hill, blocking Apple - which seems easy to us non-Apple users - was "devastating". But this is largely a matter of habit, and habits can be re-educated. The killer was the apps: because iMessage reroutes texts to its own system, some of Hill's correspondents' replies never arrive, and she can't FaceTime her friends. Her conclusion: "It's harder to get out of Apple's ecosystem than Google's." However, once out she found it easy to stay that way - as long as she could resist her friends pulling her back in.

Google proved easier than expected despite her dependence on its services - Maps, calendar, browser. Here the big problem was email. The amount of stored information made it impossible to simply move and delete the account; now we know why Google provides so much "free" storage space. As with Amazon, the bigger issue was all the services Google underpins - trackers, analytics, and, especially, Maps, on which Uber, Lyft, and Yelp depend. Hill should be grateful she didn't have a Nest thermostat and doesn't live in Minnesota. The most surprising bit is that so many sites load Google *fonts*. Also, like Facebook, Google has spread logins across the web, and Hill had to find an alternative to Dropbox, which uses Google to verify users.

In our minds, Microsoft is like Apple. Don't like Windows? Get a Mac or use Linux. Ah, but: I have seen the Windows Blue Screen of Death on scheduling systems on both the London Underground and Philadelphia's SEPTA. How many businesses that I interact with depend on Microsoft products? PCs, Office, and Windows servers and point-of-sale systems are everywhere. A VPN can block LinkedIn, Skype, and (sadly) Github - but it can't block any of those, or the back-office systems at your bank. You can sell your Xbox, but even the local film society shows movies using VLC on Windows.

Hill's final episode, in which she eliminates all five simultaneously, posted just last night. As expected, she struggles to find alternative ways to accomplish many tasks she hasn't had to think about before. Ironically, this is easier if you're an Old Net Curmudgeon: as soon as she says large file, can't email, I go, "FTP!" while various web services all turn out to be hosted on AWS, and she eventually lands on "command line". It's a definite advantage if you remember how you did stuff *before* the Internet - cash can pay the babysitter (or write a check!), and old laptops can be repurposed to run Linux. Even so, complete avoidance is really only realistic for a US Congressman. The hardest for me personally would be giving up my constant companion, DuckDuckGo, which is hosted on...AWS.

Several things need to happen to change this - and we *should* change it because otherwise we're letting them pwn us, as in Dave Eggers' The Circle. The first is making the tradeoffs visible, so that we understand who we're really benefiting and harming with our clicks. The second is regulatory: Lina Khan described in 2017 how to rethink antitrust law to curb Amazon. Facebook, as Marc Rotenberg told CNBC last week, should be required to divest Instagram and WhatsApp. Both Facebook and Google should spin off their identity verification and web-wide login systems into separate companies, or discontinue them. Third, we should encourage alternatives by using them.

But the last thing is the hardest: we must convince all our friends that it's worth putting up with some inconvenience. As a lifelong non-drinker living in pub-culture Britain, I can only say: good luck with that.


Illustrations: Kashmir Hill and her new technology.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 25, 2018

The Rochdale hypothesis

First, open a shop. Thus the pioneers of Rochdale, Lancashire, began the process of building their town. Faced with the loss of jobs and income brought by the Industrial Revolution, a group of 28 people, about half of them weavers, designed the set of Rochdale principles, and set about finding £1 each to create a cooperative that sold a few basics. Ten years later, Wikipedia tells us, Britain was home to thousands of imitators: cooperatives became a movement.

Could Rochdale form the template for building a public service internet?

This was the endpoint of a day-long discussion held as part of MozFest and led by a rogue band from the BBC. Not bad, considering that it took us half the day to arrive at three key questions: What is public? What is service? What is internet?

Pause.

To some extent, the question's phrasing derives from the BBC's remit as a public service broadcaster. "Public service" is the BBC's actual mandate; broadcasting, the activity it's usually identified with, is only the means by which it fulfills that mission. There might be - are - other choices. To educate, to inform, to entertain: those are its mandate. None of them says radio or TV.

Probably most of the BBC's many global admirers don't realize how broadly the BBC has interpreted that. In the 1980s, it commissioned a computer - the Acorn, which spawned ARM, whose chips today power smartphones - and a series of TV programs to teach the nation about computing. In the early 1990s, it created a dial-up Internet Service Provider to help people get online. Some ten or 15 years ago I contributed to an online guide to the web for an audience with little computer literacy. This kind of thing goes way beyond what most people - for example, Americans - mean by "public broadcasting".

But, as Bill Thompson explained in kicking things off, although 98% of the public has some exposure to the BBC every week, the way people watch TV is changing. Two days later, the Guardian reported that the broadcasting regulator, Ofcom, believes the BBC is facing an "existential crisis" because the younger generation watches significantly less television. An eighth of young people "consume no BBC content" in any given week. When everyone can access the best of TV's back catalogue on a growing array of streaming services, and technology giants like Netflix and Amazon are spending billions to achieve worldwide dominance, the BBC must change to find new relevance.

So: the public service Internet might be a solution. Not, as Thompson went on to say, the Internet to make broadcasting better, but the Internet to make *society* better. Few other organizations in the world could adopt such a mission, but it would fit the BBC's particular history.

Few of us are happy with the Internet as it is today. Mozilla's 2018 Internet Health Report catalogues problems: walled gardens, constant surveillance to exploit us by analyzing our data, widespread insecurity, and increasing censorship.

So, again: what does a public service Internet look like? What do people need? How do you avoid the same outcome?

"Code is law," said Thompson, citing Lawrence Lessig's first book. Most people learned from that book that software architecture could determine human behaviour. He took a different lesson: "We built the network, and we can change it. It's just a piece of engineering."

Language, someone said, has its limits when you're moving from rhetoric to tangible service. Canada, they said, renamed the Internet "basic service" - but it changed nothing. "It's still concentrated and expensive."

Also: how far down the stack do we go? Do we rewrite TCP/IP? Throw out the web? Or start from outside and try to blow up capitalism? Who decides?

At this point an important question surfaced: who isn't in the room? (All but about 30 of the world's population, but don't get snippy.) Last week, the Guardian reported that the growth of Internet access is slowing - a lot. UN data, to be published next month by the Web Foundation, shows growth dropped from 19% in 2007 to less than 6% in 2017. The report estimates that it will be 2019, two years later than expected, before half the world is online, and large numbers may never get affordable access. Most of the 3.8 billion unconnected are rural poor, largely women, and they are increasingly marginalized.

The Guardian notes that many see no point in access. There's your possible starting point. What would make the Internet valuable to them? What can we help them build that will benefit them and their communities?

Last week, the New York Times suggested that conflicting regulations and norms are dividing the Internet into three: Chinese, European, and American. They're thinking small. Reversing the Internet's increasing concentration and centralization can't be done by blowing up the center, because the center will fight back. But decentralizing by building cooperatively at the edges...that is a perfectly possible future consonant with the Internet's past, even if we can't really force clumps of hipsters to build infrastructure in former industrial towns by luring them there with cheap housing. Cue Thompson again: he thought of this before, and he can prove it: here's his 2000 manifesto on e-mutualism.

Building public networks in the many parts of Britain where access is a struggle...that sounds like a public service remit to me.

Illustrations: The Unity sculpture, commemorating the 150th anniversary of the Rochdale Pioneers (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 11, 2018

Lost in transition

"Why do I have to scan my boarding card?" I demanded loudly of the machine that was making this demand. "I'm buying a thing of milk!"

The location was Heathrow Terminal 5. The "thing of milk" was a pint of milk being purchased with a view to a late arrival in a continental European city where tea is frequently offered with "Kaffeesahne", a thick, off-white substance that belongs with tea about as much as library paste does.

A human materialized out of nowhere, and typed in some codes. The transaction went through. I did not know you could do that.

The incident sounds minor - yes, I thanked her - but it has a real point. For years, UK airport retailers secured discounts for themselves by demanding to scan boarding cards at the point of purchase, claiming the reason was to exempt customers from VAT on purchases taken out of the country. Just a couple of years ago the news came out: the companies were failing to pass the resulting discounts on to customers and were simply pocketing the VAT. Legally, you are not required to comply with the request.

They still ask, of course.

If you're dealing with a human retail clerk, refusing is easy: you say "No" and they move on to completing the transaction. The automated checkout (which I normally avoid), however, is not familiar with No. It is not designed for No. No is not part of its vocabulary unless a human comes along with an override code.

My legal right not to scan my boarding card therefore relies on the presence of an expert human. Take the human out of that loop - or overwhelm them with too many stations to monitor - and the right disappears, engineered out by automation and enforced by the time pressure of having to catch a flight and/or the limited resource of your patience.

This is the same issue that has long been machinified by DRM - digital rights management - and the locks it applies to commercially distributed content. The text of Alice in Wonderland is in the public domain, but wrap it in DRM and your legal rights to copy, lend, redistribute, and modify all vanish, automated out with no human to summon and negotiate with.

Another example: the discount railcard I pay for once a year is renewable online. But if you go that route, you are required to upload your passport, photo driver's license, or national ID card. None of these should really be necessary. If you renew at a railway station, you pay your money and get your card, no identification requested. In this example the automation requires you to submit more data and take greater risk than the offline equivalent. And, of course, when you use a website there's no human to waive the requirement and restore the status quo.

Each of these services is designed individually. There is no collusion, and yet the direction is uniform.

Most of the discussion around this kind of thing - rightly - focuses on clearly unjust systems with major impact on people's lives. The COMPAS recidivism algorithm, for example, is used to risk-assess the likelihood that a criminal defendant will reoffend. A ProPublica study found that the algorithm tended to produce biased results of two kinds: first, black defendants were more likely than white defendants to be incorrectly rated as high risk; second, white reoffenders were incorrectly classified as low-risk more often than black ones. Other such systems show similar biases, all for the same basic reason: decades of prejudice are baked into the training data these systems are fed. Virginia Eubanks, for example, has found similar issues in systems such as those that attempt to identify children at risk and that appear to see poverty itself as a risk factor.
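
The two disparities ProPublica described correspond to two standard error rates: the false positive rate (labelled high risk but did not reoffend) and the false negative rate (labelled low risk but did reoffend). The sketch below - using invented records, not ProPublica's data - shows how unevenly those rates can fall across groups even in a tiny table of predictions and outcomes.

    # Toy illustration of the two bias patterns described above:
    # unequal false-positive rates and unequal false-negative rates.
    # The records are invented for the example, not ProPublica's data.
    from collections import defaultdict

    # (group, predicted_high_risk, reoffended)
    records = [
        ("A", True, False), ("A", True, True), ("A", False, False), ("A", True, False),
        ("B", False, True), ("B", False, False), ("B", True, True), ("B", False, True),
    ]

    counts = defaultdict(lambda: {"fp": 0, "neg": 0, "fn": 0, "pos": 0})
    for group, high_risk, reoffended in records:
        c = counts[group]
        if reoffended:
            c["pos"] += 1
            if not high_risk:
                c["fn"] += 1   # rated low risk, but did reoffend
        else:
            c["neg"] += 1
            if high_risk:
                c["fp"] += 1   # rated high risk, but did not reoffend

    for group, c in sorted(counts.items()):
        fpr = c["fp"] / c["neg"] if c["neg"] else 0.0
        fnr = c["fn"] / c["pos"] if c["pos"] else 0.0
        print(f"group {group}: false positive rate {fpr:.0%}, false negative rate {fnr:.0%}")

In the toy data, one group absorbs nearly all the wrongful high-risk labels while the other absorbs the wrongful low-risk ones - the same asymmetry ProPublica reported at scale.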

By contrast, the instances I'm pointing out seem smaller, maybe even insignificant. But the potential is that over time wide swathes of choices and rights will disappear, essentially automated out of our landscape. Any process can be gamed this way.

At a Royal Society meeting last year, law professor Mireille Hildebrandt outlined the risk of allowing governance through text-driven law - the law that today is negotiated in the courts - to atrophy. The danger, she warned, is that through machine deployment and "judgemental atrophy" it will be replaced with administration, overseen by inflexible machines that enforce rules with no room for contestability, which Hildebrandt called "the heart of the rule of law".

What's happening here is, as she said, administration - but it's administration in which our legitimate rights dissipate in a wave of "because we can" automated demands. There are many ways we willingly give up these rights already - plenty of people are prepared to give up anonymity in financial transactions by using all manner of non-cash payment systems, for example. But at least those are conscious choices from which we derive a known benefit. It's hard to see any benefit accruing from the loss of the right to object to unreasonable bureaucracy imposed upon us by machines designed to serve only their owners' interests.


Illustrations: "Kill all the DRM in the world within a decade" (via Wikimedia.).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 17, 2018

Redefinition

Once upon a nearly-forgotten time, the UK charged for all phone calls via a metered system that added up frighteningly fast when you started dialing up to access the Internet. The upshot was that early Internet services like the now-defunct Demon Internet could charge a modest amount (£10) per month, secure that the consciousness of escalating phone bills would drive subscribers to keep their sessions short. The success of Demon's business model, therefore, depended on the rapaciousness of strangers.

I was reminded of this sort of tradeoff by a discussion in the LA Times (proxied for EU visitors) of cable-cutters. Weary of paying upwards of $100 a month for large bundles of TV channels they never watch, Americans are increasingly dumping them in favor of cheaper streaming subscriptions. As a result, ISPs that depend on TV package revenues are raising their broadband prices to compensate, claiming that the money is needed to pay for infrastructure upgrades. In the absence of network neutrality requirements, those raised prices could well be complemented by throttling competitors' services.

They can do this, of course, because so many areas of the US are lucky if they have two choices of Internet supplier. That minimalist approach to competition means that Americans pay more to access the Internet than people in many other countries do - for slower speeds. It's easy to raise prices when your customers have no choice.

The LA Times holds out hope that technology will save them; that is, the introduction of 5G, which promises better speeds and easier build-out, will enable additional competition from AT&T, Verizon, and Sprint - or, writer David Lazarus adds, Google, Facebook, and Amazon. In the sense of increasing competition, this may be the good news Lazarus thinks it is, even though he highlights AT&T's and Verizon's past broken promises. I'm less sure: physics dictates that despite its greater convenience the fastest wireless will never be as fast as the fastest wireline.

5G has been an unformed mirage on the horizon for years now, but apparently no longer: CNBC says Verizon's 5G service will begin late this year in Houston, Indianapolis, Los Angeles, and Sacramento and give subscribers TV content in the form of an Apple TV and a YouTube subscription. A wireless modem will obviate the need for cabling.

The potential, though, is to entirely reshape competition in both broadband and TV content, a redefinition that began with corporate mergers such as Verizon's acquisition of AOL and Yahoo (now gathered into its subsidiary, "Oath") and AT&T's whole-body swallowing of Time Warner, which includes HBO. Since last year's withdrawal of privacy protections passed during the Obama administration, ISPs have greater latitude to collect and exploit their customers' online data trails. Their expansion into online content makes AT&T and Verizon look more like competitors to the online behemoths. For consumers, greater choice in bandwidth provider is likely to be outweighed by the would-you-like-spam-with-that complete lack of choice about data harvesting. If the competition 5G opens up is provided solely by avid data miners who all impose the same terms and conditions...well, which robber baron would you like to pay?

There's a twist. The key element that's enabled Amazon and, especially, Netflix to succeed in content development is being able to mine the data they collect about their subscribers. Their business models differ - for Amazon, TV content is a loss-leader to sell subscriptions to its premium delivery service; for Netflix, TV production is a bulwark against dependence on third-party content creators and their licensing fees - but both rely on knowing what their customers actually watch. Their ambitions, too, are changing. Amazon has canceled much of its niche programming to chase HBO-style blockbusters, while Netflix is building local content around the world. Meanwhile, AT&T wants HBO to expand worldwide and focus less on its pursuit of prestige; Apple is beginning TV production; and Disney is pulling its content from Netflix to set up its own streaming service.

The idea that many of these companies will be directly competing in all these areas is intriguing, and its impact will be felt outside the US. It hardly matters to someone in London or Siberia how much Internet users in Indianapolis pay for their broadband service or how good it is. But this reconfiguration may well end the last decade's golden age of US TV production, particularly but not solely for drama. All the new streaming services began by mining the back catalogue to build and understand an audience and then using creative freedom to attract talent frustrated by the legacy TV networks' micromanagement of every last detail, a process the veteran screenwriter Ken Levine has compared to being eaten to death by moths.

However, one last factor could provide an impediment to the formation of this landscape: on June 28, California adopted the Consumer Privacy Act, which will come into force in 2020. As Nick Confessore recounts in the New York Times Magazine, this "overnight success" required years of work. Many companies opposed the bill: Amazon, Google, Microsoft, Uber, Comcast, AT&T, Cox, Verizon, and several advertising lobbying groups; Facebook withdrew its initial opposition. EFF calls it "well-intentioned but flawed", and is proposing changes. ISPs and technology companies also want (somewhat different) changes. EPIC's Mark Rotenberg called the bill's passage a "milestone moment". It could well be.


Illustrations: Robber barons overseeing the US Congress (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

December 8, 2017

Pastures of plenty

It was while I was listening to Isabella Henriques talk about children and consumerism at this week's Children's Global Media Summit that it occurred to me that where most people see life happening, advertisers see empty space.

Henriques, like Kathryn Montgomery earlier this year, is concerned about abusive advertising practices aimed at children. So much UK rhetoric around children and the internet focuses on pornography and extremism - see, for example, this week's Digital Childhood report calling for a digital environment that is "fit for childhood" - that it's refreshing to hear someone talk about other harms. Such as: teaching kids "consumerism". Under 12, Henriques said, children do not understand the persuasiveness and complexity of advertising. Under six, they don't identify ads (like the toddler who watched 12 minutes of Geico commercials). And even things that are *effectively* ads aren't necessarily easily identifiable as such, even by adults: unboxing videos, product placement, YouTube kids playing with branded toys, and in-app "opportunities" to buy stuff. Henriques' research finds that children influence up to 80% of family purchases. That's not a baby you're expecting; it's a sales promoter.

When we talk about the advertising arms race, we usually mean the expanding presence and intrusiveness of ads in places where we're already used to seeing them. That escalation has been astonishing.

To take one example: a half-hour sitcom episode on US network television in 1965 - specifically, the deservedly famous Coast to Coast Big Mouth episode of The Dick Van Dyke Show - was 25:30 minutes long. A 2017 episode of the top-rated US comedy, The Big Bang Theory, barely ekes out 18. That's nearly a third less content, more than double the share of the half hour spent watching ads, or simply seven and a half extra minutes of commercials. No wonder people realized automatic ad marking and fast-forwarding would sell.

The internet kicked this into high gear. The lack of regulation and the uncertainty about business models led to legitimate experimentation. But it also led to today's complaints, both about maximally intrusive, attention-demanding ads and about the data mining advertisers and their agencies use to target us - and to increasingly powerful ad blockers, and ad blocker blockers.

The second, more subtle version of the arms race is the one where advertisers see every open space where people congregate as theirs to target. This was summed up for me once at a lunchtime seminar run by the UK's Internet Advertising Bureau in 2003, when a speaker gave an enthusiastic tutorial on marketing via viral email: "It gets us into the office. We've never been able to go there before." You could immediately see what office inboxes looked like to them: vast green fields just waiting to be cultivated. You know, the space we thought of as "work". And we were going to be grateful.

Childhood, as listening to Henriques, Montgomery, and the Campaign for a Commercial-Free Childhood makes plain, is one of those green fields advertisers have long fought to cultivate. On broadcast media, regulators were able to exercise some control. Even online, the Children's Online Privacy Protection Act has been of some use.

Advertisers, like some religions, aim to capture children's affections young, on the basis that the tastes and habits you acquire in childhood are the hardest for an interloper to disrupt. The food industry has long been notorious for finding ways to push unhealthy foods around the regulations that limit how it can target children on broadcast and physical-world media. But the internet offers new options: "Smart" toys are one set of examples; Facebook's new Messenger Kids app is another. This arms race variant will escalate as the Internet of Things offers advertisers access to new areas of our lives.

Part of this story is the vastly increased quantities of data that will be available to sell to advertisers for data mining. On the web, "free" has long meant "pay with data". With the Internet of Things, no device will be free, but we will pay with data anyway. The cases we wrote about last week are early examples. As hardware becomes software, replacement life cycles become the manufacturer's choice, not yours. "My" mobile phone is as much mine as "my library book" - and a Tesla is a mobile phone with a chassis and wheels. Think of the advertising opportunities when drivers are superfluous to requirements, beginning with the self-driving car's dashboard and windshield. The voice-operated Echo/Home/Dot/whatever is clearly intended to turn homes into marketplaces.

A more important part is the risk of turning our homes into walled gardens, as Geoffrey A. Fowler writes in the Washington Post of his trial of Amazon Key. During the experiment, Fowler found strangers entering his house less disturbing than his sense of being "locked into an all-Amazon world". The Key experiment is, in Fowler's estimation, the first stab at Amazon's goal of becoming "the operating system for your home". Will Amazon, Google, and Apple homes be interoperable?

Henriques is calling for global regulation to limit the targeting of children for food and other advertising. It makes sense: every country is dealing with the same multinational companies, and most of us can agree on what "abusive advertising" means. But then you have to ask: why do they get a pass on the rest of us?


Illustrations: Windows XP start-up screen

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 16, 2012

Grabbing at governance

Someday the development of Internet governance will look like a continuous historical sweep whose outcome, in hindsight, is obvious. At the beginning will be one man, Jon Postel, who in the mid-1990s was, if anyone was, the god of the Internet. At the end will be...well, we don't know yet. And the sad thing is that the road to governance is so long and frankly so dull: years of meetings, committees, proposals, debate, redrafted proposals, and diplomatic language, all of it, worst of all, remote from the mundane experiences of everyday Internet users, such as spam and whether they can trust their banks' Web sites.

But if we care about the future of the Internet we must take an interest in what authority should be exercised by the International Telecommunications Union or the Internet Corporation for Assigned Names and Numbers or some other yet-to-be-defined body. In fact, we are right on top of a key moment in that developmental history: from December 3 to 14, the ITU is convening the World Conference on International Telecommunications (WCIT, pronounced "wicket"). The big subject for discussion: how and whether to revise the 1988 International Telecommunications Regulations.

Plans for WCIT have been proceeding for years. In May, civil society groups concerned with civil liberties and human rights signed a letter to ITU secretary-general Hamadoun Touré asking the ITU to open the process to more stakeholders. In June, a couple of frustrated academics changed the game by setting up WCITLeaks, asking anyone who had copies of the proposals being submitted to the ITU to send them in. Scrutiny of those proposals showed the variety and breadth of some countries' desires for regulation. On November 7, Touré wrote an op-ed for Wired arguing that nothing would be passed except by consensus.

On Monday, he got a sort of answer from the International Trade Union Confederation's general secretary, Sharan Burrow, who, together with former ICANN head Paul Twomey and, by video link, Internet pioneer Vint Cerf, launched the Stop the Net Grab campaign. The future of the Internet, they argued, is too important to too many stakeholders to leave decisions about it up to governments bargaining in secret. The ITU, in its response, argued that Greenpeace and the ITUC have their facts wrong; after the two sides met, the ITUC reiterated its desire for some proposals to be taken off the table.

But stop and think. Opposition to the ITU is coming from Greenpeace and the ITUC?

"This is a watershed," said Twomey. "We have a completely new set of players, nothing to do with money or defending the technology. They're not priests discussing their protocols. We have a new set of experienced international political warriors saying, 'We're interested'."

Explained Burrow, "How on earth is it possible to give the workers of Bahrain or Ghana the solidarity of strategic action if governments decide unions are trouble and limit access to the Internet? We must have legislative political rights and freedoms - and that's not the work of the ITU, if it requires legislation at all."

At heart for all these years, the debate remains the same: who controls the Internet? And does governing the Internet mean regulating who pays whom or controlling what behavior is allowed? As Vint Cerf said, conflating those two is confusing content and infrastructure.

Twomey concluded, "[Certain political forces around the world] see the ITU as the place to have this discussion because it's not structured to be (nor will they let it be) fully multi-stakeholder. They have taken the opportunity of this review to bring up these desires. We should turn the question around: where is the right place to discuss this and who should be involved?"

In the journey from Postel to governance, this is the second watershed. The first step change came in 1996-1997, when it was becoming obvious that governing the Internet - which at the time primarily meant managing the allocation of domain names and numbered Internet addresses (under the aegis of the Internet Assigned Numbers Authority) - was too complex and too significant a job for one man, no matter how respected and trusted. The Internet Society and IANA formed the Internet Ad-Hoc Committee, which, in a published memorandum, outlined its new strategy. And all hell broke loose.

Long-term, the really significant change was that until that moment no one had much objected to either the decisions the Internet pioneers and engineers made or their right to make them. After some pushback, in the end the committee was disbanded and the plan scrapped, and instead a new agreement was hammered out, creating ICANN. But the lesson had been learned: there were now more people who saw themselves as Internet stakeholders than just the engineers who had created it, and they all wanted representation at the table.

In the years since, the make-up of the groups demanding to be heard has remained pretty stable, as Twomey said: engineers and technologists, plus representatives of civil society groups, usually working in some aspect of human rights, most often civil liberties - EFF, ORG, CDT, and Public Knowledge, all of whom signed the May letter. So yes, for labor unions and Greenpeace to decide that Internet freedoms are too fundamental to what they do for them to sit out the decision-making about the Net's future is a watershed.

"We will be active as long as it takes," Burrow said Monday.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series.

September 21, 2012

This is not (just) about Google

We had previously glossed over the news, in February, that Google had overridden the "Do Not Track" settings in Apple's Safari Web browser, used on both its desktop and mobile machines. For various reasons, Do Not Track is itself a divisive issue, pitting those who favour user control over privacy issues against those who ask exactly how people plan to pay for all that free content if not through advertising. But there was little disagreement about this: Google goofed badly in overriding users' clearly expressed preferences. Google promptly disabled the code, but the public damage was done - and probably made worse by the company's initial response.

In August, the US Federal Trade Commission fined Google $22.5 million for that little escapade. Pocket change, you might say, and compared to Google's $43.6 billion in 2011 revenues you'd be right. As the LSE's Edgar Whitley pointed out on Monday, a sufficiently large company can also view such a fine strategically: paying might be cheaper than fixing the problem. I'm less sure: fines have a way of going up a lot if national regulators believe a company is deliberately and repeatedly flouting their authority. And to any of the humans reviewing the fine - neither Page nor Brin grew up particularly wealthy, and I doubt Google pays its lawyers more than six figures - I'd bet $22.5 million still seems pretty much like real money.

On Monday, Simon Davies, the founder and former director of Privacy International, convened a meeting at the LSE to discuss this incident and its eventual impact. This was when it became clear that whatever you think about Google in particular, or online behavioral advertising in general, the questions it raises will apply widely to the increasing numbers of highly complex computer systems in all sectors. How does an organization manage complex code? What systems need to be in place to ensure that code does what it's supposed to do, no less - and no more? How do we make these systems accountable? And to whom?

The story in brief: Stanford PhD student Jonathan Mayer studies the intersection of technology and privacy, not by writing thoughtful papers studying the law but empirically, by studying what companies do and how they do it and to how many millions of people.

"This space can inherently be measured," he said on Monday. "There are wide-open policy questions that can be significantly informed by empirical measurements." So, for example, he'll look at things like what opt-out cookies actually do (not much of benefit to users, sadly), what kinds of tracking mechanisms are actually in use and by whom, and how information is being shared between various parties. As part of this, Mayer got interested in identifying the companies placing cookies in Safari; the research methodology involved buying ads that included codes enabling him to measure the cookies in place. It was this work that uncovered Google's bypassage of Safari's Do Not Track flag, which has been enabled by default since 2004. Mayer found cookies from four companies, two of which he puts down to copied and pasted circumvention code and two of which - Google and Vibrant - he were deliberate. He believes that the likely purpose of the bypass was to enable social synchronizing features (such as Google+'s "+1" button); fixing one bit of coded policy broke another.

This wasn't much consolation to Whitley, however: where are the quality controls? "It's scary when they don't really tell you that's exactly what they have chosen to do as explicitly corporate policy. Or you have a bunch of uncontrolled programmers running around in a large corporation providing software for millions of users. That's also scary."

And this is where, for me, the issue at hand jumped from the parochial to the global. In the early days of the personal computer or of the Internet, it didn't matter so much if there were software bugs and insecurities, because everything based on them was new and understood to be experimental enough that there were always backup systems. Now we're in the computing equivalent of the intermediate period in a pilot's career, which is said to be the more dangerous time: that between having flown enough to think you know it all, and having flown enough to know you never will. (John F. Kennedy, Jr, was in that window when he crashed.)

Programmers are rarely brought into these kinds of discussions, yet are the people at the coalface who must transpose human language laws, regulations, and policies into the logical precision of computer code. As Danielle Citron explains in a long and important 2007 paper, Technological Due Process, that process inevitably generates many errors. Her paper focuses primarily on several large, automated benefits systems (two of them built by EDS) where the consequences of the errors may be denying the most needy and vulnerable members of society the benefits the law intends them to receive.

As the LSE's Chrisanthi Avgerou said, these issues apply across the board, in major corporations like Google, but also in government, financial services, and so on. "It's extremely important to be able to understand how they make these decisions." Just saying, "Trust us" - especially in an industry full of as many software holes as we've seen in the last 30 years - really isn't enough.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


October 7, 2011

In the club

Sometime around noon on October 8, 2011 I will no longer be a car owner. This is no small thing: like many Americans I started dreaming about my own car when I was 13 and got my license at 16. I have owned a car almost continuously since January 1975. What makes this a suitable topic for net.wars is that without the Internet it wouldn't have happened.

Since 1995, online retailing has progressively removed the need to drive to shops. By now, almost everything I buy is either within a few minutes' walk or online. I can no longer remember the last time I was in a physical supermarket in the UK.

The advent in 2005 of London's technology-reliant congestion charge (number plate recognition, Internet payment) meant a load of Londoners found it convenient to take advantage of the free parking in my area. I don't know what goes on in the heads of people who resent looking down their formerly empty street and seeing some strange cars parked for the day, but they promptly demanded controlled parking zones, even on my street, where daytime parking has never been an issue but the restaurants clog it up from 7pm to midnight. The CPZ made that worse. Result: escalating paranoia about taking the car anywhere in case I couldn't park when I got back.

But the biggest factor is a viable alternative. Car clubs and car-sharing were newspaper stories for some years - until earlier this year when, while walking a different route to the tube station, I spotted a parking space marked "CAR CLUB ONLY". It turns out that within a few minutes' walk of my house are five or six Streetcars (merging with Zipcar). For £60 a year I can rent one of these by the hour, including maintenance, insurance, tax, emergency breakdown service, congestion charge and, most important, its parking space. At £5.25 an hour it will take nearly 100 hours a year to match the base cost of car ownership - insurance, road tax, test, parking, AA membership, before maintenance. (There is no depreciation on a 24-year-old car!)

The viability of car clubs depends on the existence of both the Internet and mobile phone networks. Sharing expensive resources, even cars, is nothing new, but such arrangements used to rely on personal connections. The Internet is enabling sharing among strangers: you book via the club's Web site or mobile phone up to a few minutes before you want the car, and if necessary extend the booking by sending an SMS.

And so it was that about a month and a half ago it occurred to me that one day soon I would begin presiding over my well-loved car's slow march to scrap metal. How much should you spend on maintaining a car you hardly ever drive? If I sold it now, some other Nissan Prairie-obsessive could love it to death. A month later it passed its MOT for the cost of a replacement light bulb and promptly went up on eBay.

In journalism, they say one is a story, three is a trend. I am the second person on my street to sell their car and join the club in the last two months. The Liberal Democrat council that created the car club spaces can smirk over this: though some residents have complained in the local paper about the loss of parking for the car-owning public, the upshot will be less congestion overall.

The Internet is not going to kill the car industry, but it is going to reshape the pattern of distribution of car ownership among the population. Until now it's been a binary matter: you owned a car or you didn't. Most likely, the car industry will come out about even or a little ahead: some people who would have bought cars won't, some who wouldn't have bought cars will join a club, the clubs themselves will buy cars. City-dwellers have long been a poor market for car sales - lifelong Manhattanites often never learn how to drive - and today's teens are as likely to derive their feelings of freedom and independence from their mobile phones as from a car. The people who should feel threatened are probably local taxi drivers.

Nonetheless, removing the need to own a car to have quick access to one will remove a lot of excess capacity (as airlines would call it). What just-in-time manufacturing has done for companies like Dell and Wal-Mart, just-in-time ownership can now do for consumers: why have streets full of cars just sitting around all day?

To make it work, of course, consumers will have to defy decades of careful marketing designed to make them self-identify with particular brands and models (the car club cars are not beautiful Nissan Prairies but silly silver lozenges). Also, the club must keep its promise to provide a favorable member:car ratio, and the council must continue to allocate parking spaces.

Still, it's all in how you think about it. Membership in Zipcar in one location gives you access to the cars in all the rest. So instead of owning one car, I now have cars all over the world. Is that cool or what?

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

August 26, 2011

Master of your domain

The IANA is not responsible for deciding what is and what is not a country, wrote Jon Postel in 1994, in the Request for Comments document (RFC 1591) explaining the structure of the domain name system. At the time, the domain name system consisted of seven "generic" top-level domains (gTLDs: .edu, .com, .net, .org, .gov, .mil, and .int), plus the set of two-letter country codes, which Postel took from the ISO-3166 list. "It is extremely unlikely that any other TLDs will be created."

As Buffy said when she aimed the rocket launcher at the Judge, "That was then."

In late June the Internet Corporation for Assigned Names and Numbers announced its program to create new gTLDs, in the process entirely redefining the meaning of "generic", which used to mean a category type. What ICANN is really proposing are big-brand TLDs - because with an application fee of $185,000 and an annual subscription of $25,000 who else can afford one? In Internet terms, the new system will effectively give any company that signs up for one of these things - imagine .ibm, .disney, or .murdochsempire - the status of a country. Given recent reports that Apple has more cash on hand than the US government, that may merely reflect reality. But still.

Postel was writing in the year that the Internet was opened to commercial traffic. By 1995, with domain name registrations flooding into .com and trademark collisions becoming commonplace, discussions began about how to expand the namespace. These discussions eventually culminated in ICANN's creation.

A key element of the competing proposals of the mid-1990s was to professionalize the way the DNS was managed. Everyone trusted Postel, who had managed the DNS since its creation in 1983, but an international platform of the scope the Internet was attaining clearly could not be a one-man band, no matter how trustworthy. And it had become obvious that there was money in selling domain name registrations: formerly a free service, in 1995 registering in .com cost $50. ICANN's creation opened the way to create competing registrars under the control of each top-level domain's registry. As intended, prices dropped.

The other key element was the creation of new gTLDs. Between 2001 and 2003, ICANN introduced 13 new gTLDs. And I will bet that, like me, you will never have seen most of them in the wild. Because: everyone still wants to be in .com.

Proposals for creating new gTLDs always attract criticism, and usually on the same grounds: the names are confusing, overlapping, and poorly chosen, and do not reflect any clear idea about what the DNS is *for*. "What is the problem we are trying to solve?" Donna Hoffman, an early expert on the commercialization of the Internet, asked me in 1997 when I was first writing about the DNS debates. No one has ever proposed a cogent answer. Is the DNS a directory (the phone book's white pages), a system of categories (the yellow pages), a catalogue, or a set of keywords? This is not just a matter of abstruse philosophy, because how that question is answered helps determine the power balance between big operators and the "little guys" Internet pioneers hoped to empower.

You can see this concern in the arguments Esther Dyson makes at Slate opposing the program. But even the commercial interests this proposal is supposed to serve aren't happy. If you're Coca-Cola, can you afford to risk someone else's buying up your trademarked brand names? How many of them do you have to register to feel safe? Coca-Cola, for example, has at least half a dozen variants of its name that all converge on its main Web site: Coca-Cola with and without the hyphen, under .com and .biz, and also coke.com. Many other large companies have done the same kind of preemptive registrations. It may assist consumers who type URLs into their browsers' address bars (a shrinking percentage of Internet users), but otherwise the only benefits of this are financial and accrue to the registries, registrars, and ICANN itself.

All of that is why Dyson calls the new program a protection racket: companies will feel compelled to apply for their own namespaces in order to protect their brands. For it, they will gain nothing: neither new customers nor innovative technologies. But the financial gains to ICANN are substantial. Its draft budget for 2011-2012 (PDF) shows that the organization expects the new gTLD program to add more than $18 million to its bottom line if it goes ahead.

As net.wars has pointed out for some years now, the DNS matters less than it once did. Without the user-friendly layer of the DNS, email and the Web would never have taken off the way they did. But later technologies such as instant messaging, mobile networks, and many social networks do not require it once you've set up your account (although you use the DNS to find the Web site where you sign up in the first place). And, increasingly, as ReadWriteWeb noted in 2008, users automatically fire up a search engine rather than remember a URL and type it into the address bar. ICANN's competition is...Google. No wonder they need money.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

March 25, 2011

Return to the red page district

This week's agreement to create a .xxx generic top-level domain (generic in the sense of not being identified with a particular country) seems like a quaint throwback. Ten or 15 years ago it might have mattered. Now, for all the stories rehashing the old controversies, it seems to be largely irrelevant to anyone except those who think they can make some money out of it. How can it be a vector for censorship if there is no prohibition on registering pornography sites elsewhere? How can it "validate" the porn industry any more than printers and film producers did? Honestly, if it didn't have sex in the title, who would care?

I think it was about 1995 when a geekish friend said, probably at the Computers, Freedom, and Privacy conference, "I think I have the solution. Just create a top-level domain just for porn."

It sounded like a good idea at the time. Many of the best ideas are simple - with a kind of simplicity mathematicians like to praise with the term "elegant". Unfortunately, many of the worst ideas are also simple - with a kind of simplicity we all like to diss with the term "simplistic". Which this is depends to some extent on when you're making the judgement.

In 1995, the sense was that creating a separate pornography domain would provide an effective alternative to broad-brush filtering. It was the era of Time magazine's Cyberporn cover story, which Netheads thoroughly debunked, and of the run-up to the passage of the Communications Decency Act in 1996. The idea that children would innocently stumble upon pornography was entrenched and not wholly wrong. At that time, as PC Magazine points out while outlining the adult entertainment industry's objections to the new domain, a lot of Web surfing was done by guesswork, which is how the domain whitehouse.com became famous.

A year or two later, I heard that one of the problems was that no one wanted to police domain registrations. Sure. Who could afford the legal liability? Besides, limiting who could register what in which domain was not going well: .com, which was intended to be for international commercial organizations, had become the home for all sorts of things that didn't fit under that description, while the .us country code domain had fallen into disuse. Even today, with organizations controlling every top-level domain, the rules keep having to adapt to user behavior. Basically, the fewer people interested in registering under your domain the more likely it is that your rules will continue to work.

No one has ever managed to settle - again - the question of what the domain name system is for, a debate that's as old as the system itself: its inventor, Paul Mockapetris, still carries the scars of the battles over whether to create .com. (If I remember correctly, he was against it, but finally gave in on the basis of: "What harm can it do?") Is the domain name system a directory, a set of mnemonics, a set of brands/labels, a zoning mechanism, or a free-for-all? ICANN began its life, in part, to manage the answers to this particular controversy; many long-time watchers don't understand why it's taken so long to expand the list of generic top-level domains. Fifteen years ago, finding a consensus and expanding the list would have made a difference to the development of the Net. Now it simply does not matter.

I've written before now that the domain name system has faded somewhat in importance as newer technologies - instant messaging, social networks, iPhone/iPad apps - bypass it altogether. And that is true. When the DNS was young, it was a perfect fit for the Internet applications of the day for which it was devised: Usenet, Web, email, FTP, and so on. But the domain name system enables email and the Web, which are typically the gateways through which people make first contact with those services (you download the client via the Web, email your friend for his ID, use email to verify your account).

The rise of search engines - first Altavista, then primarily Google - did away with much of consumers' need for a directory. Also a factor was branding: businesses wanted memorable domain names they could advertise to their customers. By now, though, probably most people don't bother to remember more than a tiny handful of domain names - Google, Facebook, perhaps one or two more. Anything else they either put into a search engine or get from either a bookmark or, more likely, their browser history.

Then came sites like Facebook, which take an approach akin to CompuServe in the old days or mobile networks now: they want to be your gateway to everything online (Facebook is going to stream movies now, in competition with Netflix!). If they succeed, would it matter if you had - once - to teach your browser a user-unfriendly long, numbered address?

It is in this sense that the domain name system competes with Google and Facebook as the gateway to the Net. Of all the potential gateways, it is the only one that is intended as a public resource rather than a commercial company. That has to matter, and we should take seriously the threat that all the Net's entrances could become owned by giant commercial interests. But .xxx missed its moment to make history.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

February 18, 2011

What is hyperbole?

This seems to have been a week for over-excitement. IBM gets an onslaught of wonderful publicity because it built a very large computer that won at the archetypal American TV game, Jeopardy. And Eben Moglen proposes the Freedom box, a more-or-less pocket ("wall wart") computer you can plug in and that will come up, configure itself, and be your Web server/blog host/social network/whatever and will put you and your data beyond the reach of, well, everyone. "You get no spying for free!" he said in his talk outlining the idea for the New York Internet Society.

Now I don't mean to suggest that these are not both exciting ideas and that making them work is/would be an impressive and fine achievement. But seriously? Is "Jeopardy champion" what you thought artificial intelligence would look like? Is a small "wall wart" box what you thought freedom would look like?

To begin with Watson and its artificial buzzer thumb. The reactions display everything that makes us human. The New York Times seems to think AI is solved, although its editors focus on our ability to anthropomorphize an electronic screen with a smooth, synthesized voice and a swirling logo. (Like HAL, R2D2, and Eliza Doolittle, its status is defined by the reactions of the surrounding humans.)

The Atlantic and Forbes come across as defensive. The LA Times asks: how scared should we be? The San Francisco Chronicle congratulates IBM for suddenly becoming a cool place for the kids to work.

If, that is, they're not busy hacking up Freedom boxes. You could, if you wanted, see the past twenty years of net.wars as a recurring struggle between centralization and distribution. The Long Tail finds value in selling obscure products to meet the eccentric needs of previously ignored niche markets; eBay's value is in aggregating all those buyers and sellers so they can find each other. The Web's usefulness depends on the diversity of its sources and content; search engines aggregate it and us so we can be matched to the stuff we actually want. Web boards distributed us according to niche topics; social networks aggregated us. And so on. As Moglen correctly says, we pay for those aggregators - and for the convenience of closed, mobile gadgets - by allowing them to spy on us.

An early, largely forgotten net.skirmish came around 1991 over the asymmetric broadband design that today is everywhere: a paved highway going to people's homes and a dirt track coming back out. The objection that this design assumed that consumers would not also be creators and producers was largely overcome by the advent of Web hosting farms. But imagine instead that symmetric connections were the norm and everyone hosted their sites and email on their own machines with complete control over who saw what.

This is Moglen's proposal: to recreate the Internet as a decentralized peer-to-peer system. And I thought immediately how much it sounded like...Usenet.

For those who missed the 1990s: invented and implemented in 1979 by three students, Tom Truscott, Jim Ellis, and Steve Bellovin, the whole point of Usenet was that it was a low-cost, decentralized way of distributing news. Once the Internet was established, it became the medium of transmission, but in the beginning computers phoned each other and transferred news files. In the early 1990s, it was the biggest game in town: it was where Linus Torvalds and Tim Berners-Lee announced their inventions of Linux and the World Wide Web.

It always seemed to me that if "they" - whoever they were going to be - seized control of the Internet we could always start over by rebuilding Usenet as a town square. And this is to some extent what Moglen is proposing: to rebuild the Net as a decentralized network of equal peers. Not really Usenet; instead a decentralized Web like the one we gave up when we all (or almost all) put our Web sites on hosting farms whose owners could be DMCA'd into taking our sites down or subpoena'd into turning over their logs. Freedom boxes are Moglen's response to "free spying with everything".

I don't think there's much doubt that the box he has in mind can be built. The Pogoplug, which offers a personal cloud and a sort of hardware social network, is most of the way there already. And Moglen's argument has merit: that if you control your Web server and the nexus of your social network law enforcement can't just make a secret phone call, they'll need a search warrant to search your home if they want to inspect your data. (On the other hand, seizing your data is as simple as impounding or smashing your wall wart.)

I can see Freedom boxes being a good solution for some situations, but like many things before them they won't scale well to the mass market because they will (like Usenet) attract abuse. In cleaning out old papers this week, I found a 1994 copy of Esther Dyson's Release 1.0 in which she demands a return to the "paradise" of the "accountable Net"; 'twill be ever thus. The problem Watson is up against is similar: it will function well, even engagingly, within the domain it was designed for. Getting it to scale will be a whole 'nother, much more complex problem.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


February 4, 2011

Blackout

They didn't even have to buy ten backhoes.

The most fundamental mythology of the Net goes like this. The Internet was built to withstand bomb outages. Therefore, it can withstand anything. Defy authority. Whee!

This basic line of thinking underlay a lot of early Net hyperbole, most notably Grateful Dead lyricist John Perry Barlow's Declaration of the Independence of Cyberspace. Barlow's declaration was widely derided even at the time; my favorite rebuttal was John Gilmore's riposte at Computers, Freedom, and Privacy 1995, that cyberspace was just a telephone network with pretensions. (Yes, the same John Gilmore who much more famously said, "The Internet perceives censorship as damage, and routes around it.")

Like all the best myths, the idea of the Net's full-bore robustness was both true and not true. It was true in the sense that the first iteration of the Net - ARPAnet - was engineered to share information and enable communications even after a bomb outage. But it was not true in the sense that there have always been gods who could shut down their particular bit of communications heaven. There are, in networking and engineering terms, central points of failure. It is also not true in the sense that a bomb is a single threat model, and the engineering decisions you make to cope with other threat models - such as, say, a government - might be different.

The key to withstanding a bomb outage - or in fact any other kind of outage - is redundancy. There are no service-level agreements for ADSL (at least in the UK), so if your business is utterly dependent on having a continuous Internet connection you have two broadband suppliers and a failover set-up for your router. You have a landline phone and a mobile phone, an email connection and private messaging on a social network, you have a back-up router, and a spare laptop. The Internet's particular form of redundancy comes from the way data is transmitted: the packets that make up every message do not have to follow any particular route when the sender types in a destination address. They just have to get there, just as last year passengers stranded by the Icelandic volcano looked for all sorts of creative alternative routes when their original direct flights were canceled.
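
The failover half of that advice is simple enough to sketch; here is a minimal Python version (the gateway addresses are documentation-range placeholders, not real suppliers):

    # A sketch of the failover logic described above: probe the primary
    # line's gateway, fall back to the second supplier if it's unreachable.
    # The addresses are documentation-range placeholders, not real gateways.
    import socket

    def line_is_up(gateway, port=53, timeout=2.0):
        try:
            with socket.create_connection((gateway, port), timeout=timeout):
                return True
        except OSError:
            return False

    PRIMARY, BACKUP = "192.0.2.1", "198.51.100.1"
    active = PRIMARY if line_is_up(PRIMARY) else BACKUP
    print("routing traffic via", active)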

Even in 1995, when Barlow and Gilmore were having that argument, the Internet had some clear central points of failure - most notably the domain name system, which relies on updates that ultimately come from a single source. At the physical level, it wouldn't take cutting too many cables - those ten backhoes again - to severely damage data flows.

But back then all of today's big, corporate Net owners were tiny, and the average consumer had many more choices of Internet service provider than today. In many parts of the US consumers are lucky to have two choices; the UK's rather different regulatory regime has created an ecology of small xDSL suppliers - but behind the scenes a great deal of their supply comes from BT. A small number of national ISPs - eight? - seems to be the main reason the Egyptian government was able to shut down access. Former BT Research head Peter Cochrane writes that Egyptians-in-the-street managed to find creative ways to get information out. But if the goal was to block people's ability to use social networks to organize protests, the Egyptian government may indeed have bought itself some time. Though I liked late-night comedian Conan O'Brien's take: "If you want people to stay at home and do nothing, turn the Internet back on."

While everyone is publicly calling foul on Egypt's actions, can there be any doubt that there are plenty of other governments who will be eying the situation with a certain envy? Ironically, the US government is the only one known to be proposing a kill switch. We have to hope that the $110 million the five-day outage is thought to have cost Egypt will give them pause.

In his recent book The Master Switch, Columbia professor Tim Wu uses the examples set by the history of radio, television, and the telephone network to argue that all media started their lives as open experiments but have gone on to become closed and controlled as they mature. The Internet, he says there, and again this week in the press, is likely on the verge of closing.

What would the closed Internet look like? Well, it might look something like Apple's ecology: getting an app into the app store requires central approval, for example. Or it might look something like the walled gardens to which many mobile network operators limit their customers' access. Or perhaps something like Facebook, which seeks to mediate its users' entire online experience: one reason so many people use it for messaging is that it's free of spam. In the history of the Internet, open access has beaten out such approaches every time. CompuServe and AOL's central planning lost to the Web; general purpose computers ruled.

I don't think it's clear which way the Internet will wind up, and it's much less clear whether it will follow the same path in all countries or whether dissidents might begin rebuilding the open Net by cracking out the old modems and NNTP servers. But if closure does happen, this week may have been the proof of concept.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

April 16, 2010

Data-mining the data miners

The case of murdered Colombian student Anna Maria Chávez Niño, presented at this week's Privacy Open Space, encompasses both extremes of the privacy conundrum posed by a world in which 400 million people post intimate details about themselves and their friends onto a single, corporately owned platform. The gist: Chávez met her murderers on Facebook; her brother tracked them down, also on Facebook.

Speaking via video link to Cédric Laurant, a Brussels-based independent privacy consultant, Juan Camilo Chávez noted that his sister might well have made the same mistake - inviting dangerous strangers into her home - by other means. But without Facebook he might not have been able to identify the killers. Criminals, it turns out, are just as clueless about what they post online as anyone else. Armed with the CCTV images, Chávez trawled Facebook for similar photos. He found the murderers selling off his sister's jacket and guitar. As they say, busted.

This week's PrivacyOS was the fourth in a series of EU-sponsored conferences to collaborate on solutions to that persistent, growing, and increasingly complex problem: how to protect privacy in a digital world. This week's focused on the cloud.

"I don't agree that privacy is disappearing as a social value," said Ian Brown, one of the event's organizers, disputing Mark privacy-is-no-longer-a-social-norm Zuckerberg's claim. The world's social values don't disappear, he added, just because some California teenagers don't care about them.

Do we protect users through regulation? Require subject releases for YouTube or Qik? Require all browsers to ship with cookies turned off? As Lilian Edwards observed, the latter would simply make many users think the Internet is broken. My notion: require social networks to add a field to photo uploads requiring users to enter an expiration date after which it will be deleted.
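
As a sketch of what that expiration-date notion might look like on the back end (the field names and in-memory "store" are invented for illustration):

    # Sketch of the proposal above: every upload carries a user-supplied
    # expiry date, and a routine purge pass deletes anything past it.
    from datetime import date

    photos = [
        {"id": 1, "owner": "alice", "expires": date(2011, 4, 16)},
        {"id": 2, "owner": "bob",   "expires": date(2010, 5, 1)},
    ]

    def purge(uploads, today):
        return [p for p in uploads if p["expires"] > today]

    photos = purge(photos, today=date(2010, 6, 1))
    print(photos)   # only the photo whose expiry is still in the future survives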

But, "This is meant to be a free world," Humberto Morán, managing director of Friendly Technologies, protested. Free as in speech, free as in beer, or free as in the bargain we make with our data so we can use Facebook or Google? We have no control over those privacy policy contracts.

"Nothing is for free," observed NEC's Amardeo Sarma. "You pay for it, but you don't know how you pay for it." The key issue.

What frequent flyers know is that they can get free flights once in a while in return for their data. What even the brightest, most diligent, and most paranoid expert cannot tell them is what the consequences of that trade will be 20 years from now, though the Privacy Value Networks project is attempting to quantify this. It's hard: any photographer will tell you that a picture's value is usually highest when it's new, but sometimes suddenly skyrockets decades later when its subject shoots unexpectedly to prominence. Similarly, the value of data, said David Houghton, changes with time and context.

It would be more right to say that it is difficult for users to understand the trade-offs they're making and there are no incentives for government or commerce to make it easy. And, as the recent "You have 0 Friends" episode of South Park neatly captures, the choice for users is often not between being careful and being careless but between being a hermit and participating in modern life.

Better tools ought to be a partial solution. And yet: the market for privacy-enhancing technologies is littered with market failures. Even the W3C's own Platform for Privacy Preferences (P3P), for example, is not deployed in the current generation of browsers - and when it was provided in Internet Explorer users didn't take advantage of it. The projects outlined at PrivacyOS - PICOS and PrimeLife - are frustratingly slow to move from concept to prototype. The ideas seem right: providing a way to limit disclosures and authenticate identity to minimize data trails. But, Lilian Edwards asked: is partial consent or partial disclosure really possible? It's not clear that it is, partly because your friends are also now posting information about you. The idea of a decentralized social network, workshopped at one session, is interesting, but might be as likely to expand the problem as modulate it.

And, as it has throughout the 25 years since the first online communities were founded, the problem keeps growing exponentially in size and complexity. The next frontier, said Thomas Roessler: the sensor Web that incorporates location data and input from all sorts of devices throughout our lives. What does it mean to design a privacy-friendly bathroom scale that tweets your current and goal weights? What happens when the data it sends gets mashed up with the site you use to monitor the calories you consume and burn and your online health account? Did you really understand when you gave your initial consent to the site what kind of data it would hold and what the secondary uses might be?

So privacy is hard: to define, to value, to implement. As Seda Gürses, studying how to incorporate privacy into social networks, said, privacy is a process, not an event. "You can't do x and say, Now I have protected privacy."


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. This blog eats non-spam comments for reasons surpassing understanding.

July 3, 2009

What's in an assigned name?

There's a lot I didn't know at the time about the founding of the Internet Corporation for Assigned Names and Numbers, but I do remember the spat that preceded it. Until 1998, the systems for assigning domain names (DNS) and assigning Internet numbers (IANA) were both managed by one guy, Jon Postel, who by all accounts and records was a thoughtful and careful steward and an important contributor to much of the engineering that underpins the Internet even now. Even before he died in October 1998, however, plans were underway to create a successor organization to take over the names and numbers functions.

The first proposal was to turn these bits of management over to the International Telecommunication Union, and a memorandum of understanding was drawn up that many, especially within the ITU, assumed would pass unquestioned. Instead, there was much resentment and many complaints that important stakeholders (consumers, most notably) had been excluded. Eventually, ICANN was created under the auspices of the US Department of Commerce, intended to become independent once it had fulfilled certain criteria. We're still waiting.

As you might expect, the US under Bush II wasn't all that interested in handing off control. The US government had some support in this, in part because many in the US seem to have difficulty accepting that the Internet was not actually built by the US alone. So alongside the US government's normal resistance to relinquishing control was an endemic sense that it would be "giving away" something the US had created.

All that aside, the biggest point of contention was not ICANN's connection to the US government, however much those outside the US might wish to see it cut. Nor was it the assignment of numbers, which, since numbers are the way the computers find each other, is actually arguably the most important bit of the whole thing. It wasn't even, or at least not completely, the money (PDF), as staggering as it is that ICANN expects to rake in $61 million in revenue this year as its cut of domain name registrations. No, of course it was the names that are meaningful to people: who should be allowed to have what?

All this background is important because on September 30 the joint project agreement with DoC under which ICANN operates expires, and all these debates are being revisited. Surprisingly little has changed in the arguments about ICANN since 1998. Michael Froomkin argued in 2000 (PDF) that ICANN bypassed democratic control and accountability. Many critics have argued in the intervening years that ICANN needs to be reined in: its mission kept to a narrow focus on the DNS, its structure designed to be transparent and accountable, and kept free of not only US government interference but that of other governments as well.

Last month, the Center for Democracy and Technology published its comments to that effect. Last year, and in 2006, former elected ICANN board member Karl Auerbach argued similarly, with much more discussion of ICANN's finances, which he regards as a "tax". Perhaps even more than might have been obvious then: ICANN's new public dashboard has revealed that the company lost $4.6 million on the stock market last year, an amount reporter John Levine equates to the 20-cent fee from 23 million domain name registrations. As Levine asks, if they could afford to lose that amount then they didn't need the money - so why did they collect it from us? There seems to be no doubt that ICANN can keep growing in size and revenues by creating more top-level domains, especially as it expands into long-mooted non-ASCII names (IDNs).

Arguing about money aside, the fact is that we have not progressed much, if at all, since 1998. We are asking the same questions and having the same arguments. What is the DNS for? Should it be a directory, a handy set of mnemonics, a set of labels, a zoning mechanism, or a free-for-all? Do languages matter? Early discussions included the notion that there would be thousands, even tens of thousands of global top-level domains. Why shouldn't Microsoft, Google, or the Electronic Frontier Foundation operate their own registries? Is managing the core of the Internet an engineering, legal, or regulatory problem? And, latterly, given the success and central role of search engines, do we need DNS at all? Personally, I lean toward the view that the DNS has become less important than it was, as many services (Twitter, instant messaging, VOIP) do not require it. Even the Web needs it less than it did. But if what really matters about the DNS is giving people names they can remember, then from the user point of view it matters little how many top-level domains there are. The domain info.microsoft is no less memorable than microsoft.info or microsoft.com.

What matters is that the Internet continues to function and that anyone can reach any part of it. The unfortunate thing is that none of these discussions have solved the problems we really have. Four years after the secured version of DNS (DNSsec) was developed to counteract security threats such as DNS cache poisoning that had been mooted for many more years than that, it's still barely deployed.
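
If you want to check a zone for yourself, a rough sketch that shells out to the standard dig tool looks like this; note that seeing the "ad" (authenticated data) flag also depends on asking a validating resolver, and the domain here is a placeholder:

    # Rough check of whether a zone's answers validate under DNSSEC.
    # Requires the dig utility; example.com is a placeholder domain.
    import subprocess

    result = subprocess.run(
        ["dig", "+dnssec", "example.com", "A"],
        capture_output=True, text=True,
    )
    validated = any("flags:" in line and " ad" in line
                    for line in result.stdout.splitlines())
    print("validated" if validated
          else "no validation: unsigned zone or non-validating resolver")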

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here or send email to netwars@skeptic.demon.co.uk.

September 12, 2008

Slow news

It took a confluence of several different factors for a six-year-old news story to knock 75 percent off the price of United Airlines shares in under an hour earlier this week. The story said that United Airlines was filing for bankruptcy, and of course it was true - in 2002. Several media owners are still squabbling about whose fault it was. Trading was halted after that first hour by the systems put in place after the 1987 crash, but even so the company's shares closed 10 percent down on the day. Long-term it shouldn't matter in this case, but given a little more organization and professionalism that sort of drop provides plenty of opportunities for securities fraud.

The factor the companies involved can't sue: human psychology. Any time you encounter a story online you make a quick assessment of its credibility by considering: 1) the source; 2) its likelihood; 3) how many other outlets are saying the same thing. The paranormal investigator and magician James Randi likes to sum this up by saying that if you claimed you had a horse in your back yard he might want a neighbor's confirmation for proof, but if you said you had a unicorn in your back yard he'd also want video footage, samples of the horn, close-up photographs, and so on. The more extraordinary the claim, the more extraordinary the necessary proof. The converse is also true: the less extraordinary the claim and the better the source, the more likely we are to take the story on faith and not bother to check.
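
If you wanted to caricature that quick mental check in code, it might look something like this toy Python scoring function (the weights and scores are invented; real judgment is not this tidy):

    # Toy caricature of the credibility check described above.
    def credibility(source_trust, claim_plausibility, corroborating_outlets):
        """Each input is a rough 0-1 score; corroboration helps, capped at 1."""
        corroboration = min(corroborating_outlets / 3.0, 1.0)
        return 0.4 * source_trust + 0.4 * claim_plausibility + 0.2 * corroboration

    horse   = credibility(0.6, 0.9, 1)    # plausible claim, one neighbor confirms
    unicorn = credibility(0.6, 0.05, 1)   # same source, wildly implausible claim
    print(round(horse, 2), round(unicorn, 2))   # the unicorn needs far more proof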

Like a lot of other people, I saw the United story on Google News on Monday. There's nothing particularly shocking these days about an airline filing for bankruptcy protection, so the reaction was limited to "What? Again? I thought they were doing better now" and a glance underneath the headline to check the source. Bloomberg. Must be true. Back to reading about the final in prospect between Andy Murray and Roger Federer at the US Open.

That was a perfectly fine approach in the days when all content was screened by humans and media were slow to publish. Even then there were mistakes, like the famous 1993 incident when a shift worker at Sky News saw an internal rehearsal for the Queen Mother's death on a monitor and mentioned it on the phone to his mother in Australia, who in turn passed it on to the media, which took it up and ran with it.

But now, in the time that thought process takes, daytraders have clicked in and out of positions and automated media systems have begun republishing the story. It was the interaction of several independently owned automated systems that made what ought to have been a small mistake into one that hit a real company's real financial standing - with that effect, too, compounded by automated systems. Logically, we should expect to see many more such incidents, because all over the Web 2.0 we are building systems that talk to each other without human intervention or oversight.

A lot of the Net's display choices are based on automated popularity contests: on-the-fly generated lists of the current top ten most viewed stories, Amazon book rankings, Google's page rank algorithm that bumps to the top sites with the most inbound links for a given set of search terms. That's no different from other media: Jacqueline Kennedy and Princess Diana were beloved of magazine covers for the most obvious sale-boosting reasons. What's different is that on the Net these measurements are made and acted upon instantaneously, and sometimes from very small samples, which is why in a very slow news hour on a small site a single click on a 2002 story seems to have bumped it up to the top, where Google spotted it and automatically inserted it into its feed.
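
To make that concrete, here's a toy Python sketch of how a "most viewed" list built from a tiny sample behaves (the story slugs and click counts are invented):

    # In a slow news hour a handful of clicks is the whole sample, so a few
    # visits to a 2002 story are enough to put it at the top.
    from collections import Counter

    clicks_this_hour = Counter({
        "local-school-board-2008": 2,
        "county-fair-results-2008": 3,
        "united-airlines-bankruptcy-2002": 4,
    })

    print(clicks_this_hour.most_common(3))
    # [('united-airlines-bankruptcy-2002', 4), ('county-fair-results-2008', 3), ...]
    # A crawler that trusts "most viewed" now treats a six-year-old story as news.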

The big issue, really - leaving aside the squabble between the Tribune and Google over whether Google should have been crawling its site at all - is the lack of reliable dates. It's always a wonder to me how many Web sites fail to anchor their information in time: the date a story is posted or a page is last updated should always be present. (I long, in fact, for a browser feature that would display at the top of a page the last date a page's main content was modified.)
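
As a sketch of what such a feature could start from, Python's requests library can at least ask the server what it claims (the URL here is a placeholder, and plenty of servers omit or misstate the header, which is exactly the problem):

    # Ask the server when the page was last modified.
    import requests

    response = requests.head("https://example.com/some-story",
                             allow_redirects=True)
    print(response.headers.get("Last-Modified",
                               "No Last-Modified header - date unknown"))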

Because there's another phenomenon that's insufficiently remarked upon: on the Internet, nothing ever fully dies. Every hour someone discovers an old piece of information for the first time and thinks it's new. Most of the time, it doesn't matter: Dave Barry's exploding whale is hilariously entertaining no matter how many times you've read it or seen the TV clip. But Web 2.0 will make endless recycling part of our infrastructure rather than a rare occurrence.

In 1998 I wrote that crude hacker defacement of Web sites was nothing to worry about compared to the prospect of the subtle poisoning of the world's information supply that might become possible as hackers became more sophisticated. This danger is still with us, and the only remedy is to do what journalists used to be paid to do: check your facts. Twice. How do we automate that?


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

August 29, 2008

Bannedwidth

The news that Comcast is openly imposing a monthly 250GB bandwidth cap for its broadband subscribers sounds, as many have noted, more generous than it is. Comcast doesn't have to lower the cap progressively for customers to feel the crunch; the amount of data everyone shifts around grows inexorably year by year. Just as the 640K Bill Gates denies he ever said was enough for anybody is today barely an email, soon 250GB will be peanuts. Comcast's move will more likely pull the market away from all-you-can-eat to arguably logical banded charging.
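
Some back-of-the-envelope arithmetic - my own rough figures, not Comcast's - on what 250GB a month actually buys:

    # Back-of-the-envelope: what a 250GB monthly cap allows. The per-hour
    # video figures are rough assumptions, not Comcast's numbers.
    cap_gb = 250
    per_day_gb = cap_gb / 30.0                # roughly 8.3GB a day

    sd_gb_per_hour = 1.0                      # assumed standard-definition stream
    hd_gb_per_hour = 3.0                      # assumed high-definition stream

    print(round(per_day_gb, 1), "GB a day")
    print(int(cap_gb / sd_gb_per_hour), "hours of SD video a month")
    print(int(cap_gb / hd_gb_per_hour), "hours of HD video a month")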

We should keep that in mind as the European Parliament goes to debate the telecoms package on Tuesday, with a first reading plenary vote scheduled for the Strasbourg session on September 22-25.

Many of the consumer provisions make sense, such as demanding that all users have free access to the EU-wide and national emergency numbers, that there be at least one directory enquiries service, and that there be "adequate" geographical coverage of public payphones. Those surrounded by yapping mobile phones everywhere they go may wonder why we still need payphones, but the day your battery dies, your phone gets lost, stolen, or broken, or you land in a foreign country and discover that for some reason your phone doesn't work, you'll be grateful, trust me.

The other consumer provision everyone has to like is the one that requires greater transparency about pricing. What's unusual about the Comcast announcement is that it's open and straightforward; in the UK so far, both ISPs and "all-you-can-eat" music download services have a history of being coy about exactly what level of use is enough to get you throttled or banned. In credit cards, American Express's "no preset spending limit" is valuable precisely because it gives the consumer greater flexibility than the credit limits imposed by Visa and Mastercard; in online services the flexibility is all on the side of the supplier. Most people would be willing to stay on the south side of a bandwidth cap if only they knew what it was. One must surmise that service providers don't like to disclose the cap because they think knowing what it is will encourage light users to consume more, upsetting the usage models their business plans are based on.

The more contentious areas are, of course, those that relate to copyright infringement. Navigating through the haze of proposed amendments and opinions doesn't really establish exactly what's likely to happen. But in recent months there have been discussions of everything from notice-and-takedown rules to three-strikes-and-you're-offline. Many of these defy the basic principles on which European and American justice is supposed to rest: due process and proportionate punishment. Take, for example, the idea of tossing someone offline and putting them on a blacklist so they can't get an account with another ISP. That fails both principles: either an unrelated rightsholder or the original ISP or both would be acting as a kangaroo court, and being thrown offline would not only disconnect the user from illegal online activities but in many cases make it impossible for that person's whole household to do homework, pay bills, and interact with both government and social circles.

That punishment would be wholly disproportionate even if you could guarantee there would be no mistakes and all illegal activities would be punished equally. But in fact no one can guarantee that. An ISP cannot scan traffic and automatically identify copyright infringement; and with millions of people engaging in P2P file-sharing (seemingly the target of most of this legislation) any spotting of illegal activity has to be automated. In addition, over time, as legal downloads (Joss Whedon's Dr. Horrible's Sing-Along Blog managed 2.2 million downloads from iTunes in the first week besides crashing its streaming server) outstrip illegal ones, simply being a heavy user won't indicate anything about whether the user's activity is legal or not.

Part of the difficulty is finding the correct analogy. Is the crime of someone who downloads a torrent of The Big Bang Theory and leaves the downloaded copy seeding afterwards the same as that of someone who sets up a factory and puts out millions of counterfeit DVD copies? Is downloading a copy of the series the same as stealing the DVDs from a shop? I would say no: counterfeit DVDs unarguably cost the industry sales in a way that downloading does not, or not necessarily. Similarly, stealing a DVD from a shop has a clearly identifiable victim (the shop itself) in a way that downloading a copy does not. But in both those cases the penalties are generally applied by courts operating under democratically decided procedures. That is clearly not the case when ISPs act on complaints by rightsholders with no penalties imposed upon them for false accusations. A more appropriate punishment would be a fine, and even that should be limited to cases of clear damage, such as the unauthorized release of material that has yet to be commercially launched.

For all these reasons, ISPs should be wary of signing onto the rightsholders' bandwagon when their concern is user demand for bandwidth. We would, I imagine, see very different responses from them if, as I think ought to happen, anti-trust law were invoked to force the separation of content owners from bandwidth providers.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

April 18, 2008

Like a Virgin

Back in November 2005 the CEO of AT&T, Ed Whitacre, told Business Week that he was tired of big Internet sites like Google and Yahoo! using "my pipes" "for free". With those words he launched the issue of network neutrality onto the front pages and into the public consciousness. At the time, it seemed like what one of my editors used to grandly dismiss as an "American issue". (One such issue, it's entertaining to remember now, was spam. That was in 1997.) The only company dominant enough and possessed of sufficient infrastructure to impose carriage charges on content providers in the UK was BT - and if BT had tried anything like that Ofcom would - probably - have stomped all over it.

But what starts in America usually winds up here a few years later, and this week, the CEO of Virgin Media, Neil Berkett, threatened that video providers who don't pay for faster service may find their traffic being delivered in slow "bus lanes". Network neutrality, he said, was "a load of bollocks".

His PR people recanted - er, clarified a day or two later. We find it hard to see how a comment as direct as "a load of bollocks" could be taken out of context. However. Let's say he was briefly possessed by the spirit of Whitacre, who most certainly meant what he said.

The recharacterization of Berkett's comments: the company isn't really going to deliberately slow down YouTube and the BBC's iPlayer. Instead, it "could offer content providers deals to upgrade their provisioning." I thought this sounded like the wheeze where you're not charged more for using a credit card, you're given a discount for paying cash. But no: what they say they have in mind is direct peering, in which no money changes hands, which they admit could be viewed as a "non-neutral" solution.

But, says Keith Mitchell, a fellow member of the Open Rights Group advisory board, "They are in for a swift education in the way the global transit/peering market works if they try this." Virgin seems huge in the context of the UK, where its ownership of the former ntl/Telewest combine gives it a lock on the consumer cable market - but in the overall scheme of things it's "a very small fish in the pond compared to the Tier 1 transit providers, and the idea that they can buck this model single-handedly is laughable."

Worse, he says, "If Virgin attempts to cost recover for interconnects off content providers on anything other than a sender-keeps-all/non-settlement basis, they'll quickly find themselves in competition with the transit providers, whose significantly larger economies of scale put them in a position to provide a rather cheaper path from the content providers."

What fun. In other words, if you're, say, the BBC, and you're faced with paying extra in some form to get your content out to the Net you'd choose to pay the big trucking company with access to all the best and fastest roads and the international infrastructure rather than the man-with-a-van who roams your local neighborhood.

ISPs versus the iPlayer seems likely to run and run. It's clear, for example, that streaming is growing at a hefty clip. Obviously, within the UK the iPlayer is the biggest single contributor to this; viewers are watching a million programs a week online, sopping up 3 to 5 percent of all Internet traffic in Britain.

We've seen exactly this sort of argument before: file-sharing (music, not video!), online gaming, binary Usenet newsgroups. Why (ancient creaking voice) I remember when the big threat was the advent of the graphical Web, which nearly did kill the Net (/ancient creaking voice). The difference this time is that there is a single organization with nice, deep, taxpayer-funded pockets to dig into. Unlike the voracious spider that was Usenet, the centipede that is file-sharing, or the millipedes who were putting up Web sites, YouTube and the BBC make up an easily manageable number of easily distinguished targets for a protection racket. At the same time, the consolidation of the consumer broadband market from hundreds of dial-up providers into a few very large broadband providers means competition is increasingly mythical.

But the iPlayer is only one small piece of the puzzle. Over the next few years we're going to see many more organizations offering streaming video across the Net. For example, a few weeks ago I signed up for an annual pass for the streaming TV service for the nine biggest men's tennis tournaments of the year. The economics make sense: $70 a year versus £20 a month for Sky Sports - and I have no interest in any of Sky's other offerings - or pay nothing and "watch" really terrible low-resolution video over a free Chinese player offering rebroadcasts of uncertain legality.

The real problem, as several industry insiders have said to me lately, is pricing. "You have a product," said one incredulously, "that people want more and more of, and you can't make any money selling it?" When companies like O2 are offering broadband for £7.50 a month as a loss-leading add-on to mobile phone connections, consumers don't see why they should pay any more than that. Jerky streaming might be just the motivator to fix that.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

March 14, 2008

Uninformed consent

Apparently the US Congress is now being scripted by Jon Stewart of the Daily Show. In a twist of perfect irony, the House of Representatives has decided to hold its first closed session in 25 years to debate - surveillance.

But it's obvious why they want closed doors: they want to talk about the AT&T case. To recap: AT&T is being sued for its complicity in the Bush administration's warrantless surveillance of US citizens after its technician Mark Klein blew the whistle by taking documents to the Electronic Frontier Foundation (which a couple of weeks ago gave him a Pioneer Award for his trouble).

Bush has, of course, resisted any effort to peer into the innards of his surveillance program by claiming it's all a state secret, and that's part of the point of this Congressional move: the Democrats have fielded a bill that would give the whole program some more oversight and, significantly, reject the idea of giving telecommunications companies - that is, AT&T - immunity from prosecution for breaking the law by participating in warrantless wiretapping. 'Snot fair that they should deprive us of the fun of watching the horse-trading. It can't, surely, be that they think we'll be upset by watching them slag each other off. In an election year?

But it's been a week for irony, as Wikipedia founder Jimmy Wales has had his sex life exposed when he dumped his girlfriend and been accused of - let's call it sloppiness - in his expense accounts. Worse, he stands accused of trading favorable page edits for cash. There's always been a strong element of Schadenpedia around, but the edit-for-cash thing really goes to the heart of what Wikipedia is supposed to be about.

I suspect that nonetheless Wikipedia will survive it: if the foundation has the sense it seems to have, it will display zero tolerance. But the incident has raised valid questions about how Wikipedia can possibly sustain itself financially going forward. The site is big and has enviable masses of traffic; but it sells no advertising, choosing instead to live on hand-outs and the work of volunteers. The idea, I suppose, is that accepting advertising might taint the site's neutral viewpoint, but donations can do the same thing if they're not properly walled off: just ask the US Congress. It seems to me that an automated advertising system they did not control would be, if anything, safer. And then maybe they could pay some of those volunteers, even though it would be a pity to lose some of the site's best entertainment.

With respect to advertising, it's worth noting that Phorm is under increasing pressure. Earlier this week, we had an opportunity to talk to Kent Ertegrul, CEO of Phorm, who continues to maintain that Phorm's system, because it does not store data, is more protective of privacy than today's cookie-driven Web. This may in fact be true.

Less certain is Ertegrul's belief that the system does not contravene the Regulation of Investigatory Powers Act, which lays down rules about interception. Ertegrul has some support from an informal letter from the Home Office whose reasoning seems to be that if users have consented and have been told how they can opt out, it's legal. Well, we'll see; there's a lot of debate going on about this claim and it will be interesting to hear the Information Commissioner's view. If the Home Office's interpretation is correct, it could open a lot of scope for abusive behavior that could be imposed upon users simply by adding it to the terms of service to which they theoretically consent when they sign up, and a UK equivalent of AT&T wanting to assist the government with wholesale warrantless wiretapping would have only to add it to the terms of service.

The real problem is that no one really knows how Phorm's system works. Phorm doesn't retain your IP address, but the ad servers surely have to know it when they're sending you ads. If you opt out but can still opt back in (as Ertegrul said you can), doesn't that mean you still have a cookie on your system and that your data is still passed to Phorm's system, which discards it instead of sending you ads? If that's the case, doesn't that mean you cannot opt out of having your data shared? If that isn't how it works, then how does it work? I thought I understood it after talking to Ertegrul, I really did - and then someone asked me to explain how Phorm's cookie's usefulness persisted between sessions, and I wasn't sure any more. I think the Open Rights Group is right: Phorm should publish details of how its system works for experts to scrutinize. Until Phorm does that, the misinformation Ertegrul is so upset about will continue. (More disclosure: I am on ORG's Advisory Council.)

But maybe the Home Office is on to something. Bush could solve his whole problem by getting everyone to give consent to being surveilled at the moment they take US citizenship. Surely a newborn baby's footprint is sufficient agreement?

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

February 22, 2008

Strikeout

There is a certain kind of mentality that is actually proud of not understanding computers, as if there were something honorable about saying grandly, "Oh, I leave all that to my children."

Outside of computing, only television gets so many people boasting of their ignorance. Do we boast how few books we read? Do we trumpet our ignorance of other practical skills, like balancing a cheque book, cooking, or choosing wine? When someone suggests we get dressed in the morning do we say proudly, "I don't know how"?

There is so much insanity coming out of the British government on the Internet/computing front at the moment that the only possible conclusion is that the government is made up entirely of people who are engaged in a sort of reverse pissing contest with each other: I can compute less than you can, and see? here's a really dumb proposal to prove it.

How else can we explain yesterday's news that the government is determined to proceed with Contactpoint even though the report it commissioned and paid for from Deloitte warns that the risk of storing the personal details of every British child under 16 can only be managed, not eliminated? Lately, it seems that there's news of a major data breach every week. But the present government is like a batch of 20-year-olds who think that mortality can't happen to them.

Or today's news that the Department of Culture, Media, and Sport has launched its proposals for "Creative Britain", and among them is a very clear diktat to ISPs: deal with file-sharing voluntarily or we'll make you do it. By April 2009. This bit of extortion nestles in the middle of a bunch of other stuff about educating schoolchildren about the value of intellectual property. Dare we say: if there were one thing you could possibly do to ensure that kids sneer at IP, it would be to teach them about it in school.

The proposals are vague in the extreme about what kind of regulation the DCMS would accept as sufficient. Despite the leaks of last week, culture secretary Andy Burnham has told the Financial Times that the "three strikes" idea was never in the paper. As outlined by Open Rights Group executive director Becky Hogge in New Statesman, "three strikes" would mean that all Internet users would be tracked by IP address and warned by letter if they are caught uploading copyrighted content. After three letters, they would be disconnected. As Hogge says (disclosure: I am on the ORG advisory board), the punishment will fall equally on innocent bystanders who happen to share the same house. Worse, it turns ISPs into a squad of private police for a historically rapacious industry.

Charles Arthur, writing in yesterday's Guardian, presented the British Phonographic Industry's case about why the three strikes idea isn't necessarily completely awful: it's better than being sued. (These are our choices?) ISPs, of course, hate the idea: this is an industry with nanoscale margins. Who bears the liability if someone is disconnected and starts to complain? What if they sue?

We'll say it again: if the entertainment industries really want to stop file-sharing, they need to negotiate changed business models and create a legitimate market. Many people would be willing to pay a reasonable price to download TV shows and music if they could get in return reliable, fast, advertising-free, DRM-free downloads at or soon after the time of the initial release. The longer the present situation continues the more entrenched the habit of unauthorized file-sharing will become and the harder it will be to divert people to the legitimate market that eventually must be established.

But the key damning bit in Arthur's article (disclosure: he is my editor at the paper) is the BPI's admission that they cannot actually say that ending file-sharing would make sales grow. The best the BPI spokesman could come up with is, "It would send out the message that copyright is to be respected, that creative industries are to be respected and paid for."

Actually, what would really do that is a more balanced copyright law. Right now, the law is so far from what most people expect it to be - or rationally think it should be - that it is breeding contempt for itself. And it is about to get worse: term extension is back on the agenda. The 2006 Gowers Review recommended against it, but on February 14, Irish EU Commissioner Charlie McCreevy (previously: champion of software patents) announced his intention to propose extending performers' copyright in sound recordings from the current 50-year term to 95 years. The plan seems to go something like this: whisk it past the Commission in the next two months. Then the French presidency starts and whee! new law! The UK can then say its hands are tied.

That change makes no difference to British ISPs, however, who are now under the gun to come up with some scheme to keep the government from clomping all over them. Or to the kids who are going to be tracked from cradle to alcopop by unique identity number. Maybe the first target of the government computing literacy programs should be...the government.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

February 1, 2008

Microhoo!

Large numbers are always fun, and $44.6 billion is a particularly large number. That's how much Microsoft has offered to pay, half cash, half stock, for Yahoo!

Before we get too impressed, we should remember two things: first, half of it is stock, which isn't an immediate drain on Microsoft's resources. Second, of course, is that money doesn't mean the same thing to Microsoft as it does to everyone else. As of last night, Microsoft had $19.09 billion in a nice cash heap, with more coming in all the time. (We digress to fantasise that somewhere inside Microsoft there's a heavily guarded room where the cash is kept, and where Microsoft employees who've done something particularly clever are allowed to roll naked as a reward.)

Even so, the bid is, shall we say, generous. As of last night, Yahoo!'s market cap was $25.63 billion. Yahoo!'s stock has dropped more than 32 percent in the last year, way outpacing the drop of the broader market. When issued, Microsoft's bid of $31 a share represented a 62 percent premium. That generosity tells us two things. First, since the bid was, in the polite market term, "unsolicited", that Microsoft thought it needed to pay that much to get Yahoo!'s board and biggest shareholders to agree. Second, that Microsoft is serious: it really wants Yahoo! and it doesn't want to have to fight off other contenders.

In some cases – most notably Google's acquisition of YouTube – you get the sense that the acquisition is as much about keeping the acquired company out of the hands of competitors as it is about actually wanting to own that company. If Google wanted a slice of whatever advertising market eventually develops around online video clips, it had to have YouTube. Google Video was too little, too late, and if anyone else had bought YouTube Google would never have been able to catch up.

There's an element of that here, in that MSN seems to have no immediate prospect of catching up with Google in the online advertising market. Last May, when a Microsoft-Yahoo! merger was first mooted, CNN noted that even combined MSN and Yahoo! would trail Google in the search market by a noticeable margin. Google has more than 55 percent of the search market; Yahoo! trails distantly with 17 percent and MSN is even further behind with 13 percent. Better, you can hear Microsoft thinking, to trail with 30 percent of the market than 13 percent; unlike most proposals to merge the numbers two and three players in a market, this merger would create a real competitor to the number one player.

In addition, despite the fact that Yahoo!'s profits dropped by 4.6 percent in the last quarter (year on year), its revenues grew in the same period by 11.8 percent. If Microsoft thought about it like a retail investor (or Warren Buffett), it would note two things: the drop in Yahoo!'s share prices make it a much more attractive buy than it was last May; and Yahoo!'s steady stream of revenues makes a nice return on Microsoft's investment all by itself. One analyst on CNBC estimated that return at 5 percent annually – not bad given today's interest rates.

Back in 2000, at the height of the bubble, when AOL merged with Time-Warner (a marriage both have lived to regret), I did a bit of fantasy matchmaking that regrettably has vanished off the Telegraph's site, pairing dot-coms and old-world companies for mergers. In that round, Amazon.com got Wal-Mart (or, more realistically, K-Mart), E*Trade passed up Dow-Jones, publisher of the Wall Street Journal (and may I just say how preferable that would have been to Rupert Murdoch's having bought it) in favor of greater irony with the lottery operator G-Tech, Microsoft got Disney (to split up the ducks), and Yahoo! was sent off to buy Rupert Murdoch's News International.

Google wasn't in the list; at the time, it was still a privately held geeks' favorite, out of the mainstream. (And, of course, some companies that were in the list – notably eToys and QXL – don't exist any more.) The piece shows off rather clearly, however, the idea of the time, which was that online companies could use their ridiculously inflated stock valuations to score themselves real businesses and real revenues. That was before Google showed the way to crack online advertising and turn visitor numbers into revenues.

It's often said that the hardest thing for a new technology company is to develop a second product. Microsoft is one of the few who succeeded in that. But the history of personal computing is still extremely short, and history may come to look at DOS, Windows, and Office as all one product: commercial software. Microsoft has seen off its commercial competitors, but open-source is a genuine threat to drive the price of commodity software to zero, much like the revenues from long distance telephone calls. Looked at that way, there is no doubt that Microsoft's long-term survival as a major player depends on finding a new approach. It has kept pitching for the right online approach: information service, portal, player/DRM, now search/advertising. And now we get to find out whether Google, like very few companies before it, really can compete with Microsoft. Game on.


Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

December 21, 2007

Enter password

Some things, you just can't fake.

A few years ago, a friend of mine got a letter from an old girlfriend of her son's bearing news: my friend had, unknown to both her and his father, a 15-year-old grandson in Australia. The mother had married someone else, that marriage had broken up, and now the son was asking questions about his biological father.

I saw the kid, visiting his grandparents, out playing tennis the other day. It wasn't just the resemblance of face, head shape, and hair; the entire way his body moved as he ran and hit the ball was eerily and precisely like his father.

"You wouldn't need a DNA test," I said, aside, to my friend. She laughed and nodded, and then said, "We did one, though."

Biology: the ultimate identifier.

A few weeks ago, I did a piece on the many problems with passwords. Briefly: there are too many of them. They're hard to think up (at least if they're good ones), remember, and manage, and even when you have those things right you can be screwed by a third-party software supplier who makes the mistakes for you. The immediate precipitating incident for the piece was the Cambridge computer security group's discovery that Google makes a fine password cracker if your software, like Wordpress, stores passwords as MD5 hashes.
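
What that means in practice is that an unsalted MD5 digest is a stable, searchable fingerprint of the password. A minimal sketch, using only Python's standard library (the password shown is a made-up example, not anyone's real credential):

    # Why unsalted MD5 password storage is weak: the hash of a common
    # password is itself a well-known string. "hunter2" is hypothetical.
    import hashlib

    password = "hunter2"
    digest = hashlib.md5(password.encode("utf-8")).hexdigest()
    print(digest)
    # Paste that hex string into a search engine: for common passwords,
    # pages that already list (plaintext, MD5) pairs often give the word back.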

Some topics you write about draw Pavlovian responses. Anything involving even a tiny threat to Firefox, for example, gets a huge response, as some school officials near where I'm staying have just discovered (kid doctors a detention letter to say he's being punished for not using Firefox and posts it on Digg; school becomes the target of international outrage). Passwords draw PR pitches from companies with better ideas.

I think the last time I wrote about passwords, the company that called was selling the technology to do those picklists you see on, for example, the Barclaycard site. You don't type in the password; instead, you pick two letters from picklists offered to you. As it turns out, there are a couple of problems with this. First of all, if your password is a dictionary word the system doesn't really protect all that well against attacks that capture the letters, because it's so easy to plug two letters into a crossword solving program. But the big thing, as usual, is the memory problem. We learn things by using them repeatedly. It's a lot harder to remember the password if you never type the whole thing. I say picklists make it even more likely the password gets written down.
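
To see how much those two letters leak, here is a toy sketch of the "crossword solver" style of attack; the word list and observed positions are invented for illustration:

    # An attacker who has captured, say, the letters at two positions can
    # filter a dictionary down to very few candidates.
    def candidates(words, observed):
        """observed maps 0-based position -> letter the attacker saw."""
        return [w for w in words
                if all(len(w) > pos and w[pos] == ch for pos, ch in observed.items())]

    word_list = ["sunshine", "sunflower", "password", "summer", "sandwich"]
    print(candidates(word_list, {1: "a", 4: "w"}))   # -> ['password', 'sandwich']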

This time round, I got a call from Biopassword, which depends on behavioral biometrics: your personal typing pattern, which is as distinctive to your computer as my friend's grandson's style of movement is to a human. You still don't get to lose the password entirely; the system records the way you type it and your user name and uses that extra identifier to verify that it's you. The technology runs on the server side for Internet applications and enterprise computer systems, so in theory it works no matter where you're logging in from.

Ever used a French keyboard?

"A dramatic change does affect its ability," Biopassword's vice-president of marketing, Doug Wheeler, admitted. "But there are ways to mitigate the risk of failing if you want to provide the capability." These include the usual suspects: asking the person questions no one else is likely to be able to answer correctly, issuing a one-time password (via, for example, a known personal device such as a mobile phone), and so on. But, as he says, the thing companies like about Biopassword is that it identifies you specifically, not your cell phone or your bank statement. "No technology is perfect."

Biopassword starts by collecting nine samples, either all at once or over time, from which it generates a template. Wheeler says the company is working on reducing the number of samples as well as the number of applications and clients the system works with. He also notes that you can have your login rejected for matching too perfectly – to avoid replay attacks.
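
As a rough illustration of the general idea (a toy model, not Biopassword's actual algorithm), a template can be as simple as the mean and spread of each inter-key interval, with a login accepted when it falls within tolerance but rejected when it matches too exactly:

    # Toy keystroke-dynamics check: enrol from several samples of inter-key
    # timings (milliseconds), then verify an attempt against the template.
    # Thresholds are illustrative assumptions.
    import statistics

    def make_template(samples):                    # samples: list of timing vectors
        columns = list(zip(*samples))
        return [(statistics.mean(c), statistics.pstdev(c) or 1.0) for c in columns]

    def verify(template, attempt, max_z=2.5, min_z=0.05):
        zs = [abs(t - mean) / sd for (mean, sd), t in zip(template, attempt)]
        if max(zs) < min_z:                        # too perfect: likely a replay
            return False
        return all(z < max_z for z in zs)          # close enough to be the same typist

    enrolment = [[110, 95, 130, 80], [120, 90, 125, 85], [115, 100, 135, 78]]
    template = make_template(enrolment)
    print(verify(template, [118, 96, 128, 82]))    # True: plausibly the same person
    print(verify(template, [115, 95, 130, 81]))    # False: identical to the template mean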

It's an intriguing idea, certainly. A big selling point is that unlike other ideas in the general move to two-factor authentication it doesn't require you to learn or remember anything – or carry anything extra.

But it doesn't solve the key issue: passwords are an intractable problem located at the nexus of security, privacy, human psychology, and computer usability. A password that's easy to remember is often easy to crack. A password that's hard to crack is usually impossible to remember. Authenticating who you are when you type it will help – but these systems still have to have a fallback for when users are grappling with unfamiliar keyboards, broken arms, or unpredictable illness. And no user-facing system will solve the kind of hack that was used against the Cambridge group's installation of Wordpress (though this hole is fixed, now), which involved running a stored password through an MD5 hash and presenting the results to the Web site as a cookie indicating a successful login.

Still, it's good to know they're still out there trying.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

September 21, 2007

The summer of lost hats

I seem to have spent the summer dodging in and out of science fiction novels featuring four general topics: energy, security, virtual worlds, and what someone at the last conference called "GRAIN" technologies (genetic engineering, robotics, AI, and nanotechnology). So the summer started with doom and gloom and got progressively more optimistic. Along the way, I have mysteriously lost a lot of hats. The phenomena may not be related.

I lost the first hat in June, a Toyota Motor Racing hat (someone else's joke; don't ask) while I was reading the first of many very gloomy books about the end of the world as we know it. Of course, TEOTWAWKI has been oft-predicted, and there is, as Damian Thompson, the Telegraph's former religious correspondent, commented when I was writing about Y2K, a "wonderful and gleeful attention to detail" in these grand warnings. Y2K was a perfect example: a timetable posted to comp.software.year-2000 had the financial system collapsing around April 1999 and the cities starting to burn in October…

Energy books can be logically divided into three categories. One, apocalyptics: fossil fuels are going to run out (and sooner than you think), the world will continue to heat up, billions will die, and the few of us who survive will return to hunting, gathering, and dying young. Two, deniers: fossil fuels aren't going to run out, don't be silly, and we can tackle global warming by cleaning them up a bit. Here. Have some clean coal. Three, optimists: fossil fuels are running out, but technology will help us solve both that and global warming. Have some clean coal and a side order of photovoltaic panels.

I tend, when not wracked with guilt for having read 15 books and written 30,000 words on the energy/climate crisis and then spent the rest of the summer flying approximately 33,000 miles, toward optimism. People can change – and faster than you think. Ten years ago, you'd have been laughed off the British Isles for suggesting that in 2007 everyone would be drinking bottled water. Given the will, ten years from now everyone could have a solar collector on their roof.

The difficulty is that at least two of those takes on the future of energy encourage greater consumption. If we're all going to die anyway and the planet is going inevitably to revert to the Stone Age, why not enjoy it while we still can? All kinds of travel will become hideously expensive and difficult; go now! If, on the other hand, you believe that there isn't a problem, well, why change anything? The one group who might be inclined toward caution and saving energy is the optimists – technology may be able to save us, but we need time to create and deploy it. The more careful we are now, the longer we'll have to do that.

Unfortunately, that's cautious optimism. While technology companies, who have to foot the huge bills for their energy consumption, are frantically trying to go green for the soundest of business reasons, individual technologists don't seem to me to have the same outlook. At Black Hat and Defcon, for example (lost hats number two and three: a red Canada hat and a black Black Hat hat), among all the many security risks that were presented, no one talked about energy as a problem. I mean, yes, we have all those off-site backups. But you can take out a border control system as easily with an electrical power outage as you can by swiping an infected RFID passport across a reader to corrupt the database. What happens if all the lights go out, we can't get them back on again, and everything was online?

Reading all those energy books changes the lens through which you view technical developments somewhat. Singapore's virtual worlds are a case in point (lost hat: a navy-and-tan Las Vegas job): everyone is talking about what kinds of laws should apply to selling magic swords or buying virtual property, and all the time in the back of your mind is the blog posting that calculated that the average Second Life avatar consumes as much energy as the average Brazilian. And emits as much carbon as driving an SUV for 2,000 miles. Bear in mind that most SL avatars aren't fired up that often, and the suggestion that we could curb energy consumption by having virtual conferences instead of physical ones seems less realistic. (Though we could, at least, avoid airport security.) In this, as in so much else, the science fiction writer Vernor Vinge seems to have gotten there first: his book Marooned in Realtime looks at the plight of a bunch of post-Singularity augmented humans knowing their technology is going to run out.
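
The arithmetic behind that kind of claim is simple enough to reproduce. The figures below are illustrative assumptions in the spirit of that blog posting, not Linden Lab's numbers:

    # Back-of-the-envelope energy per Second Life avatar. Every number here
    # is an illustrative assumption, not an official figure.
    servers = 4000                  # assumed number of SL servers
    watts_per_server = 250          # assumed draw per server
    cooling_factor = 2.0            # data-centre cooling roughly doubles the draw
    concurrent_avatars = 12500      # assumed avatars online at any one time
    pc_watts = 120                  # the user's own PC while logged in

    hours_per_year = 24 * 365
    server_kwh = (servers * watts_per_server * cooling_factor / concurrent_avatars
                  * hours_per_year / 1000)
    pc_kwh = pc_watts * hours_per_year / 1000
    print(round(server_kwh + pc_kwh))   # ~2450 kWh/year on these assumptions -
    # the same order of magnitude as per-capita electricity use in a country
    # like Brazil, which is the comparison the posting drew.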

It was left to the most science fictional of the conferences, last week's Center for Responsible Nanotechnology conference (my overview is here) to talk about energy. In wildly optimistic terms: technology will not only save us but make us all rich as well.

This was the one time all summer I didn't lose any hats (a red Swiss one everyone thought was Red Cross, and a turquoise Arizona one I bought just in case). If you can keep your hat while all around you everyone is losing theirs…

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

August 31, 2007

Snouting for bandwidth

Our old non-friend Comcast has been under fire again, this time for turning off Internet access to users it deems to have used too much bandwidth. The kicker? Comcast won't tell those users how much is too much.

Of course, neither bandwidth caps nor secrecy over what constitutes heavy usage is anything new, at least in Britain. ntl brought in a 1Gb per day bandwidth cap as long ago as 2003. BT began capping users in 2004. And Virgin Media, which now owns ntl and apparently every other cable company in the UK, is doing it, too.

As for the secrecy, a few years ago when "unlimited" music download services were the big thing, it wasn't uncommon to hear heavy users complain that they'd been blocked for downloading so much that the service owner concluded they were sharing the account. (Or, maybe hoarding music to play later, I don't know.) That was frustrating enough, but the bigger complaint was that they could never find out how much was too much. They would, they said, play by the rules – if only someone would tell them what those rules were.

This is the game Comcast is now playing. It is actually disconnecting exceptionally heavy users – and then refusing to tell them what usage is safe. Internet service, as provided by Franz Kafka. The problem is that in a fair number of areas of the US consumers have no alternative if they want broadband. Comcast owns the cable market, and DSL provision is patchy. The UK is slightly better off: Virgin Media now owns the cable market, but DSL is widespread, and it's not only sold by BT directly but also by smaller third parties under a variety of arrangements with BT's wholesale department.

I am surprised to find I have some – not a lot, but some – sympathy with Comcast here. I do see that publishing the cap might lead to the entire industry competing on how much you can download a month – which might in turn lead to everyone posting the "unlimited" tag again and having to stick with it. On the other hand, as this Slashdot comment says, subscribers don't have any reliable way of seeing how much they actually are downloading. There is no way to compare your records with the company's, the equivalent of balancing your check book. But at least you can change banks if the bank keeps making mistakes or your account is being hacked. As already noted, this isn't so much of an option for Comcast subscribers.
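
For what it's worth, the per-machine half of that monitoring is easy to sketch; this assumes the third-party psutil package is installed, and of course it only sees the one computer it runs on, not the whole household, which is part of the problem:

    # Rough self-monitoring of network volume on one machine, using psutil.
    import time
    import psutil

    before = psutil.net_io_counters()
    time.sleep(60)                                   # sample for one minute
    after = psutil.net_io_counters()

    down_mb = (after.bytes_recv - before.bytes_recv) / 1e6
    up_mb = (after.bytes_sent - before.bytes_sent) / 1e6
    print(f"last minute: {down_mb:.2f} MB down, {up_mb:.2f} MB up")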

This type of issue is resurfacing in the UK as a network neutrality dispute with the advent of the BBC's iPlayer. Several large ISPs want the BBC to pay for bandwidth costs, perhaps especially because its design makes it prospectively a bandwidth hog. It's an outrageous claim when you consider that both consumers and the BBC already pay for their bandwidth.

Except…we don't, quite. The fact is that the economics of ISPs have barely changed since they were all losing money a decade ago. In the early days of the UK online industry, when the men were men, the women were (mostly) men, and Demon was the top-dog ISP, ISPs could afford to offer unlimited use of their dial-up connections for one very simple reason. They knew that the phone bills would throw users offline: British users paid by the minute for local calls in those days. ISPs could, therefore, budget their modem racks and leased lines based on the realistic assessment that most of their users would be offline at any given time.

Cut to today. Sure, users are online all the time with broadband. But most of them go out to work (or, if they're businesses, go home at night), and heavy round-the-clock usage is rare. ISPs know this, and budget accordingly. Pipes from BT are expensive, and their size is, logically enough, specified based on average use. There isn't a single ISP whose service wouldn't fall over if all its users saturated all their bandwidth 24/7. And at today's market rates, there isn't a single ISP who could afford to provide a service that wouldn't fall over under that level of usage. If an entire nation switches even a sizable minority of its viewing habits to the iPlayer, ISPs could legitimately have a problem. Today's bandwidth hogs are a tiny percentage of Internet users, easily controlled. Tomorrow's could be all of us. Well, all of us and the FBI.
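
The contention arithmetic underneath that claim looks roughly like this; the subscriber count and ratio are illustrative, not any real ISP's figures:

    # Toy contention sum: what an ISP provisions versus what it would need
    # if every subscriber saturated their line at once.
    subscribers = 10_000
    access_mbps = 8            # each line sold as "up to 8 Mbps"
    contention_ratio = 50      # 50:1 was a typical consumer ratio of the era

    provisioned = subscribers * access_mbps / contention_ratio
    worst_case = subscribers * access_mbps
    print(f"provisioned: {provisioned:,.0f} Mbps; everyone flat out: {worst_case:,} Mbps")
    # -> 1,600 Mbps versus 80,000 Mbps on these numbers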

Still, there really has to be a middle ground. The best seem to be the ideas in the Slashdot posting linked above: subscribers should be able to monitor the usage on their accounts. Certainly, there are advantages to both sides in having flexible rules rather than rigid ones. But the ultimate sanction really can't be to cut subscribers off for a year, especially if they have no choice of supplier. If that's how Comcast wants to behave, it could at least support plans for municipal wireless. Let the burden of the most prolific users of the Internet, like those of health care, fall on the public purse. Why not?


Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

June 1, 2007

Britney Spears has good news for you

The most entertaining thing I learned at yesterday's Google Developer Day (London branch) was that there is a site that tracks the gap between the BBC's idea of what the most important news is and the stuff people actually read. When Ian Forrester and Matthew Cashmore showed off this BBC Backstage widget, the BBC was only 17 percent "in touch" with what "we're" reading. Today, I see it's 39 percent, so I guess the BBC has good days and bad days.
I note, irrelevantly to this week's headline, that Cashmore also said that putting Britney Spears in the headline moves stories right to the top of the reading list (though to the bottom of the BBC's list).

The widget apparently works by comparing the top stories placed on the BBC News front page with the list the BBC helpfully supplies of the most popular current stories. A nice example of creative use of RSS feeds and data scraping.
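
A rough sketch of how such a widget could be built from public feeds; the feed URLs below are illustrative, feedparser is a third-party package assumed to be installed, and the real widget's scoring is doubtless more sophisticated:

    # Compare the editors' front-page feed with the most-read feed and report
    # the overlap as a crude "in touch" percentage.
    import feedparser

    FRONT_PAGE = "http://feeds.bbci.co.uk/news/rss.xml"            # illustrative URL
    MOST_READ = "http://feeds.bbci.co.uk/news/popular/rss.xml"     # illustrative URL

    front = {e.title for e in feedparser.parse(FRONT_PAGE).entries[:10]}
    popular = {e.title for e in feedparser.parse(MOST_READ).entries[:10]}

    overlap = len(front & popular)
    print(f"in touch: {100 * overlap // max(len(front), 1)}%")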

See, journalists have mixed feelings about this kind of thing. On the one hand, it's fascinating – fascinating – to see what people actually read. If you are a journalist and a hypocritical intellectual snob, this sort of information gives you professional license to read about Britney Spears (in order to better understand your audience) while simultaneously sneering (for money) at anyone who chooses to do so recreationally. On the other hand, if you're dedicated to writing serious think pieces about difficult topics, you dread the day when the bean counters get hold of those lists and, after several hours' careful study, look up and say brightly, "Hey, I know! Why don't we commission more stories about Britney Spears and forget about all that policy crap?"

(I, of course, do not fall in either category: I write what one of my friends likes to call "boring bollocks about computers", and I have been open for years about my alt.showbiz.gossip habit.)

The BBC guys' presentation was one of a couple of dozen sessions; they were joined by developers from other companies, large and small, and, of course, various "Googlers", most notably Chris DiBona, who runs Google's open source program. In the way of the modern era, there was a "bloggers' lounge", nicely wi-fi'd and strewn with cushions in Google's favorite primary colors. OK, it looked like a playpen with laptops, but we're not here to judge.

There seems to be a certain self-consciousness among Googlers about the company's avowed desire not to be "evil". The in-progress acquisition of the advertising agency DoubleClick has raised a lot of questions recently – though while Google has created an infrastructure that could certainly make it a considerable privacy threat should it choose to go in that direction, so far, it hasn't actually done so.

But the more interesting thing about the Developer Day is that it brings home how much Google (and perhaps also Yahoo!) is becoming a software company rather than the search engine service it used to be. One of the keys to Microsoft's success – and that of others before it, all the way back to Radio Shack – was the ecology of developers it built up around its software. We talk a lot about Microsoft's dominance of the desktop, but one of the things that made it successful in the early days was the range of software available to run on it. A company the size Microsoft was then could not have written it all. More important, even if the company could have done it, the number of third parties investing in writing for Windows helped give that software the weight it needed to become dominant. GNU/Linux, last time I looked, had most of the boxes checked, but it's still pretty hard to find fully functional personal finance software, presumably because that requires agreements with banks and brokerage firms over data formats, as well as compliance with a complex of tax laws.

The notion that building a community around your business is key to success on the Internet is an old one (at least in Internet years). Amazon.com succeeded first by publishing user reviews of its products and then by enlisting as an associate anyone who wanted to put a list of books on their Web site. Amazon.com also opened up its store to small, third-party sellers and latterly has started offering hosting services to other businesses. The size of eBay's user base is of course the key to everything: you put your items for sale where the largest number of people will see them. Yahoo!'s strategy has been putting as many services (search, email, news, weather, sports scores, poker) as possible on its site so that sooner or later they capture a visit from everyone. And, of course, Google itself has based its success in part on enlisting much of the rest of the Web as advertising billboards, for which it gets paid. Becoming embedded into other people's services is a logical next step. It will, though, make dealing with it a lot harder if the company ever does turn eeeevil.

The other fun BBC widget was clearly designed with the BBC newsreader Martyn Lewis in mind. In 1993, Lewis expressed a desire for more good news to be featured on TV. Well, here you go, Martyn: a Mood News Google Gadget that can be tuned to deliver just good news. Keep on the sunny side of life.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. She has an intermittent blog. Readers are welcome to post there or to send email, but please turn off HTML.

April 6, 2007

What's in a 2.0?

"The Web with rounded corners," some of my skeptical friends call it. One reason for going to Emerging Technology was to find out what Web 2.0 was supposed to be when it's at home in OReillyland and whether there's a there, there.

It's really surprising how much "Web 2.0" is talked about and taken for granted as a present, dominant trend in Silicon Valley (OK, etech was in San Diego, but its prevailing ethos lives about 550 miles north of there). In London, you can go to a year's worth of Net-related gatherings without ever hearing the term, and it doesn't seem to be even a minor theme in the technology news generally, even in the other 49 states.

The cynical would conclude it's a Silicon Valley term, perhaps designed to attract funding from the local venture capitalists, who respond to buzzwords.

Not at all, said the guys sharing my taxi to the airport. For one thing, you can make assumptions now that you couldn't in "Web 1.0". For example: people know what a browser is; they use email; they know if they see a link that they can click on it and be taken to further information.

To me, this sounds less like a change in the Web and more like simple user education. People can drive cars now, too, where our caveman ancestors couldn't. Yet we don't call ourselves Homo Sapiens 2.0 or claim that we're a different species.

There also seems to be some disagreement about whether it's really right to call it Web 2.0. After all, it isn't like software where you roll out discrete versions with new product launches, or like, say, new versions of Windows, where you usually have to buy a new computer in order to cope with the demands of the new software. (The kids of the 1990s have learned a strange way to count, too: 1.0, 2.0, 3.0, 3.1, 3.11, 95, 97…)

Instead, Web 2.0, like folk music, seems to be a state of mind: it is what you point to when you say it. But if you figure that Web 1.0 was more or less passive point-and-click and Web 3.0 is the "semantic Web" Tim Berners-Lee has been talking about for years in which machines will talk intelligently to other machines and humans will reap the benefits, then Web 2.0 is, logically, all that interactive stuff. Social networking, Twitter, interactive communities that leverage their members' experience and data to create new information and services.

Some examples. Wesabe, in which members pool their anonymized financial data, out of which the service produces analyses showing things consumers couldn't easily know before, such as which banks or credit cards typically cost the most. The Sunlight Foundation mines public resources to give US citizens a clearer picture of what their elected representatives are actually doing. The many social networks – Friendster, LinkedIn, Orkut, and so on – of course. And all those mashup things other people seem to have time to do – maps, earths, and other data.

The thing is, TheyWorkForYou has been mining the UK's public data in one form or another since 1998, when some of the same people first set up UpMyStreet. OK, it doesn't have a blog. Does that make it significantly less, you know, modern?

None of this is to say that there isn't genuinely a trend here, or that what's coming out of it isn't useful. Mashups are fun, we know this. And obviously there is real value in mining data or folks like the credit card companies, airlines, supermarkets, insurance companies, and credit scorers wouldn't be so anxious to grab all our data that they pay us with discounts and better treatment just to get it. If they can do it, we can – and there's clearly a lot of public data out there that has never been turned into usable information. Why shouldn't consumers be able to score banks and credit card companies the way they score us?

But adding a blog or a discussion forum doesn't seem to me sufficiently novel to claim that it's a brand new Web. What it does show is that if you give humans connectivity, they will keep building the same kinds of things on whatever platform is available. Every online system that I'm aware of, going back to the proprietary days of CompuServe and BIX (and, no doubt, others before them) has had mail, instant messaging, discussion forums, some form of shopping (however rudimentary), and some ability to post personal thoughts in public. Somewhere, there's probably a PhD dissertation in researching the question of what it says about us that we keep building the same things.

The really big changes are permanent storage and all-encompassing search. When there were many proprietary platforms and they were harder to use, the volume was smaller – but search was ineffective unless you knew exactly where to look, and useless if the data had been deleted after 30 days. And you can't interact with data you can't find.

So we're back to cynicism. If you want to say that "Web 2.0" is a useful umbrella term for attracting venture capital, well, fine. But let's not pretend it's a giant technological revolution.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

March 23, 2007

Double the networks, double the neutralities

Back in 1975, the Ithaca, New York apartment building I was living in had a fire in the basement, and by the time it was out so was my telephone line. The repairman's very first move was to disconnect the $3 30-foot cable I had bought at K-Mart and confiscate it. At the time, AT&T's similar cable cost $25.

In fact, by then AT&T had no right to control what equipment you attached to your phone line because of the Carterfone case, in which the FCC ruled against AT&T's argument that it had to own all the equipment in order to ensure that the network would function properly. But this is how the telco world worked; in Edinburgh in 1983 legally you could only buy a modem from British Telecom. I think it cost about £300 – for 300 baud. Expensive enough that I didn't get online until 1990.

Stories like this are part of why the Internet developed the way it did: the pioneers were determined to avoid a situation where the Internet was controlled like this. In the early 1980s, when the first backbone was being built in the US to connect the five NSF-funded regional computing centers, the feeling was mutual. John Connolly, who wrote the checks for a lot of that work, told me in an interview in 1993 that they had endless meetings with the telcos trying to get them interested, but those companies just couldn't see that there was any money in the Internet.
Well, now here we are, and the Internet is chewing up the telcos' business models and creating havoc for the cable companies who were supposed to be the beneficiaries, and so it's not surprising that the telcos' one wish is to transform the Internet into something more closely approximating the controlled world they used to love.

Which is how we arrived at the issue known as network neutrality. This particular debate has been percolating in the US for at least a year now, and some discussion is beginning in the UK. This week, at a forum held in Westminster on the subject, Ofcom and the DTI said the existing regulatory framework was sufficient.

The basic issue is, of course, money. The traditional telcos are not, of course, having a very good time of things, and it was inevitable that it would occur to some bright CEO – it turned out to be the head of Verizon – that there ought to be some way of "monetizing" all those millions of people going to Google, Yahoo!, and the other top sites. Why not charge a fee to give priority service? That this would also allow the telcos to discriminate against competitor VOIP services and the cablecos (chiefly Comcast) to discriminate against competing online video services is also a plus. These proposals are opposed not only by the big sites in question but by the usual collection of Net rights organizations, who tend to believe all sites were created equal – or should be.
Ofcom – and others I've talked to – believes that the situation in the UK is different, in part because although most of the nation's DSL service is provided either directly or indirectly by BT, that company has to be cooperative with its competitors or face the threat of regulation. The EU, however, is beginning to take a greater interest in these matters, and has begun legal proceedings against Germany over a law exempting Deutsche Telekom from opening the local loop of its new VDSL network to competitors.

But Timothy Wu, a law professor at Columbia and author of Who Controls the Internet: Illusions of a Borderless World, has pointed out that the current debates are ignoring an important sector of the market: wireless. The mobile market is not now, nor ever has been, neutral. It is less closed in Europe, where you can at least buy a phone and stick any SIM in it; but in the US most phones are hardware-locked to their networks, a situation that could hardly be less consumer-friendly. Apple's new iPhone, for example, will be available through only one carrier, AT&T Wireless.

Wu's paper, along with the so-called "Carterfone" decision that forced AT&T to stop confiscating people's phone cords, is cited by Skype in a petition to get the FCC to require mobile phone operators to allow software applications open access. Skype's gripe is easy to comprehend: it can't get its service onto mobile phones. The operators' lack of interest in opening their networks is also easy to comprehend: what consumer is going to make calls on their expensive tariffs if they can use the Internet data connection to make cheap ones? Wu also documents other cases of features that are added or subtracted according to the network operators' demands: call timers (missing), wi-fi (largely absent), and Bluetooth (often crippled in the US).

The upshot is that because the two markets – wireless phones and the Internet – have developed from opposite directions, we have two network neutrality debates, not one. The wonder is that it took us so long to notice.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

July 14, 2006

Not too cheap to meter

An old Net joke holds that the best way to kill the Net is to invent a new application everyone wants. The Web nearly killed the Net when it was young. Binaries on Usenet. File-sharing. Video on demand may finally really do it. Not, necessarily, because it swamps servers and consumes all available bandwidth. But because, like spam, it causes people to adopt destructive schemes.

Two such examples turned up this week. The first, HD-TV over IP: Who Pays the Bill? (PDF), comes from the IP Development Network, the brainchild of Jeremy Penston, formerly of UUnet and Pipex. It argues that present pricing models will not work in the HDTV future, and that ISPs will need to control or provide their own content. It estimates, for example, that a consumer's single download of a streamed HD movie could cost an ISP £21.13, more than some users pay a month. The report has been criticized, and its key assumption – that the Internet will become the chief or only gateway to high-definition content – is probably wrong. Niche programming will get downloaded because any other type of distribution is uneconomical, but broadcast will survive for the mass market.
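
The shape of the sum is easy to reproduce, even if the report's own inputs differ; both numbers below are assumptions for illustration, not the report's figures:

    # Illustrative version of the "what one HD stream costs the ISP" sum.
    movie_gb = 8.0          # assumed size of one streamed HD film
    cost_per_gb = 2.5       # assumed ISP delivery cost in pounds per GB at peak time
    print(f"cost to the ISP: £{movie_gb * cost_per_gb:.2f}")
    # £20.00 on these numbers - the same order as the £21.13 the report quotes,
    # and more than many subscribers pay for a whole month's service.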

The germ that isn't so easily dismissed is the idea that bandwidth is not necessarily going to continue to get cheaper, at least for end users.

Which leads to exhibit B, the story that's gotten more coverage, a press release – the draft discussion paper isn't available yet – from the London-based Association of Independent Music (AIM) proposing that ISPs should be brought "into the official value chain". In other words, ISPs should be required to have and pay for licenses agreed with the music industry and a new "Value Recognition Right" should be created. AIM's reasoning: according to figures they cite from MusicAlly Research, some 60 percent of Internet traffic by data volume is P2P, file-sharing, and music has been the main driver of that. Therefore, ISPs are making money from music. Therefore, AIM wants some.

Let's be plain: this is madness.

First of all, the more correct verb there is "was", and even then it's only partially true. Yes, music was the driver behind Napster eight years ago, and Gnutella six years ago, and the various eHoofers. But now Bittorrent is the biggest bandwidth gobbler, and the biggest proportion of data transferred is video, not music. This ought to be obvious: an MP3 is about 4MB, a one-hour TV show 350MB, a movie 700MB to 4.7GB. Music downloads started first and have been commercialized first, but that doesn't make music the main driver; it just makes it the historically *first* driver. In any event, music certainly isn't the main reason people get online: that is and was email and the Web.

Second of all, one of the key, underrated problems for any charging mechanism that involves distinguishing one type of bits from another type of bits in order to compensate someone is the loss of privacy. What you read, watch, and listen to is all part of what you think about; surely the inner recesses of your mind should be your own. A regime that requires ISPs to police what their customers do – even if it's in their own financial interests to do so – edges towards Orwell's Thought Police.

Third of all, anyone who believes that ISPs are making money from P2P needs remedial education. Do they seriously think that at something like £20 per month for up to 8Mbps ADSL anyone's got much of a margin? P2P is, if anything, the bane of ISPs' existence, since it turns ordinary people into bandwidth hogs. Chris Comley, managing director of Wizards, the small ISP that supplies my service (it resells BT connections), says that although his company applies no usage caps, if users begin maxing out their connections (that is, using all their available bandwidth 24 hours a day, seven days a week), the company will start getting complaining email messages from BT and face having to pay higher charges for the connections it resells. Broadband pricing, like that of dial-up before it (when telephone bills could be relied upon to cap users' online hours), is predicated on the understanding that even users on an "unlimited" service will not in fact consume all the bandwidth that is available to them. In Comley's analogy, the owner of an all-you-can-eat buffet sets his pricing on the assumption that people who walk in for a meal are not in fact going to eat everything in the place.
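
To put numbers on what "maxing out the connection" means, here is the unit conversion; the 8Mbps headline speed is the one quoted above, and the rest is arithmetic:

    # Volume implied by saturating an 8 Mbps line around the clock for a month.
    mbps = 8
    seconds_per_month = 60 * 60 * 24 * 30
    gigabytes = mbps / 8 * seconds_per_month / 1000   # Mbps -> MB/s -> MB -> GB
    print(f"{gigabytes:,.0f} GB per month")            # ~2,592 GB, roughly 2.6 TB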

"The price war over bandwidth is going to have to be reversed," he says, "because we have effectively discounted what the user pays for IP to such a low level that if they start to use it they're in trouble, and they will if they start using video on demand or IPTV."

We began with an old Internet joke. We end with an old Internet saying, generally traced back to the goofy hype of Nicholas Negroponte and George Gilder: that bandwidth is or will be too cheap to meter. It ought to be, given that the price of computing power keeps dropping. But if that's what we want it looks like we'll have to fight for it.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

May 19, 2006

Toll roads

Ever since I read Robert W. McChesney's 1993 book, Telecommunications, Mass Media, and Democracy, I've been wondering if the Net could go the way radio did. As McChesney tells it, 1920s radio was dominated by non-profits, in part because no one believed anyone could ever be persuaded to advertise on the radio. The Communications Act of 1934 changed radio into a commercial medium instead of the great democratizing, educational influence the pioneers expected.

Gigi Sohn, director of Public Knowledge, said unhappily at CFP that legislative politics around network neutrality are breaking down into Republican versus Democrat. Even if you're a Republican and favor striking down Representative Ed Markey (D-MA)'s Network Neutrality Bill, it has to be bad news if vital decisions about the Internet's technical architecture are going to wind up a matter of partisan politics. If someone can really make the case that allowing, say, Verizon to charge Vonage extra for quality-of-service or faster data throughput would benefit Internet users, well, fair enough. But no one wins if these decisions boil down to politicians scoring points off each other. I'm sure this has always been true of most subjects, but it seems particularly clear in the case of the Internet, whose origins are known and whose creators are still alive and working.

On the other hand, it doesn't help, as Danny Weitzner also said at CFP, that the arguments have become so emotional. TCP/IP creator Vint Cerf (now, like apparently half of everyone else on the planet, at Google) has called the telcos' proposals a desire to create a "toll road" in the middle of the Internet, rhetoric that seems to be propagating rapidly. To Net old-timers, that's fighting talk, like "modem tax". Red rag to bulls. Although it is becoming entertaining: rock musicians for network neutrality! And intriguing to see who is joining Save the Internet's coalition: Gun Owners of America and the Christian Coalition on the same list with the American Library Association and ACLU of Iowa.

The other key factor is that no one trusts the telcos (not that we should). Years ago, when I interviewed John Connolly about his days at the National Science Foundation, where he signed many of the checks that financed the earliest Internet backbone, he talked about the many meetings he spent trying to get the telcos interested, but to no avail, since they couldn't see any way to make money from it. Now that they can, they want to come in and stomp all over it. Plus, there's the whole Verizon-blocking-everyone's-email as part of its anti-spam effort, and there's Comcast's history of blocking VPNs and other connections. And if that weren't enough, there's the contention, voiced among others by Lawrence Lessig, that when the telcos were in charge technology stagnated for decades. Probably if they had their way the most innovative thing we could do even now would be to attach an answering machine to the end of their wire. And I'm old enough to remember a time when the telephone company would confiscate an extension cord if you installed one yourself and they found out about it. Will they be confiscating my Vosky next?

In their paper on the subject, Lessig and Tim Wu from the University of Virginia School of Law argue that what needs attention is not so much fair competition in infrastructure provision as fair competition at the application layer: access providers should not be allowed to favor one application over another, a principle they compare to the neutrality of the electrical network.

It seems to me that the argument for some kind of legally mandated network neutrality ought to follow logically from the earliest antitrust decisions under the Sherman Act: to ensure fair competition, content providers should not own or be able to control the channel of distribution. That logic required the movie studios to divest themselves of theater chains and Standard Oil to sell off its gas stations. Unfortunately, convergence makes that nuclear solution difficult. AOL sells online access and is owned by a major publisher that owns cable and satellite channels as well as magazines and movie studios. Comcast is the dominant cable broadband provider, and it provides (a relatively small amount of local) original TV programming. In the case of the telcos, their equivalent of "content" would be voice telephone calls. And if the analogy hadn't already broken down, the telcos' situation would kill it, because it would mean forcing them to choose between their traditional business (selling phone calls, a business whose revenues are vanishing) and their future business (selling the use of fat pipes and value-added services).

What no one is talking about – yet – is the international factor. It seems very unlikely that British or European telcos will be able to make the same kind of demands as AT&T, Qwest, and BellSouth. The only ones in a position to institute differential pricing and make it stick are the incumbents – and they would be heavily stomped on if they tried it. What would the Internet look like if there are "toll roads" in the US but network neutrality (in the best public service tradition of TV/radio broadcasting) everywhere else?

Wendy M. Grossman is author of net.wars (if NYU Press ever get it working again), From Anarchy to Power: the Net Comes of Age, and The Daily Telegraph A-Z Guide to the Internet. Her Web site also has an archive of all the earlier columns in this series.