November 16, 2012

Grabbing at governance

Someday the development of Internet governance will look like a continuous historical sweep whose outcome, in hindsight, is obvious. At the beginning will be one man, Jon Postel, who in the mid-1990s was, if anyone was, the god of the Internet. At the end will be...well, we don't know yet. And the sad thing is that the road to governance is so long and frankly so dull: years of meetings, committees, proposals, debate, redrafted proposals, diplomatic language, and, worst of all, a process remote from the mundane concerns of everyday Internet users, such as spam and whether they can trust their banks' Web sites.

But if we care about the future of the Internet we must take an interest in what authority should be exercised by the International Telecommunications Union or the Internet Corporation for Assigned Names and Numbers or some other yet-to-be-defined body. In fact, we are right on top of a key moment in that developmental history: from December 3 to 14, the ITU is convening the World Conference on International Telecommunications (WCIT, pronounced "wicket"). The big subject for discussion: how and whether to revise the 1988 International Telecommunications Regulations.

Plans for WCIT have been proceeding for years. In May, civil society groups concerned with civil liberties and human rights signed a letter to ITU secretary-general Hamadoun Touré asking the ITU to open the process to more stakeholders. In June, a couple of frustrated academics changed the game by setting up WCITLeaks, asking anyone who had copies of the proposals being submitted to the ITU to send them in. Scrutiny of those proposals showed the variety and breadth of some countries' desires for regulation. On November 7, Touré wrote an op-ed for Wired arguing that nothing would be passed except by consensus.

On Monday, he got a sort of answer from International Trade Union Confederation (ITUC) general secretary Sharon Burrow, who, together with former ICANN head Paul Twomey and, by video link, Internet pioneer Vint Cerf, launched the Stop the Net Grab campaign. The future of the Internet, they argued, is too important to too many stakeholders to leave decisions about it up to governments bargaining in secret. The ITU, in its response, argued that Greenpeace and the ITUC have their facts wrong; after the two sides met, the ITUC reiterated its desire for some proposals to be taken off the table.

But stop and think. Opposition to the ITU is coming from Greenpeace and the ITUC?

"This is a watershed," said Twomey. "We have a completely new set of players, nothing to do with money or defending the technology. They're not priests discussing their protocols. We have a new set of experienced international political warriors saying, 'We're interested'."

Explained Burrow, "How on earth is it possible to give the workers of Bahrain or Ghana the solidarity of strategic action if governments decide unions are trouble and limit access to the Internet? We must have legislative political rights and freedoms - and that's not the work of the ITU, if it requires legislation at all."

At heart, the debate has remained the same all these years: who controls the Internet? And does governing the Internet mean regulating who pays whom or controlling what behavior is allowed? As Vint Cerf said, conflating those two confuses content with infrastructure.

Twomey concluded, "[Certain political forces around the world] see the ITU as the place to have this discussion because it's not structured to be (nor will they let it be) fully multi-stakeholder. They have taken the opportunity of this review to bring up these desires. We should turn the question around: where is the right place to discuss this and who should be involved?"

In the journey from Postel to governance, this is the second watershed. The first step change came in 1996-1997, when it was becoming obvious that governing the Internet - which at the time primarily meant managing the allocation of domain names and numbered Internet addresses (under the aegis of the Internet Assigned Numbers Authority) - was too complex and too significant a job for one man, no matter how respected and trusted. The Internet Society and IANA formed the International Ad Hoc Committee, which, in a published memorandum, outlined its new strategy. And all hell broke loose.

Long-term, the really significant change was that until that moment no one had much objected to either the decisions the Internet pioneers and engineers made or their right to make them. After some pushback, in the end the committee was disbanded and the plan scrapped, and instead a new agreement was hammered out, creating ICANN. But the lesson had been learned: there were now more people who saw themselves as Internet stakeholders than just the engineers who had created it, and they all wanted representation at the table.

In the years since, the make-up of the groups demanding to be heard has remained pretty stable, as Twomey said: engineers and technologists, plus representatives of civil society groups working in some aspect of human rights, usually civil liberties - EFF, ORG, CDT, and Public Knowledge, all of whom signed the May letter. So yes, it is a watershed when labor unions and Greenpeace decide that Internet freedoms are too fundamental to what they do for them to sit out the decision-making about the Net's future.

"We will be active as long as it takes," Burrow said Monday.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series.

September 21, 2012

This is not (just) about Google

We had previously glossed over the news, in February, that Google had overridden the "Do Not Track" settings in Apple's Safari Web browser, used on both its desktop and mobile machines. For various reasons, Do Not Track is itself a divisive issue, pitting those who favour user control over privacy issues against those who ask exactly how people plan to pay for all that free content if not through advertising. But there was little disagreement about this: Google goofed badly in overriding users' clearly expressed preferences. Google promptly disabled the code, but the public damage was done - and probably made worse by the company's initial response.

In August, the US Federal Trade Commission fined Google $22.5 million for that little escapade. Pocket change, you might say, and compared to Google's $43.6 billion in 2011 revenues you'd be right. As the LSE's Edgar Whitley pointed out on Monday, a sufficiently large company can also view such a fine strategically: paying might be cheaper than fixing the problem. I'm less sure: fines have a way of going up a lot if national regulators believe a company is deliberately and repeatedly flouting their authority. And to any of the humans reviewing the fine - neither Page nor Brin grew up particularly wealthy, and I doubt Google pays its lawyers more than six figures - I'd bet $22.5 million still seems pretty much like real money.

On Monday, Simon Davies, the founder and former director of Privacy International, convened a meeting at the LSE to discuss this incident and its eventual impact. This was when it became clear that whatever you think about Google in particular, or online behavioral advertising in general, the questions it raises will apply widely to the increasing numbers of highly complex computer systems in all sectors. How does an organization manage complex code? What systems need to be in place to ensure that code does what it's supposed to do, no less - and no more? How do we make these systems accountable? And to whom?

The story in brief: Stanford PhD student Jonathan Mayer studies the intersection of technology and privacy, not by writing thoughtful papers about the law but empirically, by examining what companies do, how they do it, and to how many millions of people.

"This space can inherently be measured," he said on Monday. "There are wide-open policy questions that can be significantly informed by empirical measurements." So, for example, he'll look at things like what opt-out cookies actually do (not much of benefit to users, sadly), what kinds of tracking mechanisms are actually in use and by whom, and how information is being shared between various parties. As part of this, Mayer got interested in identifying the companies placing cookies in Safari; the research methodology involved buying ads that included codes enabling him to measure the cookies in place. It was this work that uncovered Google's bypassage of Safari's Do Not Track flag, which has been enabled by default since 2004. Mayer found cookies from four companies, two of which he puts down to copied and pasted circumvention code and two of which - Google and Vibrant - he were deliberate. He believes that the likely purpose of the bypass was to enable social synchronizing features (such as Google+'s "+1" button); fixing one bit of coded policy broke another.

This wasn't much consolation to Whitley, however: where are the quality controls? "It's scary when they don't really tell you that's exactly what they have chosen to do as explicitly corporate policy. Or you have a bunch of uncontrolled programmers running around in a large corporation providing software for millions of users. That's also scary."

And this is where, for me, the issue at hand jumped from the parochial to the global. In the early days of the personal computer or of the Internet, it didn't matter so much if there were software bugs and insecurities, because everything based on them was new and understood to be experimental enough that there were always backup systems. Now we're in the computing equivalent of the intermediate period in a pilot's career, which is said to be the more dangerous time: that between having flown enough to think you know it all, and having flown enough to know you never will. (John F. Kennedy, Jr, was in that window when he crashed.)

Programmers are rarely brought into these kinds of discussions, yet are the people at the coalface who must transpose human language laws, regulations, and policies into the logical precision of computer code. As Danielle Citron explains in a long and important 2007 paper, Technological Due Process, that process inevitably generates many errors. Her paper focuses primarily on several large, automated benefits systems (two of them built by EDS) where the consequences of the errors may be denying the most needy and vulnerable members of society the benefits the law intends them to receive.

As the LSE's Chrisanthi Avgerou said, these issues apply across the board, in major corporations like Google, but also in government, financial services, and so on. "It's extremely important to be able to understand how they make these decisions." Just saying, "Trust us" - especially in an industry full of as many software holes as we've seen in the last 30 years - really isn't enough.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


October 7, 2011

In the club

Sometime around noon on October 8, 2011 I will no longer be a car owner. This is no small thing: like many Americans I started dreaming about my own car when I was 13 and got my license at 16. I have owned a car almost continuously since January 1975. What makes this a suitable topic for net.wars is that without the Internet it wouldn't have happened.

Since 1995, online retailing has progressively removed the need to drive to shops. By now, almost everything I buy is either within a few minutes' walk or online. I can no longer remember the last time I was in a physical supermarket in the UK.

The advent in 2003 of London's technology-reliant congestion charge (number plate recognition, Internet payment) meant a load of Londoners found it convenient to take advantage of the free parking in my area. I don't know what goes on in the heads of people who resent looking down their formerly empty street and seeing some strange cars parked for the day, but they promptly demanded controlled parking zones, even on my street, where daytime parking has never been an issue but the restaurants clog it up from 7pm to midnight. The CPZ made that worse. Result: escalating paranoia about taking the car anywhere in case I couldn't park when I got back.

But the biggest factor is a viable alternative. Car clubs and car-sharing were newspaper stories for some years until, earlier this year, while walking a different route to the tube station, I spotted a parking space marked "CAR CLUB ONLY". It turns out that within a few minutes' walk of my house are five or six Streetcars (merging with Zipcar). For £60 a year I can rent one of these by the hour, including maintenance, insurance, tax, emergency breakdown service, congestion charge and, most important, its parking space. At £5.25 an hour it will take nearly 100 hours a year to match the base cost of car ownership - insurance, road tax, test, parking, AA membership - before maintenance. (There is no depreciation on a 24-year-old car!)
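For the curious, the break-even sum works out roughly like this; the ownership figures below are illustrative assumptions rather than my actual bills.

```python
# Back-of-the-envelope break-even for club membership vs. keeping the car.
# All ownership figures are illustrative assumptions, not actual bills.
CLUB_ANNUAL_FEE = 60.00    # pounds per year
CLUB_HOURLY_RATE = 5.25    # pounds per hour

OWNERSHIP_FIXED = {        # assumed yearly costs for an old, paid-off car
    "insurance": 300.0,
    "road tax": 125.0,
    "MOT test": 55.0,
    "parking": 60.0,
    "AA membership": 50.0,
}

annual_ownership = sum(OWNERSHIP_FIXED.values())
breakeven_hours = (annual_ownership - CLUB_ANNUAL_FEE) / CLUB_HOURLY_RATE

print(f"Assumed fixed cost of ownership: £{annual_ownership:.0f} a year")
print(f"Break-even club usage: about {breakeven_hours:.0f} hours a year")
```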

The viability of car clubs depends on the existence of both the Internet and mobile phone networks. Sharing expensive resources, even cars, is nothing new, but such arrangements used to rely on personal connections. The Internet is enabling sharing among strangers: you book via the club's Web site or your mobile phone up to a few minutes before you want the car, and if necessary extend the booking by sending an SMS.

And so it was that about a month and a half ago it occurred to me that one day soon I would begin presiding over my well-loved car's slow march to scrap metal. How much should you spend on maintaining a car you hardly ever drive? If I sold it now, some other Nissan Prairie-obsessive could love it to death. A month later it passed its MOT for the cost of a replacement light bulb and promptly went up on eBay.

In journalism, they say one is a story, three is a trend. I am the second person on my street to sell their car and join the club in the last two months. The Liberal Democrat council that created the car club spaces can smirk over this: though some residents have complained in the local paper about the loss of parking for the car-owning public, the upshot will be less congestion overall.

The Internet is not going to kill the car industry, but it is going to reshape the pattern of distribution of car ownership among the population. Until now it's been a binary matter: you owned a car or you didn't. Most likely, the car industry will come out about even or a little ahead: some people who would have bought cars won't, some who wouldn't have bought cars will join a club, the clubs themselves will buy cars. City-dwellers have long been a poor market for car sales - lifelong Manhattanites often never learn how to drive - and today's teens are as likely to derive their feelings of freedom and independence from their mobile phones as from a car. The people who should feel threatened are probably local taxi drivers.

Nonetheless, removing the need to own a car to have quick access to one will remove a lot of excess capacity (as airlines would call it). What just-in-time manufacturing has done for companies like Dell and Wal-Mart, just-in-time ownership can now do for consumers: why have streets full of cars just sitting around all day?

To make it work, of course, consumers will have to defy decades of careful marketing designed to make them self-identify with particular brands and models (the car club cars are not beautiful Nissan Prairies but silly silver lozenges). Also, the club must keep its promise to provide a favorable member:car ratio, and the council must continue to allocate parking spaces.

Still, it's all in how you think about it. Membership in Zipcar in one location gives you access to the cars in all the rest. So instead of owning one car, I now have cars all over the world. Is that cool or what?

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

August 26, 2011

Master of your domain

The IANA is not responsible for deciding what is and what is not a country, wrote Jon Postel in 1994, in the Request for Comments document (RFC 1591) explaining the structure of the domain name system. At the time, the domain name system consisted of seven "generic" top-level domains (gTLDs: .edu, .com, .net, .org, .gov, .mil, and .int), plus the set of two-letter country codes, which Postel took from the ISO-3166 list. "It is extremely unlikely that any other TLDs will be created."

As Buffy said when she aimed the rocket launcher at the Judge, "That was then."

In late June the Internet Corporation for Assigned Names and Numbers announced its program to create new gTLDs, in the process entirely redefining the meaning of "generic", which used to mean a category type. What ICANN is really proposing are big-brand TLDs - because with an application fee of $185,000 and an annual subscription of $25,000 who else can afford one? In Internet terms, the new system will effectively give any company that signs up for one of these things - imagine .ibm, .disney, or .murdochsempire - the status of a country. Given recent reports that Apple has more cash on hand than the US government, that may merely reflect reality. But still.

Postel was writing in the year that the Internet was opened to commercial traffic. By 1995, with domain name registrations flooding into .com and trademark collisions becoming commonplace, discussions began about how to expand the namespace. These discussions eventually culminated in ICANN's creation.

A key element of the competing proposals of the mid-1990s was to professionalize the way the DNS was managed. Everyone trusted Postel, who had managed the DNS since its creation in 1983, but an international platform of the scope the Internet was attaining clearly could not be a one-man band, no matter how trustworthy. And it had become obvious that there was money in selling domain name registrations: formerly a free service, in 1995 registering in .com cost $50. ICANN's creation opened the way to create competing registrars under the control of each top-level domain's registry. As intended, prices dropped.

The other key element was the creation of new gTLDs. Between 2001 and 2003, ICANN introduced 13 new gTLDs. And I will bet that, like me, you have never seen most of them in the wild. Because: everyone still wants to be in .com.

Proposals for creating new gTLDs always attract criticism, and usually on the same grounds: the names are confusing, overlapping, and poorly chosen, and do not reflect any clear idea about what the DNS is *for*. "What is the problem we are trying to solve?" Donna Hoffman, an early expert on the commercialization of the Internet, asked me in 1997 when I was first writing about the DNS debates. No one has ever proposed a cogent answer. Is the DNS a directory (the phone book's white pages), a system of categories (the yellow pages), a catalogue, or a set of keywords? This is not just a matter of abstruse philosophy, because how that question is answered helps determine the power balance between big operators and the "little guys" Internet pioneers hoped to empower.

You can see this concern in the arguments Esther Dyson makes at Slate opposing the program. But even the commercial interests this proposal is supposed to serve aren't happy. If you're Coca-Cola, can you afford to risk someone else's buying up your trademarked brand names? How many of them do you have to register to feel safe? Coca-Cola, for example, has at least half a dozen variants of its name that all converge on its main Web site: Coca-Cola with and without the hyphen, under .com and .biz, and also coke.com. Many other large companies have done the same kind of preemptive registrations. It may assist consumers who type URLs into their browsers' address bars (a shrinking percentage of Internet users), but otherwise the only benefits of this are financial and accrue to the registries, registrars, and ICANN itself.

All of that is why Dyson calls the new program a protection racket: companies will feel compelled to apply for their own namespaces in order to protect their brands. For it, they will gain nothing: neither new customers nor innovative technologies. But the financial gains to ICANN are substantial. Its draft budget for 2011-2012 (PDF) shows that the organization expects the new gTLD program to add more than $18 million to its bottom line if it goes ahead.

As net.wars has pointed out for some years now, the DNS matters less than it once did. Without the user-friendly layer of the DNS, email and the Web would never have taken off the way they did. But later technologies such as instant messaging, mobile networks, and many social networks do not require it once you've set up your account (although you use the DNS to find the Web site where you sign up in the first place). And, increasingly, as ReadWriteWeb noted in 2008, users automatically fire up a search engine rather than remember a URL and type it into the address bar. ICANN's competition is...Google. No wonder they need money.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

March 25, 2011

Return to the red page district

This week's agreement to create a .xxx generic top-level domain (generic in the sense of not being identified with a particular country) seems like a quaint throwback. Ten or 15 years ago it might have mattered. Now, for all the stories rehashing the old controversies, it seems to be largely irrelevant to anyone except those who think they can make some money out of it. How can it be a vector for censorship if there is no prohibition on registering pornography sites elsewhere? How can it "validate" the porn industry any more than printers and film producers did? Honestly, if it didn't have sex in the title, who would care?

I think it was about 1995 when a geekish friend said, probably at the Computers, Freedom, and Privacy conference, "I think I have the solution. Just create a top-level domain just for porn."

It sounded like a good idea at the time. Many of the best ideas are simple - with a kind of simplicity mathematicians like to praise with the term "elegant". Unfortunately, many of the worst ideas are also simple - with a kind of simplicity we all like to diss with the term "simplistic". Which this is depends to some extent on when you're making the judgement.

In 1995, the sense was that creating a separate pornography domain would provide an effective alternative to broad-brush filtering. It was the era of Time magazine's Cyberporn cover story, which Netheads thoroughly debunked, and of the run-up to the passage of the Communications Decency Act in 1996. The idea that children would innocently stumble upon pornography was entrenched and not wholly wrong. At that time, as PC Magazine points out while outlining the adult entertainment industry's objections to the new domain, a lot of Web surfing was done by guesswork, which is how the domain whitehouse.com became famous.

A year or two later, I heard that one of the problems was that no one wanted to police domain registrations. Sure. Who could afford the legal liability? Besides, limiting who could register what in which domain was not going well: .com, which was intended to be for international commercial organizations, had become the home for all sorts of things that didn't fit under that description, while the .us country code domain had fallen into disuse. Even today, with organizations controlling every top-level domain, the rules keep having to adapt to user behavior. Basically, the fewer people interested in registering under your domain the more likely it is that your rules will continue to work.

No one has ever managed to settle - again - the question of what the domain name system is for, a debate that's as old as the system itself: its inventor, Paul Mockapetris, still carries the scars of the battles over whether to create .com. (If I remember correctly, he was against it, but finally gave in, on the basis of "What harm can it do?") Is the domain name system a directory, a set of mnemonics, a set of brands/labels, a zoning mechanism, or a free-for-all? ICANN began its life, in part, to manage the answers to this particular controversy; many long-time watchers don't understand why it's taken so long to expand the list of generic top-level domains. Fifteen years ago, finding a consensus and expanding the list would have made a difference to the development of the Net. Now it simply does not matter.

I've written before now that the domain name system has faded somewhat in importance as newer technologies - instant messaging, social networks, iPhone/iPad apps - bypass it altogether. And that is true. When the DNS was young, it was a perfect fit for the Internet applications of the day for which it was devised: Usenet, Web, email, FTP, and so on. But the domain name system enables email and the Web, which are typically the gateways through which people make first contact with those services (you download the client via the Web, email your friend for his ID, use email to verify your account).

The rise of search engines - first AltaVista, then primarily Google - did away with much of consumers' need for a directory. Also a factor was branding: businesses wanted memorable domain names they could advertise to their customers. By now, though, most people probably don't bother to remember more than a tiny handful of domain names - Google, Facebook, perhaps one or two more. Anything else they either put into a search engine or get from either a bookmark or, more likely, their browser history.

Then came sites like Facebook, which take an approach akin to CompuServe in the old days or mobile networks now: they want to be your gateway to everything online (Facebook is going to stream movies now, in competition with Netflix!). If they succeed, would it matter if you had - once - to teach your browser a user-unfriendly long, numbered address?

It is in this sense that the domain name system competes with Google and Facebook as the gateway to the Net. Of all the potential gateways, it is the only one that is intended as a public resource rather than a commercial company. That has to matter, and we should take seriously the threat that all the Net's entrances could become owned by giant commercial interests. But .xxx missed its moment to make history.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

February 18, 2011

What is hyperbole?

This seems to have been a week for over-excitement. IBM gets an onslaught of wonderful publicity because it built a very large computer that won at the archetypal American TV game, Jeopardy. And Eben Moglen proposes the Freedom box, a more-or-less pocket ("wall wart") computer you can plug in and that will come up, configure itself, and be your Web server/blog host/social network/whatever and will put you and your data beyond the reach of, well, everyone. "You get no spying for free!" he said in his talk outlining the idea for the New York Internet Society.

Now I don't mean to suggest that these are not both exciting ideas and that making them work is/would be an impressive and fine achievement. But seriously? Is "Jeopardy champion" what you thought artificial intelligence would look like? Is a small "wall wart" box what you thought freedom would look like?

To begin with Watson and its artificial buzzer thumb. The reactions display everything that makes us human. The New York Times seems to think AI is solved, although its editors focus on our ability to anthropomorphize an electronic screen with a smooth, synthesized voice and a swirling logo. (Like HAL, R2D2, and Eliza Doolittle, its status is defined by the reactions of the surrounding humans.)

The Atlantic and Forbes come across as defensive. The LA Times asks: how scared should we be? The San Francisco Chronicle congratulates IBM for suddenly becoming a cool place for the kids to work.

If, that is, they're not busy hacking up Freedom boxes. You could, if you wanted, see the past twenty years of net.wars as a recurring struggle between centralization and distribution. The Long Tail finds value in selling obscure products to meet the eccentric needs of previously ignored niche markets; eBay's value is in aggregating all those buyers and sellers so they can find each other. The Web's usefulness depends on the diversity of its sources and content; search engines aggregate it and us so we can be matched to the stuff we actually want. Web boards distributed us according to niche topics; social networks aggregated us. And so on. As Moglen correctly says, we pay for those aggregators - and for the convenience of closed, mobile gadgets - by allowing them to spy on us.

An early, largely forgotten net.skirmish came around 1991 over the asymmetric broadband design that today is everywhere: a paved highway going to people's homes and a dirt track coming back out. The objection that this design assumed that consumers would not also be creators and producers was largely overcome by the advent of Web hosting farms. But imagine instead that symmetric connections were the norm and everyone hosted their sites and email on their own machines with complete control over who saw what.

This is Moglen's proposal: to recreate the Internet as a decentralized peer-to-peer system. And I thought immediately how much it sounded like...Usenet.

For those who missed the 1990s: invented and implemented in 1979 by three students, Tom Truscott, Jim Ellis, and Steve Bellovin, the whole point of Usenet was that it was a low-cost, decentralized way of distributing news. Once the Internet was established, it became the medium of transmission, but in the beginning computers phoned each other and transferred news files. In the early 1990s, it was the biggest game in town: it was where Linus Torvalds and Tim Berners-Lee announced their inventions of Linux and the World Wide Web.

It always seemed to me that if "they" - whoever they were going to be - seized control of the Internet we could always start over by rebuilding Usenet as a town square. And this is to some extent what Moglen is proposing: to rebuild the Net as a decentralized network of equal peers. Not really Usenet; instead a decentralized Web like the one we gave up when we all (or almost all) put our Web sites on hosting farms whose owners could be DMCA'd into taking our sites down or subpoena'd into turning over their logs. Freedom boxes are Moglen's response to "free spying with everything".

I don't think there's much doubt that the box he has in mind can be built. The Pogoplug, which offers a personal cloud and a sort of hardware social network, is most of the way there already. And Moglen's argument has merit: that if you control your Web server and the nexus of your social network law enforcement can't just make a secret phone call, they'll need a search warrant to search your home if they want to inspect your data. (On the other hand, seizing your data is as simple as impounding or smashing your wall wart.)

I can see Freedom boxes being a good solution for some situations, but like many things before them they won't scale well to the mass market because they will (like Usenet) attract abuse. In cleaning out old papers this week, I found a 1994 copy of Esther Dyson's Release 1.0 in which she demands a return to the "paradise" of the "accountable Net"; 'twill be ever thus. The problem Watson is up against is similar: it will function well, even engagingly, within the domain it was designed for. Getting it to scale will be a whole 'nother, much more complex problem.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


February 4, 2011

Blackout

They didn't even have to buy ten backhoes.

The most fundamental mythology of the Net goes like this. The Internet was built to withstand bomb outages. Therefore, it can withstand anything. Defy authority. Whee!

This basic line of thinking underlay a lot of early Net hyperbole, most notably Grateful Dead lyricist John Perry Barlow's Declaration of the Independence of Cyberspace. Barlow's declaration was widely derided even at the time; my favorite rebuttal was John Gilmore's riposte at Computers, Freedom, and Privacy 1995, that cyberspace was just a telephone network with pretensions. (Yes, the same John Gilmore who much more famously said, "The Net interprets censorship as damage and routes around it.")

Like all the best myths, the idea of the Net's full-bore robustness was both true and not true. It was true in the sense that the first iteration of the Net - ARPAnet - was engineered to share information and enable communications even after a bomb outage. But it was not true in the sense that there have always been gods who could shut down their particular bit of communications heaven. There are, in networking and engineering terms, central points of failure. It is also not true in the sense that a bomb is a single threat model, and the engineering decisions you make to cope with other threat models - such as, say, a government - might be different.

The key to withstanding a bomb outage - or in fact any other kind of outage - is redundancy. There are no service-level agreements for ADSL (at least in the UK), so if your business is utterly dependent on having a continuous Internet connection you have two broadband suppliers and a failover set-up for your router. You have a landline phone and a mobile phone, an email connection and private messaging on a social network, you have a back-up router, and a spare laptop. The Internet's particular form of redundancy comes from the way data is transmitted: the packets that make up every message do not have to follow any particular route when the sender types in a destination address. They just have to get there, just as last year passengers stranded by the Icelandic volcano looked for all sorts of creative alternative routes when their original direct flights were canceled.
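In code, the failover idea is nothing more exotic than trying your redundant paths in order and using the first one that answers; a minimal sketch, with placeholder gateway names standing in for two broadband suppliers:

```python
import socket

# Placeholder gateways standing in for two different broadband suppliers.
UPSTREAMS = [
    ("gateway-primary.example.net", 443),
    ("gateway-backup.example.net", 443),
]


def first_working_upstream(upstreams, timeout=3.0):
    """Return the first reachable upstream, or None if all are down."""
    for host, port in upstreams:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return host  # reachable: route traffic this way
        except OSError:
            continue  # unreachable: fall back to the next supplier
    return None


if __name__ == "__main__":
    working = first_working_upstream(UPSTREAMS)
    print("Using upstream:", working or "none reachable - offline")
```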

Even in 1995, when Barlow and Gilmore were having that argument, the Internet had some clear central points of failure - most notably the domain name system, which relies on updates that ultimately come from a single source. At the physical level, it wouldn't take cutting too many cables - those ten backhoes again - to severely damage data flows.

But back then all of today's big, corporate Net owners were tiny, and the average consumer had many more choices of Internet service provider than today. In many parts of the US consumers are lucky to have two choices; the UK's rather different regulatory regime has created an ecology of small xDSL suppliers - but behind the scenes a great deal of their supply comes from BT. A small number of national ISPs - eight? - seems to be the main reason the Egyptian government was able to shut down access. Former BT Research head Peter Cochrane writes that Egyptians-in-the-street managed to find creative ways to get information out. But if the goal was to block people's ability to use social networks to organize protests, the Egyptian government may indeed have bought itself some time. Though I liked late-night comedian Conan O'Brien's take: "If you want people to stay at home and do nothing, turn the Internet back on."

While everyone is publicly calling foul on Egypt's actions, can there be any doubt that there are plenty of other governments who will be eying the situation with a certain envy? Ironically, the US government is the only one known to be proposing a kill switch. We have to hope that the $110 million the five-day outage is thought to have cost Egypt will give them pause.

In his recent book The Master Switch, Columbia professor Tim Wu uses the examples set by the history of radio, television, and the telephone network to argue that all media started their lives as open experiments but have gone on to become closed and controlled as they mature. The Internet, he says there, and again this week in the press, is likely on the verge of closing.

What would the closed Internet look like? Well, it might look something like Apple's ecology: getting an app into the app store requires central approval, for example. Or it might look something like the walled gardens to which many mobile network operators limit their customers' access. Or perhaps something like Facebook, which seeks to mediate its users' entire online experience: one reason so many people use it for messaging is that it's free of spam. In the history of the Internet, open access has beaten out such approaches every time. CompuServe and AOL's central planning lost to the Web; general purpose computers ruled.

I don't think it's clear which way the Internet will wind up, and it's much less clear whether it will follow the same path in all countries or whether dissidents might begin rebuilding the open Net by cracking out the old modems and NNTP servers. But if closure does happen, this week may have been the proof of concept.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

April 16, 2010

Data-mining the data miners

The case of murdered Colombian student Anna Maria Chávez Niño, presented at this week's Privacy Open Space, encompasses both extremes of the privacy conundrum posed by a world in which 400 million people post intimate details about themselves and their friends onto a single, corporately owned platform. The gist: Chávez met her murderers on Facebook; her brother tracked them down, also on Facebook.

Speaking via video link to Cédric Laurant, a Brussels-based independent privacy consultant, Juan Camilo Chávez noted that his sister might well have made the same mistake - inviting dangerous strangers into her home - by other means. But without Facebook he might not have been able to identify the killers. Criminals, it turns out, are just as clueless about what they post online as anyone else. Armed with the CCTV images, Chávez trawled Facebook for similar photos. He found the murderers selling off his sister's jacket and guitar. As they say, busted.

This week's PrivacyOS was the fourth in a series of EU-sponsored conferences to collaborate on solutions to that persistent, growing, and increasingly complex problem: how to protect privacy in a digital world. This week's focused on the cloud.

"I don't agree that privacy is disappearing as a social value," said Ian Brown, one of the event's organizers, disputing Mark privacy-is-no-longer-a-social-norm Zuckerberg's claim. The world's social values don't disappear, he added, just because some California teenagers don't care about them.

Do we protect users through regulation? Require subject releases for YouTube or Qik? Require all browsers to ship with cookies turned off? As Lilian Edwards observed, the latter would simply make many users think the Internet is broken. My notion: require social networks to add a field to photo uploads requiring users to enter an expiration date after which the photo will be deleted.
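A minimal sketch of what that expiration field might look like on the back end - the field names and storage here are hypothetical, not any real network's API:

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class PhotoUpload:
    owner: str
    path: str
    expires_at: datetime  # the user must supply this at upload time


def purge_expired(photos):
    """Keep only the photos that have not yet reached their expiry date."""
    now = datetime.now(timezone.utc)
    return [p for p in photos if p.expires_at > now]


photos = [
    PhotoUpload("alice", "party.jpg",
                datetime(2009, 1, 1, tzinfo=timezone.utc)),  # already expired
    PhotoUpload("alice", "holiday.jpg",
                datetime(2030, 1, 1, tzinfo=timezone.utc)),  # still live
]
print([p.path for p in purge_expired(photos)])  # -> ['holiday.jpg']
```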

But, "This is meant to be a free world," Humberto Morán, managing director of Friendly Technologies, protested. Free as in speech, free as in beer, or free as in the bargain we make with our data so we can use Facebook or Google? We have no control over those privacy policy contracts.

"Nothing is for free," observed NEC's Amardeo Sarma. "You pay for it, but you don't know how you pay for it." The key issue.

What frequent flyers know is that they can get free flights once in a while in return for their data. What even the brightest, most diligent, and most paranoid expert cannot tell them is what the consequences of that trade will be 20 years from now, though the Privacy Value Networks project is attempting to quantify this. It's hard: any photographer will tell you that a picture's value is usually highest when it's new, but sometimes suddenly skyrockets decades later when its subject shoots unexpectedly to prominence. Similarly, the value of data, said David Houghton, changes with time and context.

It would be more right to say that it is difficult for users to understand the trade-offs they're making and there are no incentives for government or commerce to make it easy. And, as the recent "You have 0 Friends" episode of South Park neatly captures, the choice for users is often not between being careful and being careless but between being a hermit and participating in modern life.

Better tools ought to be a partial solution. And yet: the market for privacy-enhancing technologies is littered with market failures. Even the W3C's own Platform for Privacy Preferences (P3P), for example, is not deployed in the current generation of browsers - and when it was provided in Internet Explorer users didn't take advantage of it. The projects outlined at PrivacyOS - PICOS and PrimeLife - are frustratingly slow to move from concept to prototype. The ideas seem right: providing a way to limit disclosures and authenticate identity to minimize data trails. But, Lilian Edwards asked: is partial consent or partial disclosure really possible? It's not clear that it is, partly because your friends are also now posting information about you. The idea of a decentralized social network, workshopped at one session, is interesting, but might be as likely to expand the problem as modulate it.

And, as it has throughout the 25 years since the first online communities were founded, the problem keeps growing exponentially in size and complexity. The next frontier, said Thomas Roessler: the sensor Web that incorporates location data and input from all sorts of devices throughout our lives. What does it mean to design a privacy-friendly bathroom scale that tweets your current and goal weights? What happens when the data it sends gets mashed up with the site you use to monitor the calories you consume and burn and your online health account? Did you really understand when you gave your initial consent to the site what kind of data it would hold and what the secondary uses might be?

So privacy is hard: to define, to value, to implement. As Seda Gürses, studying how to incorporate privacy into social networks, said, privacy is a process, not an event. "You can't do x and say, Now I have protected privacy."


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. This blog eats non-spam comments for reasons surpassing understanding.

July 3, 2009

What's in an assigned name?

There's a lot I didn't know at the time about the founding of the Internet Corporation for Assigned Names and Numbers, but I do remember the spat that preceded it. Until 1998, the systems for assigning domain names (DNS) and assigning Internet numbers (IANA) were both managed by one guy, Jon Postel, who by all accounts and records was a thoughtful and careful steward and an important contributor to much of the engineering that underpins the Internet even now. Even before he died in October 1998, however, plans were underway to create a successor organization to take over the names and numbers functions.

The first proposal was to turn these bits of management over to the International Telecommunications Union, and a memorandum of understanding was drawn up that many, especially within the ITU, assumed would pass unquestioned. Instead, there was much resentment and many complaints that important stakeholders (consumers, most notably) had been excluded. Eventually, ICANN was created under the auspices of the US Department of Commerce, intended to become independent once it had fulfilled certain criteria. We're still waiting.

As you might expect, the US under Bush II wasn't all that interested in handing off control. The US government had some support in this, in part because many in the US seem to have difficulty accepting that the Internet was not actually built by the US alone. So alongside the US government's normal resistance to relinquishing control was an endemic sense that it would be "giving away" something the US had created.

All that aside, the biggest point of contention was not ICANN's connection to the US government, however much those outside the US might wish that connection severed. Nor was it the assignment of numbers, which, since numbers are the way the computers find each other, is actually arguably the most important bit of the whole thing. It wasn't even, or at least not completely, the money (PDF), as staggering as it is that ICANN expects to rake in $61 million in revenue this year as its cut of domain name registrations. No, of course it was the names that are meaningful to people: who should be allowed to have what?

All this background is important because on September 30 the joint project agreement with DoC under which ICANN operates expires, and all these debates are being revisited. Surprisingly little has changed in the arguments about ICANN since 1998. Michael Froomkin argued in 2000 (PDF) that ICANN bypassed democratic control and accountability. Many critics have argued in the intervening years that ICANN needs to be reined in: its mission kept to a narrow focus on the DNS, its structure designed to be transparent and accountable, and the organization kept free of interference not only from the US government but from other governments as well.

Last month, the Center for Democracy and Technology published its comments to that effect. Last year, and in 2006, former elected ICANN board member Karl Auerbach argued similarly, with much more discussion of ICANN's finances, which he regards as a "tax". Perhaps even more than might have been obvious then: ICANN's new public dashboard has revealed that the company lost $4.6 million on the stock market last year, an amount reporter John Levine equates to the 20-cent fee from 23 million domain name registrations. As Levine asks, if they could afford to lose that amount then they didn't need the money - so why did they collect it from us? There seems to be no doubt that ICANN can keep growing in size and revenues by creating more top-level domains, especially as it expands into long-mooted non-ASCII names (IDNs).
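Levine's comparison is simple arithmetic; spelled out, with the figures cited above:

```python
# ICANN's reported stock-market loss versus the fee income Levine cites.
FEE_PER_REGISTRATION = 0.20   # dollars
REGISTRATIONS = 23_000_000

print(f"${FEE_PER_REGISTRATION * REGISTRATIONS:,.0f}")  # -> $4,600,000
```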

Arguing about money aside, the fact is that we have not progressed much, if at all, since 1998. We are asking the same questions and having the same arguments. What is the DNS for? Should it be a directory, a handy set of mnemonics, a set of labels, a zoning mechanism, or a free-for-all? Do languages matter? Early discussions included the notion that there would be thousands, even tens of thousands of global top-level domains. Why shouldn't Microsoft, Google, or the Electronic Frontier Foundation operate their own registries? Is managing the core of the Internet an engineering, legal, or regulatory problem? And, latterly, given the success and central role of search engines, do we need DNS at all? Personally, I lean toward the view that the DNS has become less important than it was, as many services (Twitter, instant messaging, VOIP) do not require it. Even the Web needs it less than it did. But if what really matters about the DNS is giving people names they can remember, then from the user point of view it matters little how many top-level domains there are. The domain info.microsoft is no less memorable than microsoft.info or microsoft.com.

What matters is that the Internet continues to function and that anyone can reach any part of it. The unfortunate thing is that none of these discussions have solved the problems we really have. Four years after the secured version of DNS (DNSsec) was developed to counteract security threats such as DNS cache poisoning that had been mooted for many more years than that, it's still barely deployed.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on , or send email to netwars@skeptic.demon.co.uk.

September 12, 2008

Slow news

It took a confluence of several different factors for a six-year-old news story to knock 75 percent off the price of United Airlines shares in under an hour earlier this week. The story said that United Airlines was filing for bankruptcy, and of course it was true - in 2002. Several media owners are still squabbling about whose fault it was. Trading was halted after that first hour by the systems put in place after the 1987 crash, but even so the company's shares closed 10 percent down on the day. Long-term it shouldn't matter in this case, but given a little more organization and professionalism that sort of drop provides plenty of opportunities for securities fraud.

The factor the companies involved can't sue: human psychology. Any time you encounter a story online you make a quick assessment of its credibility by considering: 1) the source; 2) its likelihood; 3) how many other outlets are saying the same thing. The paranormal investigator and magician James Randi likes to sum this up by saying that if you claimed you had a horse in your back yard he might want a neighbor's confirmation for proof, but if you said you had a unicorn in your back yard he'd also want video footage, samples of the horn, close-up photographs, and so on. The more extraordinary the claim, the more extraordinary the necessary proof. The converse is also true: the less extraordinary the claim and the better the source, the more likely we are to take the story on faith and not bother to check.

Like a lot of other people, I saw the United story on Google News on Monday. There's nothing particularly shocking these days about an airline filing for bankruptcy protection, so the reaction was limited to "What? Again? I thought they were doing better now" and a glance underneath the headline to check the source. Bloomberg. Must be true. Back to reading about the final in prospect between Andy Murray and Roger Federer at the US Open.

That was a perfectly fine approach in the days when all content was screened by humans and media were slow to publish. Even then there were mistakes, like the famous 1993 incident when a shift worker at Sky News saw an internal rehearsal for the Queen Mother's death on a monitor and mentioned it on the phone to his mother in Australia, who in turn passed it on to the media, which took it up and ran with it.

But now, in the time that thought process takes, daytraders have clicked in and out of positions and automated media systems have begun republishing the story. It was the interaction of several independently owned automated systems that made what ought to have been a small mistake into one that hit a real company's real financial standing - with that effect, too, compounded by automated systems. Logically, we should expect to see many more such incidents, because all over Web 2.0 we are building systems that talk to each other without human intervention or oversight.

A lot of the Net's display choices are based on automated popularity contests: on-the-fly generated lists of the current top ten most viewed stories, Amazon book rankings, Google's PageRank algorithm that bumps to the top the sites with the most inbound links for a given set of search terms. That's no different from other media: Jacqueline Kennedy and Princess Diana were beloved of magazine covers for the most obvious sale-boosting reasons. What's different is that on the Net these measurements are made and acted upon instantaneously, and sometimes from very small samples, which is why, in a very slow news hour on a small site, a single click on a 2002 story seems to have bumped it up to the top, where Google spotted it and automatically inserted it into its feed.
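
To make the small-sample point concrete, here is a toy illustration - emphatically not Google News's or the Tribune's actual system - of how a "most viewed" list computed over a quiet hour behaves.

```python
# A toy "most viewed" list built from a tiny sample: in a slow hour on a
# small site, one stray click on an old story is enough to top the chart.
from collections import Counter

clicks_this_hour = Counter({
    "united-airlines-files-for-bankruptcy-2002": 1,   # a single stray click
    # ...and nothing else was read on this small site during the slow hour
})

print(clicks_this_hour.most_common(10))   # the 2002 story tops the list
```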

The big issue, really - leaving aside the squabble between the Tribune and Google over whether Google should have been crawling its site at all - is the lack of reliable dates. It's always a wonder to me how many Web sites fail to anchor their information in time: the date a story is posted or a page is last updated should always be present. (I long, in fact, for a browser feature that would display at the top of a page the last date a page's main content was modified.)
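
For the technically curious, the first half of that wished-for feature is trivial where servers cooperate; the sketch below (Python standard library, hypothetical URL) simply asks the server when a page last changed. The catch, and part of the problem, is that many dynamically generated pages don't supply the header at all.

```python
# A minimal sketch of the browser feature I'm wishing for: ask the server
# when a page was last modified. Many pages omit the Last-Modified header,
# which is part of the problem described above.
import urllib.request

def last_modified(url: str) -> str:
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        return resp.headers.get("Last-Modified", "not supplied")

print(last_modified("https://example.com/"))   # hypothetical page to check
```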

Because there's another phenomenon that's insufficiently remarked upon: on the Internet, nothing ever fully dies. Every hour someone discovers an old piece of information for the first time and thinks it's new. Most of the time, it doesn't matter: Dave Barry's exploding whale is hilariously entertaining no matter how many times you've read it or seen the TV clip. But Web 2.0 will make endless recycling part of our infrastructure rather than a rare occurrence.

In 1998 I wrote that crude hacker defacement of Web sites was nothing to worry about compared to the prospect of the subtle poisoning of the world's information supply that might become possible as hackers became more sophisticated. This danger is still with us, and the only remedy is to do what journalists used to be paid to do: check your facts. Twice. How do we automate that?


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

August 29, 2008

Bannedwidth

The news that Comcast is openly imposing a monthly 250GB bandwidth cap for its broadband subscribers sounds, as many have noted, more generous than it is. Comcast doesn't have to lower the cap progressively for customers to feel the crunch; the amount of data everyone shifts around grows inexorably year by year. Just as the 640K Bill Gates denies he ever said was enough for anybody is today barely an email, soon 250GB will be peanuts. Comcast's move will more likely pull the market away from all-you-can-eat to arguably logical banded charging.

We should keep that in mind as the European Parliament goes to debate the telecoms package on Tuesday, with a first reading plenary vote scheduled for the Strasbourg session on September 22-25.

Many of the consumer provisions make sense, such as demanding that all users have free access to the EU-wide and national emergency numbers, that there be at least one directory enquiries service, and that there be "adequate" geographical coverage of public payphones. Those surrounded by yapping mobile phones everywhere they go may wonder why we still need payphones, but the day your battery dies, your phone gets lost, stolen, or broken, or you land in a foreign country and discover that for some reason your phone doesn't work, you'll be grateful, trust me.

The other consumer provision everyone has to like is the one that requires greater transparency about pricing. What's unusual about the Comcast announcement is that it's open and straightforward; in the UK so far, both ISPs and "all-you-can-eat" music download services have a history of being coy about exactly what level of use is enough to get you throttled or banned. In credit cards, American Express's "no preset spending limit" is valuable precisely because it gives the consumer greater flexibility than the credit limits imposed by Visa and Mastercard; in online services the flexibility is all on the side of the supplier. Most people would be willing to stay on the south side of a bandwidth cap if only they knew what it was. One must surmise that service providers don't like to disclose the cap because they think knowing what it is will encourage light users to consume more, upsetting the usage models their business plans are based on.

The more contentious areas are, of course, those that relate to copyright infringement. Navigating through the haze of proposed amendments and opinions doesn't really establish exactly what's likely to happen. But in recent months there have been discussions of everything from notice-and-takedown rules to three-strikes-and-you're-offline. Many of these defy the basic principles on which European and American justice is supposed to rest: due process and proportionate punishment. Take, for example, the idea of tossing someone offline and putting them on a blacklist so they can't get an account with another ISP. That fails both principles: either an unrelated rightsholder, the original ISP, or both would be acting as a kangaroo court, and being thrown offline would not only disconnect the user from illegal online activities but in many cases make it impossible for that person's whole household to do homework, pay bills, and interact with both government and social circles.

That punishment would be wholly disproportionate even if you could guarantee there would be no mistakes and all illegal activities would be punished equally. But in fact no one can guarantee that. An ISP cannot scan traffic and automatically identify copyright infringement; and with millions of people engaging in P2P file-sharing (seemingly the target of most of this legislation) any spotting of illegal activity has to be automated. In addition, over time, as legal downloads (Joss Whedon's Dr. Horrible's Sing-Along Blog managed 2.2 million downloads from iTunes in the first week, besides crashing its streaming server) outstrip illegal ones, simply being a heavy user won't indicate anything about whether the user's activity is legal or not.

Part of the difficulty is finding the correct analogy. Is the crime of someone who downloads a torrent of The Big Bang Theory and leaves the downloaded copy seeding afterwards the same as that of someone who sets up a factory and puts out millions of counterfeit DVD copies? Is downloading a copy of the series the same as stealing the DVDs from a shop? I would say no: counterfeit DVDs unarguably cost the industry sales in a way that downloading does not, or not necessarily. Similarly, stealing a DVD from a shop has a clearly identifiable victim (the shop itself) in a way that downloading a copy does not. But in both those cases the penalties are generally applied by courts operating under democratically decided procedures. That is clearly not the case when ISPs act on complaints by rightsholders with no penalties imposed upon them for false accusations. A more appropriate punishment would be a fine, and even that should be limited to cases of clear damage, such as the unauthorized release of material that has yet to be commercially launched.

For all these reasons, ISPs should be wary of signing onto the rightsholders' bandwagon when their concern is user demand for bandwidth. We would, I imagine, see very different responses from them if, as I think ought to happen, anti-trust law were invoked to force the separation of content owners from bandwidth providers.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

April 18, 2008

Like a Virgin

Back in November 2005 the CEO of AT&T, Ed Whitacre, told Business Week that he was tired of big Internet sites like Google and Yahoo! using "my pipes" "for free". With those words he launched the issue of network neutrality onto the front pages and into the public consciousness. At the time, it seemed like what one of my editors used to grandly dismiss as an "American issue". (One such issue, it's entertaining to remember now, was spam. That was in 1997.) The only company dominant enough and possessed of sufficient infrastructure to impose carriage charges on content providers in the UK was BT - and if BT had tried anything like that Ofcom would - probably - have stomped all over it.

But what starts in America usually winds up here a few years later, and this week, the CEO of Virgin Media, Neil Berkett, threatened that video providers who don't pay for faster service may find their traffic being delivered in slow "bus lanes". Network neutrality, he said, was "a load of bollocks".

His PR people recanted - er, clarified - a day or two later. We find it hard to see how a comment as direct as "a load of bollocks" could be taken out of context. However. Let's say he was briefly possessed by the spirit of Whitacre, who most certainly meant what he said.

The recharacterization of Berkett's comments: the company isn't really going to deliberately slow down YouTube and the BBC's iPlayer. Instead, it "could offer content providers deals to upgrade their provisioning." I thought this sounded like the wheeze where you're not charged more for using a credit card, you're given a discount for paying cash. But no: what they say they have in mind is direct peering, in which no money changes hands, which they admit could be viewed as a "non-neutral" solution.

But, says Keith Mitchell, a fellow member of the Open Rights Group advisory board, "They are in for a swift education in the way the global transit/peering market works if they try this." Virgin seems huge in the context of the UK, where its ownership of the former ntl/Telewest combine gives it a lock on the consumer cable market - but in the overall scheme of things it's "a very small fish in the pond compared to the Tier 1 transit providers, and the idea that they can buck this model single-handedly is laughable."

Worse, he says, "If Virgin attempts to cost recover for interconnects off content providers on anything other than a sender-keeps-all/non-settlement basis, they'll quickly find themselves in competition with the transit providers, whose significantly larger economies of scale put them in a position to provide a rather cheaper path from the content providers."

What fun. In other words, if you're, say, the BBC, and you're faced with paying extra in some form to get your content out to the Net you'd choose to pay the big trucking company with access to all the best and fastest roads and the international infrastructure rather than the man-with-a-van who roams your local neighborhood.

ISPs versus the iPlayer seems likely to run and run. It's clear, for example, that streaming is growing at a hefty clip. Obviously, within the UK the iPlayer is the biggest single contributor to this; viewers are watching a million programs a week online, sopping up 3 to 5 percent of all Internet traffic in Britain.

We've seen exactly this sort of argument before: file-sharing (music, not video!), online gaming, binary Usenet newsgroups. Why (ancient creaking voice) I remember when the big threat was the advent of the graphical Web, which nearly did kill the Net (/ancient creaking voice). The difference this time is that there is a single organization with nice, deep, taxpayer-funded pockets to dig into. Unlike the voracious spider that was Usenet, the centipede that is file-sharing, or the millipedes who were putting up Web sites, YouTube and the BBC make up an easily manageable number of easily distinguished targets for a protection racket. At the same time, the consolidation of the consumer broadband market from hundreds of dial-up providers into a few very large broadband providers means competition is increasingly mythical.

But the iPlayer is only one small piece of the puzzle. Over the next few years we're going to see many more organizations offering streaming video across the Net. For example, a few weeks ago I signed up for an annual pass for the streaming TV service for the nine biggest men's tennis tournaments of the year. The economics make sense: $70 a year versus £20 a month for Sky Sports - and I have no interest in any of Sky's other offerings - or pay nothing and "watch" really terrible low-resolution video over a free Chinese player offering rebroadcasts of uncertain legality.

The real problem, as several industry insiders have said to me lately, is pricing. "You have a product," said one incredulously, "that people want more and more of, and you can't make any money selling it?" When companies like O2 are offering broadband for £7.50 a month as a loss-leading add-on to mobile phone connections, consumers don't see why they should pay any more than that. Jerky streaming might be just the motivator to fix that.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

March 14, 2008

Uninformed consent

Apparently the US Congress is now being scripted by Jon Stewart of the Daily Show. In a twist of perfect irony, the House of Representatives has decided to hold its first closed session in 25 years to debate - surveillance.

But it's obvious why they want closed doors: they want to talk about the AT&T case. To recap: AT&T is being sued for its complicity in the Bush administration's warrantless surveillance of US citizens after its technician Mark Klein blew the whistle by taking documents to the Electronic Frontier Foundation (which a couple of weeks ago gave him a Pioneer Award for his trouble).

Bush has, of course, resisted any effort to peer into the innards of his surveillance program by claiming it's all a state secret, and that's part of the point of this Congressional move: the Democrats have fielded a bill that would give the whole program some more oversight and, significantly, reject the idea of giving telecommunications companies - that is, AT&T - immunity from prosecution for breaking the law by participating in warrantless wiretapping. 'Snot fair that they should deprive us of the fun of watching the horse-trading. It can't, surely, be that they think we'll be upset by watching them slag each other off. In an election year?

But it's been a week for irony, as Wikipedia founder Jimmy Wales has had his sex life exposed when he dumped his girlfriend and been accused of - let's call it sloppiness - in his expense accounts. Worse, he stands accused of trading favorable page edits for cash. There's always been a strong element of Schadenpedia around, but the edit-for-cash thing really goes to the heart of what Wikipedia is supposed to be about.

I suspect that nonetheless Wikipedia will survive it: if the foundation has the sense it seems to have, it will display zero tolerance. But the incident has raised valid questions about how Wikipedia can possibly sustain itself financially going forward. The site is big and has enviable masses of traffic; but it sells no advertising, choosing instead to live on hand-outs and the work of volunteers. The idea, I suppose, is that accepting advertising might taint the site's neutral viewpoint, but donations can do the same thing if they're not properly walled off: just ask the US Congress. It seems to me that an automated advertising system they did not control would be, if anything, safer. And then maybe they could pay some of those volunteers, even though it would be a pity to lose some of the site's best entertainment.

With respect to advertising, it's worth noting that Phorm is under increasing pressure. Earlier this week, we had an opportunity to talk to Kent Ertegrul, CEO of Phorm, who continues to maintain that Phorm's system, because it does not store data, is more protective of privacy than today's cookie-driven Web. This may in fact be true.

Less certain is Ertegrul's belief that the system does not contravene the Regulation of Investigatory Powers Act, which lays down rules about interception. Ertegrul has some support from an informal letter from the Home Office whose reasoning seems to be that if users have consented and have been told how they can opt out, it's legal. Well, we'll see; there's a lot of debate going on about this claim and it will be interesting to hear the Information Commissioner's view. If the Home Office's interpretation is correct, it could open a lot of scope for abusive behavior that could be imposed upon users simply by adding it to the terms of service to which they theoretically consent when they sign up, and a UK equivalent of AT&T wanting to assist the government with wholesale warrantless wiretapping would have only to add it to the terms of service.

The real problem is that no one really knows how Phorm's system works. Phorm doesn't retain your IP address, but the ad servers surely have to know it when they're sending you ads. If you opt out but can still opt back in (as Ertegrul said you can), doesn't that mean you still have a cookie on your system and that your data is still passed to Phorm's system, which discards it instead of sending you ads? If that's the case, doesn't that mean you cannot opt out of having your data shared? If that isn't how it works, then how does it work? I thought I understood it after talking to Ertegrul, I really did - and then someone asked me to explain how Phorm's cookie's usefulness persisted between sessions, and I wasn't sure any more. I think the Open Rights Group is right: Phorm should publish details of how its system works for experts to scrutinize. Until Phorm does that, the misinformation Ertegrul is so upset about will continue. (More disclosure: I am on ORG's Advisory Council.)

But maybe the Home Office is on to something. Bush could solve his whole problem by getting everyone to give consent to being surveilled at the moment they take US citizenship. Surely a newborn baby's footprint is sufficient agreement?

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

February 22, 2008

Strikeout

There is a certain kind of mentality that is actually proud of not understanding computers, as if there were something honorable about saying grandly, "Oh, I leave all that to my children."

Outside of computing, only television gets so many people boasting of their ignorance. Do we boast how few books we read? Do we trumpet our ignorance of other practical skills, like balancing a cheque book, cooking, or choosing wine? When someone suggests we get dressed in the morning do we say proudly, "I don't know how"?

There is so much insanity coming out of the British government on the Internet/computing front at the moment that the only possible conclusion is that the government is made up entirely of people who are engaged in a sort of reverse pissing contest with each other: I can compute less than you can, and see? here's a really dumb proposal to prove it.

How else can we explain yesterday's news that the government is determined to proceed with Contactpoint even though the report it commissioned and paid for from Deloitte warns that the risk of storing the personal details of every British child under 16 can only be managed, not eliminated? Lately, it seems that there's news of a major data breach every week. But the present government is like a batch of 20-year-olds who think that mortality can't happen to them.

Or today's news that the Department of Culture, Media, and Sport has launched its proposals for "Creative Britain", and among them is a very clear diktat to ISPs: deal with file-sharing voluntarily or we'll make you do it. By April 2009. This bit of extortion nestles in the middle of a bunch of other stuff about educating schoolchildren about the value of intellectual property. Dare we say: if there were one thing you could possibly do to ensure that kids sneer at IP, it would be to teach them about it in school.

The proposals are vague in the extreme about what kind of regulation the DCMS would accept as sufficient. Despite the leaks of last week, culture secretary Andy Burnham has told the Financial Times that the "three strikes" idea was never in the paper. As outlined by Open Rights Group executive director Becky Hogge in New Statesman, "three strikes" would mean that all Internet users would be tracked by IP address and warned by letter if they are caught uploading copyrighted content. After three letters, they would be disconnected. As Hogge says (disclosure: I am on the ORG advisory board), the punishment will fall equally on innocent bystanders who happen to share the same house. Worse, it turns ISPs into a squad of private police for a historically rapacious industry.

Charles Arthur, writing in yesterday's Guardian, presented the British Phonographic Industry's case about why the three strikes idea isn't necessarily completely awful: it's better than being sued. (These are our choices?) ISPs, of course, hate the idea: this is an industry with nanoscale margins. Who bears the liability if someone is disconnected and starts to complain? What if they sue?

We'll say it again: if the entertainment industries really want to stop file-sharing, they need to negotiate changed business models and create a legitimate market. Many people would be willing to pay a reasonable price to download TV shows and music if they could get in return reliable, fast, advertising-free, DRM-free downloads at or soon after the time of the initial release. The longer the present situation continues the more entrenched the habit of unauthorized file-sharing will become and the harder it will be to divert people to the legitimate market that eventually must be established.

But the key damning bit in Arthur's article (disclosure: he is my editor at the paper) is the BPI's admission that they cannot actually say that ending file-sharing would make sales grow. The best the BPI spokesman could come up with is, "It would send out the message that copyright is to be respected, that creative industries are to be respected and paid for."

Actually, what would really do that is a more balanced copyright law. Right now, the law is so far from what most people expect it to be - or rationally think it should be - that it is breeding contempt for itself. And it is about to get worse: term extension is back on the agenda. The 2006 Gowers Review recommended against it, but on February 14, Irish EU Commissioner Charlie McCreevy (previously: champion of software patents) announced his intention to propose extending performers' copyright in sound recordings from the current 50-year term to 95 years. The plan seems to go something like this: whisk it past the Commission in the next two months. Then the French presidency starts and whee! new law! The UK can then say its hands are tied.

That change makes no difference to British ISPs, however, who are now under the gun to come up with some scheme to keep the government from clomping all over them. Or to the kids who are going to be tracked from cradle to alcopop by unique identity number. Maybe the first target of the government computing literacy programs should be...the government.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

February 1, 2008

Microhoo!

Large numbers are always fun, and $44.6 billion is a particularly large number. That's how much Microsoft has offered to pay, half cash, half stock, for Yahoo!

Before we get too impressed, we should remember two things: first, half of it is stock, which isn't an immediate drain on Microsoft's resources. Second, of course, is that money doesn't mean the same thing to Microsoft as it does to everyone else. As of last night, Microsoft had $19.09 billion in a nice cash heap, with more coming in all the time. (We digress to fantasise that somewhere inside Microsoft there's a heavily guarded room where the cash is kept, and where Microsoft employees who've done something particularly clever are allowed to roll naked as a reward.)

Even so, the bid is, shall we say, generous. As of last night, Yahoo!'s market cap was $25.63 billion. Yahoo!'s stock has dropped more than 32 percent in the last year, way outpacing the drop of the broader market. When issued, Microsoft's bid of $31 a share represented a 62 percent premium. That generosity tells us two things. First, since the bid was, in the polite market term, "unsolicited", that Microsoft thought it needed to pay that much to get Yahoo!'s board and biggest shareholders to agree. Second, that Microsoft is serious: it really wants Yahoo! and it doesn't want to have to fight off other contenders.

In some cases – most notably Google's acquisition of YouTube – you get the sense that the acquisition is as much about keeping the acquired company out of the hands of competitors as it is about actually wanting to own that company. If Google wanted a slice of whatever advertising market eventually develops around online video clips, it had to have YouTube. Google Video was too little, too late, and if anyone else had bought YouTube Google would never have been able to catch up.

There's an element of that here, in that MSN seems to have no immediate prospect of catching up with Google in the online advertising market. Last May, when a Microsoft-Yahoo! merger was first mooted, CNN noted that even combined MSN and Yahoo! would trail Google in the search market by a noticeable margin. Google has more than 55 percent of the search market; Yahoo! trails distantly with 17 percent and MSN is even further behind with 13 percent. Better, you can hear Microsoft thinking, to trail with 30 percent of the market than 13 percent; unlike most proposals to merge the numbers two and three players in a market, this merger would create a real competitor to the number one player.

In addition, despite the fact that Yahoo!'s profits dropped by 4.6 percent in the last quarter (year on year), its revenues grew in the same period by 11.8 percent. If Microsoft thought about it like a retail investor (or Warren Buffett), it would note two things: the drop in Yahoo!'s share prices make it a much more attractive buy than it was last May; and Yahoo!'s steady stream of revenues makes a nice return on Microsoft's investment all by itself. One analyst on CNBC estimated that return at 5 percent annually – not bad given today's interest rates.

Back in 2000, at the height of the bubble, when AOL merged with Time-Warner (a marriage both have lived to regret), I did a bit of fantasy matchmaking that regrettably has vanished off the Telegraph's site, pairing dot-coms and old-world companies for mergers. In that round, Amazon.com got Wal-Mart (or, more realistically, K-Mart), E*Trade passed up Dow-Jones, publisher of the Wall Street Journal (and may I just say how preferable that would have been to Rupert Murdoch's having bought it) in favor of greater irony with the lottery operator G-Tech, Microsoft got Disney (to split up the ducks), and Yahoo! was sent off to buy Rupert Murdoch's News International.

Google wasn't in the list; at the time, it was still a privately held geeks' favorite, out of the mainstream. (And, of course, some companies that were in the list – notably eToys and QXL – don't exist any more.) The piece shows off rather clearly, however, the idea of the time, which was that online companies could use their ridiculously inflated stock valuations to score themselves real businesses and real revenues. That was before Google showed the way to crack online advertising and turn visitor numbers into revenues.

It's often said that the hardest thing for a new technology company is to develop a second product. Microsoft is one of the few who succeeded in that. But the history of personal computing is still extremely short, and history may come to look at DOS, Windows, and Office as all one product: commercial software. Microsoft has seen off its commercial competitors, but open-source is a genuine threat to drive the price of commodity software to zero, much like the revenues from long distance telephone calls. Looked at that way, there is no doubt that Microsoft's long-term survival as a major player depends on finding a new approach. It has kept pitching for the right online approach: information service, portal, player/DRM, now search/advertising. And now we get to find out whether Google, like very few companies before it, really can compete with Microsoft. Game on.


Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

December 21, 2007

Enter password

Some things, you just can't fake.

A few years ago, a friend of mine got a letter from an old girlfriend of her son's bearing news: my friend had, unknown to both her and his father, a 15-year-old grandson in Australia. The mother had married someone else, that marriage had broken up, and now the son was asking questions about his biological father.

I saw the kid, visiting his grandparents, out playing tennis the other day. It wasn't just the resemblance of face, head shape, and hair; the entire way his body moved as he ran and hit the ball was eerily and precisely like his father.

"You wouldn't need a DNA test," I said, aside, to my friend. She laughed and nodded, and then said, "We did one, though."

Biology: the ultimate identifier.

A few weeks ago, I did a piece on the many problems with passwords. Briefly: there are too many of them. They're hard to think up (at least if they're good ones), remember, and manage, and even when you have those things right you can be screwed by a third-party software supplier who makes the mistakes for you. The immediate precipitating incident for the piece was the Cambridge computer security group's discovery that Google makes a fine password cracker if your software, like WordPress, stores passwords as MD5 hashes.
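
To see why that works, consider what an unsalted MD5 hash is: the same password always yields the same digest, so a leaked digest can simply be looked up. A minimal sketch, with a made-up password:

```python
# A minimal sketch of why unsalted MD5 password hashes are weak: the same
# password always produces the same digest, so a leaked hash can be looked
# up - in a precomputed table or, as the Cambridge group noticed, in a
# search engine that has indexed pages containing the digest.
import hashlib

password = "letmein"   # hypothetical weak password
digest = hashlib.md5(password.encode()).hexdigest()
print(digest)          # searching this hex string often turns up the plaintext
```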

Some topics you write about draw Pavlovian responses. Anything involving even a tiny threat to Firefox, for example, gets a huge response, as some school officials near where I'm staying have just discovered (kid doctors a detention letter to say he's being punished for not using Firefox and posts it on Digg; school becomes the target of international outrage). Passwords draw PRs for companies with better ideas.

I think the last time I wrote about passwords, the company that called was selling the technology to do those picklists you see on, for example, the Barclaycard site. You don't type in the password; instead, you pick two letters from picklists offered to you. There are a couple of problems with this, as it turns out now. First of all, if your password is a dictionary word the system doesn't really protect all that well against attacks that capture the letters, because it's so easy to plug two letters into a crossword solving program. But the big thing, as usual, is the memory problem. We learn things by using them repeatedly. It's a lot harder to remember the password if you never type the whole thing. I say picklists make it even more likely the password gets written down.
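
A rough sketch of that crossword attack, with an invented word list, shows how little two captured letters leave to guess:

```python
# A rough sketch of the "crossword" attack on two-letter picklist schemes.
# The word list and the captured letters are made up; the point is how few
# dictionary candidates survive once two positions are known.
WORDS = ["banana", "bandit", "candle", "canyon", "monkey", "wizard"]

def candidates(wordlist, observed):
    """observed maps zero-based positions to the letters an attacker captured."""
    return [w for w in wordlist
            if all(len(w) > pos and w[pos] == ch for pos, ch in observed.items())]

print(candidates(WORDS, {1: "a", 4: "n"}))   # -> ['banana']
```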

This time round, I got a call from Biopassword, which depends on behavioral biometrics: your personal typing pattern, which is as distinctive to your computer as my friend's grandson's style of movement is to a human. You still don't get to lose the password entirely; the system records the way you type it and your user name and uses that extra identifier to verify that it's you. The technology runs on the server side for Internet applications and enterprise computer systems, so in theory it works no matter where you're logging in from.

Ever used a French keyboard?

"A dramatic change does affect its ability," Biopassword's vice-president of marketing, Doug Wheeler, admitted. "But there are ways to mitigate the risk of failing if you want to provide the capability." These include the usual suspects: asking the person questions no one else is likely to be able to answer correctly, issuing a one-time password (via, for example, a known personal device such as a mobile phone), and so on. But, as he says, the thing companies like about Biopassword is that it identifies you specifically, not your cell phone or your bank statement. "No technology is perfect."

Biopassword starts by collecting nine samples, either all at once or over time, from which it generates a template. Wheeler says the company is working on reducing the number of samples as well as the number of applications and clients the system works with. He also notes that you can have your login rejected for matching too perfectly – to avoid replay attacks.
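
For the curious, here is a toy sketch of the general idea - emphatically not Biopassword's actual algorithm, and with invented numbers: build a template of inter-keystroke timings from enrollment samples, then reject logins that match either too loosely or too perfectly.

```python
# A toy keystroke-dynamics check: a template of inter-key timings is built
# from several enrollment samples; a login attempt is scored against it, and
# both poor matches and suspiciously perfect (replayed) ones are rejected.
from statistics import mean, pstdev

def build_template(samples):
    """samples: list of timing vectors (seconds between keystrokes)."""
    cols = list(zip(*samples))
    return [(mean(c), pstdev(c) or 0.01) for c in cols]

def score(template, attempt):
    """Average distance from the template, in standard deviations."""
    return mean(abs(t - m) / s for (m, s), t in zip(template, attempt))

enrollment = [[0.21, 0.35, 0.18], [0.23, 0.33, 0.20], [0.20, 0.36, 0.17]]
template = build_template(enrollment)

def verify(attempt, too_far=2.0, too_perfect=0.05):
    d = score(template, attempt)
    return too_perfect < d < too_far   # reject replays and poor matches alike

print(verify([0.22, 0.34, 0.19]))        # plausible human variation -> True
print(verify([0.2133, 0.3467, 0.1833]))  # matches the template too exactly -> False
```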

It's an intriguing idea, certainly. A big selling point is that unlike other ideas in the general move to two-factor identification it doesn't require you to learn or remember anything – or carry anything extra.

But it doesn't solve the key issue: passwords are an intractable problem located at the nexus of security, privacy, human psychology, and computer usability. A password that's easy to remember is often easy to crack. A password that's hard to crack is usually impossible to remember. Authenticating who you are when you type it will help – but these systems still have to have a fallback for when users are grappling with unfamiliar keyboards, broken arms, or unpredictable illness. And no user-facing system will solve the kind of hack that was used against the Cambridge group's installation of WordPress (though this hole is now fixed), which involved running a stored password through an MD5 hash and presenting the results to the Web site as a cookie indicating a successful login.

Still, it's good to know they're still out there trying.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

September 21, 2007

The summer of lost hats

I seem to have spent the summer dodging in and out of science fiction novels featuring four general topics: energy, security, virtual worlds, and what someone at the last conference called "GRAIN" technologies (genetic engineering, robotics, AI, and nanotechnology). So the summer started with doom and gloom and got progressively more optimistic. Along the way, I have mysteriously lost a lot of hats. The phenomena may not be related.

I lost the first hat in June, a Toyota Motor Racing hat (someone else's joke; don't ask), while I was reading the first of many very gloomy books about the end of the world as we know it. Of course, TEOTWAWKI has been oft-predicted, and there is, as Damian Thompson, the Telegraph's former religious correspondent, commented when I was writing about Y2K, a "wonderful and gleeful attention to detail" in these grand warnings. Y2K was a perfect example: a timetable posted to comp.software.year-2000 had the financial system collapsing around April 1999 and the cities starting to burn in October…

Energy books can be logically divided into three categories. One, apocalyptics: fossil fuels are going to run out (and sooner than you think), the world will continue to heat up, billions will die, and the few of us who survive will return to hunting, gathering, and dying young. Two, deniers: fossil fuels aren't going to run out, don't be silly, and we can tackle global warming by cleaning them up a bit. Here. Have some clean coal. Three, optimists: fossil fuels are running out, but technology will help us solve both that and global warming. Have some clean coal and a side order of photovoltaic panels.

I tend, when not wracked with guilt for having read 15 books and written 30,000 words on the energy/climate crisis and then spent the rest of the summer flying approximately 33,000 miles, toward optimism. People can change – and faster than you think. Ten years ago, you'd have been laughed off the British Isles for suggesting that in 2007 everyone would be drinking bottled water. Given the will, ten years from now everyone could have a solar collector on their roof.

The difficulty is that at least two of those takes on the future of energy encourage greater consumption. If we're all going to die anyway and the planet is going inevitably to revert to the Stone Age, why not enjoy it while we still can? All kinds of travel will become hideously expensive and difficult; go now! If, on the other hand, you believe that there isn't a problem, well, why change anything? The one group who might be inclined toward caution and saving energy is the optimists – technology may be able to save us, but we need time to create and deploy it. The more careful we are now, the longer we'll have to do that.

Unfortunately, that's cautious optimism. While technology companies, who have to foot the huge bills for their energy consumption, are frantically trying to go green for the soundest of business reasons, individual technologists don't seem to me to have the same outlook. At Black Hat and Defcon, for example (lost hats number two and three: a red Canada hat and a black Black Hat hat), among all the many security risks that were presented, no one talked about energy as a problem. I mean, yes, we have all those off-site backups. But you can take out a border control system as easily with an electrical power outage as you can by swiping an infected RFID passport across a reader to corrupt the database. What happens if all the lights go out, we can't get them back on again, and everything was online?

Reading all those energy books changes the lens through which you view technical developments somewhat. Singapore's virtual worlds are a case in point (lost hat: a navy-and-tan Las Vegas job): everyone is talking about what kinds of laws should apply to selling magic swords or buying virtual property, and all the time in the back of your mind is the blog posting that calculated that the average Second Life avatar consumes as much energy as the average Brazilian. And emits as much carbon as driving an SUV for 2,000 miles. Bear in mind that most SL avatars aren't fired up that often, and the suggestion that we could curb energy consumption by having virtual conferences instead of physical ones seems less realistic. (Though we could, at least, avoid airport security.) In this, as in so much else, the science fiction writer Vernor Vinge seems to have gotten there first: his book Marooned in Real Time looks at the plight of a bunch of post-Singularity augmented humans knowing their technology is going to run out.

It was left to the most science fictional of the conferences, last week's Center for Responsible Nanotechnology conference (my overview is here) to talk about energy. In wildly optimistic terms: technology will not only save us but make us all rich as well.

This was the one time all summer I didn't lose any hats (red Swiss everyone thought was Red Cross, and a turquoise Arizona I bought just in case). If you can keep your hat while all around you everyone is losing theirs…

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

August 31, 2007

Snouting for bandwidth

Our old non-friend Comcast has been under fire again, this time for turning off Internet access to users it deems to have used too much bandwidth. The kicker? Comcast won't tell those users how much is too much.

Of course, neither bandwidth caps nor secrecy over what constitutes heavy usage is anything new, at least in Britain. ntl brought in a 1Gb per day bandwidth cap as long ago as 2003. BT began capping users in 2004. And Virgin Media, which now owns ntl and apparently every other cable company in the UK, is doing it, too.

As for the secrecy, a few years ago when "unlimited" music download services were the big thing, it wasn't uncommon to hear heavy users complain that they'd been blocked for downloading so much that the service owner concluded they were sharing the account. (Or, maybe hoarding music to play later, I don't know.) That was frustrating enough, but the bigger complaint was that they could never find out how much was too much. They would, they said, play by the rules – if only someone would tell them what those rules were.

This is the game Comcast is now playing. It is actually disconnecting exceptionally heavy users – and then refusing to tell them what usage is safe. Internet service, as provided by Franz Kafka. The problem is that in a fair number of areas of the US consumers have no alternative if they want broadband. Comcast owns the cable market, and DSL provision is patchy. The UK is slightly better off: Virgin Media now owns the cable market, but DSL is widespread, and it's not only sold by BT directly but also by smaller third parties under a variety of arrangements with BT's wholesale department.

I am surprised to find I have some – not a lot, but some – sympathy with Comcast here. I do see that publishing the cap might lead to the entire industry competing on how much you can download a month – which might in turn lead to everyone posting the "unlimited" tag again and having to stick with it. On the other hand, as this Slashdot comment says, subscribers don't have any reliable way of seeing how much they actually are downloading. There is no way to compare your records with the company's – the equivalent of balancing your check book. But at least you can change banks if the bank keeps making mistakes or your account is being hacked. As already noted, this isn't so much of an option for Comcast subscribers.
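
The do-it-yourself version is crude but possible for a single machine; this sketch assumes the third-party psutil library (and, obviously, tells you nothing about the rest of the household's traffic or what Comcast's own meters say).

```python
# A minimal sketch of the self-monitoring that Slashdot comment asks for,
# using the third-party psutil library: log this machine's cumulative
# traffic so you can at least keep your own running tally.
import datetime
import psutil

counters = psutil.net_io_counters()
total_gb = (counters.bytes_sent + counters.bytes_recv) / 1e9
print(f"{datetime.date.today()}: {total_gb:.2f} GB transferred since last boot")
```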

This type of issue is resurfacing in the UK as a network neutrality dispute with the advent of the BBC's iPlayer. Several large ISPs want the BBC to pay for bandwidth costs, perhaps especially because its design makes it prospectively a bandwidth hog. It's an outrageous claim when you consider that both consumers and the BBC already pay for their bandwidth.

Except…we don't, quite. The fact is that the economics of ISPs have barely changed since they were all losing money a decade ago. In the early days of the UK online industry, when the men were men, the women were (mostly) men, and Demon was the top-dog ISP, ISPs could afford to offer unlimited use of their dial-up connections for one very simple reason. They knew that the phone bills would throw users offline: British users paid by the minute for local calls in those days. ISPs could, therefore, budget their modem racks and leased lines based on the realistic assessment that most of their users would be offline at any given time.

Cut to today. Sure, users are online all the time with broadband. But most of them go out to work (or, if they're businesses, go home at night), and heavy round-the-clock usage is rare. ISPs know this, and budget accordingly. Pipes from BT are expensive, and their size is, logically enough, specified based on average use. There isn't a single ISP whose service wouldn't fall over if all its users saturated all their bandwidth 24/7. And at today's market rates, there isn't a single ISP who could afford to provide a service that wouldn't fall over under that level of usage. If an entire nation switches even a sizable minority of its viewing habits to the iPlayer, ISPs could legitimately have a problem. Today's bandwidth hogs are a tiny percentage of Internet users, easily controlled. Tomorrow's could be all of us. Well, all of us and the FBI.
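
The back-of-the-envelope arithmetic looks something like this; all the numbers are illustrative assumptions, not any ISP's real figures.

```python
# A back-of-the-envelope sketch of the contention arithmetic: 10,000
# subscribers sold an 8 Mbit/s service, with backhaul sized for a typical
# evening peak rather than for everyone running flat-out at once.
subscribers = 10_000
access_speed_mbps = 8
peak_concurrency = 0.05        # assume ~5% saturating their line at peak

provisioned = subscribers * access_speed_mbps * peak_concurrency
worst_case = subscribers * access_speed_mbps

print(f"Backhaul budgeted for: {provisioned:,.0f} Mbit/s")   # 4,000 Mbit/s
print(f"Everyone at once:      {worst_case:,.0f} Mbit/s")    # 80,000 Mbit/s
```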

Still, there really has to be a middle ground. The best seems to be the idea in the Slashdot posting linked above: subscribers should be able to monitor the usage on their accounts. Certainly, there are advantages to both sides in having flexible rules rather than rigid ones. But the ultimate sanction really can't be to cut subscribers off for a year, especially if they have no choice of supplier. If that's how Comcast wants to behave, it could at least support plans for municipal wireless. Let the burden of the most prolific users of the Internet, like those of health care, fall on the public purse. Why not?


Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

June 1, 2007

Britney Spears has good news for you

The most entertaining thing I learned at yesterday's Google Developer Day (London branch) was that there is a site that tracks the gap between the BBC's idea of what the most important news is and the stuff people actually read. When Ian Forrester and Matthew Cashmore showed off this BBC Backstage widget, the BBC was only 17 percent "in touch" with what "we're" reading. Today, I see it's 39 percent, so I guess the BBC has good days and bad days.
I note, irrelevantly to this week's headline, that Cashmore also said that putting Britney Spears in the headline moves stories right to the top of the reading list (though to the bottom of the BBC's list).

The widget apparently works by comparing the top stories placed on the BBC News front page with the list the BBC helpfully supplies of the most popular current stories. A nice example of creative use of RSS feeds and data scraping.
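
As best I can tell, the mechanics amount to something like the sketch below, which assumes the third-party feedparser library and treats both feed URLs as guesses rather than the widget's real sources.

```python
# A minimal sketch of the widget's idea: compare the stories the BBC chooses
# to lead with against the stories people actually read, and report overlap.
import feedparser

FRONT_PAGE = "http://feeds.bbci.co.uk/news/rss.xml"          # assumed URL
MOST_READ = "http://feeds.bbci.co.uk/news/popular/rss.xml"   # hypothetical URL

def titles(url, limit=10):
    return {entry.title for entry in feedparser.parse(url).entries[:limit]}

front, popular = titles(FRONT_PAGE), titles(MOST_READ)
overlap = front & popular
print(f"'In touch' score: {100 * len(overlap) / max(len(front), 1):.0f}%")
```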

See, journalists have mixed feelings about this kind of thing. On the one hand, it's fascinating – fascinating – to see what people actually read. If you are a journalist and a hypocritical intellectual snob, this sort of information gives you professional license to read about Britney Spears (in order to better understand your audience) while simultaneously sneering (for money) at anyone who chooses to do so recreationally. On the other hand, if you're dedicated to writing serious think pieces about difficult topics, you dread the day when the bean counters get hold of those lists and, after several hours' careful study, look up and say brightly, "Hey, I know! Why don't we commission more stories about Britney Spears and forget about all that policy crap?"

(I, of course, do not fall in either category: I write what one of my friends likes to call "boring bollocks about computers", and I have been open for years about my alt.showbiz.gossip habit.)

The BBC guys' presentation was one of a couple of dozen sessions; they were joined by developers from other companies, large and small, and, of course, various "Googlers", most notably Chris di Bona, who runs Google's open source program. In the way of the modern era, there was a "bloggers' lounge", nicely wi-fi'd and strewn with cushions in Google's favorite primary colors. OK, it looked like a playpen with laptops, but we're not here to judge.

There seems to be a certain self-consciousness among Googlers about the company's avowed desire not to be "evil". The in-progress acquisition of the advertising agency DoubleClick has raised a lot of questions recently – though while Google has created an infrastructure that could certainly make it a considerable privacy threat should it choose to go in that direction, so far, it hasn't actually done so.

But the more interesting thing about the Developer Day is that it brings home how much Google (and perhaps also Yahoo!) is becoming a software company rather than the search engine service it used to be. One of the keys to Microsoft's success – and that of others before it, all the way back to Radio Shack – was the ecology of developers it built up around its software. We talk a lot about Microsoft's dominance of the desktop, but one of the things that made it successful in the early days was the range of software available to run on it. A company the size Microsoft was then could not have written it all. More important, even if the company could have done it, the number of third parties investing in writing for Windows helped give that software the weight it needed to become dominant. GNU/Linux, last time I looked, had most of the boxes checked, but it's still pretty hard to find fully functional personal finance software, presumably because that requires agreements with banks and brokerage firms over data formats, as well as compliance with a complex of tax laws.

The notion that building a community around your business is key to success on the Internet is an old one (at least in Internet years). Amazon.com succeeded first by publishing user reviews of its products and then by enlisting as an associate anyone who wanted to put a list of books on their Web site. Amazon.com also opened up its store to small, third-party sellers and latterly has started offering hosting services to other businesses. The size of eBay's user base is of course the key to everything: you put your items for sale where the largest number of people will see them. Yahoo!'s strategy has been putting as many services (search, email, news, weather, sports scores, poker) as possible on its site so that sooner or later they capture a visit from everyone. And, of course, Google itself has based its success in part on enlisting much of the rest of the Web as advertising billboards, for which it gets paid. Becoming embedded into other people's services is a logical next step. It will, though, make dealing with it a lot harder if the company ever does turn eeeevil.

The other fun BBC widget was clearly designed with the BBC newsreader Martyn Lewis in mind. In 1993, Lewis expressed a desire for more good news to be featured on TV. Well, here you go, Martyn: a Mood News Google Gadget that can be tuned to deliver just good news. Keep on the sunny side of life.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. She has an intermittent blog. Readers are welcome to post there or to send email, but please turn off HTML.

April 6, 2007

What's in a 2.0?

"The Web with rounded corners," some of my skeptical friends call it. One reason for going to Emerging Technology was to find out what Web 2.0 was supposed to be when it's at home in OReillyland and whether there's a there, there.

It's really surprising how much "Web 2.0" is talked about and taken for granted as a present, dominant trend in Silicon Valley (OK, etech was in San Diego, but its prevailing ethos lives about 550 miles north of there). In London, you can go to a year's worth of Net-related gatherings without ever hearing the term, and it doesn't seem to be even a minor theme in the technology news generally, even in the other 49 states.

The cynical would conclude it's a Silicon Valley term, perhaps designed to attract funding from the local venture capitalists, who respond to buzzwords.

Not at all, said the guys sharing my taxi to the airport. For one thing, you can make assumptions now that you couldn't in "Web 1.0". For example: people know what a browser is; they use email; they know if they see a link that they can click on it and be taken to further information.

To me, this sounds less like a change in the Web and more like simple user education. People can drive cars now, too, where our caveman ancestors couldn't. Yet we don't call ourselves Homo Sapiens 2.0 or claim that we're a different species.

There also seems to be some disagreement about whether it's really right to call it Web 2.0. After all, it isn't like software where you roll out discrete versions with new product launches, or like, say, new versions of Windows, where you usually have to buy a new computer in order to cope with the demands of the new software. (The kids of the 1990s have learned a strange way to count, too: 1.0, 2.0, 3.0, 3.1, 3.11, 95, 97…)

Instead, Web 2.0, like folk music, seems to be a state of mind: it is what you point to when you say it. But if you figure that Web 1.0 was more or less passive point-and-click and Web 3.0 is the "semantic Web" Tim Berners-Lee has been talking about for years, in which machines will talk intelligently to other machines and humans will reap the benefits, then Web 2.0 is, logically, all that interactive stuff. Social networking, Twitter, interactive communities that leverage their members' experience and data to create new information and services.

Some examples. Wesabe, in which members pool their anonymized financial data, out of which the service produces analyses showing things consumers couldn't easily know before, such as which banks or credit cards typically cost the most. The Sunlight Foundation mines public resources to give US citizens a clearer picture of what their elected representatives are actually doing. The many social networks – Friendster, LinkedIn, Orkut, and so on – of course. And all those mashup things other people seem to have time to do – maps, earths, and other data.

The thing is, TheyWorkForYou has been mining the UK's public data in one form or another since 1998, when some of the same people first set up UpMyStreet. OK, it doesn't have a blog. Does that make it significantly less, you know, modern?

None of this is to say that there isn't genuinely a trend here, or that what's coming out of it isn't useful. Mashups are fun, we know this. And obviously there is real value in mining data or folks like the credit card companies, airlines, supermarkets, insurance companies, and credit scorers wouldn't be so anxious to grab all our data that they pay us with discounts and better treatment just to get it. If they can do it, we can – and there's clearly a lot of public data out there that has never been turned into usable information. Why shouldn't consumers be able to score banks and credit card companies the way they score us?

But adding a blog or a discussion forum doesn't seem to me sufficiently novel to claim that it's a brand new Web. What it does show is that if you give humans connectivity, they will keep building the same kinds of things on whatever platform is available. Every online system that I'm aware of, going back to the proprietary days of CompuServe and BIX (and, no doubt, others before them) has had mail, instant messaging, discussion forums, some form of shopping (however rudimentary), and some ability to post personal thoughts in public. Somewhere, there's probably a PhD dissertation in researching the question of what it says about us that we keep building the same things.

The really big changes are permanent storage and all-encompassing search. When there were many proprietary platforms and they were harder to use, the volume was smaller – but search was ineffective unless you knew exactly where to look or if the data had been deleted after 30 days. And you can't interact with data you can't find.

So we're back to cynicism. If you want to say that "Web 2.0" is a useful umbrella term for attracting venture capital, well, fine. But let's not pretend it's a giant technological revolution.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

March 23, 2007

Double the networks, double the neutralities

Back in 1975, the Ithaca, New York apartment building I was living in had a fire in the basement, and by the time it was out so was my telephone line. The repairman's very first move was to disconnect the $3 30-foot cable I had bought at K-Mart and confiscate it. At the time, AT&T's similar cable cost $25.

In fact, by then AT&T had no right to control what equipment you attached to your phone line because of the Carterfone case, in which the FCC ruled against AT&T's argument that it had to own all the equipment in order to ensure that the network would function properly. But this is how the telco world worked; in Edinburgh in 1983 legally you could only buy a modem from British Telecom. I think it cost about £300 – for 300 baud. Expensive enough that I didn't get online until 1990.

Stories like this are part of why the Internet developed the way it did: the pioneers were determined to avoid a situation where the Internet was controlled like this. In the early 1980s, when the first backbone was being built in the US to connect the five NSF-funded regional computing centers, the feeling was mutual. John Connolly, who wrote the checks for a lot of that work, told me in an interview in 1993 that they had endless meetings with the telcos trying to get them interested, but those companies just couldn't see that there was any money in the Internet.
Well, now here we are, and the Internet is chewing up the telcos' business models and creating havoc for the cable companies who were supposed to be the beneficiaries, and so it's not surprising that the telcos' one wish is to transform the Internet into something more closely approximating the controlled world they used to love.

Which is how we arrived at the issue known as network neutrality. This particular debate has been percolating in the US for at least a year now, and some discussion is beginning in the UK. This week, at a forum held in Westminster on the subject, Ofcom and the DTI said the existing regulatory framework was sufficient.

The basic issue is, of course, money. The traditional telcos are not having a very good time of things, and it was inevitable that it would occur to some bright CEO – it turned out to be the head of Verizon – that there ought to be some way of "monetizing" all those millions of people going to Google, Yahoo!, and the other top sites. Why not charge a fee to give priority service? That this would also allow the telcos to discriminate against competing VOIP services and the cablecos (chiefly Comcast) to discriminate against competing online video services is also a plus. These proposals are opposed not only by the big sites in question but by the usual collection of Net rights organizations, who tend to believe all sites were created equal – or should be.
Ofcom – and others I've talked to – believes that the situation in the UK is different, in part because although most of the nation's DSL service is provided either directly or indirectly by BT, that company has to cooperate with its competitors or face the threat of regulation. The EU, however, is beginning to take a greater interest in these matters, and has begun legal proceedings against Germany over a law exempting Deutsche Telekom from opening the local loop of its new VDSL network to competitors.

But Timothy Wu, a law professor at Columbia and co-author of Who Controls the Internet: Illusions of a Borderless World, has pointed out that the current debates are ignoring an important sector of the market: wireless. The mobile market is not now, nor has it ever been, neutral. It is less closed in Europe, where you can at least buy a phone and stick any SIM in it; but in the US most phones are hardware-locked to their networks, a situation that could hardly be less consumer-friendly. Apple's new iPhone, for example, will be available through only one carrier, AT&T.

Wu's paper, along with the so-called "Carterfone" decision that forced AT&T to stop confiscating people's phone cords, is cited by Skype in a petition asking the FCC to require mobile phone operators to give software applications open access to their networks. Skype's gripe is easy to comprehend: it can't get its service onto mobile phones. The operators' lack of interest in opening their networks is also easy to comprehend: what consumer is going to make calls on their expensive voice tariffs if they can use the Internet data connection to make cheap ones? Wu also documents other cases of features that are added or subtracted according to the network operators' demands: call timers (missing), wi-fi (largely absent), and Bluetooth (often crippled in the US).

The upshot is that because the two markets – wireless phones and the Internet – have developed from opposite directions, we have two network neutrality debates, not one. The wonder is that it took us so long to notice.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

July 14, 2006

Not too cheap to meter

An old Net joke holds that the best way to kill the Net is to invent a new application everyone wants. The Web nearly killed the Net when it was young. Binaries on Usenet. File-sharing. Video on demand may finally really do it. Not necessarily because it swamps servers and consumes all available bandwidth, but because, like spam, it causes people to adopt destructive schemes.

Two such examples turned up this week. The first, HD-TV over IP: Who Pays the Bill? (PDF), comes from the IP Development Network, the brainchild of Jeremy Penston, formerly of UUnet and Pipex. It argues that present pricing models will not work in the HDTV future, and that ISPs will need to control or provide their own content. It estimates, for example, that a consumer's single download of a streamed HD movie could cost an ISP £21.13, more than some users pay for a month's service. The report has been criticized, and its key assumption – that the Internet will become the chief or only gateway to high-definition content – is probably wrong. Niche programming will get downloaded because any other type of distribution is uneconomical, but broadcast will survive for mass-market content.
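
To make the scale of that figure concrete, here is a minimal back-of-the-envelope sketch in Python. It does not reproduce the IPDN report's model; the movie size and subscription price are hypothetical assumptions, and only the £21.13 figure comes from the report.

```python
# A rough reading of the report's headline number, not its model.
# Assumptions (hypothetical): an HD movie of ~8GB and a £20/month
# "unlimited" broadband subscription; the £21.13 per-movie cost is
# the figure quoted from the IPDN report.

report_cost_per_movie = 21.13   # £ per streamed HD movie (from the report)
assumed_movie_gb = 8.0          # hypothetical HD movie size in GB
assumed_monthly_fee = 20.0      # hypothetical retail price, £ per month

implied_cost_per_gb = report_cost_per_movie / assumed_movie_gb
movies_to_exceed_fee = assumed_monthly_fee / report_cost_per_movie

print(f"Implied delivery cost: £{implied_cost_per_gb:.2f} per GB")
print(f"Movies that wipe out a £{assumed_monthly_fee:.0f} subscription: "
      f"{movies_to_exceed_fee:.2f}")
```

On those assumptions, a single movie already costs the ISP more than the month's subscription brings in – which is exactly the report's point, whatever you think of its premises.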

The germ that isn't so easily dismissed is the idea that bandwidth is not necessarily going to continue to get cheaper, at least for end users.

Which leads to exhibit B, the story that's gotten more coverage: a press release – the draft discussion paper isn't available yet – from the London-based Association of Independent Music (AIM) proposing that ISPs should be brought "into the official value chain". In other words, ISPs should be required to hold and pay for licenses agreed with the music industry, and a new "Value Recognition Right" should be created. AIM's reasoning: according to figures it cites from MusicAlly Research, some 60 percent of Internet traffic by data volume is P2P file-sharing, and music has been the main driver of that. Therefore, ISPs are making money from music. Therefore, AIM wants some.

Let's be plain: this is madness.

First of all, the more correct verb there is "was", and even then it's only partially true. Yes, music was the driver behind Napster eight years ago, and Gnutella six years ago, and the various eHoofers. But now BitTorrent is the biggest bandwidth gobbler, and the biggest proportion of data transferred is video, not music. This ought to be obvious: an MP3 is about 4MB, a one-hour TV show 350MB, a movie 700MB to 4.7GB. Music downloads started first and were commercialized first, but that doesn't make music the main driver; it just makes it the historically *first* driver. In any event, music certainly isn't the main reason people get online: that is and was email and the Web.
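
The arithmetic is easy enough to check. Here is a two-line sanity check in Python using the sizes quoted above; actual file sizes vary with encoding, so treat the ratios as rough.

```python
# Rough ratios based on the file sizes quoted in the text (MB).
mp3_mb = 4                                      # a typical music track
tv_show_mb = 350                                # a one-hour TV show
movie_mb_low, movie_mb_high = 700, 4.7 * 1024   # a movie, 700MB to 4.7GB

print(f"One TV show is roughly {tv_show_mb / mp3_mb:.0f} music tracks")
print(f"One movie is roughly {movie_mb_low / mp3_mb:.0f} to "
      f"{movie_mb_high / mp3_mb:.0f} music tracks")
```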

Second of all, one of the key, underrated problems for any charging mechanism that involves distinguishing one type of bits from another type of bits in order to compensate someone is the loss of privacy. What you read, watch, and listen to is all part of what you think about; surely the inner recesses of your mind should be your own. A regime that requires ISPs to police what their customers do – even if it's in their own financial interests to do so – edges towards Orwell's Thought Police.

Third of all, anyone who believes that ISPs are making money from P2P needs remedial education. Do they seriously think that at something like £20 per month for up to 8Mbps ADSL anyone's got much of a margin? P2P is, if anything, the bane of ISPs' existence, since it turns ordinary people into bandwidth hogs. Chris Comley, managing director of Wizards, the small ISP that supplies my service (it resells BT connections), says that although his company applies no usage caps, if users begin maxing out their connections (that is, using all their available bandwidth 24 hours a day, seven days a week), the company will start getting complaining email messages from BT and face having to pay higher charges for the connections it resells. Broadband pricing, like that of dial-up before it (when telephone bills could be relied upon to cap users' online hours), is predicated on the understanding that even users on an "unlimited" service will not in fact consume all the bandwidth that is available to them. In Comley's analogy, the owner of an all-you-can-eat buffet sets his pricing on the assumption that people who walk in for a meal are not in fact going to eat everything in the place.
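
To put rough numbers on the buffet analogy, here is a minimal sketch of what one user maxing out an 8Mbps line around the clock implies. The per-gigabyte wholesale cost is a purely hypothetical assumption for illustration – it is not Wizards' or BT's actual pricing.

```python
# Illustrative oversubscription arithmetic, not any ISP's real cost model.
line_mbps = 8.0               # the ADSL speed quoted above
seconds_per_month = 30 * 24 * 3600
assumed_cost_per_gb = 0.10    # £ per GB, hypothetical wholesale transfer cost
monthly_fee = 20.0            # £ per month, the retail price quoted above

# Megabits -> gigabytes: divide by 8 (bits to bytes), then by 1000 (MB to GB).
gb_per_month = line_mbps * seconds_per_month / 8 / 1000
wholesale_cost = gb_per_month * assumed_cost_per_gb

print(f"Data if maxed out 24/7: {gb_per_month:,.0f} GB per month")
print(f"Cost at £{assumed_cost_per_gb:.2f}/GB: £{wholesale_cost:,.0f}, "
      f"against £{monthly_fee:.0f} retail")
```

Even at a very modest assumed cost per gigabyte, one such customer costs an order of magnitude more than the flat monthly fee – hence the buffet owner's nervousness.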

"The price war over bandwidth is going to have to be reversed," he says, "because we have effectively discounted what the user pays for IP to such a low level that if they start to use it they're in trouble, and they will if they start using video on demand or IPTV."

We began with an old Internet joke. We end with an old Internet saying, generally traced back to the goofy hype of Nicholas Negroponte and George Gilder: that bandwidth is or will be too cheap to meter. It ought to be, given that the price of computing power keeps dropping. But if that's what we want it looks like we'll have to fight for it.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

May 19, 2006

Toll roads

Ever since I read Robert W. McChesney's 1993 book, Telecommunications, Mass Media, and Democracy, I've been wondering if the Net could go the way radio did. As McChesney tells it, 1920s radio was dominated by non-profits, in part because no one believed anyone could ever be persuaded to advertise on the radio. The Communications Act of 1934 changed radio into a commercial medium instead of the great democratizing, educational influence the pioneers expected.

Gigi Sohn, director of Public Knowledge, said unhappily at CFP that legislative politics around network neutrality are breaking down into Republican versus Democrat. Even if you're a Republican who favors voting down Representative Ed Markey (D-MA)'s Network Neutrality Bill, it has to be bad news if vital decisions about the Internet's technical architecture are going to wind up a matter of partisan politics. If someone can really make the case that allowing, say, Verizon to charge Vonage extra for quality-of-service or faster data throughput would benefit Internet users, well, fair enough. But no one wins if these decisions boil down to politicians scoring points off each other. I'm sure this was always true of most subjects, but it seems particularly clear in the case of the Internet, whose origins are known and whose creators are still alive and working.

On the other hand, it doesn't help, as Danny Weitzner also said at CFP, that the arguments have become so emotional. TCP/IP co-creator Vint Cerf (now, like apparently half of everyone else on the planet, at Google) has called the telcos' proposals a desire to create a "toll road" in the middle of the Internet, rhetoric that seems to be propagating rapidly. To Net old-timers, that's fighting talk, like "modem tax". Red rag to bulls. Although it is becoming entertaining: rock musicians for network neutrality! And intriguing to see who is joining Save the Internet's coalition: Gun Owners of America and the Christian Coalition on the same list with the American Library Association and the ACLU of Iowa.

The other key factor is that no one trusts the telcos (not that we should). Years ago, when I interviewed John Connolly about his days at the National Science Foundation, where he signed many of the checks that financed the earliest Internet backbone, he talked about the many meetings he spent trying to get the telcos interested, but to no avail: they couldn't see any way to make money from it. Now that they can, they want to come in and stomp all over it. Plus, there's the whole Verizon-blocking-everyone's-email episode, part of its anti-spam effort, and there's Comcast's history of blocking VPNs and other connections. And if that weren't enough, there's the contention, voiced by Lawrence Lessig among others, that when the telcos were in charge technology stagnated for decades. Probably if they had their way the most innovative thing we could do even now would be to attach an answering machine to the end of their wire. And I'm old enough to remember a time when the telephone company would confiscate an extension cord if you installed one yourself and they found out about it. Will they be confiscating my Vosky next?

In their paper on the subject, Lessig and Tim Wu of the University of Virginia School of Law argue that what needs attention is not so much fair competition in infrastructure provision as fair competition at the application layer: access providers should not be allowed to favor one application over another, an ideal they compare to the neutrality of the electrical network.

It seems to me that the argument for some kind of legally mandated network neutrality ought to follow logically from the earliest antitrust decisions under the Sherman Act: to ensure fair competition, content providers should not own or be able to control the channel of distribution. That logic required the movie studios to divest themselves of theater chains and Standard Oil to sell off its gas stations. Unfortunately, convergence makes that nuclear solution difficult. AOL sells online access and is owned by a major publisher that owns cable and satellite channels as well as magazines and movie studios. Comcast is the dominant cable broadband provider, and it provides (a relatively small amount of local) original TV programming. In the case of the telcos, their equivalent of "content" would be voice telephone calls. And if the analogy hadn't already broken down, the telcos' situation would kill it, because it would mean forcing them to choose between their traditional business (selling phone calls, a business whose revenues are vanishing) and their future business (selling the use of fat pipes and value-added services).

What no one is talking about – yet – is the international factor. It seems very unlikely that British or European telcos will be able to make the same kind of demands as AT&T, Qwest, and BellSouth. The only ones in a position to institute differential pricing and make it stick are the incumbents – and they would be heavily stomped on if they tried it. What would the Internet look like if there are "toll roads" in the US but network neutrality (in the best public service tradition of TV/radio broadcasting) everywhere else?

Wendy M. Grossman is author of net.wars (if NYU Press ever get it working again), From Anarchy to Power: The Net Comes of Age, and The Daily Telegraph A-Z Guide to the Internet. Her Web site also has an archive of all the earlier columns in this series.