Probably most of us have had a moment when the electronic device we're working on burps and says, Nope, full, can't eat another byte. This week, the Royal Society held a scientific meeting to debate whether this might happen to our communications networks.
Proceedings began with sunny optimism, when Andrew Lord, head of BT's optical core and access research, said he sees no problem. A few minutes later, Chih-Lin I, China Mobile's chief scientist of wireless technologies, begged to differ: with 800 million subscribers, China Mobile's costs keep rising while its revenues stay flat, a situation she called "unsustainable". Later, she named the beneficiaries: the "OTTs" (over-the-top "BAT" services: search engine Baidu, retailer Alibaba, and entertainment service Tencent). Tencent also has 800 million users, but nothing like the same costs. Echoing the complaints we hear from Verizon or AT&T, she asked with some exasperation how we allowed internet users to believe everything should be free. Well: like water, consumers flow to the lowest cost, and payment is uphill.
Lord's optimism isn't encouraging. South Korea is already talking about 10Gbps internet service. According to Akamai's State of the Internet report, the UK averages 10.9Mbps, less than half South Korea's current average. Cue pictures of a country lane, which a couple of speakers duly posted.
Still, Lord asked a valid question: "How much is enough?" He divided the thinking into three groups. Moore's Law: traffic has grown like this for decades, so it always will. Conservatives: growth will have to end sometime, and video has long been known as the most bandwidth-hungry application; besides, the eye-to-brain data rate is fixed at 10Mbps. Provide and they will consume: we don't know what the next killer apps are, but we never have; keep providing bandwidth and allow innovation, because besides video there's gaming, cloud, machine-to-machine, smart cities, and virtual reality, all still developing.
I recall that somewhere around 1999 Peter Dawe, founder of the early UK ISP Pipex, told me video might kill the internet. Obviously, it hasn't - yet. But video may turn out to be only a small part of the problem: the amount a single individual can consume in a day is finite, even when you add in bandwidth-sapping moves to 4K and 8K, and video has established (and efficient) alternatives, like broadcast. The joker in the pack is the largely hidden traffic none of the speakers mentioned: billions of authentication requests, the data shipped around by third-party trackers, and, as Jon Crowcroft pointed out, the 98% of email that's spam. It's another of those Yes, Minister irregular verbs: I say rip-off, you say waste, he says valuable economic activity. Throw in trillions of sensors communicating like mad to create a system whose complexity Crowcroft estimates will be 1,000 times that of today's internet, and while, yes, each individual sensor's data is invisible noise in a minute of YouTube, machines don't sleep. What will the data loads be from projects like Ken Goldberg's cloud robotics and remote medical surgery? Let's call that video-plus: probably multiple simultaneous streams, where zero latency and constant connection are crucial (talk about your "killer apps"). "Provide it and they will consume" might be entirely wrong; we don't know. But we do know that if you *don't* provide it, they *can't* consume it.
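The "invisible noise versus machines that don't sleep" point rewards a back-of-envelope calculation. All the figures below are illustrative assumptions of mine, not numbers from the meeting:

```python
# Back-of-envelope comparison of sensor chatter vs. video.
# Every figure here is an illustrative assumption, not a measurement.

SENSOR_COUNT = 1e12          # "trillions of sensors"
SENSOR_BYTES_PER_MSG = 100   # a tiny telemetry payload
MSGS_PER_DAY = 24 * 60       # one message per minute, around the clock

per_sensor_daily = SENSOR_BYTES_PER_MSG * MSGS_PER_DAY       # bytes/day
all_sensors_daily = SENSOR_COUNT * per_sensor_daily          # bytes/day

# One minute of ~10 Mbps video, in bytes, for comparison.
youtube_minute = 10e6 / 8 * 60

print(f"One sensor per day:  {per_sensor_daily / 1e3:.0f} kB")   # 144 kB
print(f"A minute of video:   {youtube_minute / 1e6:.0f} MB")     # 75 MB
print(f"All sensors per day: {all_sensors_daily / 1e15:.0f} PB") # 144 PB
```

Each sensor's 144 kB a day is indeed noise next to a 75 MB minute of video; a trillion of them, never sleeping, is hundreds of petabytes a day.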
And so: what emerged over two days was prospective trouble in all directions.
The basic problem is physics. Classic information theory, developed by Claude Shannon, holds that every channel has a maximum capacity above which error-free transmission is impossible. Once a channel is saturated, either you accept unpredictable errors, or you increase the channel capacity, or you spread the traffic across more channels. There's an array of ideas for increasing fiber capacity, which René-Jean Essiambre outlined: go parallel (as processors have), change the fiber's design to multicore or hollow core, develop higher-capacity materials. Polina Bayvel noted, however, that the real challenge is not maximizing the capacity of individual links but maximizing the capacity of the network overall. Jacob Aron, at New Scientist, has a nice write-up of the meeting.
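Shannon's ceiling can be stated concretely: the Shannon-Hartley theorem puts the maximum error-free rate of a channel at C = B·log2(1 + S/N). A quick sketch, with a hypothetical channel of my own choosing (the bandwidth and signal-to-noise figures are not from the talks):

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley limit: the maximum error-free bit rate of a channel
    given its bandwidth and linear signal-to-noise ratio."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Hypothetical channel: 1 MHz of bandwidth at a 30 dB signal-to-noise ratio.
snr_db = 30
snr_linear = 10 ** (snr_db / 10)  # 30 dB -> 1000x
capacity = shannon_capacity_bps(1e6, snr_linear)
print(f"{capacity / 1e6:.2f} Mbit/s")  # prints 9.97 Mbit/s
```

Past that rate on that channel, errors are unavoidable; hence the three escape routes above: tolerate errors, raise the capacity, or add channels.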
"What is the rare element in this?" someone asked midway. "As far as I can see no resource is rare." He tallied it up: labor, silicon, capital... Well, OK, the world does have lots of sand. Ditto labor. But capital? China Mobile's graph comparing flat revenues to escalating costs and dubious payback seemed to make the case pretty clearly. What's expensive is not fiber, but digging up the road and pulling it: a business model crunch.
The scariest prospective crunches came from Crowcroft, who noted that the internet was designed as an experimental platform. "Unfortunately, it got successful." Among the pieces he singled out: TCP ("not really fit for purpose"), the routing system (BGP "doesn't scale or converge" and "no one's working on a replacement"), poor safety and security models, little consideration of the consequences of failures... He suggested that engineers should treat unreliability as a given at every layer, allowing multiple opportunities to correct faults. "Engineer for an unreliable world" sounds like a sensible motto to me.
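What that motto might look like in code: a minimal sketch (my own illustration, not anything presented at the meeting) in which a caller assumes any single path can fail and absorbs the fault itself rather than surfacing it:

```python
def unreliable_send(path):
    """Stand-in for a real transport call. To keep the sketch deterministic,
    only the hypothetical 'backup' path succeeds."""
    if path != "backup":
        raise ConnectionError(f"{path} unreachable")
    return f"ok via {path}"

def fetch_with_fallback(paths, attempts_per_path=2):
    """Assume every path is unreliable; retry each, then fall through to the
    next, so the fault is corrected here instead of reaching the user."""
    for path in paths:
        for _ in range(attempts_per_path):
            try:
                return unreliable_send(path)
            except ConnectionError:
                continue  # absorb the fault: retry, then move on
    raise ConnectionError("all paths exhausted")

print(fetch_with_fallback(["primary", "backup"]))  # prints: ok via backup
```

The point is the shape, not the details: each layer gets its own chance to correct a fault before handing it upward.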
Ah, yes: sense. There's another crunch. It already has a T-shirt.
Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.