net.wars: January 2020 Archives


January 31, 2020

Dirty networks

We rarely talk about it this way, but sometimes what makes a computer secure is a matter of perspective. Two weeks ago, at the CPDP-adjacent Privacy Camp, a group of Russians explained seriously why they trust Gmail, WhatsApp, and Facebook.

"If you remove these tools, journalism in Crimea would not exist," said one. Google's transparency reports show that the company has never given information on demand to the Russian authorities.

That is, they trust Google not because they *trust* Google but because using it probably won't land them in prison, whereas their indigenous providers are stoolies in real time. Similarly, journalists operating in high-risk locations may prefer WhatsApp, despite its Facebookiness, because they can't risk losing their new source by demanding a shift to unfamiliar technology, and the list of shared friends helps establish the journalist's trustworthiness. The decision is based on a complex set of context and consequences, not on a narrow technological assessment.

So, now. Imagine you lead a moderately-sized island country that is about to abandon its old partnerships, and you must choose whether to allow your telcos to buy equipment from a large Chinese company, which may or may not be under government orders to build in surveillance-readiness. Do you trust the Chinese company? If not, who *do* you trust?

In the European Parliament, during Wednesday's pro forma debate on the UK's Withdrawal Agreement and emotional farewell, Guy Verhofstadt, the parliament's Brexit coordinator, asked: "What is in fact threatening Britain's sovereignty most - the rules of our single market or the fact that tomorrow they may be planting Chinese 5G masts in the British islands?"

He asked because back in London Boris Johnson was announcing he would allow Huawei to supply "non-core" equipment for up to 35% (measured how?) of the UK's upcoming 5G mobile network. The US, in the form of a Newt Gingrich, seemed miffed. Yet last year Brian Fung noted at the Washington Post ($) the absence of US companies among the only available alternatives: ZTE (China), Nokia (Finland), and Ericsson (Sweden). The failure of companies like Motorola and Lucent to understand, circa 2000, the importance of common standards to wireless communications - a failure Europe did not share - cost them their early lead. Besides, Fung adds, people don't trust the US like they used to, given Snowden's 2013 revelations and the unpredictable behavior of the US's current president. So, the question may be less "Do you want spies with that?" and more, "Which spy would you prefer?"

A key factor is cost. Huawei is both cheaper *and* the technology leader, partly, Alex Hern writes at the Guardian, because its government grants it subsidies that are illegal elsewhere. Hern calls the whole discussion largely irrelevant, because *actually* Huawei equipment is already embedded. Telcos - or rather, we - would have to pay to rip it out. A day later, BT proves he's right: it forecasts bringing the Huawei percentage down will cost £500 million.

All of this discussion has been geopolitical: Johnson's fellow Conservatives are unhappy; US secretary of state Mike Pompeo doesn't want American secrets traveling through Huawei equipment.

Technical expertise takes a different view. Bruce Schneier, for example, says: yes, Huawei is not trusted, and yes, the risks are real, but barring Huawei doesn't make the network secure. The US doesn't even want a secure network, if that means a network it can't spy into.

In a letter to The Times, Martyn Thomas, a fellow at the Royal Academy of Engineering, argues that no matter who supplies it the network will be "too complex to be made fully secure against an expert cyberattack". 5G's software-defined networks will require vastly more cells and, crucially, vastly more heterogeneity and complexity. You have to presume a "dirty network", Sue Gordon, then (US) Principal Deputy Director of National Intelligence, warned in April 2019. Even if Huawei is barred from Britain, the EU, and the US, it will still have a huge presence in Africa, which it's been building for years, and probably Latin America.

There was a time when a computer was a wholly-owned system built by a single company that also wrote and maintained its software; if it was networked it used that company's proprietary protocols. Then came PCs, and third-party software, and the famously insecure Internet. 5G, however, goes deeper: a network in which we trust nothing and no one, not just software but chips, wires, supply chains, and antennas, which Thomas explains "will have to contain a lot of computer components and software to process the signals and interact with other parts of the network". It's impossible to control every piece of all that; trying would send us into frequent panics over this or that component or supplier (see for example Super Micro). The discussion Thomas would like us to have is, "How secure do we need the networks to be, and how do we intend to meet those needs, irrespective of who the suppliers are?"

In other words, the essential question is: how do you build trusted communications on an untrusted network? The Internet's last 25 years have taught us a key piece of the solution: encrypt, encrypt, encrypt. Johnson, perhaps unintentionally, has just made the case for spreading strong, uncrackable encryption as widely as possible. To which we can only say: it's about time.
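The point can be made concrete with a toy sketch. If the endpoints share a key, cryptographic authentication lets a receiver detect any tampering by the "dirty network" in between; real deployments use TLS, which adds confidentiality on top. This minimal Python illustration uses only the standard library's HMAC, and is an assumption-laden caricature, not a protocol:

```python
import hashlib
import hmac
import os

# Toy model: the two endpoints share KEY; everything between them is
# assumed hostile. An HMAC tag lets the receiver detect tampering.
KEY = os.urandom(32)

def seal(message: bytes) -> bytes:
    """Attach a 32-byte SHA-256 HMAC tag so transit tampering is detectable."""
    tag = hmac.new(KEY, message, hashlib.sha256).digest()
    return tag + message

def open_sealed(packet: bytes) -> bytes:
    """Verify the tag before trusting anything the network delivered."""
    tag, message = packet[:32], packet[32:]
    expected = hmac.new(KEY, message, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("message tampered with in transit")
    return message

packet = seal(b"meet at noon")
assert open_sealed(packet) == b"meet at noon"

# A dirty network flips one byte; the receiver notices.
tampered = packet[:-1] + bytes([packet[-1] ^ 1])
try:
    open_sealed(tampered)
except ValueError:
    pass  # tampering detected, message rejected
```

The design point is that trust moves from the network to the endpoints and their keys, which is exactly why untrusted infrastructure becomes survivable.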

Illustrations: The European Court of Justice, to mark the fact that on this day the UK exits the European Union.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 24, 2020

The inevitability narrative

"We could create a new blueprint," Woody Hartzog said in a rare moment of hope on Wednesday at this year's Computers, Privacy, and Data Protection in a panel on facial recognition. He went on to stress the need to move beyond the model that has governed privacy for the last two decades: get consent, roll out technology. Not necessarily in that order.

A few minutes earlier, he had said, "I think facial recognition is the most dangerous surveillance technology ever invented - so attractive to governments and industry to deploy in many ways and so ripe for abuse, and the mechanisms we have so weak to confront the harms it poses that the only way to mitigate the harms is to ban it."

This week, a leaked draft white paper revealed that the EU is considering, as one of five options, banning the use of facial recognition in public places. In general, the EU has been pouring money into AI research, largely in pursuit of economic opportunity: if the EU doesn't develop its own AI technologies, the argument goes, Europe will have to buy them from China or the United States. Who wants to be sandwiched between those two?

This level of investment is not available to most of the world's countries, as Julia Powles elsewhere pointed out with respect to AI more generally. Her country, Australia, is destined to be a "technology importer and data exporter", no matter how the three-pronged race comes out. "The promises of AI are unproven, and the risks are clear," she said. "The real reason we need to regulate is that it imposes a dramatic acceleration on the conditions of the unrestrained digital extractive economy." In other words, the companies behind AI will have even greater capacity to grind us up as dinosaur bones and use the results to manipulate us to their advantage.

At this event last year there was a general recognition that, less than a year after the passage of the general data protection regulation, it wasn't going to be an adequate approach to the growth of tracking through the physical world. This year, the conference is awash in AI to a truly extraordinary extent. Literally dozens of sessions: if it's not AI in policing it's AI and data protection, ethics, human rights, algorithmic fairness, or embedded in autonomous vehicles. Hartzog's panel was one of at least half a dozen on facial recognition, which is AI plus biometrics plus CCTV and other cameras. As interesting are the omissions: in two full days I have yet to hear anything about smart speakers or Amazon Ring doorbells, both proliferating wildly in the soon-to-be non-EU UK.

These technologies are landing on us shockingly fast. This time last year, automated facial recognition wasn't even on the map. It blew up just last May, when Big Brother Watch pushed the issue into everyone's consciousness by launching a campaign to stop the police from using what is still a highly flawed technology. But we can't lean too heavily on the ridiculous - 98%! - inaccuracy of its real-world trials, because as it becomes more accurate it will become even more dangerous to anyone on the wrong list. Here, it has become clear that it's being rapidly followed by "emotional recognition", a build-out of technology pioneered 25 years ago at MIT by Rosalind Picard under the rubric "affective computing".

"Is it enough to ban facial recognition?" a questioner asked. "Or should we ban cameras?"

Probably everyone here is carrying at least two cameras (pause to count: two on phone, one on laptop).

Everyone here is also conscious that last week, Kashmir Hill broke the story that the previously unknown, Peter Thiel-backed company Clearview AI had scraped 3 billion facial images off social media and other sites to create a database that enables its law enforcement customers to grab a single photo and get back matches from dozens of online sites. As Hill reminds us, companies like Facebook have been able to do this since 2011, though at the time - just eight and a half years ago! - this was technology that Google (though not Facebook) thought was "too creepy" to implement.

In the 2013 paper A Theory of Creepy, Omer Tene and Jules Polonetsky cite three kinds of "creepy" that apply to new technologies or new uses: it breaks traditional social norms; it shows the disconnect between the norms of engineers and those of the rest of society; or applicable norms don't exist yet. AI often breaks all three. Automated, pervasive facial recognition certainly does.

And so it seems legitimate to ask: do we really want to live in a world where it's impossible to go anywhere without being followed? "We didn't ban dangerous drugs or cars," has been a recurrent rebuttal. No, but as various speakers reminded, we did constrain them to become much safer. (And we did ban some drugs.) We should resist, Hartzog suggested, "the inevitability narrative".

Instead, the reality is that, as Lokke Moerel put it, "We have this kind of AI because this is the technology and expertise we have."

One panel pointed us at the AI universal guidelines, and encouraged us to sign. We need that - and so much more.

Illustrations: Orwell's house at 22 Portobello Road, London, complete with CCTV camera.


January 17, 2020

Software inside

In 2011, Netscape creator-turned-venture capitalist Marc Andreessen argued that software is eating the world. Andreessen focused on a rather narrow meaning of "world" - financial value. Amazon ate Borders' lunch; software fuels the success of Walmart, FedEx, airlines, and financial services. Like that.

There is, however, a more interesting sense in which software is eating the world, and that's its takeover of what we think of as "hardware". A friend tells me, for example, that part of the pleasure he gets from driving a Tesla is that its periodic software updates keep the car feeling new, so he never looks enviously at the features on later models. Still, these updates do at least sound like traditional software. The last update of 2019, for example, included improved driver visualization, a "Camp Mode" to make the car more comfortable to spend the night in, and other interface improvements. I assume something as ordinarily useful as map updates is too trivial to mention.

All of which means a car is now really a fancy interconnected series of dozens of computer networks whose output happens to be making a large, heavy object move on wheels. Yet I don't have trouble grasping the whole thing, not really. It's a control system.

Much more confounding was the time, in late 1993, when I visited Demon Internet, then a startup founded to offer Internet access to UK consumers. Like quite a few others, I was having trouble getting connected via Demon's adapted version of KA9Q, connection software written for packet radio. This was my first puzzlement: how could software for "packet radio" (whatever that was) do anything on a computer? That was nothing to my confusion when Demon staffer Mark Turner explained to me that the computer could parse the stream of information coming into it and direct the results to different applications simultaneously. At that point, I'd only ever used online services where you could only do one thing at a time, just as you could only make one phone call at a time. I remember finding the idea of one data stream servicing many applications at once really difficult to grasp. How did it know what went where?

That is software, and it's what happened in the shift from legacy phone networks' circuit switching to Internet-style packet switching.
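The answer to "how did it know what went where?" is demultiplexing: each packet carries a destination port number, and the stack hands the payload to whichever application is listening on that port. Real stacks do this in the kernel; this toy Python sketch (all names illustrative) shows only the dispatch idea:

```python
from collections import defaultdict

# Toy demultiplexer: one incoming stream of (port, payload) packets is
# dispatched to per-application inboxes by destination port - the trick
# that let a single modem connection serve many programs at once.
handlers = {}                 # port number -> name of listening app
inboxes = defaultdict(list)   # app name -> payloads delivered to it

def listen(port, app_name):
    """An application registers interest in one port."""
    handlers[port] = app_name

def deliver(port, payload):
    """The 'network stack' routes each packet by its port number."""
    app = handlers.get(port)
    if app is not None:
        inboxes[app].append(payload)  # unknown ports are dropped

listen(80, "web browser")
listen(110, "mail client")

# One interleaved stream, many simultaneous conversations:
for port, payload in [(80, b"<html>"), (110, b"RETR 1"), (80, b"</html>")]:
    deliver(port, payload)

assert inboxes["web browser"] == [b"<html>", b"</html>"]
assert inboxes["mail client"] == [b"RETR 1"]
```

Swap "port number" for "phone line" and the 1993 confusion dissolves: the line is shared, but the addressing travels inside each packet.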

I had a similar moment of surreality when first told about software-defined radio. A radio was a *thing*. How could it be software? By then I knew about spread spectrum, invented by the actress Hedy Lamarr and pianist George Antheil to protect wartime military conversations from eavesdropping, so it shouldn't have seemed as weird as it did.

And so to this week, when, at the first PhD Cyber Security Winter School, I discovered programmable - that is, software-defined - networks. Of course networks are controlled by software already, but at the physical layer it's cables, switches, and routers. If one of those specialized devices needs to be reconfigured you have to do it locally, device by device. Now, the idea is more generic hardware that can be reprogrammed on the fly, enabling remote - and more centralized and larger-scale - control. Security people like the idea that a network can both spot and harden itself against malicious traffic much faster. I can't help being suspicious that this new world will help attackers, too, first by providing a central target to attack, and second because it will be vastly more complex. Authentication and encryption will be crucial in an environment where a malformed or malicious data packet doesn't just pose a threat to the end user who receives it but can reprogram the network. Helpfully, the NSA has thought about this in more depth and greater detail. They do see centralization as a risk, and recommend a series of measures for protecting the controller; they also highlight the problems increased complexity brings.
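The match-action model at the heart of these programmable switches can be caricatured in a few lines: a central controller pushes (match, action) rules down to generic hardware, which applies the first matching rule to each packet. This Python sketch is the idea only - not P4 syntax, and all names are invented for illustration:

```python
# Caricature of a software-defined switch: an ordered table of
# (match, action) rules, installed remotely, replaces fixed-function
# hardware that could only be reconfigured device by device.
flow_table = []  # ordered list of (match_fn, action) rules

def install_rule(match_fn, action):
    """The central controller calls this to reprogram the switch on the fly."""
    flow_table.append((match_fn, action))

def handle_packet(packet):
    """Apply the first matching rule; unknown traffic goes to the controller."""
    for match_fn, action in flow_table:
        if match_fn(packet):
            return action
    return "send_to_controller"

# The controller hardens the network against a malicious source at once:
install_rule(lambda p: p["src"] == "10.0.0.66", "drop")
install_rule(lambda p: p["dst_port"] == 80, "forward:port2")

assert handle_packet({"src": "10.0.0.66", "dst_port": 80}) == "drop"
assert handle_packet({"src": "10.0.0.5", "dst_port": 80}) == "forward:port2"
assert handle_packet({"src": "10.0.0.5", "dst_port": 22}) == "send_to_controller"
```

The same sketch shows the worry: whoever controls `install_rule` controls the network, which is why the controller is both the security win and the central target.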

As the workshop leader said, this is enough of a trend for Cisco and Intel to embrace it; six months ago, Intel paid $5 billion for Barefoot Networks, the creator of P4, the language I saw demonstrated for programming these things.

At this point I began wondering if this doesn't up-end the entire design philosophy of the Internet, which was to push all the intelligence out to the edges. The beginnings of this new paradigm, active networking, appeared around the early 2000s. The computer science literature - for example, Activating Networks (PDF), by Jonathan M. Smith, Kenneth L. Calvert, Sandra L. Murphy, Hilarie K. Orman, and Larry L. Peterson, and Active Networking: One View of the Past, Present, and Future (PDF), by Smith and Scott M. Nettles - plots out the problems of security and complexity in detail, and considers the Internet and interoperability issues. The Road to SDN: An Intellectual History of Programmable Networks, by Nick Feamster, Jennifer Rexford, and Ellen Zegura, recapitulates the history to date.

My real question, however, is one I suspect has received less consideration: will these software-defined networks make surveillance and censorship easier or harder? Will they have an effect on the accessibility of Internet freedoms? Are there design considerations we should know about? These seem like reasonable questions to ask as this future hurtles toward us.

Illustrations: Hedy Lamarr, in The Conspirators, 1944.


January 10, 2020

The forever bug

Y2K is back, and this time it's giggling at us.

For the past few years, there's been a growing drumbeat on social media and elsewhere to the effect that Y2K - "the year 2000 bug" - never happened. It was a nothingburger. It was hyped then, and anyone saying now it was a real thing is like, ok boomer.

Be careful what old averted messes you dismiss; they may come back to fuck with you.

Having lived through it, we can tell you the truth: Y2K *was* hyped. It was also a real thing that was wildly underestimated for years before it was taken as seriously as it needed to be. When it finally registered as a genuine and massive problem, millions of person-hours were spent remediating software, replacing or isolating systems that couldn't be fixed, and making contingency and management plans. Lots of things broke, but, because of all that work, nothing significant on a societal scale. Locally, though, anyone using a computer at the time likely has a personal Y2K example. In my own case, an instance of Quicken continued to function but stopped autofilling dates correctly. For years I entered dates manually before finally switching to GnuCash.

The story, parts of which Chris Stokel-Walker recounts at New Scientist, began in 1971, when Bob Bemer published a warning about the "Millennium Bug", having realized years earlier that the common practice of saving memory space by using two digits instead of four to indicate the year was storing up trouble. He was largely ignored, in part, it appeared, because no one really believed the software they were writing would still be in use decades later.

It was the mid-1990s before the industry began to take the problem seriously, and when they did the mainstream coverage broke open. In writing a 1997 Daily Telegraph article, I discovered that mechanical devices had problems, too.

We had both nay-sayers, who called Y2K a boondoggle whose sole purpose was to boost the computer industry's bottom line, and doommongers, who predicted everything from planes falling out of the sky to total societal collapse. As Damian Thompson told me for a 1998 Scientific American piece (paywalled), the Millennium Bug gave apocalyptic types a *mechanism* by which the crash would happen. In the Usenet newsgroup comp.software.year-2000, I found a projected timetable: bank systems would fail early, and by April 1999 the cities would start to burn... When I wrote that society would likely survive because most people wanted it to, some newsgroup members called me irresponsible, and emailed the editor demanding he "fire this dizzy broad". Reconvening ten years later, they apologized.

Also at the extreme end of the panic spectrum was Ed Yardeni, then chief economist at Deutsche Bank, who repeatedly predicted that Y2K would cause a worldwide recession; it took him until 2002 to admit his mistake, crediting the industry's hard work.

It was still a real problem, and with some workarounds and a lot of work most of the effects were contained, if not eliminated. Reporters spent New Year's Eve at empty airports, in case there was a crash. Air travel that night, for sure, *was* a nothingburger. In that limited sense, nothing happened.

Some of those fixes, however, were not so much fixes as workarounds. One of these finessed the rollover problem by creating a "window" and telling systems that two-digit years fell between 1920 and 2020, rather than 1900 and 2000. As the characters on How I Met Your Mother might say: "It's a problem for Future Ted and Future Marshall. Let's let those guys handle it."
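The windowing trick is tiny, which is why it was so tempting. A pivot year tells the code how to expand two digits, as in this sketch (the function name is mine, not from any particular remediated system):

```python
def expand_year(yy: int, pivot: int = 20) -> int:
    """Classic Y2K windowing workaround: two-digit years at or below
    the pivot are treated as 20xx, the rest as 19xx. With a pivot of
    20 the window covers 1921-2020 - and breaks the moment real
    dates pass its upper edge."""
    return 2000 + yy if yy <= pivot else 1900 + yy

assert expand_year(99) == 1999
assert expand_year(5) == 2005
assert expand_year(20) == 2020
# The catch, now due: a date written in 2021 comes back a century early.
assert expand_year(21) == 1921
```

One line of arithmetic in place of a four-digit field rewrite - with the bill postdated twenty years.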

So, it's 2020, we've hit the upper end of the window, the bug is back, and Future Ted and Future Marshall are complaining about Past Ted and Past Marshall, who should have planned better. But even if they had...the underlying issue is temporary thinking that leads people to still - still, after all these decades - believe that today's software will be long gone 20 years from now and therefore they need only worry about the short term of making it work today.

Instead, the reality is, as we wrote in 2014, that software is forever.

That said, the reality is also that Y2K is forever, because if the software couldn't be rewritten to take a four-digit year field in 1999 it probably can't be today, either. Everyone stresses the need to patch and update software, but a lot - for an increasing value of "a lot" as Internet of Things devices come on the market with no real idea of how long they will be in service - of things can't be updated for one reason or another. Maybe the system can't be allowed to go down; maybe it's a bespoke but crucial system whose maintainers are long gone; maybe the software is just too fragile and poorly documented to change; maybe old versions propagated all over the place and are laboring on in places where they've simply been forgotten. All of that is also a reason why it's not entirely fair for Stokel-Walker to call the old work "a lazy fix". In a fair percentage of cases, creating and moving the window may have been the only option.

But fret ye not. We will get through this. And then we can look forward to 2038, when 32-bit Unix clocks run out. Future Ted and Future Marshall will handle it.
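The 2038 deadline is Y2K in binary: systems that store Unix time in a signed 32-bit integer can count at most 2^31 - 1 seconds from 1 January 1970 before wrapping negative. Python can show exactly when that happens:

```python
from datetime import datetime, timezone

# A signed 32-bit time_t tops out at 2**31 - 1 seconds after the
# Unix epoch (1 January 1970 UTC); one second later it wraps negative.
MAX_32BIT_TIME = 2**31 - 1

doomsday = datetime.fromtimestamp(MAX_32BIT_TIME, tz=timezone.utc)
assert doomsday.year == 2038
print(doomsday)  # 2038-01-19 03:14:07+00:00
```

As with Y2K, 64-bit systems already store time in a wider field; the problem lives on in the embedded devices nobody can update.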

Illustrations: Millennium Bug manifested at a French school (via Wikimedia).


January 3, 2020

Chronocentric circles

We wrapped up 2018 with a friend's observation that there was no excitement around technology any more; we conclude the Year of the Bedbug with the regularly heard complaint that the Internet isn't *fun* any more. The writer of this last piece, Brian Koerber, is at least a generation later in arriving online than I was, and he's not alone: where once the Internet was a venue for exploring the weird and unexpected and imagining a hopeful future, increasingly it's a hamster wheel of the same few, mostly commercial, sites and services, which may be entertaining but do not produce any sense of wonder in their quest to exploit us all. Phillip Maciak expands on the trend by mourning the death of innovative web publishing, while Abid Omar calls today's web an unusable, user-hostile wasteland. In September, Andres Guadamuz wondered if boredom would kill the Internet; we figure it's a tossup between that and the outrageous energy consumption.

The feeling of sameness is exacerbated by the fact that so many of this year's stories have been mutatis mutandis variations on those of previous years. Smut-detecting automated bureaucrats continue to blame perfectly good names for their own deficiencies, 25 years after AOL barred users from living in Scunthorpe; the latest is Lyft. Less amusingly, for the ninth year in a row, Freedom House finds that global Internet freedom has declined; of the 65 countries it surveys, only 16 have seen improvement, and that only marginal.

Worse, the year closed with the announcement of perhaps the most evil invention of recent years, the toilet designed to deter lingering. "Most evil", because the meanness is intentional, rather than the result of a gradual drift away from founding values.

Meanwhile, the EU passed a widely disliked copyright-tightening bill. The struggle to change it from threat to opportunity burned out yet another copyright warrior: now-former MEP Julia Reda. It appears increasingly impossible to convince national governments that there is no such thing as a hole - in a wall or in encryption software - that only "good guys" can use (and still less that "good guys" is entirely in the eyes of the beholder). After four years of effort to invent mechanisms for it, age verification may have died...or it may come back as a "duty of care" in whatever legislation builds upon the Online Harms white paper - or in the EU's Audiovisual Media Services Directive. And, nearly three years on, US sites are still ghosting EU residents for fear of GDPR and its potentially massive fines. With the January 1 entry into force of the California Consumer Privacy Act, the US west coast seems set to join us. Hot times for corporate lawyers!

The most noticeable end-of-year trend, however, has been the return of the decade as a significant timeframe and the future as ahead of us. In 2010, the beginning of a decade in which people went from boasting about their smartphones to boasting about how little they used them, no one mentioned the end-of-decade, perhaps because we were all still too startled to be living in the third millennium and the 21st century, known as "the future" for the first decades of my life. Alternatively, perhaps, as a friend suggests, it's because the last couple of years have been so exhausting and depressing that people are clinging to anything that suggests we might now be in for something new.

At Vanity Fair, Nick Bilton has a particularly disturbing view of 2030, and he doesn't even consider climate change, water supplies, the rise of commercial podcasts or cybersecurity.

I would highlight instead a couple of small green shoots of optimism. The profligate wastage exposed by the WeWork IPO appears to be sparking a very real change in both the Silicon Valley venture capital funding ethos (good) and the cost basis of millennial lifestyles (more difficult), or "counterfeit capitalism", as Matt Stoller calls it. Even Wired is suggesting that the formerly godlike technology company founder is endangered. Couple that with 2019's dramatic and continuing rise in employee activism within technology companies and increasing regulatory pressure, particularly on Uber and Airbnb, and there might be some cause to hope for change. Even though company founders like Mark Zuckerberg and Sergey Brin and Larry Page have made themselves untouchable by controlling the majority of voting shares in their companies, they won't *have* companies if they can't retain the talent. The death of the droit de genius ethos that the Jeffrey Epstein case exposed can't come soon enough.

I also note the sudden rebirth of personal and organizational online forums, based on technology such as Mastodon and Soapbox. Some want to focus on specific topics and restrict members to trusted colleagues; some want a lifeboat (paywall) in case of a Twitter ban; WT Social wants to change the game away from data exploitation. Whether any of these will have staying power is an open question; a decade ago, when Diaspora tried to decentralize social media, it failed to gain traction. This time round, with greater consciousness of the true price of pay-with-data "free" services, these return-to-local efforts may have better luck.

Happy new year.

Illustrations: Roborovski hamster (via Wikimedia).
