net.wars: July 2020 Archives


July 31, 2020

Driving while invisible

The point is not whether it's ludicrous but whether it breaks the law.

Until Hannah Smethurst began speaking at this week's gikii event - the year's chance to mix law, digital rights, and popular culture - I had not realized just how many invisible vehicles there are in our books and films. A brief trawl turns up: Wonder Woman's invisible jet, Harry Potter's invisibility cloak and other invisibility devices, and James Bond's invisible Aston Martin. Do not trouble me with your petty complaints about physics. This is about the law.

Every gikii (see here for links to writeups of previous years) ranges from deeply serious-with-a-twist to silly-with-an-insightful undercurrent. This year's papers included the need for a fundamental rethink of how we regulate power (Michael Veale), the English* "bubble" law that effectively granted flatmates permanent veto power over each other's choice of sex partner (gikii founder Lilian Edwards), and the mistaken-identity frustrations of having early on used your very common name as your Gmail address (Jat Singh).

In this context, Smethurst's paper is therefore business as usual. As she explained, there is nothing in highway legislation that requires your car to be visible. The same is not true of number plates, which the law says must be visible at all times. But can you enforce it? If you can't see the car, how do you know you can't see the number plate? More uncertain is the highway code's requirement to indicate braking and turns when people don't know you're there; Smethurst suggested that a good lawyer could argue successfully that turning on the lights unexpectedly would dazzle someone. No, she said, the main difficulty is the dangerous driving laws. Well, that and the difficulty of getting insurance to cover the many accidents when people - pedestrians, cyclists, other cars - collide with it.

This raised the possibility of "invisibility lanes", an idea that seems like it should be the premise for a sequel to Death Race 2000. My overall conclusion: invisibility is like online anonymity. People want it for themselves, but not for other people - at least, not for other people they don't trust to behave well. If you want an invisible car so you can drive 100 miles an hour with impunity, I suggest a) you probably aren't safe to have one, and b) try driving across Kansas.

We then segued into the really important question: if you're riding an invisible bike, are *you* visible? (General consensus: yes, because you're not enclosed.)

On a more serious note, people have a tendency to laugh nervously when you mention that numerous jurisdictions are beginning to analyze sewage for traces of coronavirus. Actually, wastewater epidemiology, as this particular public health measure is known, is not a new surveillance idea born of just this pandemic, though it does not go all the way back to John Snow and the Broadwick Street pump. Instead, Snow plotted known cases on a map, and spotted the pump as the source of contagion when they formed a circle around it. Still, epidemiology did start with sewage.

In the decades since wastewater epidemiology was developed, some of its uses have definitely had an adversarial edge, such as establishing the level of abuse of various drugs and doping agents or the prevalence of particular diseases in a given area. The goal, however, is not supposed to be trapping individuals; instead it's to provide population-wide data. Because samples are processed at the treatment plant along with everyone else's, there's a reasonable case to be made that the system is privacy-preserving; even though you could analyze samples for an individual's DNA and exact microbiome, matching any particular sample to its owner seems unlikely.

However, Reuben Binns argued, that doesn't mean there are no privacy implications. Like anything segmented by postcode, the catchment areas defined for such systems are likely to vary substantially in the number of households and individuals they contain, and a lot may depend on where you put the collection points. This isn't so much an issue for the present purpose, which is providing an early-warning system for coronavirus outbreaks, but will be later, when the system is in place and people want to use it for other things. A small neighborhood with a noticeable concentration of illegal drugs - or a small section of an Olympic athletes village with traces of doping agents above a particular threshold - could easily find itself a frequent target of more invasive searches and investigations. Also, unless you have your own septic field, there is no opt-out.

Binns added this unpleasant prospect: even if this system is well-intentioned and mostly harmless, it becomes part of a larger "surveillant assemblage" whose purpose is fundamentally discriminatory: "to create distinctions and hierarchies in populations to treat them differently," as he put it. The direction we're going, eventually every part of our infrastructure will be a data source, for our own good.

This was also the point of Veale's paper: we need to stop focusing primarily on protecting privacy by regulating the use and collection of data, and start paying attention to the infrastructure. A large platform can throw away the data and still have the models and insights that data created - and the exceptional computational power to make use of it. All that infrastructure - there's your invisible car.

Illustrations: James Bond's invisible car (from Live and Let Die).

*Correction: I had incorrectly identified this law as Scottish.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 24, 2020

The invisible Internet

The final session of this week's US Internet Governance Forum asked this question: what do you think Internet governance will look like five, ten, and 25 years from now?

Danny Weitzner, who was assigned 25 years, started out by looking back 25 years to 1995, and noted that by and large we have the same networks; he therefore thinks we will have largely the same networks in 2045. He might have - but didn't - point out how many of the US-IGF topics were the same ones we were discussing in 1995: encryption and law enforcement access, control of online content, privacy, and cybersecurity. The encryption panel was particularly nostalgic; it actually featured three of the same speakers I recall from the mid-1990s on the same topic. The online content one owed its entertainment value to the presence of one of the original authors of Section 230, the liability shield written into the 1996 Communications Decency Act. There were newcomers: 5G; AI, machine learning, and big data; and some things to do with the impact of the pandemic.

As Laura DeNardis then said, looking back to the past helps when thinking about the future, if only to understand how much change can happen in that time. Through that lens, although the Internet has changed enormously in 25 years in many ways the *debates* and *issues* have barely altered - they're just reframed. But here's your historical reality: 25 years ago we were reading Usenet newsgroups to find interesting websites and deploring the sight of the first online ads.

This is a game anyone can play, and so we will. We will try to avoid seeing the November US presidential election as a hinge.

The big change of the last ten years is the transformation of every Internet debate into a debate about a few huge companies, none of which were players in the mid-1990s. The rise of the mobile Internet was predicted by 2000, but it wasn't until 2007 and the arrival of the iPhone that it became a mass-market reality and began the merger of the physical and online worlds, followed by machine learning, and AI as the next big wave. Now, as DeNardis correctly said, we're beginning to see the Internet moving into the biological world. She predicted, therefore, that the Internet will be both very small (the biological cellular level) and very large (Vint Cerf's galactic Internet). "The Internet will have to move out of communications issues and into environmental policy, consumer safety, and health," she said. Meanwhile, Danny Weitzner suggested that data scientists will become the new priests - almost certainly true, because if we do nothing to rein in technology they will be the people whose algorithms determine how decisions are made.

But will we really take no control? The present trend is toward three computing power blocs: China, the United States, and the EU. Chinese companies are beginning to move into the West, either by operating there directly (such as TikTok, which US president Donald Trump has mooted banning) or by using their financial clout to push Westerners to conform to their values. The EU is only 28 years old (dating from the Maastricht Treaty), but in that time has emerged as the only power willing to punish US companies by making them pay taxes, respect privacy law, or accept limits on acquisitions. Will it be as willing to take on Chinese companies if they become equally dominant in the West and equally willing to violate the fundamental rights enshrined in data protection law?

In his 1998 book, The Invisible Computer, usability pioneer Donald Norman predicted that computers would become invisible, embedded inside all sorts of devices, like electric motors before them. Yesterday, Brenda Leong made a similar prediction by asking the AI session how we will think about robots when they've become indistinguishable. Her analogy was the Internet itself, which in the 1990s was something you had to "go to" by dialing up and waiting for modems to connect, but somewhere around 2010 began to simply be wherever you go, there you are.

So my prediction for 25 years from now is that there will effectively be no such thing as today's "Internet governance"; it will have disappeared into every other type of governance, though engineering and standards bodies will still work to ensure that the technical underpinnings remain robust and reliable. I'd like to think that increasingly technical standards will be dominated by climate change, so that emerging technologies that, like cryptocurrencies, use more energy than entire countries, will be sent back to the drawing board because someone will do the math at the design stage.

Today's debates will merge with their offline counterparts, just as data protection law no longer differentiates between paper-based and electronic data. As the biological implants DeNardis mentioned - and Andrea Matwyshyn has been writing about since 2016 - come into widespread use, they will be regulated as health care. We will regulate Internet *companies*, but regulating Facebook (in Western countries) is not governing the Internet.

Many conflicts will persist. Matwyshyn's Internet of Bodies is the perfect example, as copyright laws written for the entertainment industry are invoked by medical device manufacturers. A final prediction, therefore: net.wars is unlikely to run out of subjects in my lifetime.

Illustrations: A piece of the future as seen at the 1964 New York World's Fair (by Doug Coldwell).


July 17, 2020

Flying blind

Quick update to last week: the European Court of Justice has ruled in favor of Max Schrems a second time and struck down Privacy Shield, the legal framework that allowed data transfers from the EU to the US (and other third countries); businesses can still use Standard Contractual Clauses, subject to some conditions. TL;DR: Sucks even more to be the UK, caught in the middle between the EU and US demands regarding data flows. On to this week...

This week's Twitter hack is scary. Not, obviously, because it was a hack; by this time we ought to be too used to systems being penetrated by attackers to panic. We know technology is insecure. That's not news.

The big fear should be the unused potential.

Twitter's influence has always been disproportionate to its size. By Big Social Media standards, Twitter is small - a mere snip at 330 million users, barely bigger than Pinterest. TikTok has 800 million, Instagram has 1 billion, YouTube 2 billion, and Facebook 2.5 billion. But Twitter is addictively home to academics, politicians, and entertainers - and journalists, who monitor Twitter constantly for developments to report on. A lot of people feel unable to mention Twitter these days without stressing how much of a sinkhole they think it is (the equivalent of, in decades past, boasting how little TV you watched), but for public information in the West Twitter is a nerve center. We talk a lot about how Facebook got Trump elected, but it was Twitter that got him those acres of free TV and print coverage.

I missed most of the outage. According to Vice, on Wednesday similarly-worded tweets directing followers to send money in the form of bitcoin began appearing in the feeds coming from the high-profile, high-follower accounts belonging to Joe Biden, Elon Musk, Uber, Apple, Bill Gates, and others. Twitter had to shut down a fair bit of the service for a while and block verified users - high-profile public figures that Twitter deems important enough to make sure they're not fakes - from posting. The tweets have been removed, and some people who - presumably trying to follow standard practice in a data breach - tried to change their passwords got locked out - and some people must have sent money, since Vice reported the Bitcoin wallet in question had collected $100,000. But overall not much harm was done.

This time.

Most people, when they think about their social media account or email being hacked, think first of the risk that their messages will be read. This is always a risk, and it's a reason not to post your most sensitive secrets to technology and services you don't control. But the even bigger problem many people overlook is exactly what the attackers did here: spoofed messages that fool friends and contacts - in this case, the wider public - into thinking they're genuine. This is not a new problem; hackers have sought to take advantage of trust relationships to mount attacks ever since Kevin Mitnick dubbed the practice "social engineering" circa 1990.

In his detailed preliminary study of the attack, Brian Krebs suggests the attack likely came from people who've "typically specialized in hijacking social media accounts via SIM swapping". Whoever did it and whatever route they took, it seems clear they gained access to Twitter's admin tools, which enabled them to change the email address associated with accounts and either turn off or capture the two-factor authentication that might alert the actual owners. (And if, like many people, you operate Twitter, email, and 2FA on your phone, you actually don't *have* two factors, you have one single point of failure - your phone. Do not do this if you can avoid it.)

In the process of trying to manage the breach, Eric Geller reports at Politico, Twitter silenced accounts belonging to numerous politicians including US president Donald Trump and the US National Weather Service tornado alerts, among many others that routinely post public information, in some cases for more than 24 hours. You can argue that some of these aren't much of a loss, but the underlying problem is a critical one, in that organizations and individuals of all stripes use Twitter as an official outlet for public information. Forget money: deployed with greater subtlety at the right time, such an attack could change the outcome of elections by announcing false information about polling places (Geller's suggestion), or kill people simply by suppressing critical public safety warnings.

What governments and others don't appear to have realized is that in relying on Twitter as a conduit to the public they are effectively outsourcing their security to it without being in a position to audit or set standards beyond those that apply to any public company. Twitter, on the other hand, should have had more sense: if it created special security arrangements for Trump's account, as the New York Times says it did, why didn't it occur to the company to come up with a workable system for all its accounts? How could it not have noticed the need? The recurring election problems around the world weren't enough of a clue?

Compared to what the attackers *could* have wanted, stealing some money is trivial. Twitter, like others before it, will have to rethink its security to match its impact.



July 10, 2020

Trading digital rights

Until this week I hadn't fully appreciated the number of ways Brexiting UK is trapped between the conflicting demands of major international powers of the size it imagines itself still to be. On the question of whether to allow Huawei to participate in building the UK's 5G network, the UK is caught between the US and China. On conditions of digital trade - especially data protection - the UK is trapped between the US and the EU with Northern Ireland most likely to feel the effects. This was spelled out on Tuesday in a panel on digital trade and trade agreements convened by the Open Rights Group.

ORG has been tracking the US-UK trade negotiations and their effect on the UK's continued data protection adequacy under the General Data Protection Regulation. As discussed here before, the basic problem with respect to privacy is that outside the state of California, the US has only sector-specific (mainly health, credit scoring, and video rentals) privacy laws, while the EU regards privacy as a fundamental human right, and for 25 years data protection has been an essential part of implementing that right.

In 2018 when the General Data Protection Regulation came into force, it automatically became part of British law. On exiting the EU at the end of January, the UK replaced it with equivalent national legislation. Four months ago, Boris Johnson said the UK intends to develop its own policies. This is risky; according to Oliver Patel and Nathan Lea at UCL, 75% of the UK's data flows are with the EU (PDF). Deviation from GDPR will mean the UK will need the EU to issue an adequacy ruling that the UK's data protection framework is compatible. The UK's data retention and surveillance policies may make obtaining that adequacy decision difficult; as Anna Fielder pointed out in Tuesday's discussion, this didn't arise before because national security measures are the prerogative of EU member states. The alternatives - standard contractual clauses and binding corporate rules - are more expensive to operate, are limited to the organization that uses them, and are being challenged in the European Court of Justice.

So the UK faces a quandary: does it remain compatible with the EU, or choose the dangerous path of deviation in order to please its new best friend, the US? The US, says Public Citizen's Burcu Kilic, wants unimpeded data flows and prohibitions on requirements for data localization and disclosure of source code and algorithms (as proposals for regulating AI might mandate).

It is easy to see these issues purely in terms of national alliances. The bigger issue for Kilic - and for others such as Transatlantic Consumer Dialogue - is the inclusion of these issues in trade agreements at all, a problem we've seen before with intellectual property provisions. Even when the negotiations aren't secret, which they generally are, international agreements are relatively inflexible instruments, changeable only via the kinds of international processes that created them. The result is to severely curtail the ability of national governments and legislatures to make changes - and the ability of civil society to participate. In the past, most notably with respect to intellectual property rights, corporate interests' habit of shopping their desired policies around from country to country until one bit and then using that leverage to push the others to "harmonize" has been called "policy laundering". This is a new and updated version, in which you bypass all that pesky, time-consuming democracy nonsense. Getting your desired policies into a trade agreement gets you two - or more - countries for the price of one.

In the discussion, Javier Ruiz called it "forum shifting" and noted that the latest example is intermediary liability, which is included in the US-Mexico-Canada agreement that replaced NAFTA. This is happening just as countries - including the US - are responding to longstanding problems of abuse on online platforms by considering how to regulate them: in the US, the debate is whether and how to amend S230 of the Communications Decency Act, which offers a shield against intermediary liability; in the UK, it's the online harms bill and the age-appropriate design code.

Every country matters in this game. Kilic noted that the US is also in the process of negotiating a trade deal with Kenya that will also include digital trade and intellectual property - small in and of itself, but potentially the model for other African deals - and for whatever deal Kenya eventually makes with the UK.

Kilic traces the current plans to the Trans-Pacific Partnership, which included the US during the Obama administration and which attracted public anger over provisions for investor-state dispute settlement. On assuming the presidency, Trump withdrew, leaving the other countries to recreate it as the Comprehensive and Progressive Agreement for Trans-Pacific Partnership, which was formally signed in March 2018. There has been some discussion of the idea that a newly independent Britain could join it, but it's complicated. What the US wanted in TPP, Kilic said, offers a clear guide to what it wants in trade agreements with the UK and everywhere else - and the more countries enter into these agreements, the harder it becomes to protect digital rights. "In trade world, trade always comes first."

Illustrations: Medieval trade routes (from The Story of Mankind, 1921).


July 3, 2020

The transparent society

I realize I cheated. The question was complex: assume a pre-defined character ("Rural Rain") and decide how she would have handled an ethical dilemma involving a portfolio and a bank that appeared to be profiting from shady Middle East arms deals. I said she'd do nothing for now and do more research.

In other words, *I* didn't want to decide. I plead that I've never LARPed before, and my character could be a procrastinator, but that, too, is a cheat. Any fiction writer has to make decisions like these all the time. If you want to write that novel, this is practice. And I failed.

The LARP in this case was a fictional focus group organized by Ruth Catlow as part of real UCL research studying how attitudes to data transparency and consumer ethics are shifting. Recursion 'r' us. The UCL project, Glass Houses, has already produced a number of papers on subjects like banks and cryptocurrencies, and privacy, transparency, and the blockchain, which is often mentioned as a method for ensuring privacy and transparency.

The thought process that led to our fictional focus group began with Sarah Meiklejohn, who specializes in cryptocurrencies and observed that even though Zcash supports anonymity, most users don't take advantage of it (PDF), despite there being no particular social pressure to deter them. Lacking the ability to talk to Zcash users themselves, the researchers developed this exercise to explore why and how people care about transparency and how they might think about changing their behavior based on life experiences or arguments presented to them.

So: the fictitious company True Insight, founded 2013 to use data-based emerging technology and design methodologies to find novel solutions, presented us each with a dilemma involving either finance or food and asked us to make a decision. In breakout groups (by topic), we discussed those decisions. Then we were asked to imagine our lives in 2030, taking into account the consequences of those decisions.

My character's dilemma was whether to move her savings account, which was held by a new online bank, chosen for its flexibility and competitive interest rate. Unfortunately, she had now discovered that 90% of the bank's investments were linked with major arms and military contractors operating in the Middle East and the Gulf. Should she move her account? This is where I felt someone who had just lost her earning power - my character was described as a newly retired care home worker who had finished secondary school - would be slow and cautious. What are her alternatives?

I have to applaud the creativity of the others in the group. Mr Fintech, who in 2020 was the bank's head and was trying to control the PR fallout, had abandoned his wife and children, moved to Thailand, and remarried. Now, he said, he had left the industry and was leading a group "hacking the blockchain". Another's assigned 2020 character was a fellow customer who decided with her partner to move their account for ethical reasons even though it meant denting their aspirations to have children and buy a house. By 2030, she said, the new radical transparency had exposed things her partner had hidden, and they'd split up. "I should have known when he wanted to name our child 'Elon'," she said sadly. Her job had disappeared, and with it her dreams. She was just trying to get by.

My character's description said she liked to read the news. I decided she would conveniently also like to read, now she had time, and would continue to educate herself, including reading books about banking, investment, the Middle East, and the arms trade. I thought she'd be more shocked at the bank's incompetence in failing to spot that it was investing in a front for an arms dealer than by its ethical failure. Her 2030, in my imagining, was not much different from her 2020: she'd remain in her small town apartment, carefully managing her resources. A cell of the radical transparency movement that another character mentioned arrived early in her town, and what began as a movement to force ethics on companies and their supply chains had been turned on individuals. In her case, the local group had discovered that a workman replacing her toilet had eaten lunch at a disapproved pub and blamed her for not having prevented this.

Mr Fintech suggested my character should VPN herself thoroughly. Instead, I thought she'd opt for physical-world interactions as much as possible because people behave differently when they actually know you. Interestingly, the now-single struggler reported a similar approach. She no longer had "the luxury" to embrace ethical choices, but her area's inability to depend on government was leading people there to use barter, trade off the books, and create local currencies.

In 1998, privacy advocates were outraged by David Brin's book The Transparent Society, which argued for radical openness (an idea whose time is apparently trying to come). At the Computers, Freedom, and Privacy conference, I remember him saying that privacy laws protected only the rich and powerful. I never believed that. This exercise showed me, to my surprise, that I apparently do believe that transparency laws could be abused the same way and for the same reason: we live in a society that is judgmental and unforgiving about small infractions. Like so much else, transparency is a tool, not a solution.

Illustrations: The Palm House at Kew (via Kew Gardens).
