
January 15, 2021

One thousand

In many ways, this 1,000th net.wars column is much like the first (the count is somewhat artificial, since net.wars began as a 1998 book, itself presaged by four years of news analysis pieces for the Daily Telegraph and followed by another book in 2001...and a lot of my other writing also fits under "computers, freedom, and privacy"; *however*). That November 2001 column was sparked by former Home Office minister Jack Straw's smug assertion that after 9/11 those of us who had defended access to strong cryptography must be feeling "naive". Here, just over a week after the Capitol invasion, three long-running issues are pertinent: censorship; security and the intelligence failures that enabled the attack; and the human rights at stake when demands for increased surveillance capabilities surface, as they surely will.

Censorship first. The US First Amendment applies only to US governments (a point that apparently requires repeating). Under US law, private companies can impose their own terms of service. Most people expected Twitter to suspend Donald Trump's account approximately one second after he ceased being a world leader. Trump's incitement of the invasion moved that up, and led Facebook (including its subsidiaries Instagram and WhatsApp), Snapchat, and, a week after the others, YouTube to follow suit. Less noticeably, a Salesforce-owned email marketing company stopped distributing emails for the Republican National Committee.

None of these social media sites is a "public square", especially outside the US, where they've often ignored local concerns. They are effectively shopping malls, and ejecting Trump is the same as throwing out any other troll. Trump's special status kept him active when many others were unjustly banned, but ultimately the most we can demand from these services is clearly stated rules, fairly and impartially enforced. This is a tough proposition, especially when you are dependent on social media-driven engagement.

Last week's insurrection was planned on numerous openly accessible sites, many of which are still live. After Twitter suspended 70,000 accounts linked to QAnon, numerous Republicans complaining they had lost followers seemed to be heading to Parler, a relatively new and rising alt-right Twitterish site backed by Rebekah Mercer, among others. Moving elsewhere is an obvious outcome of these bans, but in this crisis short-term disruption may be helpful. The cost will be longer-term adoption of channels that are harder to monitor.

By January 9, both Apple's App Store and Google's Play Store had removed Parler (Google's removal being the less comprehensive of the two, since Android allows side-loading). Amazon then kicked Parler off its host, Amazon Web Services. It is unknown when, if ever, the site will return.

Parler promptly sued Amazon, claiming an antitrust violation. AWS responded with a crisp brief that detailed examples of the kinds of comments it felt under no obligation to host and noted its previous warnings to the site.

Whether or not you think Parler should be squashed - even stipulating that the imminent inauguration requires an emergency response - three large Silicon Valley platforms have combined to destroy a social media company. This is, as Jillian C. York, Corynne McSherry, and Danny O'Brien write at EFF, a more serious issue. The "free speech stack", they write, requires the cooperation of numerous layers of service providers and other companies. Twitter's decision to ban one - or 70,000 - accounts has limited impact; companies lower down the stack can ban whole populations. If you were disturbed in 2010 when, shortly after the diplomatic cables release, PayPal effectively defunded WikiLeaks after Amazon booted it off its servers, then you should be disturbed now. These decisions are made at obscure layers of the Internet where we have little influence. As the Internet continues to centralize, we do not want just these few oligarchs making such globally significant decisions.

Security. Previous attacks - 9/11 in particular - profoundly damaged the sense of ownership with which people regard their cities. In the UK, the early 1990s saw the ease of walking into an office building vanish, replaced by demands for identification and appointments. The same happened in New York and some other US cities after 9/11. Meanwhile, CCTV monitoring proliferated. Within a year of 9/11, the US had passed the PATRIOT Act and the UK had put in place a series of expansions to surveillance powers.

Currently, residents report that Washington, DC is filled with troops and fences. Clearly, it can't stay that way permanently. But DC is highly unlikely to return to the openness of just ten days ago. There will be profound and permanent changes, starting with decreased access to government buildings. This will be Trump's most visible legacy.

Which leads to human rights. Among the videos of insurrectionists shocked to discover that the laws do apply to them were several in which prospective airline passengers discovered they'd been placed preemptively on the controversial no-fly list. Many others who congregated at the Capitol were on a (separate) terrorism watch list. If the post-9/11 period is any guide, the fact that the security agencies failed to connect any of the dots available to them into actionable intelligence will be elided in favor of insisting that they need more surveillance powers. Just remember: eventually, those powers will be used to surveil all the wrong people.


Illustrations: net.wars, the book, at the beginning.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 16, 2020

The rights stuff

It took a lake to show up the fatuousness of the idea of granting robots legal personality rights.

The story, which the AI policy expert Joanna Bryson highlighted on Twitter, goes like this: in February 2019 a small group of people, frustrated by their inability to reduce local water pollution, successfully spearheaded a proposition in Toledo, Ohio that created the Lake Erie Bill of Rights. Its history since has been rocky. In February 2020, a farmer sued the city and a US district judge invalidated the bill. This week, a three-judge panel from Ohio's Sixth District Court of Appeals ruled that the February judge had made a mistake. For now, the lake still has its rights. Just.

We will leave aside the question of whether granting rights to lakes and the other ecosystems listed in the above-linked Vox article is an effective means of environmental protection. But given that the idea of giving robots rights keeps coming up - the EU is toying with the possibility - it seems worth teasing out the difference.

In response to Bryson, Nicholas Bohm noted the difference between legal standing and personality rights. The General Data Protection Regulation, for example, grants legal standing in two new ways: it permits collective actions, and it allows civil society organizations to represent individuals seeking redress. Conversely, even the most-empowered human often lacks legal standing; my outrage that a brick fell on your head from the top of a nearby building does not give me the right to sue the building's owner on your behalf.

Rights as a person, however, would allow the brick to sue on its own behalf for the damage done to it by landing on a misplaced human. We award that type of legal personhood to quite a few things that aren't people - corporations, most notoriously. In India, idols have such rights, and Bohm cites an English case in which a temple's trustee was allowed to join a claim over an idol's improper removal because the idol had legal personality in India.

Or, as Bohm put it more succinctly, "Legal personality is about what you are; standing is about what it's your business to mind."

So if lakes, rivers, forests, and idols, why not robots? The answer lies in what these things represent. The lakes, rivers, and forests on whose behalf people seek protection were not human-made; they are parts of the larger ecosystem that supports us all, and most intimately the people who live on their banks and verges. The Toledoans who proposed granting legal rights to Lake Erie were looking for a way to force municipal action over the lake's pollution, which was harming them and all the rest of the ecosystem the lake feeds. At the bottom of the lake's rights, in other words, are humans in existential distress. Granting the lake rights is a way of empowering the humans who depend on it. In that sense, even though the Indian idols are, like robots, human-made, giving them personality rights enables action to be taken on behalf of the human community for whom they have significance. Granting the rights does not require either the lake or the idol to possess any form of consciousness.

In a paper to which Bryson linked, S.G. Solaiman argues that animals don't qualify for rights, even though they have some consciousness, because a legal personality must be able to "enjoy rights and discharge duties". The Smithsonian National Zoo's giant panda, who has been diligently caring for her new cub for the last two months, is not doing so out of legal obligation.

Nothing like any of this can be said of rights for robots, certainly not now and most likely not for a long time into the future, if ever. Discussions such as David Gunkel's How to Survive a Robot Invasion, which compactly summarizes the pros and cons, generally assume that robots will only qualify for rights after a certain threshold of intelligent consciousness has been met. Giving robots rights in order to enable suffering humans to seek redress does not come up at all, even when the robots' owners hold funerals because the manufacturer has discontinued the product. Those discussions rightly focus on manufacturer liability.

In the 2015 British TV series Humans (a remake of the 2012 Swedish series Äkta människor), an elderly Alzheimer's patient (William Hurt) is enormously distressed when his old-model carer robot is removed, taking with it the only repository of his personal memories, which he can no longer recall unaided. It is not necessary to give the robot the right to sue to protect the human it serves, since family or health workers could act on his behalf. The problem in this case is an uncaring state.

The broader point, as Bryson wrote on Twitter, is that while lakes are unique and can be irreparably damaged, digital technology - including robots - "is typically built to be fungible and upgradeable". Right: a compassionate state merely needs to transfer George's memories into a new model. In a 2016 blog posting, Bryson also argues against another commonly raised point, which is whether the *robots* suffer: if designers can install suffering as a feature, they can take it out again.

So, the tl;dr: sorry, robots.


Illustrations: George (William Hurt) and his carer "synth", in Humans.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.