Majority report
How do democracy and algorithmic governance live together? That was the central question at this week's workshop on computational governance. It's only partly about the Internet; new tools for governance are appearing all the time: smart contracts, for example, and AI-powered predictive systems. Many of these are being built with little idea of how they can go wrong.
The workshop asked three questions:
- What can technologists learn from other systems of governance?
- What advances in computer science would be required for computational systems to be useful in important affairs like human governance?
- Conversely, are there technologies that policy makers can use to improve existing systems?
Implied is this: who gets to decide? On the early Internet, for example, decisions were reached by consensus among engineers who all knew each other, funded by hopeful governments. Mass adoption, not legal mandate, helped the Internet's TCP/IP protocols dominate the many other 1990s networking systems: it was free, it worked well enough, and it was *there*. The same factors applied to other familiar protocols and applications: the web, email, communications between routers and other pieces of infrastructure. Proposals circulated as Requests for Comments, and those that found the greatest acceptance were adopted. In those early days, as I was told in a nostalgic moment at a conference in 1998, anyone pushing a proposal because it was good for their company would have been booed off the stage. It couldn't last; incoming new stakeholders demanded a voice.
If you're designing an automated governance system, the fundamental question is this: how do you deal with dissenting minorities? In some contexts - most obviously the US Supreme Court - dissenting views stay on the record alongside the majority opinion. In the long run of legal reasoning, it's important to know how judgments were reached and what issues were considered. You must show your work. In other contexts where only the consensus is recorded, minority dissent is disappeared - AI systems, for example, where the labelling that's adopted is the result of human votes we never see.
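The difference is easy to see in code. Here is a toy illustration in Python (the labels and functions are hypothetical, not any particular system's pipeline) of a labelling step that records only the winning answer versus one that keeps the tally on the record:

```python
from collections import Counter

def consensus_only(votes):
    """Record just the winning label; the dissent vanishes."""
    return Counter(votes).most_common(1)[0][0]

def with_dissent(votes):
    """Record the winning label and the full tally alongside it."""
    tally = Counter(votes)
    return {"label": tally.most_common(1)[0][0], "votes": dict(tally)}

annotations = ["toxic", "toxic", "not_toxic", "toxic", "not_toxic"]
print(consensus_only(annotations))  # 'toxic' -- a 3-2 split, but you'd never know
print(with_dissent(annotations))    # {'label': 'toxic', 'votes': {'toxic': 3, 'not_toxic': 2}}
```

The first function is lossy by construction: a 3-2 split and a unanimous vote come out identical, and no downstream audit can tell them apart.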
In one intriguing example, a panel of judges may rule a defendant is guilty or not guilty depending on whether you add up votes by premise - the defendant must have both committed the crime and possessed criminal intent - or by conclusion, in which each judge casts a final vote and only these are counted. In a small-scale human system the discrepancy is obvious. In a large-scale automated system, which type of aggregation do you choose, and what are the consequences, and for whom?
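This is known in the judgment-aggregation literature as the doctrinal paradox, and it fits in a few lines of Python. What follows is a minimal sketch of the classic three-judge configuration, not any real court's procedure:

```python
# Three judges vote on two premises: did the act occur, and was there intent?
# Guilt requires both.
judges = [
    {"act": True,  "intent": True},   # judge 1 would convict
    {"act": True,  "intent": False},  # judge 2 would acquit
    {"act": False, "intent": True},   # judge 3 would acquit
]

def majority(values):
    return sum(values) > len(values) / 2

# Premise-based: take a majority on each premise, then apply the legal rule.
act_ok = majority([j["act"] for j in judges])        # True (2 of 3)
intent_ok = majority([j["intent"] for j in judges])  # True (2 of 3)
by_premise = act_ok and intent_ok                    # guilty

# Conclusion-based: each judge applies the rule, then count the verdicts.
verdicts = [j["act"] and j["intent"] for j in judges]  # [True, False, False]
by_conclusion = majority(verdicts)                     # not guilty

print("premise-based:   ", "guilty" if by_premise else "not guilty")
print("conclusion-based:", "guilty" if by_conclusion else "not guilty")
```

Same judges, same votes, opposite verdicts. An automated aggregator has to pick one rule, and whoever writes that line of code decides.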
Decentralization poses a similarly knotty conundrum. We talk about the Internet's decentralized origins, but nothing in its design prevents consolidation. Centralized layers such as the domain name system and anti-spam blocking lists are single points of control and potential failure. If decentralization is your goal, the Internet's design has proven to be fundamentally flawed. Lots of us have argued that we should redecentralize the Internet, but if you adopt a truly decentralized system, where do you seek redress? In a financial system running on blockchains and smart contracts, this is a crucial point.
Yet this fundamental flaw in the Internet's design means that over time we have increasingly become second-class citizens on the Internet, all without ever agreeing to any of it. Some US newspapers are still, three and a half years on, ghosting Europeans for fear of GDPR; videos posted to web forums may be geoblocked from playing in other regions. Deeper down the stack, design decisions have enabled surveillance and control by exposing routing metadata - who connects to whom. Efforts to superimpose security have led to a dysfunctional system of digital certificates that average users either don't know is there or don't know how to use to protect themselves. Efforts to cut down on attacks and network abuse have spawned a handful of gatekeepers like Google, Akamai, Cloudflare, and SORBS that get to decide what traffic gets to go where. Few realize how much Internet citizenship we've lost over the last 25 years; in many of our heads, the old cooperative Internet is just a few steps back. As if.
As Jon Crowcroft and I concluded in our paper on leaky networks for this year's Gikii, "leaky" designs can be useful to speed development early on even though they pose problems later, when issues like security become important. The Internet was built by people who trusted each other and did not sufficiently imagine it being used by people who didn't, shouldn't, and couldn't. You could say it this way: in the technology world, everything starts as an experiment, and by the time there are problems it's lawless.
So this was the main point of the workshop: how do you structure automated governance to protect the rights of minorities? Opting to slow decisions so the minority report can be considered impedes action in emergencies. If you limit Internet metadata exposure, security people lose some ability to debug problems and trace attacks.
We considered possible role models: British corporate governance; smart contracts; and, presented by Miranda Mowbray, the wacky system by which Venice elected a new Doge. It could not work today: it's crazily complex, and impossible to scale. But you could certainly code it.
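For the curious: the procedure, as usually described, ran through ten alternating stages of sortition and election. The Python below is a toy simulation under loudly stated assumptions: deliberation is reduced to a crude "drift toward the leader" rule, the supermajority requirements the real electing stages imposed are elided, and the council size and candidate names are illustrative.

```python
import random

def by_lot(body, k):
    """Reduce a body to k members by drawing lots."""
    return random.sample(body, k)

def electors_choose(pool, k):
    """Stand-in for a deliberative election of k new members from the
    Great Council (the real stages required supermajorities; elided here)."""
    return random.sample(pool, k)

def elect_doge(great_council, candidates):
    body = by_lot(great_council, 30)            # 30 councillors chosen by lot
    body = by_lot(body, 9)                      # reduced by lot to 9
    body = electors_choose(great_council, 40)   # the 9 elect 40
    body = by_lot(body, 12)                     # reduced by lot to 12
    body = electors_choose(great_council, 25)   # the 12 elect 25
    body = by_lot(body, 9)                      # reduced by lot to 9
    body = electors_choose(great_council, 45)   # the 9 elect 45
    body = by_lot(body, 11)                     # reduced by lot to 11
    body = electors_choose(great_council, 41)   # the 11 elect the final 41
    # The 41 ballot repeatedly; a Doge needs at least 25 of their votes.
    votes = [random.choice(candidates) for _ in body]
    while True:
        tally = {c: votes.count(c) for c in candidates}
        leader = max(tally, key=tally.get)
        if tally[leader] >= 25:
            return leader
        # Crude stand-in for persuasion: one holdout switches to the leader.
        votes[votes.index(next(v for v in votes if v != leader))] = leader

council = [f"councillor_{i}" for i in range(480)]
print(elect_doge(council, ["Contarini", "Morosini", "Dandolo"]))
```

The alternation is the point: the usual reading is that the lotteries made each electing committee unpredictable, so a faction would have had to capture every stage of an unpredictable chain to fix the outcome.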
Illustrations: Monument to the Doge Giovanni Pesaro (via Didier Descouens at Wikimedia).
Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.