" /> net.wars: October 2021 Archives


October 29, 2021

Majority report

How do democracy and algorithmic governance live together? This was the central question of a workshop this week on computational governance. This is only partly about the Internet; many new tools for governance are appearing all the time: smart contracts, for example, and AI-powered predictive systems. Many of these are being built with little idea of how they can go wrong.

The workshop asked three questions:

- What can technologists learn from other systems of governance?
- What advances in computer science would be required for computational systems to be useful in important affairs like human governance?
- Conversely, are there technologies that policy makers can use to improve existing systems?

Implied is this: who gets to decide? On the early Internet, for example, decisions were reached by consensus among engineers who all knew each other, funded by hopeful governments. Mass adoption, not legal mandate, helped the Internet's TCP/IP protocols dominate over many other 1990s networking systems: it was free, it worked well enough, and it was *there*. The same factors applied to other familiar protocols and applications: the web, email, communications between routers and other pieces of infrastructure. Proposals circulated as Requests for Comments, and those that found the greatest acceptance were adopted. In those early days, as I was told in a nostalgic moment at a conference in 1998, anyone pushing a proposal because it was good for their company would have been booed off the stage. It couldn't last; incoming new stakeholders demanded a voice.

If you're designing an automated governance system, the fundamental question is this: how do you deal with dissenting minorities? In some contexts - most obviously the US Supreme Court - dissenting views stay on the record alongside the majority opinion. In the long run of legal reasoning, it's important to know how judgments were reached and what issues were considered. You must show your work. In other contexts where only the consensus is recorded, minority dissent is disappeared - AI systems, for example, where the labelling that's adopted is the result of human votes we never see.

In one intriguing example, a panel of judges may rule a defendant is guilty or not guilty depending on whether you add up votes by premise - the defendant must have both committed the crime and possessed criminal intent - or by conclusion, in which each judge casts a final vote and only these are counted. In a small-scale human system the discrepancy is obvious. In a large-scale automated system, which type of aggregation do you choose, and what are the consequences, and for whom?
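To make the discrepancy concrete, here is a minimal sketch in Python (the three-judge panel and its votes are hypothetical) showing the two aggregation rules reaching opposite verdicts from identical votes:

```python
# A minimal sketch of aggregating by premise versus by conclusion.
# The judges and their votes are hypothetical.

judges = [
    {"committed_act": True,  "had_intent": True},   # would vote guilty
    {"committed_act": True,  "had_intent": False},  # would vote not guilty
    {"committed_act": False, "had_intent": True},   # would vote not guilty
]

def majority(votes):
    """True if a strict majority of the votes is True."""
    return sum(votes) > len(votes) / 2

# By premise: take the majority on each premise separately,
# then apply the legal rule (guilty = act AND intent).
by_premise = (majority([j["committed_act"] for j in judges])
              and majority([j["had_intent"] for j in judges]))

# By conclusion: each judge applies the rule privately,
# and only the final verdicts are counted.
verdicts = [j["committed_act"] and j["had_intent"] for j in judges]
by_conclusion = majority(verdicts)

print("By premise:   ", "guilty" if by_premise else "not guilty")    # guilty
print("By conclusion:", "guilty" if by_conclusion else "not guilty") # not guilty
```

Each premise commands a 2-1 majority, so premise-wise aggregation convicts; only one judge's overall verdict is guilty, so conclusion-wise aggregation acquits.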

Decentralization poses a similarly knotty conundrum. We talk about the Internet's decentralized origins, but its design fundamentally does not prevent consolidation. Centralized layers such as the domain name system and anti-spam blocking lists are single points of control and potential failure. If decentralization is your goal, the Internet's design has proven to be fundamentally flawed. Lots of us have argued that we should redecentralize the Internet, but if you adopt a truly decentralized system, where do you seek redress? In a financial system running on blockchains and smart contracts, this is a crucial point.

Yet this fundamental flaw in the Internet's design means that over time we have increasingly become second-class citizens on the Internet, all without ever agreeing to any of it. Some US newspapers are still, three and a half years on, ghosting Europeans for fear of GDPR; videos posted to web forums may be geoblocked from playing in other regions. Deeper down the stack, design decisions have enabled surveillance and control by exposing routing metadata - who connects to whom. Efforts to superimpose security have led to a dysfunctional system of digital certificates that average users either don't know is there or don't know how to use to protect themselves. Efforts to cut down on attacks and network abuse have spawned a handful of gatekeepers like Google, Akamai, Cloudflare, and SORBS that get to decide what traffic gets to go where. Few realize how much Internet citizenship we've lost over the last 25 years; in many of our heads, the old cooperative Internet is just a few steps back. As if.

As Jon Crowcroft and I concluded in our paper on leaky networks for this year's Gikii, "leaky" designs can be useful to speed development early on even though they pose problems later, when issues like security become important. The Internet was built by people who trusted each other and did not sufficiently imagine it being used by people who didn't, shouldn't, and couldn't. You could say it this way: in the technology world, everything starts as an experiment and by the time there are problems it's lawless.

So this was the main point of the workshop: how do you structure automated governance to protect the rights of minorities? Opting to slow decisions so the minority report can be considered impedes action in emergencies. If you limit Internet metadata exposure, security people lose some ability to debug problems and trace attacks.

We considered possible role models: British corporate governance; smart contracts; and, presented by Miranda Mowbray, the wacky system by which Venice elected a new Doge. It could not work today: it's crazily complex and impossible to scale. But you could certainly code it.


Illustrations: Monument to the Doge Giovanni Pesaro (via Didier Descouens at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 22, 2021

It's about power

It is tempting to view every legislative proposal that comes from the present UK government as an act of revenge against the people and institutions that have disagreed with it.

The UK's Supreme Court undid Boris Johnson's decision to prorogue Parliament in the 2019 stages of the Brexit debates; the government now proposes to limit judicial review. The Electoral Commission recommended codes of conduct to keep political advertising fair; the Elections Bill - which retiring House of Lords member David Puttnam, writing at the Guardian, lists among a long series of anti-democratic moves - prioritizes registration and voter ID, measures likely, here as in the US, to disenfranchise opposition voters.

The UK government's proposals for reforming data protection law - the consultation is open until November 19 - also seem to fit this scenario. Granted, the UK wasn't a fan even in 2013, when the EU's General Data Protection Regulation was being negotiated. Today's proposals would roll back some aspects of the law. Notably, the consultation suggests discouraging individuals from filing subject access requests by introducing fees, last seen in the 1998 Data Protection Act that GDPR replaced, and by giving organizations greater latitude to refuse. This thinking is familiar from the 2013 discussions about freedom of information requests. The difference here: it's *our* data we want to access.

More pervasive, though, is the consultation's general assumption that data protection is a burden that impedes innovation and needs to be lightened to unlock economic growth. The EU, reading it, may be relieved it only granted the UK's data protection regime adequacy for four years.

It is impossible to read the subject access rights section (page 69ff) without concluding that the "burden" the government seeks to relieve is its own. In a panel on the proposed changes at the UK Internet Governance Forum, speakers agreed that businesses are not calling for this. What they *do* want is guidance. Diverging from GDPR makes life more complicated by creating multiple regimes that all require compliance. If you're a business, you want consistency and clarity. It's hard to see how these proposals provide them.

This is even more true for individuals who depend on their rights under GDPR (and equivalent) to understand the decisions that have been made about them. As Renate Samson put it at UKIGF, viewing their data is crucial in obtaining redress for erroneous immigration and asylum decisions. "Understanding why the computer says no is critical for redress purposes." In May, the Open Rights Group and the3million won this very battle against the government - under GDPR.

These issues are familiar ground for net.wars. What's less so is the UK's behavior. As in other areas - the widely criticized covid response, its dealings throughout the Brexit negotiations - Britain seems to assume it can dictate terms. At UKIGF, Michael Veale tried to point out the reality: "The UK has to engage with GDPR in a way that shows it understands it's now a rule-taker." It's almost impossible to imagine this government understanding any such thing.

A little earlier, the MP Chris Philp said the UK is determined to be a scientific and technology "superpower". This country, he said, is number three behind the US and China; we need to get to "an even better position".

Pause for puzzlement. Does Philp think the UK can pass either the US or China in AI? What would that even mean? AI, of all technologies, requires collaboration. Is he really overlooking the EU's technical weight as a bloc? Is the argument that data is essential for AI, AI is the economic future of Britain, and therefore individuals should roll over and open up for...Apple and Google? Do Apple and Google see their role in life as helping the UK become a world leader in AI?

After all, "the US" isn't really the US as a nation in this discussion; in AI "the US" is the six giant multinational companies Amy Webb that all want to dominate (Google, Microsoft, Apple, Facebook, IBM, Amazon). Data protection law is one of the essential tools for limiting their ability to slurp up everyone's data.

Meanwhile, this government's own policies seem to be in conflict with each other. It is simultaneously at work on a digital identity framework. Getting people to use it will require trust, which proposals to reform data protection law undermine. And trust in this government with respect to data is already faltering because of the fiasco over our medical data back in June. It's not clear the government is making any of these connections.

Twenty years ago, data protection was about privacy and the big threat was governments. Gradually, as the online advertising industry formed and start-ups became giant companies, the view of data protection law expanded to include helping to redress the imbalance of power between individuals and large companies. Now, with those companies dominating the landscape, data protection is also about restructuring power and ensuring that small players have a chance when faced with giant competitors who can corral everyone's devices and extract their data. The more complicated the regulations, as European Digital Rights keeps saying, the more it's only the biggest companies that can afford the infrastructure to comply with them. "Data protection" sounds abstract and boring. Don't be fooled. It's about power.


Illustrations: Vampire squid (via Anne-Lise Heinrichs, on Flickr, following Michael Veale's comparison to Big Tech at UKIGF).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 15, 2021

The future is hybrid

Every longstanding annual event-turned-virtual these days has a certain tension.

"Next year, we'll be able to see each other in person!" says the host, beaming with hope. Excited nods in the Zoom windows and exclamation points in the chat.

Unnoticed, about a third of the attendees wince. They're the folks in Alaska, New Zealand, or Israel, who in normal times would struggle to attend this event in Miami, Washington DC, or London because of costs or logistics.

"We'll be able to hug!" the hosts say longingly.

Those of us who are otherwhere hear, "It was nice having you visit. Hope the rest of your life goes well."

When those hosts are reminded of this geographical disability, they immediately say how much they'd hate to lose the new international connections all these virtual events have fostered and the networks they have built. Of course they do. And they mean it.

"We're thinking about how to do a hybrid event," they say, still hopefully.

At one recent event, however, it was clear that hybrid won't be possible without considerable alterations to the event as it's historically been conducted - at a rural retreat, with wifi available only in the facility's main building. With concurrent sessions in probably six different rooms and only one with the basic capability to support remote participants, it's clear that there's a problem. No one wants to abandon the place they've used every year for decades. So: what then? Hybrid in just that one room? Push the facility whose selling point is its woodsy distance from modern life to upgrade its broadband connections? Bring a load of routers and repeaters and rig up a system for the weekend? Create clusters of attendees in different locations and do node-to-node Zoom calls? Send each remote participant a hugging pillow and a note saying, "Wish you were here"?

I am convinced that the future is hybrid events, if only because businesses sound so reluctant to resume paying for so much international travel, but the how is going to take a lot of thought, collaboration, and customization.

***

Recent events suggest that the technology companies' own employees are a bigger threat to business-as-usual than impending regulation and legislation. Facebook has had two major whistleblowers - Sophie Zhang and Frances Haugen - in the last year, and basically everyone wants to fix the site's governance. But Facebook is not alone...

At Uber, a California court ruled in August that drivers are employees; a black British driver has filed a legal action complaining that Uber's driver identification face-matching algorithm is racist; and Kenyan drivers are suing over contract changes they say have cut their take-home pay to unsustainably low levels.

Meanwhile, at Google and Amazon, workers are demanding the companies pull out of contracts with the Israeli military. At Amazon India, a whistleblower has handed Reuters documents showing the company has exploited internal data to copy marketplace sellers' products and rig its search engine to display its own versions first. *And* Amazon's warehouse workers continue to consider unionizing - and some cities back them.

Unfortunately, the legislation being proposed in the US, UK, New Zealand, and Canada *also* threatens the rest of the Internet more than it threatens the big technology companies. For example, in reading the US legislation Mike Masnick finds intractable First Amendment problems. Last week I liked the idea of focusing on the content social media companies' algorithms amplify, but Masnick persuasively argues it's not so simple, citing Daphne Keller, who has thought more critically about the First Amendment problems that will arise in implementing that idea.

***

The governor of Missouri, Mike Parson, has accused Josh Renaud, a journalist with the St Louis Post-Dispatch, of hacking into a government website to view several teachers' social security numbers. From the governor's description, it sounds like Renaud hit CTRL-U or F12, looked at the HTML code, saw startlingly personal data, and decided correctly that the security flaw was newsworthy. (He also responsibly didn't publish his article until he had notified the website administrators and they had fixed the issue.)

Parson disagrees about the legitimacy of all this, and has called for a criminal investigation into this incident of "hacking" (see also scraping). The ability to view the code that makes up a web page and tells the browser how to display it is a crucial building block of the web; when it was young and there were no instruction manuals, that was how you learned to make your own page by copying. A few years ago, the Guardian even posted technical job ads in its pages' HTML code, where the right applicants would see them. No password, purloined or otherwise, is required. The code is just sitting there in plain sight on a publicly accessible server. If it weren't, your web page would not display.
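For anyone who has never tried it, here is a minimal sketch of everything this "hacking" involves, using only Python's standard library and a placeholder URL:

```python
# A minimal sketch of what "view source" does: it fetches the same HTML
# the server hands to every browser that visits the page.
# The URL is a placeholder; any public page works.
from urllib.request import urlopen

with urlopen("https://example.com/") as response:
    html = response.read().decode("utf-8")

# Everything the browser renders - and anything mistakenly embedded in the
# markup, as in the Missouri case - is sitting right here in plain text.
print(html[:500])
```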

Twenty-five years ago, I believed that by now governments would be filled with 30-somethings who grew up with computers and the 2000-era exploding Internet and could restrain this sort of overreaction. I am very unhappy to be wrong about this. And it's only going to get worse: today's teens are growing up with tablets, phones, and closed apps, not the open web that was designed to encourage every person to roll their own.


Illustrations: Exhibit from Ben Grosser's "Software for Less", reimagining Facebook alerts, at the Arebyte Gallery until the end of October.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 8, 2021

The inside view

So many lessons, so little time.

We have learned a lot about Facebook in the last ten days, at least some of it new. Much of it is from a single source, the documents exfiltrated and published by Frances Haugen.

We knew - because Haugen is not the first to say so - that the company is driven by profits and a tendency to view its systemic problems as PR issues. We knew less about the math. One of the more novel points in Haugen's Senate testimony on Tuesday was her explanation of why Facebook will always be poorly moderated outside the US: safety does not scale. Safety costs the same for each new country Facebook adds - but each new country is also a progressively smaller market than the last. Consequence: the cost-benefit analysis fails. Currently, Haugen said, Facebook only covers 50 of the world's approximately 500 languages, and even in some of those cases the company does not have local experts to help it understand the culture. What hope for the rest?

Additional data: at the New York Times, Kate Klonick checks Facebook's SEC filings to find that average revenue per North American user per *quarter* was $53.56 in the last quarter of 2020, compared to $16.87 for Europe, $4.05 for Asia, and $2.77 for the rest of the world. Therefore, Klonick said at In Lieu of Fun, most of its content moderation money is spent in the US, which has less than 10% of the service's users. All those revenue numbers dropped slightly in Q1 2021.
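A quick sketch of the arithmetic, using the per-user figures Klonick quotes, shows just how steep the skew is:

```python
# A worked sketch of Klonick's arithmetic, using the per-user quarterly
# revenue figures she quotes from Facebook's Q4 2020 SEC filing.
revenue_per_user = {
    "North America": 53.56,
    "Europe": 16.87,
    "Asia": 4.05,
    "Rest of world": 2.77,
}

baseline = revenue_per_user["North America"]
for region, rev in revenue_per_user.items():
    print(f"{region}: ${rev:.2f}/quarter ({rev / baseline:.0%} of North America)")

# A North American user brings in roughly 19x what a rest-of-world user
# does - which is why moderation spending follows the same skew.
```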

We knew that in some countries Facebook is the only Internet people can afford to access. We *thought* it was only a single point of failure in those countries. Now we know that when Facebook's routing goes down - its DNS and BGP routing were knocked out by a "maintenance error" - the damage can spread to other parts of the Internet. The whole point of the Internet's design was to keep communications running even when parts of the network were bombed out. This is bad.

As a corollary, the panic over losing connections to friends and customers even in countries where social pressure, not data plans, ties people to Facebook is a sign of monopoly. Haugen, like Kevin Roose in the New York Times, sees signs of desperation in the documents she leaked. This company knows its most profitable audiences are aging; Facebook is now for "old people". The tweens are over at Snapchat, TikTok, and even Telegram, which added 70 million signups in the six hours Facebook was out.

We already knew Facebook's business model was toxic, a problem it shares with numerous other data-driven companies not currently in the spotlight. A key difference: Zuckerberg's unassailable control of his company's voting shares. The eight SEC complaints Haugen has filed are the first potential dent in that.

Like Matt Stoller, I appreciate a lot of Haugen's ideas for remediation: pushing people to open links before sharing, and modifying Section 230 to make platforms responsible for their algorithmic amplification, an idea also suggested by fellow data scientist Roddy Lindsay and British technology journalist Charles Arthur in his new book, Social Warming. For Stoller, these are just tweaks to how Facebook works. Haugen says she wants to "save" Facebook, not harm it. Neither her changes nor Zuckerberg's call for government regulation touch its concentrated power. Stoller wants "radical decentralization". Arthur wants to cap social network size.

One fundamental mistake may be to think of Facebook as *a* monopoly rather than several at once. As an economic monopoly, businesses all over the world depend on Facebook and subsidiaries to reach their customers, and advertisers have nowhere else to go. Despite last year's pledged advertising boycott over hate speech on Facebook, since Haugen's revelations began, advertisers have been notably silent. As a social monopoly, Facebook's outage was disastrous in regions where both humanitarians and vulnerable people rely on it for lifesaving connections; in richer countries, the inertia of established connections leaves Facebook in control of large swaths of our social and community networks. This week taught us that its size also threatens infrastructure. Each of these calls for a different approach.

Stoller has several suggestions for crashing Facebook's monopoly power, one of which is to ban surveillance advertising. But he rejects regulation and downplays the crucial element of interoperability: create a standard so that messaging can flow between platforms, and you've dismantled customer lock-in. The result would be much more like the decentralized Internet of the 1990s.

Greater transparency would help; just two months ago Facebook shut down independent research into content interactions and its political advertising - and tried to blame the Federal Trade Commission.

This is *not* a lesson. Whatever we have learned, Mark Zuckerberg has not. At CNN, Donie O'Sullivan fact-checks Zuckerberg's response.

A day after Haugen's testimony, Zuckerberg wrote (on Facebook, requiring a login): "I think most of us just don't recognize the false picture of the company that is being painted." Cue Robert Burns: "O wad some Pow'r the giftie gie us | To see oursels as ithers see us!" But really, how blinkered do you have to be not to recognize that if your motto is "move fast and break things", people are going to blame you for the broken stuff everywhere?


Illustrations: Slide showing revenue by Facebook user geography from its Q1 2021 SEC filing.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 1, 2021

Plausible diversions

If you want to shape a technology, the time to start is before it becomes fixed in the mindset of "'twas ever thus". This was the idea behind the creation of We Robot. At this year's event (see below for links to previous years), one clear example of this principle came from Thomas Krendl Gilbert and Roel I. J. Dobbe, whose study of autonomous vehicles pointed out the way we've privileged cars, coining "jaywalkification" for it: on the blank page in the lawbook, we chose to make it illegal for pedestrians to get in cars' way.

We Robot's ten years began with enthusiasm, segued through several depressed years of machine learning and AI, and this year seemingly arrived at a twist on Arthur C. Clarke's famous dictum. To wit: maybe any technology sufficiently advanced to seem like magic can be well enough understood that we can assign responsibility and liability. You could say it's been ten years of progressively removing robots' glamor.

Something like this was at the heart of the paper by Andrew Selbst, Suresh Venkatasubramanian, and I. Elizabeth Kumar, which uses the computer science staple of abstraction as a model for assigning responsibility for the behavior of complex systems. Weed out debates over the innards - is the system's algorithm unfair, or was the training data biased? - and aim at the main point: this employer chose this system that produced these results. No one needs to be inside its "black box" if you can understand its boundaries. In one analogy, it's not the manufacturer's fault if a coffee maker fails to produce drinkable coffee from poisoned water and ground acorns; it *is* their fault if the machine turns potable water and ground coffee into toxic sludge. Find the decision points, and ask: how were those decisions made?

Gilbert and Dobbe used two other novel coinages: "moral crumple zoning" (from Madeleine Claire Elish's paper at We Robot 2016) and "rubblization", for altering the world to assist machines. Exhibit A, which exemplifies all three, is the 2018 incident in which a self-driving Uber test car killed a pedestrian in Tempe, Arizona. She was jaywalking; she and the inattentive safety driver were moral-crumple-zoned; and the rubblized environment prioritized cars.

Part of Gilbert and Dobbe's complaint was that much discussion of autonomous vehicles focuses on the trolley problem, which has little relevance to how either humans or AIs drive cars. It's more useful to focus instead on how autonomous vehicles reshape public space as they begin to proliferate.

This reshaping issue also arose in two other papers, one on smart farming in East Africa by Laura Foster, Katie Szilagyi, Angeline Wairegi, Chidi Oguamanam, and Jeremy de Beer, and one by Annie Brett on the rapid yet largely overlooked expansion of autonomous vehicles in ocean shipping, exploration, and data collection. In the first case, part of the concern is the extension of colonization by framing precision agriculture and smart farming as more valuable than the local knowledge held by small farmers, the majority of whom are black women, and by viewing that knowledge as freely available for appropriation. As in the Western world, where manufacturers like John Deere and Monsanto claim intellectual property rights in seeds and knowledge that formerly belonged to farmers, the arrival of AI alienates local knowledge by stowing it in algorithms, software, sensors, and equipment, and makes the plants on which our continued survival depends into inert raw material. Brett, in her paper, highlights the growing gaps in international regulation as the Internet of Things goes maritime and changes what's possible.

A slightly different conflict - between privacy and the need to not be "mis-seen" - lies at the heart of Alice Xiang's discussion of computer vision. Elsewhere, Agathe Balayn and Seda Gürses make a related point in a new EDRi report that warns against relying on technical debiasing tweaks to datasets and algorithms at the expense of seeing the larger social and economic costs of these systems.

In a final example, Marc Canellas studied whole cybernetic systems and found that they create gaps where it's impossible for any plaintiff to prove liability, in part because of the complexity and interdependence inherent in these systems. Canellas proposes that the way forward is to redefine intentional discrimination and apply strict liability. You do not, Cynthia Khoo observed in discussing the paper, have to understand the inner workings of a complex technology to understand that the system is reproducing the same problems and the same long history, if you focus on the outcomes and not the process - especially if you know the process is rigged to begin with. The wide spread of "move fast and break things", Canellas noted, mostly encumbers people who are already vulnerable.

I like this overall approach of stripping away the shiny distraction of new technology and focusing on its results. If, as a friend says, Facebook accurately described setting up an account as "adding a line to our database" instead of "connecting with your friends", who would sign up? Similarly, don't let Amazon get cute about its new "Astro" comprehensive in-home data collector.

Many look at Astro and see instead the science fiction robot butler of decades hence. As Frank Pasquale noted, we tend to overemphasize the far future at the expense of today's decisions. In the same vein, Deborah Raji called robot rights a way of absolving people of their responsibility. Today's greater threat is that gig employers are undermining workers' rights, not whether robots will become sentient overlords. Today's problem is not that one day autonomous vehicles may be everywhere, but that the infrastructure needed to make partly-autonomous vehicles safe will roll over us. Or, as Gilbert put it: don't ask how you want cars to drive; ask how you want cities to work.


Previous years: 2013; 2015; 2016 workshop; 2017; 2018 workshop and conference; 2019 workshop and conference; 2020.

Illustrations: Amazon photo of Astro.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.