" /> net.wars: October 2021 Archives


October 15, 2021

The future is hybrid

Every longstanding annual event-turned-virtual these days has a certain tension.

"Next year, we'll be able to see each other in person!" says the host, beaming with hope. Excited nods in the Zoom windows and exclamation points in the chat.

Unnoticed, about a third of the attendees wince. They're the folks in Alaska, New Zealand, or Israel, who in normal times would struggle to attend this event in Miami, Washington DC, or London because of costs or logistics.

"We'll be able to hug!" the hosts say longingly.

Those of us who are otherwhere hear, "It was nice having you visit. Hope the rest of your life goes well."

When those hosts are reminded of this geographical disability, they immediately say how much they'd hate to lose the new international connections all these virtual events have fostered and the networks they have built. Of course they do. And they mean it.

"We're thinking about how to do a hybrid event," they say, still hopefully.

At one recent event, however, it was clear that hybrid won't be possible without considerable alterations to the event as it's historically been conducted - at a rural retreat, with wifi available only in the facility's main building. With concurrent sessions in probably six different rooms and only one with the basic capability to support remote participants, it's clear that there's a problem. No one wants to abandon the place they've used every year for decades. So: what then? Hybrid in just that one room? Push the facility whose selling point is its woodsy distance from modern life to upgrade its broadband connections? Bring a load of routers and repeaters and rig up a system for the weekend? Create clusters of attendees in different locations and do node-to-node Zoom calls? Send each remote participant a hugging pillow and a note saying, "Wish you were here"?

I am convinced that the future is hybrid events, if only because businesses sound so reluctant to resume paying for so much international travel, but the how is going to take a lot of thought, collaboration, and customization.

***

Recent events suggest that the technology companies' own employees are a bigger threat to business-as-usual than pending regulation and legislation. Facebook has had two major whistleblowers in the last year - Sophie Zhang and Frances Haugen - and basically everyone wants to fix the site's governance. But Facebook is not alone...

At Uber, a California court ruled in August that drivers are employees; a black British driver has filed a legal action complaining that Uber's face-matching driver identification algorithm is racist; and Kenyan drivers are suing over contract changes they say have cut their take-home pay to unsustainably low levels.

Meanwhile, at Google and Amazon, workers are demanding the companies pull out of contracts with the Israeli military. At Amazon India, a whistleblower has handed Reuters documents showing the company has exploited internal data to copy marketplace sellers' products and rig its search engine to display its own versions first. *And* Amazon's warehouse workers continue to consider unionizing - and some cities back them.

Unfortunately, the bigger threat of the legislation being proposed in the US, UK, New Zealand, and Canada is *also* less to the big technology companies than to the rest of the Internet. For example, reading the US legislation, Mike Masnick finds intractable First Amendment problems. Last week I liked the idea of focusing on the content social media companies' algorithms amplify, but Masnick persuasively argues it's not so simple, citing Daphne Keller, who has thought more critically about the First Amendment problems that would arise in implementing that idea.

***

The governor of Missouri, Mike Parson, has accused Josh Renaud, a journalist with the St Louis Post-Dispatch, of hacking into a government website to view several teachers' social security numbers. From the governor's description, it sounds like Renaud hit either Ctrl-U or F12, looked at the HTML code, saw startlingly personal data, and decided, correctly, that the security flaw was newsworthy. (He also responsibly didn't publish his article until he had notified the website administrators and they had fixed the issue.)

Parson disagrees about the legitimacy of all this, and has called for a criminal investigation into this incident of "hacking" (see also scraping). The ability to view the code that makes up a web page and tells the browser how to display it is a crucial building block of the web; when it was young and there were no instruction manuals, that was how you learned to make your own page by copying. A few years ago, the Guardian even posted technical job ads in its pages' HTML code, where the right applicants would see them. No password, purloined or otherwise, is required. The code is just sitting there in plain sight on a publicly accessible server. If it weren't, your web page would not display.
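To make the point concrete, here is a minimal sketch of why this isn't "hacking". The page below is entirely made up (it is not the Missouri site), but it shows how sensitive data carelessly embedded in a page's HTML is simply sitting there for anyone who views the source, even if the browser never renders it on screen:

```python
import re

# A made-up page, for illustration only: the sensitive values are in the
# HTML the server sends to every visitor, whether or not they are rendered.
page = """
<html><body>
  <p>Teacher directory</p>
  <!-- debug: ssn=123-45-6789 -->
  <div hidden data-ssn="123-45-6789">record</div>
</body></html>
"""

# "Viewing the source" is just reading this string; no password is involved.
leaked = re.findall(r"\d{3}-\d{2}-\d{4}", page)
print(leaked)  # -> ['123-45-6789', '123-45-6789']
```

Ctrl-U in a browser shows exactly these bytes; the only "tool" involved is reading.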

Twenty-five years ago, I believed that by now governments would be filled with 30-somethings who grew up with computers and the 2000-era exploding Internet and could restrain this sort of overreaction. I am very unhappy to be wrong about this. And it's only going to get worse: today's teens are growing up with tablets, phones, and closed apps, not the open web that was designed to encourage every person to roll their own.


Illustrations: Exhibit from Ben Grosser's "Software for Less", reimagining Facebook alerts, at the Arebyte Gallery until the end of October.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 8, 2021

The inside view

So many lessons, so little time.

We have learned a lot about Facebook in the last ten days, at least some of it new. Much of it is from a single source, the documents exfiltrated and published by Frances Haugen.

We knew - because Haugen is not the first to say so - that the company is driven by profits and a tendency to view its systemic problems as PR issues. We knew less about the math. One of the more novel points in Haugen's Senate testimony on Tuesday was her explanation of why Facebook will always be poorly moderated outside the US: safety does not scale. Safety costs roughly the same for each new country Facebook adds - but each new country is also a progressively smaller market than the last. Consequence: the cost-benefit analysis fails. Currently, Haugen said, Facebook only covers 50 of the world's approximately 500 languages, and even for some of those it lacks local experts to help it understand the culture. What hope for the rest?
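Haugen's "safety does not scale" argument is just arithmetic. The numbers below are invented for illustration (they are not Facebook's actual costs or revenues), but they show the shape of the problem: a roughly fixed moderation cost per market against steadily shrinking per-market revenue means that, past some point, every additional market fails the cost-benefit test:

```python
# Hypothetical fixed cost of building safety/moderation for one market,
# in arbitrary units. Illustrative only.
SAFETY_COST = 5.0

# Hypothetical per-market revenues, largest market first: each successive
# market is smaller than the last.
revenues = [50.0, 20.0, 10.0, 6.0, 4.0, 2.0]

# A profit-driven cost-benefit test funds safety only where revenue
# exceeds the fixed cost.
covered = [r for r in revenues if r > SAFETY_COST]
print(f"markets that pass the test: {len(covered)} of {len(revenues)}")
# -> markets that pass the test: 4 of 6
```

The tail of smaller markets - which is most of the world's languages - is exactly where the test fails.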

Additional data: at the New York Times, Kate Klonick checks Facebook's SEC filings to find that average revenue per North American user per *quarter* was $53.56 in the last quarter of 2020, compared to $16.87 for Europe, $4.05 for Asia, and $2.77 for the rest of the world. Therefore, Klonick said at In Lieu of Fun, most of its content moderation money is spent in the US, which has less than 10% of the service's users. All those revenue numbers dropped slightly in Q1 2021.

We knew that in some countries Facebook is the only Internet people can afford to access. We *thought* that it only represented a single point of failure in those countries. Now we know that when Facebook's routing goes down - its DNS and BGP routing were knocked out by a "maintenance error" - the damage can spread to other parts of the Internet. The whole point of the Internet's design was to keep communications flowing even if a bomb took out part of the network. This is bad.

As a corollary, the panic over losing connections to friends and customers even in countries where social pressure, not data plans, ties people to Facebook is a sign of monopoly. Haugen, like Kevin Roose in the New York Times, sees signs of desperation in the documents she leaked. This company knows its most profitable audiences are aging; Facebook is now for "old people". The tweens are over at Snapchat, TikTok, and even Telegram, which added 70 million signups in the six hours Facebook was out.

We already knew Facebook's business model was toxic, a problem it shares with numerous other data-driven companies not currently in the spotlight. A key difference: Zuckerberg's unassailable control of his company's voting shares. The eight SEC complaints Haugen has filed are the first potential dent in that.

Like Matt Stoller, I appreciate a lot of Haugen's ideas for remediation: pushing people to open links before sharing, and modifying Section 230 to make platforms responsible for their algorithmic amplification, an idea also suggested by fellow data scientist Roddy Lindsay and British technology journalist Charles Arthur in his new book, Social Warming. For Stoller, these are just tweaks to how Facebook works. Haugen says she wants to "save" Facebook, not harm it. Neither her changes nor Zuckerberg's call for government regulation touch its concentrated power. Stoller wants "radical decentralization". Arthur wants to cap social network size.

One fundamental mistake may be to think of Facebook as *a* monopoly rather than several at once. As an economic monopoly, businesses all over the world depend on Facebook and subsidiaries to reach their customers, and advertisers have nowhere else to go. Despite last year's pledged advertising boycott over hate speech on Facebook, since Haugen's revelations began, advertisers have been notably silent. As a social monopoly, Facebook's outage was disastrous in regions where both humanitarians and vulnerable people rely on it for lifesaving connections; in richer countries, the inertia of established connections leaves Facebook in control of large swaths of our social and community networks. This week taught us that its size also threatens infrastructure. Each of these calls for a different approach.

Stoller has several suggestions for crashing Facebook's monopoly power, one of which is to ban surveillance advertising. But he rejects regulation and downplays the crucial element of interoperability; create a standard so that messaging can flow between platforms, and you've dismantled customer lock-in. The result would be much more like the decentralized Internet of the 1990s.

Greater transparency would help; just two months ago Facebook shut down independent research into content interactions and its political advertising - and tried to blame the Federal Trade Commission.

This is *not* a lesson. Whatever we have learned, Mark Zuckerberg has not. At CNN, Donie O'Sullivan fact-checks Zuckerberg's response.

A day after Haugen's testimony, Zuckerberg wrote (on Facebook, requiring a login): "I think most of us just don't recognize the false picture of the company that is being painted." Cue Robert Burns: "O wad some Pow'r the giftie gie us | To see oursels as ithers see us!" But really, how blinkered do you have to be not to recognize that if your motto is "move fast and break things", people are going to blame you for the broken stuff everywhere?


Illustrations: Slide showing revenue by Facebook user geography from its Q1 2021 SEC filing.


October 1, 2021

Plausible diversions

If you want to shape a technology, the time to start is before it becomes fixed in the mindset of "'twas ever thus". This was the idea behind the creation of We Robot. At this year's event (see below for links to previous years), one clear example of this principle came from Thomas Krendl Gilbert and Roel I. J. Dobbe, whose study of autonomous vehicles pointed out the way we've privileged cars by coining "jaywalkification". On the blank page in the lawbook, we chose to make it illegal for pedestrians to get in cars' way.

We Robot's ten years began with enthusiasm, segued through several depressed years of machine learning and AI, and this year has seemingly arrived at a twist on Arthur C. Clarke's famous dictum. To wit: maybe any technology sufficiently advanced to seem like magic can be understood well enough that we can assign responsibility and liability. You could say it's been ten years of progressively removing robots' glamor.

Something like this was at the heart of the paper by Andrew Selbst, Suresh Venkatasubramanian, and I. Elizabeth Kumar, which uses the computer science staple of abstraction as a model for assigning responsibility for the behavior of complex systems. Weed out debates over the innards - is the system's algorithm unfair, or was the training data biased? - and aim at the main point: this employer chose this system that produced these results. No one needs to be inside its "black box" if you can understand its boundaries. In one analogy, it's not the manufacturer's fault if a coffee maker fails to produce drinkable coffee from poisoned water and ground acorns; it *is* their fault if the machine turns potable water and ground coffee into toxic sludge. Find the decision points, and ask: how were those decisions made?
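The abstraction argument can be sketched in code. The audit below treats a screening system as an opaque function and judges it purely at its boundary - applicants in, decisions out - without ever opening the model. Everything here (the `audit_outcomes` helper, the biased stand-in screener, the data) is invented for illustration, not taken from the paper:

```python
def audit_outcomes(screen, applicants):
    """Compare selection rates across groups using only observed decisions."""
    rates = {}
    for group in sorted({a["group"] for a in applicants}):
        members = [a for a in applicants if a["group"] == group]
        hired = sum(screen(a) for a in members)  # booleans sum as 0/1
        rates[group] = hired / len(members)
    return rates

# A deliberately biased stand-in for an opaque vendor model: group B
# faces a higher score threshold than group A.
def biased_screen(applicant):
    threshold = 50 if applicant["group"] == "A" else 70
    return applicant["score"] > threshold

# Identical score distributions for both groups.
applicants = [{"group": g, "score": s}
              for g in ("A", "B") for s in range(40, 100, 10)]

rates = audit_outcomes(biased_screen, applicants)
print(rates)  # group A is selected at twice group B's rate
```

Whether the disparity comes from the algorithm or the training data is invisible here, and that is the point: the employer chose this system, and these are its results.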

Gilbert and Dobbe used two other novel coinages: "moral crumple zoning" (from Madeleine Claire Elish's paper at We Robot 2016) and "rubblization", for altering the world to assist machines. Exhibit A, which exemplifies all three, is the 2018 incident in which an Uber car on autopilot killed a pedestrian in Tempe, Arizona. She was jaywalking; she and the inattentive safety driver were moral crumple zoned; and the rubblized environment prioritized cars.

Part of Gilbert's and Dobbe's complaint was that much discussion of autonomous vehicles focuses on the trolley problem, which has little relevance to how either humans or AIs drive cars. It's more useful to focus on how autonomous vehicles reshape public space as they begin to proliferate.

This reshaping issue also arose in two other papers, one on smart farming in East Africa by Laura Foster, Katie Szilagyi, Angeline Wairegi, Chidi Oguamanam, and Jeremy de Beer, and one by Annie Brett on the rapid, yet largely overlooked expansion of autonomous vehicles in ocean shipping, exploration, and data collection. In the first case, part of the concern is the extension of colonization by framing precision agriculture and smart farming as more valuable than the local knowledge held by small farmers, the majority of whom are black women, and viewing that knowledge as freely available for appropriation. As in the Western world, where manufacturers like John Deere and Monsanto claim intellectual property rights in seeds and knowledge that formerly belonged to farmers, the arrival of AI alienates local knowledge by stowing it in algorithms, software, sensors, and equipment and makes the plants on which our continued survival depends into inert raw material. Brett, in her paper, highlights the growing gaps in international regulation as the Internet of Things goes maritime and changes what's possible.

A slightly different conflict - between privacy and the need to not be "mis-seen" - lies at the heart of Alice Xiang's discussion of computer vision. Elsewhere, Agathe Balayn and Seda Gürses make a related point in a new EDRi report that warns against relying on technical debiasing tweaks to datasets and algorithms at the expense of seeing the larger social and economic costs of these systems.

In a final example, Marc Canellas studied whole cybernetic systems and found that they create gaps where it's impossible for any plaintiff to prove liability, in part because of the complexity and interdependence inherent in these systems. Canellas proposes that the way forward is to redefine intentional discrimination and apply strict liability. You do not, Cynthia Khoo observed in discussing the paper, have to understand the inner workings of complex technology in order to understand that the system is reproducing the same problems and the same long history, if you focus on the outcomes and not the process - especially if you know the process is rigged to begin with. The wide spread of move fast and break things, Canellas noted, mostly encumbers people who are already vulnerable.

I like this overall approach of stripping away the shiny distraction of new technology and focusing on its results. If, as a friend says, Facebook accurately described setting up an account as "adding a line to our database" instead of "connecting with your friends", who would sign up? Similarly, don't let Amazon get cute about its new "Astro" comprehensive in-home data collector.

Many look at Astro and see instead the science fiction robot butler of decades hence. As Frank Pasquale noted, we tend to overemphasize the far future at the expense of today's decisions. In the same vein, Deborah Raji called robot rights a way of absolving people of their responsibility. Today's greater threat is that gig employers are undermining workers' rights, not whether robots will become sentient overlords. Today's problem is not that one day autonomous vehicles may be everywhere, but that the infrastructure needed to make partly-autonomous vehicles safe will roll over us. Or, as Gilbert put it: don't ask how you want cars to drive; ask how you want cities to work.


Previous years: 2013; 2015; 2016 workshop; 2017; 2018 workshop and conference; 2019 workshop and conference; 2020.

Illustrations: Amazon photo of Astro.
