net.wars: March 2021 Archives


March 26, 2021

Curating the curators

One of the longest-running conflicts on the Internet surrounds whether and what restrictions should be applied to the content people post. These days, those rules are known as "platform governance", and this week saw the first conference by that name. In the background, three of the big four CEOs returned to Congress for more questioning; the EU is planning the Digital Services Act; the US looks serious about antitrust action; debate about revising Section 230 of the Communications Decency Act continues even though few understand what it does; and the UK continues to push "online harms".

The most interesting thing about the Platform Governance conference is how narrow it makes those debates look. The second-most interesting thing: it was not a law conference!

For one thing, which platforms? Twitter may be the most-studied, partly because journalists and academics use it themselves and data is more available; YouTube, Facebook, and subsidiaries WhatsApp and Instagram are the most complained-about. The discussion here included not only those three but less "platformy" things like Reddit, Tumblr, Amazon's livestreaming subsidiary Twitch, games, Roblox, India's ShareChat, labor platforms UpWork and Fiverr, edX, and even VPN apps. It's unlikely that the problems of Facebook, YouTube, and Twitter that governments obsess over are limited to them; they're just the most visible and, especially, the most *here*. Granting differences in local culture, business model, purpose, and platform design, human behavior doesn't vary that much.

For example, Jenny Domino reminded us - again - that the behaviors now sparking debates in the West are not new or unique to this part of the world. What most agree *almost* happened in the US on January 6 *actually* happened in Myanmar with far less scrutiny despite a 2018 UN fact-finding mission that highlighted Facebook's role in spreading hate. We've heard this sort of story before, regarding Cambridge Analytica. In Myanmar and, as Sandeep Mertia said, India, the Internet of the 1990s never existed. Facebook is the only "Internet". Mertia's "next billion users" won't use email or the web; they'll go straight to WhatsApp or a local or newer equivalent, and stay there.

Mehitabel Glenhaber, whose focus was Twitch, used it to illustrate another way our usual discussions are too limited: "Moderation can escape all up and down the stack," she said. Near the bottom of the "stack" of layers of service, after the January 6 Capitol invasion Amazon denied hosting services to the right-wing chat app Parler; higher up the stack, Apple and Google removed Parler's app from their app stores. On Twitch, Glenhaber found a conflict between the site's moderation decision and the handling of that decision by two browser extensions that replace text with graphics, one of which honored the site's ruling and one of which overturned it. I had never thought of ad blockers as content moderators before, but of course they are, and few of us examine them in detail.

Separately, in a recent lecture on the impact of low-cost technical infrastructure, Cambridge security engineer Ross Anderson also brought up the importance of the power to exclude. Most often, he said, social exclusion matters more than technical exclusion; taking out a scammer's email address and disrupting all their social network is more effective than taking down their more easily-replaced website. If we look at misinformation as a form of cybersecurity challenge - as we should - that's an important principle.

One recurring frustration is our general lack of access to the insider view of what's actually happening. Alice Marwick is finding from interviews that members of Trust and Safety teams at various companies have a better and broader view of online abuse than even those who experience it. Their data suggests that, rather than being gender-specific, harassment affects all groups of people; in niche groups the forms disagreements take can be obscure to outsiders. Most important, each platform's affordances are different; you cannot generalize from a peer-to-peer site like Facebook or Twitter to Twitch or YouTube, where the site's relationships are less equal and more creator-fan.

A final limitation in how we think about platforms and abuse is that the options are so limited: a user is banned or not, content stays up or is taken down. We never think, Sarita Schoenebeck said, about other mechanisms or alternatives to criminal justice such as reparative or restorative justice. "Who has been harmed?" she asked. "What do they need? Whose obligation is it to meet that need?" And, she added later, who is in power in platform governance, and what harms have they overlooked and how?

In considering that sort of issue, Bharath Ganesh found three separate logics in his tour through platform racism and the governance of extremism: platform, social media, and free speech. Mark Zuckerberg offers a prime example of the last, the Silicon Valley libertarian insistence that the marketplace of ideas will solve any problems and that sees the First Amendment freedom of expression as an absolute right, not one that must be balanced against others - such as "freedom from fear". Following the end of the conference by watching the end of yesterday's Congressional hearings, you couldn't help thinking about that as Zuckerberg embarked on yet another self-serving "Congressman..." preamble rather than the simple "yes or no" he was asked to deliver.


Illustrations: Mark Zuckerberg, testifying in Congress on March 25, 2021.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 19, 2021

Dystopian non-fiction

How dumb do you have to be to spend decades watching movies and reading books about science fiction dystopias with perfect surveillance and then go on and build one anyway?

*This* dumb, apparently, because that's what Shalini Kantayya discovers in her documentary Coded Bias, which premiered at the 2020 Sundance Film Festival. I had missed it until European Digital Rights (EDRi) arranged a streaming this week.

The movie deserves the attention that was paid to The Social Dilemma. Consider the cast Kantayya has assembled: "math babe" Cathy O'Neil, data journalism professor Meredith Broussard, sociologist Zeynep Tufekci, Big Brother Watch executive director Silkie Carlo, human rights lawyer Ravi Naik, Virginia Eubanks, futurist Amy Webb, and "code poet" Joy Buolamwini, who is the film's main protagonist and provides its storyline, such as it is. This film wastes no time on technology industry mea non-culpas, opting instead to hear from people who together have written a year's worth of reading on how modern AI disassembles people into piles of data.

The movie is framed by Buolamwini's journey, which begins in her office at MIT. At nine, she saw a presentation on TV from MIT's Media Lab, and, entranced by Cynthia Breazeal's Kismet robot, she instantly decided: she was going to be a robotics engineer and she was going to MIT.

At her eventual arrival, she says, she imagined that coding was detached from the world - until she started building the Aspire Mirror and had to get a facial detection system working. At that point, she discovered that none of the computer vision tracking worked very well...until she put on a white mask. She started examining the datasets used to train the facial algorithms and found that every system she tried showed the same results: top marks for light-skinned men, inferior results for everyone else, especially the "highly melanated".

Teaming up with Deborah Raji, in 2018 Buolamwini published a study (PDF) of racial and gender bias in Amazon's Rekognition system, then being trialed with law enforcement. The company's response leads to a cameo, in which Buolamwini chats with Timnit Gebru about the methods technology companies use to discredit critics. Poignantly, today's viewers know that Gebru, then still at Google, was only months away from becoming the target of exactly that behavior, fired over her own critical research on the state of AI.

Buolamwini's work leads Kantayya into an exploration of both algorithmic bias generally, and the uncontrolled spread of facial recognition in particular. For the first, Kantayya surveys scoring in recruitment, mortgage lending, and health care, and visits the history of discrimination in South Africa. Useful background is provided by O'Neil, whose Weapons of Math Destruction is a must-read on opaque scoring, and Broussard, whose Artificial Unintelligence deplores the math-based narrow conception of "intelligence" that began at Dartmouth in 1956, an arrogance she discusses with Kantayya on YouTube.

For the second, a US unit visits Brooklyn's Atlantic Plaza Towers complex, where the facial recognition access control system issues warnings for tiny infractions. A London unit films the Oxford Circus pilot of live facial recognition that led Carlo, with Naik's assistance, to issue a legal challenge in 2018. Here again the known future intervenes: after the pandemic stopped such deployments, BBW ended the challenge and shifted to campaigning for a legislative ban.

Inevitably, HAL appears to remind us of what evil computers look like, along with a red "I'm an algorithm" blob with a British female voice that tries to sound chilling.

But HAL's goals were straightforward: it wanted its humans dead. The motives behind today's algorithms are opaque. Amy Webb, whose book The Big Nine profiles the nine companies - six American, three Chinese - who are driving today's AI, highlights the comparison with China, where the government transparently tells citizens that social credit is always watching and bad behavior will attract penalties for your friends and family as well as for you personally. In the US, by contrast, everyone is being scored all the time by both government and corporations, but no one is remotely transparent about it.

For Buolamwini, the movie ends in triumph. She founds the Algorithmic Justice League and testifies in Congress, where she is quizzed by Alexandria Ocasio-Cortez (D-NY) and Jamie Raskin (D-MD), who looks shocked to learn that Facebook has patented a system for recognizing and scoring individuals in retail stores. Then she watches as facial recognition is banned in San Francisco, Somerville, Massachusetts, and Oakland, and the electronic system is removed from the Brooklyn apartment block - for now.

Earlier, however, Eubanks, author of Automating Inequality, issued a warning that seems prescient now, when the coronavirus has exposed all our inequities and social fractures. When people cite William Gibson's "The future is already here - it's just not evenly distributed", she says, they typically mean that new tools spread from rich to poor. "But what I've found is the absolute reverse, which is that the most punitive, most invasive, most surveillance-focused tools that we have, they go into poor and working communities first." Then they get ported out, if they work, to those of us with higher expectations that we have rights. By then, it may be too late to fight back.

See this movie!


Illustrations: Joy Buolamwini, in Coded Bias.


March 5, 2021

Voter suppression in action

The clowder of legislation to restrict voting access that's popping up across the US is casting the last 20 years of debate over online voting in a new light.

For anyone who, like me, has never spent more than a few minutes casting their vote, the scenes from the 2020 US election were astounding. In response to a photo of a six-*hour* line of waiting voters, someone on Twitter observed, "That is democracy in action." Almost immediately a riposte: "That is voter suppression in action."

I had no idea of the tactics of voter suppression until the 2008 Computers, Freedom, and Privacy conference, when Lillie Coney led a panel on updates to deceptive election practices. Among those Coney and Tova Wang listed were robocalls advising Democrats and Republicans to vote on different days (one the real election day, one not) or saying that the polling location had changed, and letters sent to Latino names threatening deportation if they voted illegally. Crude tactics, but effective, especially among new voters. Coney and Wang imagined these shifting to much better-targeted email and phony websites. It was too soon for anyone to spot the young Facebook as the eventual vector.

By 2020, voter suppression was much more blatant. Republicans planted fake drop boxes in California; Texas selectively closed polling places, especially those in central locations easily accessed by public transport; and everywhere Donald Trump insisted that mail-in ballots meant fraud. Nonetheless, even Fox News admitted that the 2020 election was the most secure in US history and there's no evidence of fraud in any jurisdiction. The ability to audit and recount, not just read a number off an electronic counter, is crucial to being able to say this.

It now appears that this election was just a warm-up. The Brennan Center is currently tracking 253 bills that restrict voting access in 43 states, and 704 bills with provisions to expand it in a different set of 43 states. Sometimes both approaches coexist in the same bill. Outside the scope of legislation, later this year congressional districts will be redrawn based on the 2020 census, another process that can be gamed. At the federal level, Democrats are pushing the passage of H.R.1, the For the People Act, to reform many aspects of the US electoral system including financing, districting, and ethics. One section of the bill provides grants to update voting systems, creates security requirements for private companies that sell voting machines and election equipment, and requires those companies to report cybersecurity incidents. Citizens for Ethics supplies the sources of the ideas enshrined in the act. For even more, see Democracy Docket, whose founder, Marc Elias, has been fighting the legal cases with a remarkable record of success. Ensuring fairness is not specifically about Republicans; historically both parties have gamed the system to hang onto power when they've had the chance.

Ever since 1999, when Bill Clinton asked the National Science Foundation to look into online voting, the stated reasons have always *sounded* reasonable - basically, to increase turnout by improving convenience. In the UK, this argument was taken up by the now-defunct organization Webroots Democracy, which argued that it could improve access for younger people used to doing everything on their phones, and would especially grant better access for groups such as visually impaired people who are not well provided for under the present system. These problems still need to be solved.

The reasons against adopting online voting haven't changed since 2000, when Rebecca Mercuri first outlined the security problems. In the UK very little has changed since 2007, when a couple of pilots led the Electoral Commission to advise against pursuing the idea for sound reasons. Tl;dr: computer scientists prefer pencils.

In 2016, to celebrate its second anniversary, Webroots founder Areeq Chowdhury said national adoption in the UK was achievable by the "next general election", then expected in 2020. He had some reason to believe this; in 2015 then Speaker of the House John Bercow suggested online voting should be used for the 2020 election. But, oh, timing! Chowdhury could have no idea that a month after that Webroots meeting the UK was going to vote (using paper and pencils) to leave the EU. In the resulting change in the political climate, two general elections have passed, in 2017 and 2019, both conducted using pencils and paper. So will May's delayed London mayoral election. The government's 2019 plan to bring in mandatory photographic voter ID by 2023 will diminish, not increase, access.

In the US, only 55.7% of eligible voters participated in the 2016 election, and the turnout for congressional primaries can be as low as 11%. Again, time changed everything: between 2000 and 2016 it seemed as though turnout would go on dropping. Then came 2020. Loving or hating incumbent Donald Trump broke records: 66.3% of eligible voters cast ballots, the highest percentage since 1900. That result bears out what many have said: turnout depends on voters believing that their vote matters.

The aggregate picture suggests that the appeal of online voting may have been to encourage the kinds of voters politicians wanted at a time when it was mostly younger, affluent, and educated people who had smartphones and Internet access. Follow the self-interest.


Illustrations: Officials recount a ballot in the narrow Bush-Gore 2000 election.


Covid's children

I wonder a lot about how the baby downstairs will develop differently because of his September 2020 birth date. In his first five months, the only humans who have been in close contact are his two parents, a smattering of doctors and nurses, and a stray neighbor who occasionally takes him for walks. Walks, I might add, in which he never gets out of his stroller but in which he exhibits real talent for staring contests (though less for intelligent conversation). His grandparents he only knows through video calls. His parents think he's grasped that they're real, though not present, people. But it's hard to be sure.

The effects of the pandemic are likely to be clear a lot sooner for the older children and young people whose lives and education have been disrupted over the past year. This week, as part of the LSE Post-Covid World Festival, Sonia Livingstone (for whose project I wrote some book reviews a few years ago) led a panel to discuss those effects.

Few researchers in the UK - Livingstone, along with Andy Phippen, is one of the exceptions, as is, less formally, filmmaker and House of Lords member Beeban Kidron, whose 2013 film InRealLife explores teens' use of the Internet - ever bother to consult children to find out what their online experiences and concerns really are. Instead, the agenda shaped by politicians and policy makers centers on adults' fears, particularly those that can be parlayed into electoral success. The same people who fret that social media is posing entirely new problems today's adults never encountered as children refuse to find out what those problems look like to the people actually experiencing them. Worse, the focus is narrow: protecting children from pornography, grooming, and radicalization is everywhere, but protecting them from data exploitation is barely discussed. In the UK, as Jen Persson, founder of defenddigitalme, keeps reminding us, collecting children's data is endemic in education.

This was why the panel was interesting: all four speakers are involved in projects aimed at understanding and amplifying children's and young people's own concerns. From that experience, all four - Konstantinos Papachristou, the youth lead for the #CovidUnder19 project, Maya Götz, who researches children, youth, and television, Patricio Cuevas-Parra, who is part of a survey of 10,000 children and young people, and Laurie Day - highlighted similar issues of lack of access and inequality - not just to the Internet but also to vaccines and good information.

In all countries, the shift to remote learning has been abrupt, exposing infrastructure issues that were always urgent, but never quite urgent enough to fix. Götz noted that in some Asian countries and Chile she's seeing older technologies being pressed into service to remedy some of this - technologies like broadcast TV and radio; even in the UK, after the first lockdown showed how many low-income families could not afford sufficient data plans, the BBC began broadcasting curriculum-based programming.

"Going back to normal," Day said, "needs a rethink of what support is needed." Yet for some students the move to online learning has been liberating, lightening social and academic pressures and giving space to think about their values and the opportunity to be creative. We don't hear so much about that; British media focus on depression and loss.

By the time the baby downstairs reaches school age, the pandemic will be over, but its footprint will be all over how his education proceeds.

Persson, who focuses on the state's use of data in education, says that one consequence of the pandemic is that Microsoft and Google have entrenched themselves much more deeply into the UK's education infrastructure.

"With or without covid, schools are dependent on them for their core infrastructure now, and that's through platforms joining up their core personal data about students and staff - email addresses, phone numbers, names, organizational data - and joining all that up," she says. Parents are encouraged to link to their children's accounts, and there is, for the children concerned, effectively, "no privacy". The software, she adds, was really designed for business and incompletely adapted for education. For example, while there are controls schools can use for privacy protection, the defaults, as always, are towards open sharing. In her own children's school, which has 2,000 students, the software was set up so every user could see everyone else's email address.

"It's a huge contrast to [the concern about] online harms, child safety, and the protection mantra that we have to watch everything because the world is so unsafe," she says. Partly, this is also a matter of perception: policy makers tend to focus on "stranger danger" and limiting online content rather than ID theft, privacy, and how all this collected data may be used in the future. The European Digital Rights Initiative (EDRi) highlights the similar thinking behind European Commission proposals to require the platforms to scan private communications as part of combating child sexual abuse online.

All this awaits the baby downstairs. The other day, an 18-month-old girl ran up to him, entranced. Her mother pulled her back before she could touch him or the toys tied to his stroller. For now, he, like other pandemic babies, is surrounded by an invisible barrier. We won't know for several decades what the long-term effect will be.


Illustrations: Sonia Livingstone's LSE panel.
