" /> net.wars: September 2020 Archives


September 25, 2020

The zero on the phone

Among the minor casualties of the pandemic has been the appearance of a Swiss prototype robot at this year's We Robot, the ninth year of this unique conference that crosses engineering, technology policy, and law to identify future conflicts and pre-emptively suggest solutions. The result was to leave the robots considered by this virtual We Robot remarkably (appropriately) abstract.

We Robot was founded to get a jump on the conflicts that robots will bring to law and policy, in part so that we don't repeat the Internet experience of rehashing the same arguments for decades on end. This year's event invoked the Internet experience in a new way: many authors drew on the failed optimism and cooperation of the 1990s to begin defining ways to ensure that robotics and AI do not follow the same path. Where at the beginning we were all eager to embrace robots, this year their disembodied AIs are things being done *to* us.

In the one slight exception to this rule, Hallie Siegel's exploration of senior citizens' attitudes toward new technologies found that the seniors she studies are pragmatic: concerned about their privacy and autonomy, and interested only in technologies that provide benefits they genuinely need.

Jason Millar and Elizabeth Gray drew directly on the Internet experience by comparing network neutrality to the issues surrounding the mapping software that controls turn-by-turn navigation systems in a discussion of "mobility shaping". Should navigation services be common carriers, as telephone lines are? The idea appeals to me, if only because the potential for physical control of where our vehicles are allowed to go seems so clear.

The theme of exploitation was particularly visible in the two papers on Africa. In the first, Arthur Gwagwa (Strathmore University, Nairobi), Erika Kraemer-Mbula, Nagla Rizk, Isaac Rutenberg, and Jeremy de Beer warn that the combination of foreign capital and local resources is likely to reproduce the power structures of previous forms of colonialism, an argument also seen recently in a paper by Abeba Birhane. Women in particular, who run the majority of start-ups in some African countries, may be ignored, and the authors suggest that a GDPR-like rule awarding individuals control over their own data could be crucial in creating value for, rather than extracting it from, Africa.

In the second, Laura Foster (Indiana University), Bram Van Wiele, and Tobias Schönwetter extracted a database of press stories about AI in Africa from LexisNexis, and found the familiar set of claims for new technology: happy, value-neutral disruption, yay! The failure of most of these articles to consider gender and race, they observed, doesn't make the emerging picture neutral, but serves to reinforce the default of the straight, white male.

One way we push back against AI/robot control is the "human in the loop" to whom the final decision is delegated. This human has featured in every We Robot conference, most notably in 2016 as Madeleine Elish's moral crumple zone. In his paper, Liam McCoy argues for the importance of meaningful control, because the middle ground, where the human is expected to solve the most complex situations in which the AI fails, without support or authority, is truly dangerous. The middle ground may be profitable; at UK IGF a few weeks ago, Gus Hosein noted that automating dispute resolution is part of what's made GAFA rich. But in the higher stakes of cyber-physical systems, the human you summon by pushing zero has to be able to make a difference.

Silvia de Conca's idea of "human-centered legal design", which sought to give autonomous agents a duty of care as a way of filling the gap in liability that presently exists, and Cynthia Khoo's interest in vulnerable communities harmed by behavior that emerges from the combination of business models, platform scale, human nature, and algorithm design, presented different methods of putting a human in the loop. Often, Khoo has found in investigating this idea, the potential harm was in fact known and simply ignored; how much can and should be foreseen when system parts interact in unexpected ways is a rising issue.

Several papers explored previously unnoticed vectors for bias and control. Sentiment analysis, last seen being called "the snake oil of 2011", and its successor, emotion analysis, which I first saw explored in the 1990s by Rosalind Picard at MIT, are creeping into AI systems. Some applications are particularly dubious: aggression detection systems and emotion recognition cameras.

Emily McBain-Ashfield and Jason Millar are the first I'm aware of to study how stereotyping gets into these systems. Yes, it's in the data - but the problem lies in the process of analyzing and tagging it. The authors identify three methods of doing this: manual (human, slow), dictionary-based using seed words (automated), and crowdsourced (see also Mary L. Gray and Siddharth Suri's 2019 book, Ghost Work). All have problems: automated tagging makes notoriously crude mistakes, and the participants in crowdsourcing may come from very different linguistic and cultural contexts.
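
To see why the automated route is so crude, here is a minimal sketch of the dictionary/seed-word approach - the lexicon and examples are hypothetical, not taken from the paper:

```python
# A minimal sketch of dictionary-based sentiment tagging with seed words.
# The lexicon here is hypothetical; real systems bootstrap much larger
# dictionaries from a handful of seeds via corpus statistics.

SEED_LEXICON = {
    "happy": 1.0, "great": 1.0, "love": 1.0,
    "sad": -1.0, "terrible": -1.0, "hate": -1.0,
}

def score(text: str) -> float:
    """Sum the scores of any seed words found in the text."""
    return sum(SEED_LEXICON.get(tok, 0.0) for tok in text.lower().split())

# The notoriously crude mistakes fall out immediately:
print(score("I am not happy about this"))  # 1.0 - negation is invisible
print(score("oh great another meeting"))   # 1.0 - sarcasm is invisible too
```

Negation and sarcasm invert meaning without changing the words, which is exactly what a word-level dictionary cannot see; crowdsourced tagging avoids that trap but imports its workers' own linguistic and cultural assumptions instead.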

The discussant for this paper, Osonde Osaba, sounded appalled: "By having these AI models of emotion out in the wild in commercial products we are essentially sanctioning the unregulated experimentation on humans and their emotional processes without oversight or control."

Remedies have to contend, however, with the legacy infrastructure. Alice Xiang discovered a conflict between traditional anti-discrimination law, which bars decision-making based on a set of protected classes, and the technical methods of mitigating algorithmic bias. "If we're not careful," she said, "the vast majority of approaches proposed in machine learning literature might actually be illegal if they are ever tested in court."
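
The conflict is concrete: many mitigation techniques must read the protected attribute in order to neutralize it. Here is a sketch of one standard approach, reweighing training examples (Kamiran and Calders, 2012) - the data and column names are hypothetical, and this is an illustration of the general technique, not of Xiang's own examples:

```python
# Reweighing: weight each (group, label) cell so the favorable outcome
# becomes statistically independent of the protected attribute. Note
# that the computation conditions directly on the protected attribute -
# the very act traditional anti-discrimination law treats as suspect.
import pandas as pd

df = pd.DataFrame({
    "group": ["a", "a", "a", "b", "b", "b"],  # protected class (hypothetical)
    "label": [1, 1, 0, 1, 0, 0],              # 1 = favorable outcome
})

p_group = df["group"].value_counts(normalize=True)
p_label = df["label"].value_counts(normalize=True)
p_joint = df.groupby(["group", "label"]).size() / len(df)

# w(g, y) = P(g) * P(y) / P(g, y): weights > 1 boost under-represented cells.
df["weight"] = [
    p_group[g] * p_label[y] / p_joint[(g, y)]
    for g, y in zip(df["group"], df["label"])
]
# These weights would be passed to a model via fit(..., sample_weight=...),
# so training explicitly depends on the protected attribute.
```

Whether a court would read that dependence as forbidden disparate treatment or as permissible remediation is exactly the question Xiang raises.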

We Robot 2020 was the first to be held outside the US, and chairs Florian Martin-Bariteau, Jason Millar, and Katie Szilagyi set out to widen its international character and diversity. When the pandemic hit, the resulting exceptional geographic spread of authors and discussants made it infeasible to ask everyone to pretend they were in Ottawa's time zone. The conference therefore recorded the authors' and discussants' conversations as if live - which means that you, too, can experience the originals. Just follow the links. We Robot events not already linked here: 2013; 2015; 2016 workshop; 2017; 2018 workshop and conference; 2019 workshop and conference.


Illustrations: Our robot avatars attend the conference for us on the We Robot 2020 poster.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 18, 2020

Systems thinking

There's a TV ad currently running on MSNBC that touts the services of a company that makes custom T-shirts to help campaigns raise funds for causes such as climate change.

Pause. It takes 2,700 liters of water to make a cotton T-shirt - water that, the Virtual Water project would argue, is virtually exported from cotton-growing nations to those earnest climate change activists. Plus other environmental damage relating to cotton; see also the recent paper tracking the pollution impact of denim microfibers. So the person buying the T-shirt may be doing a good thing on the local level by supporting climate change activism while simultaneously exacerbating the climate change they're trying to oppose.

The same sort of issue arose this week at the UK Internet Governance Forum with respect to what the MP and engineer Chi Onwurah (Labour-Newcastle upon Tyne Central) elegantly called "data chaos" - that is, the confusing array of choices and manipulations we're living in. Modern technology design has done a very good job of isolating each of us into a tiny silo, in which we attempt to make the best decisions for ourselves and our data without any real understanding of the impact on wider society.

UCL researcher Michael Veale expanded on this idea: "We have amazing privacy technologies, but what we want to control is the use of technologies to program and change entire populations." Veale was participating in a panel on building a "digital identity layer" - that is, a digital identity infrastructure to enable securely authenticated interactions on the Internet. If we focus on confidentiality alone, we miss the danger we're creating in allowing an entire country to rely on intermediaries whose interests are not ours but whose actions could - for example - cause huge populations to self-isolate during a pandemic. It is incredibly hard just to get a half-dozen club tennis players to move from WhatsApp to something independent of Facebook. At the population level, lock-in is far worse.

Third and most telling example. Last weekend, at the 52nd annual conference of the Cybernetics Society, Kate Cooper, from the Birmingham Food Council, made a similar point when, after her really quite scary talk, she was asked whether we could help improve food security if those of us who have space started growing vegetables in our gardens. The short answer: no. "It's subsistence farming," she said, going on to add that although growing your own food helps you understand your own relationship with food and where it comes from and can be very satisfying to do, it does nothing at all to help you gain a greater understanding of the food system and the challenges of keeping it secure. This is - or could be - another of Yes, Minister's irregular verbs: I choose not to eat potato chips; you very occasionally eat responsibly-sourced organic potato chips; potato chips account for 6% of Britain's annual crop of potatoes. This was Cooper's question: is that a good use of the land, water, and other resources? Growing potatoes in your front garden will not lead you to this question.

Cybernetics was new to me two years ago, when I was invited to speak at the 50th anniversary conference. I had a vague idea it had something to do with Isaac Asimov's robots. In its definition, Wikipedia cites the MIT scientist Norbert Wiener in 1948: "the scientific study of control and communication in the animal and the machine". So it *could* be a robot. Trust Asimov.

Attending the 2018 event, followed by this year's, which was shared with the American Society for Cybernetics, revealed cybernetics as a slippery transdiscipline. The joint 2020 event veered from a case study of IBM to choreography, taking in subjects like the NHS Digital Academy, design, family therapy, social change, and the climate emergency along the way. Cooper, who seemed as uncertain as I was two years ago whether her work really had anything to do with cybernetics, fit right in.

The experience has led me to think of cybernetics as a little like Bayes' Theorem as portrayed in Sharon Bertsch McGrayne's book The Theory That Would Not Die. As she tells the story, for two and a half centuries after its invention, select mathematicians kept the idea alive but rarely dared to endorse it publicly - and today it's everywhere. The cybernetics community feels like this, too: a group nurturing an overlooked field, poorly understood by the wider world but essential, waiting for the rest of us to understand its power.

For a newcomer, getting oriented is hard; some of the discussion seems abstract enough to belong in a philosophy department. Other aspects - such as Ray Ison's description of his new book, The Hidden Power of Systems Thinking - smack of self-help, especially when he describes it: "The contention of the book is that systems thinking in practice provides the means to understand and fundamentally alter the systems governing our lives."

At this stage, however, with the rolling waves of crises hitting our societies (which Ison helpfully summed up in an apt cartoon), if this is cybernetics, it sounds like exactly what we need. "Why," asked the artist Vanilla Beer, whose father was the cybernetics pioneer Stafford Beer, "is something so useful unused?" Beats me.


Illustrations: Chi Onwurah (official portrait, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 11, 2020

Autofail

A new complaint surfaced on Twitter this week. Anthony Ryan may have captured it best: "In San Francisco everyone is trying unsuccessfully to capture the hellish pall that we're waking up to this morning but our phone cameras desperately want everything to be normal." In other words: as in these pictures, the wildfires have turned the Bay Area sky dark orange ("like dusk on Mars," says one friend), but people attempting to capture it on their phone cameras are finding that the automated white balance correction algorithms recalibrate the color to wash out the orange in favor of grey daylight.
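
The simplest member of that family of algorithms is the "gray world" correction, which assumes the average color of a scene should be neutral. The sketch below is illustrative only - phone pipelines are proprietary and far more sophisticated - but it reproduces the complaint exactly:

```python
# A gray-world auto white balance sketch: assume the average scene
# color should be neutral grey and scale each channel accordingly.
import numpy as np

def gray_world_balance(img: np.ndarray) -> np.ndarray:
    """img: H x W x 3 RGB array of floats in [0, 1]."""
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means  # per-channel correction
    return np.clip(img * gains, 0.0, 1.0)

# A frame uniformly tinted dark orange by wildfire smoke...
orange_sky = np.full((4, 4, 3), [0.8, 0.4, 0.1])
# ...is "corrected" to neutral grey: the orange is washed out.
print(gray_world_balance(orange_sky)[0, 0])  # ~[0.43, 0.43, 0.43]
```

To an algorithm like this, a sky that genuinely is orange looks identical to an ordinary scene shot under orange light, so it cancels the cast either way.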

At least that's something the computer is actually doing, even if it's counter-productive. Also this week, the Guardian ran an editorial that it boasted had been "entirely" written by OpenAI's language generator, GPT-3. Here's what they mean by "written" and "entirely": the AI was given a word length, a theme, and the introduction, from which it produced eight unique essays, which the Guardian editors chopped up and pieced together into a single essay, which they then edited in the usual way, cutting lines and rearranging paragraphs as they saw fit. Trust me, human writers don't get to submit eight versions of anything; we'd be fired when the first one failed. But even if we did, editing, as any professional writer will tell you, is the most important part of writing anything. As I commented on Twitter, the whole thing sounds like a celebrity airily claiming she's written her new book herself, with "just some help with the organizing". I'd advise that celebrity (name withheld) to have a fire extinguisher ready for when her ghostwriter reads that and thinks of all the weeks they spent desperately rearranging giant piles of rambling tape transcripts into a (hopefully) compelling story.

The Twitter discussion of this little foray into "AI" briefly touched on copyright. It seems to me hard to argue that the AI is the author given the editors' recombination of its eight separately-generated pieces (which likely took longer than if one of them had simply written the piece). Perhaps you could say - if you're willing to overlook the humans who created, coded, and trained the AI - that the AI is the author of the eight pieces that became raw material for the essay. As things are, however, it seems clear that the Guardian is the copyright owner, just as it would be if the piece had been wholly staff-written (by humans).

Meanwhile, the fallout from Max Schrems' latest win continues to develop. The Irish Data Protection Authority has already issued a preliminary order to suspend data transfers to the US; Facebook is appealing. The Swiss data protection authority has issued a notice that the Swiss-US Privacy Shield is also void. During a September 3 hearing before the European Parliament Committee on Civil Liberties, Justice, and Home Affairs, MEP Sophie in't Veld said that by bringing the issue to the courts Schrems is doing the job data protection authorities should be doing themselves. All agreed that a workable - but this time "Schrems-proof" - solution must be found to the fundamental problem, which Gwendolyn Delbos-Corfield summed up as "how to make trade with a country that has decided to put mass surveillance as a rule in part of its business world". In't Veld appeared to sum up the entire group's feelings when she said, "There must be no Schrems III."

Of course we all knew that the UK was going to get caught in the middle between being able to trade with the EU, which requires a compatible data protection regime (either the continuation of the EU's GDPR or a regime ruled adequate), and the US, which wants data to be free-flowing and which has been trying to use trade agreements to undermine the spread of data protection laws around the world (latest newcomer: Brazil). What I hadn't quite focused on (although it's been known for a while) is that, just like the US surveillance system, the UK's own surveillance regime could disqualify it from the adequacy ruling it needs to allow data to go on flowing. When the UK was an EU member state, this didn't arise as an issue because EU data protection law permits member states to claim exceptions for national security. Now that the UK is out, that exception no longer applies. It was a perk of being in the club.

Finally, the US Senate, not content with blocking literally hundreds of bills passed by the House of Representatives over the last few years, has followed up July's antitrust hearings with the GAFA CEOs with a bill that's apparently intended to answer Republican complaints that conservative voices are being silenced on social media. This is, as Eric Goldman points out in disgust, one of several dozen bits of legislation intended to modify various pieces of S230 or scrap it altogether. On Twitter, Tarleton Gillespie analyzes the silliness of this latest entrant into the fray. While modifying S230 is probably not the way to go about it, right now curbing online misinformation seems like a necessary move - especially since Facebook CEO Mark Zuckerberg has stated outright that Facebook won't remove anti-vaccine posts. Even in a pandemic.


Illustrations: The San Francisco sky on Wednesday ("full sun, no clouds, only smoke"), by Edward Hasbrouck; accurate color comparison from the San Francisco Fire Department.


Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 4, 2020

The Internet as we know it

It's another of those moments when people to whom the Internet is still a distinctive and beloved medium fret that it's about to be violently changed into everything they were glad it wasn't when it began. That this group is a minority is in itself a sign. Circa 1994, almost every Internet user was its defender. Today, for most people, the Internet just *is* and ever has been - until someone comes along and wants to delete their favorite service.

Fears of splintering the Internet are as old as the network itself. Different people have focused on different mechanisms: TV- and radio-style corporate takeover (see for example Robert McChesney's work); incompatible censorship and data protection regimes; technical incompatibilities born of corporate overreach; and so on. In 2013, five seemed significant: copyright; localizing data storage (data protection); censorship; losing network neutrality; and splitting the addressing system.

Then, the biggest threats appeared to be structural censorship and losing network neutrality. Both are still growing. In 2019, Access Now says, 213 Internet shutdowns in 33 countries collectively disrupted 1,706 days of Internet access. No one imagined this in the 1990s, when all countries vied to reap the benefits of getting their citizens online. More conceivable were government regulation, shifting technological standards, corporate ownership, copyright laws, and unequal access...but we never expected the impact of the eventual convergence with the mobile world, a clash of cultures that got serious after 2010, when social media and smartphones began supercharging each other.

A couple of weeks ago, James Ball introduced a new threat, writing disapprovingly about US president Donald Trump's executive order declaring the video-sharing app TikTok a national emergency. Ball rightly calls this ban "generational vandalism", but then writes that banning an app solely because of the nationality of its owner "could be an existential threat to the Internet as we know it".

If that's true, then the Internet is already not "the Internet as we know it". So much depends on when your ideas of "the Internet" were formed and where you live. As Ball himself acknowledges in his new book, The System: Who Owns the Internet and How It Owns Us, in some countries Facebook is synonymous with the Internet because of the zero-rating deals the company has struck with mobile phone operators. In China, "the Internet", contrary to what most people believed was possible in the 1990s, is a giant, firewalled, nationally controlled space. TikTok, being primarily a mobile phone app, lives in the highly curated "the Internet" of app stores. Finally, even though "the Internet" in the 1990s sense is still with us, in that people can still build their new ideas, most people's "the Internet" is now confined to the same few sites that exercise extraordinary control over what is read, seen, and heard.

The Australian Competition and Consumer Commission's new draft News Media Bargaining Code provides an example. It requires Google and Facebook (and, eventually, others) to negotiate in good faith to pay news media companies for use of their content when users share links and snippets. Unlike with Spain's previous similar attempt, Google can't escape by shutting down its news service, because it also serves up news through its search engine and YouTube. Facebook has said it will block Australian users from sharing local or international news on Facebook and Instagram if the code becomes mandatory. But, as Alex Hern writes, the problem is that "One of the big ways that Facebook and Google have been bad for the news industry has been by becoming indispensable to the news industry". Australia can push this code into force, but when it does Google won't pay publishers *and* publishers will lose most of their traffic, exactly as happened in Spain and Germany. But misinformation will flourish.

This is still an upper network layer problem, albeit simplified by corporate monopoly. On the 1995-2010 web, there would have been too many site owners to contend with; banning apps (see also India) is vastly simplified by needing to negotiate with just two app store owners. Censoring the open Internet required China to build a national firewall and hire maintainers while millions of new sites and services arrived every day. When they started, no one believed it could even be done.

The mobile world is not and never has been "the Internet as we know it", built to facilitate openness for scientists. Telephone companies have always been happiest with controlled systems and walled gardens, and before 2006, manufacturers like Nokia, Motorola, and Psion had to tailor their offerings to telco specifications. The iPhone didn't just change the design and capabilities of the slab in your hand; it also changed the makeup and power structures of the industry as profoundly as the PC had changed computing before it.

But these are still upper layers. Far more alarming, as Milton Mueller writes at the Internet Governance Project, is Trump's policy of excluding Chinese businesses from Internet infrastructure - and China's ideas for "new IP". This is a crucial threat to the interoperable bedrock of "the network of all networks". As the Internet Society explains, it is that cooperative architecture "with no central authority" that made the Internet so successful. This is the first principle that built the Internet as we know it.


Illustrations: Map of the Internet circa 2005 (via The Opte Project at Wikimedia Commons).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.