
Learning events

Federica Giovanella's iPhone kept interrupting her as she spoke. The problem was Siri, which interpreted her pronunciation of "theory" as a call to wake up and work. Siri is real, of course, but it was purely coincidental that the talk it was interrupting, "Product Liability and the Internet of (Stranger) Things", was focused on how law should deal with remote control and the Internet of Things, using a scene from Stranger Things as Exhibit A of the possibilities.

The concatenation was a fine demonstration of the essence of gikii, a small annual conference that mixes law, technology, and pop culture. Founded by Lilian Edwards and Andres Guadamuz, this year marked the 16th iteration of an event that applies law to imaginary futures in order to understand how to manage prospective real ones. (For discussions of previous years, see: 2020, 2019, 2018, 2016, 2014, 2013, and 2008.)

It can be difficult to disentangle truth from fiction at gikii. "Are we serious?" I asked Jon Crowcroft while developing our paper, Leaky by Design.

What should be fiction, but sadly is not, is Jiahong Chen and Lucas Miotto's discussion of real-time profiling, which is particularly creepy in light of this week's revelation of the UK government's use of sensitive data to influence the population's behavior.

Also sadly non-fictional is universities' pandemic-fuelled race to take advantage of the rise of remote learning to demand that instructors record every "learning event", as Guido Noto La Diega explained in Death Becomes HE. Obviously this is not the first time universities have demanded ownership of academic staff's intellectual property, but it's an unprecedented expansion. Noto La Diega noted that in a short time universities have shifted from opt-in (I will choose to record this lecture) to opt-out (I must specifically turn off recording for this lecture). At one prominent UK university, a commenter noted, although the union agreement specifies that staff can't be required to record lectures, the administration nonetheless makes failing to record them a disciplinary offense. The result, he said, is "strangling academic freedom".

Also non-fictional was Miranda Mowbray's study of the algorithm used to substitute for 2020 secondary school exam results; the results were so controversial they led to street protests. Mowbray finds that a key flash point rested on a misconception: the algorithm was not more class-biased than teachers' predicted grades. What is needed for future such systems is many of the usual things: early consultation with stakeholders, explicit fairness requirements that can be communicated to the public, personalized explanations, and so on.

However, the advantage of looking at law through fiction is that you don't have to wait for real cases to do your analysis. Helpfully, there is a lot of AI fiction against which to examine the EU's publication, in April, of draft AI regulations. Lilian Edwards and Hannah Smethurst applied the EU's proposed risk-based analysis to regulating magic, using this year's series WandaVision as a source of example outcomes. Reuben Binns demonstrated how to audit an algorithm for weaknesses by interrogating it in character as Columbo. Marion Oswald's short story, based on GK Chesterton's Father Brown story The Mistake of the Machine, saw Father Brown (human psychology) and Sherlock Holmes (data from external observation) face off over a bit of domestic carnage. As in Chesterton's original, the crucial element was understanding humans.

And yet, AI itself is so often fiction, in reality the composition of hidden numbers of human-performed microtasks, as Mary L. Gray and Siddharth Suri have shown. This is, Vanessa Hanschke and Yasmin Hanschke concluded via a hand-drawn animation showing the lives of "clickworkers", very much traditional exploitation and "not the AI labor revolution we were promised".

The fictional version of Adrian Aronsson-Storrier, who with Will Page produced this year's Best Paper, might agree. In a mock video seeking seed funding for this "disruptive industry player" in the "DeepFake sector", Page and Aronsson-Storrier explain their company, SyntheticLearn, a DeepFake service that underpays PhD students to prepare, write, and record customized lectures that are then delivered by digitally faked senior professors. Hence the company's slogan: "Fake it, don't make it." But good news: SyntheticLearn has signed up to no fewer than 20 AI and machine learning ethics pledges.

"Our DeepFake activities are entirely lawful," Aronsson-Storrier reassures prospective investors.

Oy. (Really: watch the video.)

If that weren't bad enough, "all our dystopias are converging," said Paul Bernal in a race through movie images from several decades: climate change, technological nightmares, the rise of extremism, racism, and authoritarianism, and disease and disaster are all crashing into one another. Proposed solutions often make each dystopia worse, particularly for privacy, because all of them lead to proposals that tracking people through apps will help.

And yet, despite all these dystopian narratives - I haven't yet mentioned Hjalte Osborn Frandsen fretting about space junk, for example, a potential future that hangs on my wall in the form of a 1984 lithograph by Edinburgh artist Peter Standen - overall this year's gikii seemed surprisingly not-pessimistic. Bernal's crashing dystopias may have something to do with it: anything that suggests there *is* a future, however difficult to navigate, seems optimistic.

Illustrations: A screenshot from Will Page's and Adrian Aronsson-Storrier's paper, SyntheticLearn (page down for the video).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

