
Autofail

A new complaint surfaced on Twitter this week. Anthony Ryan may have captured it best: "In San Francisco everyone is trying unsuccessfully to capture the hellish pall that we're waking up to this morning but our phone cameras desperately want everything to be normal." In other words: as in these pictures, the wildfires have turned the Bay Area sky dark orange ("like dusk on Mars," says one friend), but people attempting to capture it on their phone cameras are finding that the automated white balance correction algorithms recalibrate the color to wash out the orange in favor of grey daylight.
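For the curious, here is a minimal sketch of the "gray-world" heuristic, one of the simplest forms of automatic white balance; real phone camera pipelines are proprietary and far more elaborate, so this is an illustration of the principle, not what any particular phone does. Its built-in assumption - that a scene ought to average out to neutral grey - is exactly what pulls the orange out of a smoke-filled sky. The file names are placeholders.

# Simplified "gray-world" automatic white balance.
# Scales each colour channel so its mean matches the overall mean,
# which drags an overall orange cast back toward grey daylight.
import numpy as np
from PIL import Image

def gray_world_balance(path: str) -> Image.Image:
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64)
    # Per-channel means: an orange sky has a strong red mean, weak blue mean.
    channel_means = img.reshape(-1, 3).mean(axis=0)
    # Gain for each channel to make all three means equal.
    gains = channel_means.mean() / channel_means
    balanced = np.clip(img * gains, 0, 255).astype(np.uint8)
    return Image.fromarray(balanced)

# gray_world_balance("orange_sky.jpg").save("washed_out_sky.jpg")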

At least that's something the computer is actually doing, even if it's counter-productive. Also this week, the Guardian ran an editorial that it boasted had been "entirely" written by OpenAI's language generator, GPT-3. Here's what they mean by "written" and "entirely": the AI was given a word length, a theme, and the introduction, from which it produced eight unique essays, which the Guardian editors chopped up and pieced together into a single essay, which they then edited in the usual way, cutting lines and rearranging paragraphs as they saw fit. Trust me, human writers don't get to submit eight versions of anything; we'd be fired when the first one failed. But even if we did, editing, as any professional writer will tell you, is the most important part of writing anything. As I commented on Twitter, the whole thing sounds like a celebrity airily claiming she's written her new book herself, with "just some help with the organizing". I'd advise that celebrity (name withheld) to have a fire extinguisher ready for when her ghostwriter reads that and thinks of all the weeks they spent desperately rearranging giant piles of rambling tape transcripts into a (hopefully) compelling story.
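For illustration only, the request behind such an exercise might look roughly like this, using the openai Python client as it existed in 2020. The Guardian described its prompt only in general terms, so the prompt wording, engine choice, word budget, and temperature below are assumptions rather than its actual setup.

# Sketch of asking GPT-3 for multiple drafts of the same op-ed
# (pre-2023 openai Python client). All specifics are illustrative.
import openai

openai.api_key = "sk-..."  # placeholder

PROMPT = (
    "Please write a short op-ed of around 500 words. Keep the language "
    "simple and concise. Focus on why humans have nothing to fear from AI.\n\n"
    "I am not a human. I am Artificial Intelligence..."
)

# n=8 requests eight independent completions from the same prompt -- the
# raw material the human editors then cut up and recombined.
response = openai.Completion.create(
    engine="davinci",
    prompt=PROMPT,
    max_tokens=700,
    n=8,
    temperature=0.9,
)

drafts = [choice.text for choice in response.choices]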

The Twitter discussion of this little foray into "AI" briefly touched on copyright. It seems to me hard to argue that the AI is the author given the editors' recombination of its eight separately-generated pieces (which likely took longer than if one of them had simply written the piece). Perhaps you could say - if you're willing to overlook the humans who created, coded, and trained the AI - that the AI is the author of the eight pieces that became raw material for the essay. As things are, however, it seems clear that the Guardian is the copyright owner, just as it would be if the piece had been wholly staff-written (by humans).

Meanwhile, the fallout from Max Schrems' latest win continues to develop. The Irish Data Protection Authority has already issued a preliminary order to suspend data transfers to the US; Facebook is appealing. The Swiss data protection authority has issued a notice that the Swiss-US Privacy Shield is also void. During a September 3 hearing before the European Parliament Committee on Civil Liberties, Justice, and Home Affairs, MEP Sophie in 't Veld said that by bringing the issue to the courts Schrems is doing the job data protection authorities should be doing themselves. All agreed that a workable - but this time "Schrems-proof" - solution must be found to the fundamental problem, which Gwendoline Delbos-Corfield summed up as "how to make trade with a country that has decided to put mass surveillance as a rule in part of its business world". In 't Veld appeared to sum up the entire group's feelings when she said, "There must be no Schrems III."

Of course we all knew that the UK was going to get caught in the middle between the EU, trade with which requires a compatible data protection regime (either the continuation of the EU's GDPR or a regime that is ruled adequate), and the US, which wants data to be free-flowing and which has been trying to use trade agreements to undermine the spread of data protection laws around the world (latest newcomer: Brazil). What I hadn't quite focused on (although it's been known for a while) is that, just like the US surveillance system, the UK's own surveillance regime could disqualify it from the adequacy ruling it needs to allow data to go on flowing. When the UK was an EU member state, this didn't arise as an issue because EU data protection law permits member states to claim exceptions for national security. Now that the UK is out, that exception no longer applies. It was a perk of being in the club.

Finally, the US Senate, not content with blocking literally hundreds of bills passed by the House of Representatives over the last few years, has followed up July's antitrust hearings with the GAFA CEOs with a bill that's apparently intended to answer Republican complaints that conservative voices are being silenced on social media. This is, as Eric Goldman points out in disgust, one of several dozen bits of legislation intended to modify various pieces of S230 or scrap it altogether. On Twitter, Tarleton Gillespie analyzes the silliness of this latest entrant into the fray. While modifying S230 is probably not the way to go about it, right now curbing online misinformation seems like a necessary move - especially since Facebook CEO Mark Zuckerberg has stated outright that Facebook won't remove anti-vaccine posts. Even in a pandemic.


Illustrations: The San Francisco sky on Wednesday ("full sun, no clouds, only smoke"), by Edward Hasbrouck; accurate color comparison from the San Francisco Fire Department.


Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.
