" /> net.wars: January 2023 Archives


January 20, 2023

New music

The news this week that "AI" "wrote" a song "in the style of" Nick Cave (who was scathing about the results) seemed to me about on a par with the news in the 1970s that the self-proclaimed medium Rosemary Brown was able to take dictation of "new works" by long-dead famous composers. In that: neither approach seems likely to break new artistic ground.

In Brown's case, musicologists, psychologists, and skeptics generally converged on the belief that she was channeling only her own subconscious. AI doesn't *have* a subconscious...but it does have historical inputs, just as Brown did. You can say "AI" wrote a set of "song lyrics" if you want, but that "AI" is humans all the way down: people devised the algorithms and wrote the computer code, created the historical archive of songs on which the "AI" was trained, and crafted the prompt that guided the "AI"'s text generation. But "the machine did it by itself" is a better headline.

Meanwhile...

Forty-two years after the first one, I have been recording a new CD (more details later). In the traditional folk world, which is all I know, getting good recordings is typically more about being practiced enough to play accurately while getting the emotional performance you want. It's also generally about very small budgets. And therefore, not coincidentally, a whole lot less about sound effects and multiple overdubs.

These particular 42 years are a long time in recording technology. In 1980, if you wanted to fix a mistake in the best performance you had by editing it in from a different take where the error didn't appear, you had to do it with actual reels of tape, an edit block, a razor blade, splicing tape...and it was generally quicker to rerecord unless the musician had died in the interim. Here in digital 2023, the studio engineer notes the time codes, slices off a bit of sound file, and drops it in. Result! Also: even for traditional folk music, post-production editing has a much bigger role.

Autotune, which has turned many a wavering tone into perfect pitch, was invented in 1997. The first time I heard about it - it alters the pitch of a note without altering the playback speed! - it sounded indistinguishable from magic. How was this possible? It sounded like artificial intelligence - but wasn't.
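The magic is easier to appreciate once you see what the *obvious* approach does. Playing a recording back faster - the only trick analog tape offered - raises pitch and shortens duration in lockstep; Auto-Tune-class processors (phase vocoders and their relatives) exist to break that coupling. Here is a minimal sketch of the naive, coupled version; the function name and parameters are mine, purely for illustration:

```python
# Why pitch-without-speed-change seemed like magic: the obvious trick,
# resampling (playing the "tape" faster), raises pitch and shortens the
# clip together. This sketch shows only that naive coupling.
import numpy as np

def resample(signal, factor):
    """Play back `factor`x faster: pitch rises by `factor`, length shrinks."""
    positions = np.arange(0, len(signal) - 1, factor)
    return np.interp(positions, np.arange(len(signal)), signal)

rate = 8000
t = np.arange(rate) / rate                  # one second of audio
tone = np.sin(2 * np.pi * 440 * t)          # concert A
sped_up = resample(tone, 2.0)               # an octave up (880 Hz)...

# ...but the duration halved along with it - the coupling a phase
# vocoder exists to break.
print(len(tone), len(sped_up))              # 8000 4000
```

A phase vocoder avoids this by time-stretching in the frequency domain first, then resampling, so duration is preserved while pitch moves.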

The big, new thing now, however, *is* "AI" (or what currently passes for it), and it's got nothing to do with outputting phrases. Instead, it's stem splitting - that is, the ability to take a music file that includes multiple instruments and/or voices, and separate out each one so each can be edited separately.

Traditionally, the way you do this sort of thing is to record each instrument and vocal separately, either laying them down one at a time or enclosing each musician/singer in their own soundproof booth, where they can play together by listening to each other over headphones. For musicians who are used to singing and playing at the same time in live performance, it can be difficult to record separate tracks. But in recording them together, vocal and instrumental tracks tend to bleed into each other - especially when the instrument is something like an autoharp, whose soundboard sits very close to the singer's mouth. Bleed means you can't fix a small vocal or instrumental error without messing up the other track.

With stem splitting, now you can. You run your music file through one of the many services that have sprung up, and suddenly you have two separated tracks to work with. It's being described to me as a "game changer" for recording. Again: sounds indistinguishable from magic.

This explanation makes it sound less glamorous. Vocals and instruments whose frequencies don't overlap can be split out using masking techniques. Where there is overlap, splitting relies on a model that has been trained on human-split tracks and that improves with further training. Still a black box, but now one that sounds like so many other applications of machine learning. Nonetheless, heard in action it's startling: I tried LALAL_AI on a couple of tracks, and the separation seemed perfect.
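The masking idea is easy to sketch. When two sources sit in mostly separate frequency bands, a binary mask over the spectrum pulls them apart; commercial stem splitters replace the fixed cutoff below with a soft mask predicted by a model trained on human-separated stems. A toy illustration (the function and the 1 kHz cutoff are hypothetical, not how LALAL_AI actually works):

```python
# Toy illustration of the "masking" idea behind stem splitting: when two
# sources occupy mostly separate frequency bands, a binary mask on the
# spectrum can separate them. Real stem splitters learn soft masks with
# neural networks trained on isolated stems; this fixed cutoff is a
# stand-in for that learned model.
import numpy as np

def split_by_frequency(signal, sample_rate, cutoff_hz=1000.0):
    """Split a mono signal into a low band and a high band via FFT masking."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    low_mask = freqs < cutoff_hz
    low = np.fft.irfft(spectrum * low_mask, n=len(signal))
    high = np.fft.irfft(spectrum * ~low_mask, n=len(signal))
    return low, high

# Two "stems" in disjoint bands: a 220 Hz hum and a 3 kHz whistle.
rate = 16000
t = np.arange(rate) / rate
mix = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 3000 * t)
low, high = split_by_frequency(mix, rate)
```

Real voices and instruments overlap in frequency, which is exactly where the trained model, rather than a simple mask, has to do the work.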

There are some obvious early applications of this. As the explanation linked above notes, stem splitting enables much finer sampling and remixing. A singer whose voice is failing - or who is unavailable - could nonetheless issue new recordings by laying their old vocal over a new instrumental track. And vice-versa: when, in 2002, Paul Justman wanted to recreate the Funk Brothers' hit-making session work for Standing in the Shadows of Motown, he had to rerecord from scratch to add new singers. Doing that had the benefit of highlighting those musicians' ability and getting them royalties - but it also meant finding replacements for the ones who had died in the intervening decades.

I'm far more impressed by the potential of this AI development than of any chatbot that can put words in a row so they look like lyrics. This is a real thing with real results that will open up a world of new musical possibilities. By contrast, "AI"-written song lyrics rely on humans' ability to conceive meaning where none exists. It's humans all the way up.


Illustrations: Nick Cave in 2013 (by Amelia Troubridge, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 13, 2023

The ad delusion

The fundamental lie underlying the advertising industry is that people can be made to like ads. People inside the industry sometimes believe this to a delusional degree - at an event some years ago, for example, I remember a Facebook representative suggesting that correctly targeted ads could be even more compelling to the site's users than *pictures of their grandchildren*. As if.

Apple's design change last year to bar apps from tracking its users unless said users specifically opted in has shown the reality of this. As of April 2022, only 25% have opted in. Meanwhile, Meta estimates that this decision cost it $10 billion in revenues in 2022.

Fair to remember, though, that Apple itself still appears to track users, and the company is facing two class action suits after Gizmodo showed that Apple goes on tracking users even when their privacy settings are set to disable tracking completely.

This week, Ireland's Data Protection Commissioner issued Meta with a fine of €390 million and a ruling, forced on it by the European Data Protection Board, to the effect that the company cannot claim that requiring users to agree to its lengthy terms and conditions and including a clause allowing it to serve ads based on their personal data constitutes a "contract". The DPC, which wanted to rule in Meta's favor, is apparently appealing this ruling, but it's consistent with what most of us perceive to be a core principle of the General Data Protection Regulation - that is, that companies can't claim consent as a legal basis for using personal data if users haven't actively and specifically opted in.

This principle matters because of the crucial importance of defaults. As research has repeatedly shown, as many as 95% of users never change the default settings in the software and devices they use. Tech companies know and exploit this.

Meta has three months to bring its data processing operations into compliance. Its "data processing operations" are, of course, better known as Facebook, Instagram, and (presumably) WhatsApp. As a friend has often observed, how much less appealing they would sound if Meta called them that rather than use their names, and accurately described "adding a friend" as "adding a link in the database".

At the Guardian, Dan Milmo reports that the advertising business at stake represents 25% of Meta's total revenue - $19 billion in 2021. Meta says it will appeal against the decision, that in any case noyb's interpretation is wrong, and that the decision relates "only to which legal basis" Meta uses for "certain advertising". And, it said, carefully, "Advertisers can continue to use our platforms to reach potential customers, grow their business and create new markets." In other words, like the repeatedly failing efforts to stretch GDPR to enable data transfers between the EU and US, Meta thinks it can make a deal.

At the International Association of Privacy Professionals blog, Jennifer Bryant highlights the disagreement between the EDPB and the Irish DPC, which argued that Meta was not relying on user consent as the legal basis for processing personal data - the DPC was willing to accept advertising as part of the "personalized" service Instagram promises. The key question: can Meta find a different legal basis that will pass muster not only with GDPR but with the Digital Markets Act, which comes into force on May 2? Meta itself, in a blog post, includes personalized ads as a "necessary and essential part" of the personalized services Facebook and Instagram provide - and complains about regulatory uncertainty. Which, if Meta really wanted certainty, isn't so hard to achieve: comply with the most restrictive ruling and the most conservative interpretation of the law, and be done with it.

At Wired, Morgan Meaker argues that the threat to Meta's business model posed by the EDPB's ruling may be existential for more than just that one company. *Every* Silicon Valley company depends on the "contract" we all "sign" (that is, the terms and conditions we don't read) when we open our accounts as a legal basis for whatever they want to do with our data. If the business model is illegal for Meta, it's illegal for all of them. The death of surveillance capitalism has begun, the headline suggests optimistically.

The reality is that most people's tolerance for ads is directly proportional to their ability to ignore them. We've all learned to accept some level of advertising as the price of "free" content. The question here is whether we have to accept being exploited as well. No amount of "relevance" lessens ads' intrusiveness for me. But that's a separate issue from the data exploitation none of us intentionally sign up for.

The "1984" Apple Super Bowl ad (YouTube) encapsulates the irony of our present situation: the price of viewing football at the time, it promised a new age in which information technology empowered us. Now we're in the ad's future, and what we got was an age in which information technology has become something that is done to us. This ruling is the next step in the battle to reverse that. It won't be enough by itself.

Illustrations: Image of Facebook logo.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Mastodon or Twitter.


January 5, 2023

Resolution

For the last five years a laptop has been whining loudly in my living room. It hosts my mail server.

I know: who has their own mail server any more? Even major universities famed for their technological leadership now outsource to Google and Microsoft.

In 2003, when I originally set it up, lots of geeky friends had them. I wanted my email to come to the same domain as my website, which by then was already eight years old. I wanted better control of spam than I was getting with the email addresses I was using at the time. I wanted to consolidate the many email addresses I had accrued through years of technology reporting. And I wanted to be able to create multiple mailboxes at that domain for different purposes, so I could segregate the unreadable volume of press releases from personal email (and use a hidden, unknown address for sensitive stuff, like banking). At the time, I had that functionality via an address on the now-defunct Demon Internet, but Demon had become a large company in its ten years of existence, and you never knew...

In 2015, when Hillary Clinton came under fire for running her own mail server, I explained all this for Scientific American. The major benefit of doing it yourself, I seem to recall concluding at the time, was one Clinton's position barred her from gaining: the knowledge that if someone wants your complete historical archive they can't get it by cutting a secret deal with your technology supplier.

For about the first ten years, running my own mail server was a reasonably delightful experience. Being able to use IMAP to synchronize mail across multiple machines or log into webmail on my machine hanging at the end of my home broadband made me feel geekishly powerful, like I owned at least this tiny piece of the world. The price seemed relatively modest: two days of pain every couple of years to update and upgrade it. And the days of pain weren't that bad; I at least felt I was gaining useful experience in the process.

Around me, the technological world changed. Gmail and other services got really good at spam control. The same friends with mail servers first began using Gmail for mailing lists, and then, eventually, for most things.

And then somehow, probably around six or seven years ago, the manageable two days of pain crossed into "I don' wanna" territory. Part of the problem was deciding whether to stick with Windows as the operating system or shift to Linux. Shifting to Linux required a more complicated and less familiar installation process as well as some extra difficulty in transferring the old data files. Staying with Windows, however, meant either sticking with an old version heading for obsolescence or paying to upgrade to a new version I didn't really want and seemed likely to bring its own problems. I dithered.

I dithered for a long time.

Meanwhile, dictionary attacks on that server became increasingly relentless. This is why the laptop is whining: its limited processing power can't keep up with each new barrage of some hacker script trying endless user names to find the valid ones.

There have been weirder attacks. One, whose details I have mercifully repressed, overwhelmed the server entirely; I was only able to stop it by barring a succession of Internet addresses.

Things broke and didn't get repaired, awaiting the upgrade that never happened. At some point, I lost the ability to log in remotely via the web. I'm fairly sure the cause was that I changed a setting and not some hacker attack, but I've never been able to locate and fix it. This added to the dither of upgrading, as did the discovery that my server software appeared to have been bought by a Russian company.

Through all this, the outside world became more hostile to small servers, as part of efforts to improve spam blocking and harden security against attacks. Delaying upgrading the server has also meant not keeping up well enough with new protocols and protections as they've developed. Administrators I deal with began warning me about resulting incompatibilities. Gmail routinely dropped my email to friends into spam folders. I suspect this kind of concentration will be the future of the Mastodon Fediverse if it reaches mainstream use.

The warnings this fall that Britain might face power outages this winter broke the deadlock. I was going to have to switch to hosted email like everyone else. Another bit of unwiring.

I can see already that it will be a great relief not worrying about the increasingly fragile server any more. I can reformat and give away that old laptop and the less old one that was supposed to replace it. I will miss the sense of technological power having it gave me, but if I'm honest I haven't had that in a long time now. In fact, the server itself seems to want to be put out of its misery: it stopped working a few days before Christmas, and I'm running on a hosted system as a failover. Call it my transitional server.

If I *really* miss it, I suppose I can always set up my own Mastodon instance. How hard can it be, right?


Illustrations: A still from Fritz Lang's 1927 classic, Metropolis, in celebration of its accession into the public domain.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Mastodon or Twitter.