In Brown's case, musicologists, psychologists, and skeptics generally converged on the belief that she was channeling only her own subconscious. AI doesn't *have* a subconscious...but it does have historical inputs, just as Brown did. You can say "AI" wrote a set of "song lyrics" if you want, but that "AI" is humans all the way down: people devised the algorithms and wrote the computer code, created the historical archive of songs on which the "AI" was trained, and crafted the prompt that guided the "AI"'s text generation. But "the machine did it by itself" is a better headline.
Meanwhile...
Forty-two years after the first one, I have been recording a new CD (more details later). In the traditional folk world, which is all I know, getting good recordings is typically more about being practiced enough to play accurately while getting the emotional performance you want. It's also generally about very small budgets. And therefore, not coincidentally, a whole lot less about sound effects and multiple overdubs.
These particular 42 years are a long time in recording technology. In 1980, if you wanted to fix a mistake in the best performance you had by editing it in from a different take where the error didn't appear, you had to do it with actual reels of tape, an edit block, a razor blade, splicing tape...and it was generally quicker to rerecord unless the musician had died in the interim. Here in digital 2023, the studio engineer notes the time codes, slices off a bit of sound file, and drops it in. Result! Also: even for traditional folk music, post-production editing has a much bigger role.
Autotune, which has turned many a wavering tone into perfect pitch, was invented in 1997. The first time I heard about it - it alters the pitch of a note without altering the playback speed! - it sounded indistinguishable from magic. How was this possible? It sounded like artificial intelligence - but wasn't.
The big, new thing now, however, *is* "AI" (or what currently passes for it), and it's got nothing to do with outputting phrases. Instead, it's stem splitting - that is, the ability to take a music file that includes multiple instruments and/or voices, and separate out each one so each can be edited separately.
Traditionally, the way you do this sort of thing is to record each instrument and vocal separately, either laying them down one at a time or enclosing each musician/singer in their own soundproof booth, from which they can play together by listening to each other over headphones. For musicians who are used to singing and playing at the same time in live performance, it can be difficult to record separate tracks. But in recording them together, vocal and instrumental tracks tend to bleed into each other - especially when the instrument is something like an autoharp, whose soundboard is very close to the singer's mouth. Bleed means you can't fix a small vocal or instrumental error without messing up the other track.
With stem splitting, now you can. You run your music file through one of the many services that have sprung up, and suddenly you have two separated tracks to work with. It's being described to me as a "game changer" for recording. Again: sounds indistinguishable from magic.
This explanation makes it sound less glamorous. Vocals and instruments whose frequencies don't overlap can be split out using masking techniques. Where there is overlap, splitting relies on a model that has been trained on human-split tracks and that improves with further training. Still a black box, but now one that sounds like so many other applications of machine learning. Nonetheless, heard in action it's startling: I tried LALAL.AI on a couple of tracks, and the separation seemed perfect.
There are some obvious early applications of this. As the explanation linked above notes, stem splitting enables much finer sampling and remixing. A singer whose voice is failing - or who is unavailable - could nonetheless issue new recordings by laying their old vocal over a new instrumental track. And vice-versa: when, in 2002, Paul Justman wanted to recreate the Funk Brothers' hit-making session work for Standing in the Shadows of Motown, he had to rerecord from scratch to add new singers. Doing that had the benefit of highlighting those musicians' ability and getting them royalties - but it also meant finding replacements for the ones who had died in the intervening decades.
I'm far more impressed by the potential of this AI development than of any chatbot that can put words in a row so they look like lyrics. This is a real thing with real results that will open up a world of new musical possibilities. By contrast, "AI"-written song lyrics rely on humans' ability to conceive meaning where none exists. It's humans all the way up.
Illustrations: Nick Cave in 2013 (by Amanda Troubridge, via Wikimedia).
Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.
Apple's design change last year to bar apps from tracking its users unless said users specifically opted in has shown the reality of this. As of April 2022, only 25% have opted in. Meanwhile, Meta estimates that this decision cost it $10 billion in revenues in 2022.
Fair to remember, though, that Apple itself still appears to track users, and the company is facing two class action suits after Gizmodo showed that Apple goes on tracking users even when their privacy settings are set to disable tracking completely.
This week, Ireland's Data Protection Commissioner issued Meta with a fine of €390 million and a ruling, forced on it by the European Data Protection Board, to the effect that the company cannot claim that requiring users to agree to its lengthy terms and conditions, which include a clause allowing it to serve ads based on their personal data, constitutes a "contract". The DPC, which wanted to rule in Meta's favor, is apparently appealing this ruling, but it's consistent with what most of us perceive to be a core principle of the General Data Protection Regulation - that is, that companies can't claim consent as a legal basis for using personal data if users haven't actively and specifically opted in.
This principle matters because of the crucial importance of defaults. As research has repeatedly shown, as many as 95% of users never change the default settings in the software and devices they use. Tech companies know and exploit this.
Meta has three months to bring its data processing operations into compliance. Its "data processing operations" are, of course, better known as Facebook, Instagram, and (presumably) WhatsApp. As a friend has often observed, how much less appealing they would sound if Meta called them that rather than use their names, and accurately described "adding a friend" as "adding a link in the database".
At the Guardian, Dan Milmo reports that Europe accounts for 25% of Meta's total revenue - $19 billion in 2021. Meta says it will appeal against the decision, that in any case noyb's interpretation is wrong, and that the decision relates "only to which legal basis" Meta uses for "certain advertising". And, it said carefully, "Advertisers can continue to use our platforms to reach potential customers, grow their business and create new markets." In other words, like the repeatedly failing efforts to stretch GDPR to enable data transfers between the EU and US, Meta thinks it can make a deal.
At the International Association of Privacy Professionals blog, Jennifer Bryant highlights the disagreement between the EDPB and the Irish DPC, which argued that Meta was not relying on user consent as the legal basis for processing personal data - the DPC was willing to accept advertising as part of the "personalized" service Instagram promises. The key question: can Meta find a different legal basis that will pass muster not only with GDPR but with the Digital Markets Act, which comes into force on May 2? Meta itself, in a blog post, includes personalized ads as a "necessary and essential part" of the personalized services Facebook and Instagram provide - and complains about regulatory uncertainty. Which, if they really wanted it, isn't so hard to achieve: comply with the most restrictive ruling and the most conservative interpretation of the law, and be done with it.
At Wired, Morgan Meaker argues that the threat to Meta's business model posed by the EDPB's ruling may be existential for more than just that one company. *Every* Silicon Valley company depends on the "contract" we all "sign" (that is, the terms and conditions we don't read) when we open our accounts as a legal basis for whatever they want to do with our data. If the business model is illegal for Meta, it's illegal for all of them. The death of surveillance capitalism has begun, the headline suggests optimistically.
The reality is that most people's tolerance for ads is directly proportional to their ability to ignore them. We've all learned to accept some level of advertising as the price of "free" content. The question here is whether we have to accept being exploited as well. No amount of "relevance" makes ads less intrusive for me. But that's a separate issue from the data exploitation none of us intentionally sign up for.
The "1984" Apple Super Bowl ad (YouTube) encapsulates the irony of our present situation: the price of viewing football at the time, it promised a new age in which information technology empowered us. Now we're in the ad's future, and what we got was an age in which information technology has become something that is done to us. This ruling is the next step in the battle to reverse that. It won't be enough by itself.
Illustrations:
Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.
I know: who has their own mail server any more? Even major universities famed for their technological leadership now outsource to Google and Microsoft.
In 2003, when I originally set it up, lots of geeky friends had them. I wanted my email to come to the same domain as my website, which by then was already eight years old. I wanted better control of spam than I was getting with the email addresses I was using at the time. I wanted to consolidate the many email addresses I had accrued through years of technology reporting. And I wanted to be able to create multiple mailboxes at that domain for different purposes, so I could segregate the unreadable volume of press releases from personal email (and use a hidden, unknown address for sensitive stuff, like banking). At the time, I had that functionality via an address on the now-defunct Demon Internet, but Demon had become a large company in its ten years of existence, and you never knew...
In 2015, when Hillary Clinton came under fire for running her own mail server, I explained all this for Scientific American. The major benefit of doing it yourself, I seem to recall concluding at the time, was one Clinton's position barred her from gaining: the knowledge that if someone wants your complete historical archive they can't get it by cutting a secret deal with your technology supplier.
For about the first ten years, running my own mail server was a reasonably delightful experience. Being able to use IMAP to synchronize mail across multiple machines or log into webmail on my machine hanging at the end of my home broadband made me feel geekishly powerful, like I owned at least this tiny piece of the world. The price seemed relatively modest: two days of pain every couple of years to update and upgrade it. And the days of pain weren't that bad; I at least felt I was gaining useful experience in the process.
Around me, the technological world changed. Gmail and other services got really good at spam control. The same friends with mail servers first began using Gmail for mailing lists, and then, eventually, for most things.
And then somehow, probably around six or seven years ago, the manageable two days of pain crossed into "I don' wanna" territory. Part of the problem was deciding whether to stick with Windows as the operating system or shift to Linux. Shifting to Linux required a more complicated and less familiar installation process as well as some extra difficulty in transferring the old data files. Staying with Windows, however, meant either sticking with an old version heading for obsolescence or paying to upgrade to a new version I didn't really want and seemed likely to bring its own problems. I dithered.
I dithered for a long time.
Meanwhile, dictionary attacks on that server became increasingly relentless. This is why the laptop is whining: its limited processing power can't keep up with each new barrage of some hacker script trying endless user names to find the valid ones.
There have been weirder attacks. One, whose details I have mercifully repressed, overwhelmed the server entirely; I was only able to stop it by barring a succession of Internet addresses.
Things broke and didn't get repaired, awaiting the upgrade that never happened. At some point, I lost the ability to log in remotely via the web. I'm fairly sure the cause was that I changed a setting and not some hacker attack, but I've never been able to locate and fix it. This added to the dither of upgrading, as did the discovery that my server software appeared to have been bought by a Russian company.
Through all this, the outside world became more hostile to small servers, as part of efforts to improve spam blocking and security against attacks. Delaying upgrading the server has also meant not keeping up well enough with new protocols and protections as they've developed. Administrators I deal with began warning me about resulting incompatibilities. Gmail routinely dropped my email to friends into spam folders. I suspect this kind of concentration will be the future of the Mastodon Fediverse if it reaches mainstream use.
The warnings this fall that Britain might face power outages this winter broke the deadlock. I was going to have to switch to hosted email like everyone else. Another bit of unwiring.
I can see already that it will be a great relief not to worry about the increasingly fragile server any more. I can reformat and give away that old laptop and the less old one that was supposed to replace it. I will miss the sense of technological power that having it gave me, but if I'm honest I haven't had that in a long time now. In fact, the server itself seems to want to be put out of its misery: it stopped working a few days before Christmas, and I'm running on a hosted system as a failover. Call it my transitional server.
If I *really* miss it, I suppose I can always set up my own Mastodon instance. How hard can it be, right?
Illustrations: A still from Fritz Lang's 1927 classic, Metropolis, in celebration of its accession into the public domain.
Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Mastodon or Twitter.
And yet. In a recent paper (PDF) that Tammy Xu summarizes at MIT Technology Review, the EPOCH AI research and forecasting unit argues that we are at risk of running out of a particular kind of data: the stuff we use to train large language models. More precisely, the stock of data deemed suitable for use in language training datasets is growing more slowly than the size of the datasets these increasingly large and powerful models require for training. The explosion of privacy-invasive, mechanically captured data mentioned above doesn't help with this problem; it can't help train what today passes for "artificial intelligence" to improve its ability to generate content that reads like it could have been written by a sentient human.
So in this one sense the much-debunked saw that "data is the new oil" is truer than its proponents thought. Like drawing water from aquifers or depleting oil reserves, data miners have been relying on capital resources that have taken eras to build up and that can only be replenished over similar time scales. We professional writers produce new "high-quality" texts too slowly.
As Xu explains, "high-quality" in this context generally means things like books, news articles, scientific papers, and Wikipedia pages - that is, the kind of prose researchers want their models to copy. Wikipedia's English language section makes up only 0.6% of GPT-3 training data. "Low-quality" is all the other stuff we all churn out: social media postings, blog postings, web board comments, and so on. There is of course vastly more of this (and some of it is, we hope, high-quality).
The paper's authors estimate that the high-quality text modelers prefer could be exhausted by 2026. Images, which are produced at higher rates, will take longer to exhaust, lasting until perhaps somewhere between 2030 and 2040. The paper considers three options for slowing exhaustion: broaden the standard for acceptable quality; find new sources; and develop more data-efficient solutions for training algorithms. Pursuing the fossil fuel analogy, I guess the equivalents might be: turning to techniques such as fracking to extract usable but less accessible fossil fuels, developing alternative sources such as renewables, and increasing energy efficiency. As in the energy sector, we may need to do all three.
I suppose paying the world's laid-off and struggling professional writers to produce text to feed the training models can't form part of the plan?
The first approach might have some good effects by increasing the diversity of training data. The same is true of the second, although using AI-generated text (synthetic data) to train the model seems as recursive as using an algorithm to highlight trends to tempt users. Is there anything real in there?
Regarding the third... It's worth remembering the 2020 paper On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? (the paper over which Google apparently fired AI ethics team leader Timnit Gebru). In this paper (and a FAccT talk), Gebru, Emily M. Bender, Angelina McMillan-Major, and Shmargaret Shmitchell outlined the escalating environmental and social costs of increasingly large language models and argued that datasets needed to be carefully curated and documented, and tailored to the circumstances and context in which the model was eventually going to be used.
As Bender writes at Medium, there's a significant danger that humans reading the language generated by systems like GPT-3 may *believe* it's the product of a sentient mind. At IAI News, she and Chirag Shah call text generators like GPT-3 dangerous because they have no understanding of meaning even as they spit out coherent answers to user questions in natural language. That is, these models can spew out plausible-sounding nonsense at scale; in 2020, Renée DiResta predicted at The Atlantic that generative text would provide an infinite supply of disinformation and propaganda.
This is humans finding patterns even where they don't exist: all the language model does is make a probabilistic guess about the next word based on statistics derived from the data it's been trained on. It has no understanding of its own results. As Ben Dickson puts it at TechTalks as part of an analysis of the workings of the language model BERT, "Coherence is in the eye of the beholder." On Twitter, Bender quipped that a good new name would be PSEUDOSCI (for Pattern-matching by Syndicate Entities of Uncurated Data Objects, through Superfluous (energy) Consumption and Incentives).
If running out of training data means a halt on improving the human-like quality of language generators' empty phrases, that may not be such a bad thing.
Illustrations: Drunk parrot (taken by Simon Bisson).
Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.
Most discussion of facial recognition to date has focused on privacy: that it becomes impossible to move around public spaces without being identified and tracked. We haven't thought enough about the potential use of facial recognition to underpin a broad permission-based society in which our presence in any space can be detected and terminated at any time. In such a society, we are all migrants.
That particular unwanted dystopian future is upon us. This week, we learned that a New Jersey lawyer was blocked from attending the Radio City Music Hall Christmas show with her daughter because the venue's facial recognition system identified her as a member of a law firm involved in litigation against Radio City's owner, MSG Entertainment. Security denied her entry, despite her protests that she was not involved in the litigation. Whether she was or wasn't shouldn't really matter; she had committed no crime, she was causing no disturbance, she was granted no due process, and she had no opportunity for redress.
Soon after she told her story, a second instance emerged: a male lawyer who was blocked from attending a New York Knicks basketball game at Madison Square Garden. Then, quickly, a third: a woman and her husband were removed from their seats at a Brandi Carlile concert, also at Madison Square Garden.
MSG later explained that litigation creates "an inherently adverse environment". I read that this way: the company has chosen to use developing technology in an abusive display of power. In other words, MSG is treating its venues as if they were the new-style airports Edward Hasbrouck has detailed, also covered here a few weeks back. In its original context, airport thinking is bad enough; expanded to the world's many privately-owned public venues, the potential is terrifying.
Early adopters of sharing data to exclude bad people talked about barring known shoplifters from chains of pubs or supermarkets, or catching and punishing criminals much more quickly. The MSG story means the mission has crept from "terrorist" to "don't like their employer" at unprecedented speed.
The right to navigate the world without interference is one privileged folks have taken for granted. With some exceptions: in England, the right to ramble all parts of the countryside took more than a century to codify into law. To an American, exclusion from a public venue *feels* like it should be a Constitutional issue - but of course it's not, since the affected venues are owned by a private company. In the reactions I've seen to the MSG stories, people have called for a ban on live facial recognition. By itself that's probably not going to be enough, now that this compost heap of worms has been opened; we are going to need legislation to underpin the right to assemble in privately-owned public spaces. Such a right sort of exists already in the conditions baked into many relevant local licensing laws, which require venue operators to be the real-world equivalent of common carriers in telecommunications, who are not allowed to pick and choose whose data they will carry.
In a fourth MSG incident, a lawyer who is suing Madison Square Garden for barring him from entering tricked the cameras at the MSG-owned Beacon Theater by disguising himself with a beard and a baseball cap. He didn't exactly need to, as his firm had won a restraining order requiring MSG to let its lawyers into its venues (the case continues).
In that case, MSG's lawyer told the court barring opposition lawyers was essential to protect the company: "It's not feasible for any entertainment venue to operate any other way."
Since when? At the New York Times, Kashmir Hill explains that the company adopted this policy last summer and depends on the photos displayed on law firms' websites to feed into its facial recognition to look for matches. But really the answer can only be: since the technology became available to enforce such a ban. It is a clear case where the availability of a technology leads to worse behavior on the part of its owner.
In 1996, the software engineer turned essayist and novelist Ellen Ullman wrote about exactly this with respect to databases: they infect their owners with the desire to use their new capabilities. In one of her examples, a man suddenly realized he could monitor what his long-trusted secretary did all day. In another, a system to help ensure AIDS patients were getting all the benefits they were entitled to slowly morphed into a system for checking entitlement. In the case of facial recognition, its availability infinitely extends the British Tories' concept of the hostile environment.
Illustrations: The Rockettes performing in 2008 (via skividal at Wikimedia).
Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.
From the beginning, I've called bitcoin and its sequels "the currency equivalent of being famous for being famous". Crypto(currency) fans like to claim that the world's fiat currencies don't have any underlying value either, but those are backed by the full faith and credit of governments and economies. Logically, crypto appeals most to those with the least reason to trust their governments: the very rich who resent paying taxes and those who think they have nothing to lose.
This week the US House and Senate both held hearings on the collapse of cryptocurrency exchange and hedge fund FTX and its deposed, arrested, and charged CEO Sam Bankman-Fried. The key lesson: we can understand the main issues surrounding FTX and its fellow cryptocurrency exchanges without understanding either the technical or financial intricacies.
A key question is whether the problem is FTX or the entire industry. Answers largely split along partisan lines. Republican members chose FTX, and tended to blame Securities and Exchange Commission chair Gary Gensler. Democrats were more likely to condemn the entire industry.
As Jesús G. "Chuy" García (D-IL) put it, "FTX is not an anomaly. It's not just one corrupt guy stealing money, it's an entire industry that refuses to comply with existing regulation that thinks it's above the law." Or, per Brad Sherman (D-CA), "My fear is that we'll view Sam Bankman-Fried as just one big snake in a crypto garden of Eden. The fact is, crypto is a garden of snakes."
When Sherrod Brown (D-OH) asked whether FTX-style fraud existed at other crypto firms, all four expert speakers said yes.
Related is the question of whether and how to regulate crypto, which begins with the problem of deciding whether crypto assets are securities under the decades-old Howey test. In its ongoing suit against Ripple, Gensler's SEC argues for regulation as securities. Lack of regulation has enabled crypto "innovation" - and let it recreate practices long banned in traditional financial markets. For an example see Ben McKenzie's and Jacob Silverman's analysis of leading crypto exchange Binance's endemic conflicts of interest and the extreme risks it allows customers to take that are barred under securities regulations.
Regulation could correct some of this. McKenzie gave the Senate committee numbers: fraudulent financier Bernie Madoff had 37,000 clients; FTX had 32 times that in the US alone. The collective lost funds of the hundreds of millions of victims worldwide could be ten times bigger than Madoff's.
But: would regulating crypto clean up the industry or lend it legitimacy it does not deserve? Skeptics ask this about alt-med practitioners.
Some background. As software engineer Stephen Diehl explains in his new book, Popping the Crypto Bubble, securities are roughly the opposite of money. What you want from money is stability; sudden changes in value spark cost-of-living crises and economic collapse. For investors, stability is the enemy: they want investments' value to go up. The countervailing risk is why the SEC requires companies offering securities to publish sufficient truthful information to enable investors to make a reasonable assessment.
In his book, Diehl compares crypto to previous bubbles: the Internet, tulips, the railways, the South Sea. Some, such as the Internet and the railways, cost early investors fortunes but leave behind valuable new infrastructure and technologies on which vast new industries are built. Others, like tulips, leave nothing of new value. Diehl, like other skeptics, believes cryptocurrencies are like tulips.
The idea of digital cash was certainly not new in 2008, when "Satoshi" published their seminal paper on bitcoin; the earliest work is usually attributed to David Chaum, whose 1982 dissertation contained the first known proposal for a blockchain protocol, proposed digital cash in a 1983 paper, and set up a company to commercialize digital cash in 1990 - way too early. Crypto's ethos came from the cypherpunks mailing list, which was founded in 1992 and explored the idea of using cryptography to build a new global financial system.
Diehl connects the reception of Satoshi's paper to its timing, just after the 2007-2008 financial crisis. There's some logic there: many have never recovered.
For a few years in the mid-2010s, a common claim was that cryptocurrencies were bubbles but the blockchain would provide enduring value. Notably disagreeing was Michael Salmony, who startled the 2016 Tomorrow's Transactions Forum by calling the blockchain a technology in search of a problem. Last week, IBM and Maersk announced they are shutting down their enterprise blockchain because, Dan Robinson writes at The Register, despite the apparently ideal use case, they couldn't attract industry collaboration.
More recently we've seen the speculative bubble around NFTs, but otherwise we've heard only about cryptocurrencies' wildly careening prices in US dollars and the amount of energy mining them consumes. Until this year, when escalating crashes and frauds took over. Distrust does not build value.
Illustrations: The Warner Brothers coyote, realizing he's standing on thin air.
Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.
What began as a project to turn the attic (loft) into more usable space has metastasized all over the house (apartment), as every crowded corner gets reevaluated. Behind every piece of furniture, some being moved for the first time since 1991, lurk wires. Wires of all kinds. Speaker wire that ran to the amplifier down the hall. TV cables connecting various items - computer, DVD player, the VCR I can't throw out until all the tapes are gone. Ethernet cables, because wired connections are more stable. Telephone cables running to remote extensions that were replaced with DECT phones 15 years ago. A weird, extraordinarily thin wire for a device called a Rabbit that once connected the TV in my office to the cable box in the living room; an infrared sender even let you change channels. The cable box, the Rabbit, and the TV are all long gone, but the wire lives on because it runs behind furniture that has settled too deeply into the carpet to move. Even now, I haven't got it all out. And, because this apartment (flat) has just one electrical outlet per room, multi-way extension cords and plugs *everywhere*.
The phone, stereo system, and TV cabling went in first. Layered on top of all that was an ethernet network that accreted over time to serve various computers in odd locations. There was an extra wifi router in the living room because the original one's wifi didn't reach the kitchen. And so on. So the box of pulled wiring also includes three network switches, which still leaves two in place. This in a four-room flat!
I still haven't touched the Giant Rat of Sumatra's nest behind my desk.
This is the result of 30 years of adding bits that were needed at the time but never subtracting them when their original purpose had gone. If you move frequently this sort of thing doesn't happen, because you tear it all down and build back only what you need each time. I know, because between the ages of 17 and 27 I moved nine times. I got really good at packing books and LPs. (Say you're over 60 without saying you're over 60.)
Were I a 30-something modern renter, my entire life would lift out of each successive abode leaving no trace and requiring few boxes. My books, audio, and video would be computer files or streaming subscriptions. All my telecommunications connections would be wireless. And, for best results, any furniture I had would be either on 30-day free trial or inflatable. It's like having a printer: modern people are app people. Wires need not apply. Wires are for old people. Wires...are a sign of privilege.
I now realize that accretion has led me to the equivalent of buying a tractor but continuing to feed and care for the Clydesdale horses it replaced without really noticing they're no longer doing anything useful. Or, in a higher-risk example, this sort of accretion leads older people into overly complex medication regimes as their doctors add new medications, often to control the side effects of the ones they're already on, without reconsidering the whole list; that situation is common enough to have bred a subspecialty of pharmacology to review and rationalize people's medications.
More technologically, there's the phenomenon consultants remark upon of finding ancient machines, even in banks, running mission-critical but ancient software no one dares touch because no one knows how it works. I suspect that as the time between computer replacements continues to lengthen, accretion of this type will be the fate of all computer systems. The reason is simple: adding things to patch localized problems without touching what's already in place will always feel safer than pulling an unlabeled plug and risking breaking the whole system because you didn't understand the complex dependencies. And there's little motivation. For the most part, everything works fine until one day the increasing complexity overwhelms the system and it all falls over - at which point tracing the fault is excruciatingly difficult, and fixing it will likely require a workaround that, like the one for the Y2K bug, has an expiration date, when you'll have to trace and replace - or find another workaround.
There are lots of knock-on effects from accretion, most notably unnoticed security vulnerabilities. In her days running RISCS, Angela Sasse used to say that often important solutions to endemic cybersecurity problems are overlooked because they're not specifically technological fixes. Instead, she argued, reducing stress on employees by ensuring they're not overworked and have systems that make their work easier instead of harder, pays dividends in fewer mistakes. Similarly, upgrading and replacing old equipment with newer equipment with better security and usability built in can solve many seemingly intractable problems, over time costing less than continuing to patch the old system.
In my own case, there was a small but definite cost in wasted electricity (those extra switches) and, I imagine, a slightly higher risk of fire (all those extension cords). Life, as Gilbert and Sullivan observed, is a closely complicated tangle.
Illustrations: The box of wires, with more to come.
Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Mastodon or Twitter.
The latter appears to be the situation with smart speakers, which in 2015 were going to take over the world, and today, in 2022, are installed in 75% of US homes. Despite this apparent success, they are losing money even for market leaders Amazon (third) and Google (second), as Business Insider reported this week. Amazon's Worldwide Digital division, which includes Prime Video as well as Echo smart speakers and Alexa voice technology, lost $3 billion in the first quarter of this year alone, primarily due to Alexa and other devices. The division will now be the biggest target for the layoffs the company announced last week.
The gist: they thought smart speakers would be like razors or inkjet printers, where you sell the hardware at or below cost and reap a steady income stream from selling razor blades or ink cartridges. Amazon thought people would buy their smart speakers, see something they liked, and order the speaker to put through the purchase. Instead, judging from the small sample I have observed personally, people use their smart speakers as timers, radios, and enhanced remote controls, and occasionally to get a quick answer from Wikipedia. And that's it. The friends I watched order their smart speaker to turn on the basement lights and manage their shopping list have, as far as I could tell on a recent visit, developed no new uses for their voice assistant in three years of being locked up at home with it.
The system has developed a new feature, though. It now routinely puts the shopping list items on the wrong shopping list. They don't know why.
In raising this topic at The Overspill, Charles Arthur referred back to a 2016 Wired article summarizing venture capitalist Mary Meeker's assessment in her annual Internet Trends report that voice was going to take over the world and the iPhone had peaked. In slides 115-133, Meeker outlined her argument: improving accuracy would be a game-changer.
Even without looking at recent figures, it's clear voice hasn't taken over. People do use speech when their hands are occupied, especially when driving or when the alternative is to type painfully into their smartphone - but keyboards still populate everyone's desks, and the only people I know who use speech for data entry are people for whom typing is exceptionally difficult.
One unforeseen deterrent may be that privacy emerged as a larger issue than early prognosticators expected. Repeated stories have raised awareness that the price of being able to use a voice assistant at will is that microphones in your home listen to everything you say, waiting for their cue to send your speech to a distant server to parse. Rising consciousness of the power of the big technology companies has made more of us aware that smart speakers are designed more to fulfill their manufacturers' desires to intermediate and monetize our lives than to help us.
The notion that consumers would want to use Amazon's Echo for shopping appears seriously deluded with hindsight. Even the most dedicated voice users I know want to see what they're buying. Years ago, I thought that as TV and the Internet converged we'd see a form of interactive product placement in which it would be possible to click to buy a copy of the shirt a football player was wearing during a game or the bed you liked in a sitcom. Obviously, this hasn't happened; instead a lot of TV has moved to streaming services without ads, and interactive broadcast TV is not a thing. But in *that* integrated world voice-activated shopping would work quite well, as in "Buy me that bed at the lowest price you can find", or "Send my brother the closest copy you can find of Novak Djokovic's dark red sweatshirt, size large, as soon as possible, all cotton if possible."
But that is not our world, and in our world we have to make those links and look up the details for ourselves. So voice does not work for shopping beyond adding items to lists. And if that doesn't work, what other options are there? As Ron Amadeo writes at Ars Technica, the queries where Alexa is frequently used can't be monetized, and customers showed little interest in using Alexa to interact with other companies such as Uber or Domino's Pizza. And, even Google, which is also cutting investment in its voice assistant, can't risk alienating consumers by using its smart speaker to play ads. Only Apple appears unaffected.
"If you build it, they will come," has been the driving motto of a lot of technological development over the last 30 years. In this case, they built it, they came, and almost everyone lost money. At what point do they turn the servers off?
Illustrations: Amazon Echo Dot.
Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter and/or Mastodon.
Inevitably, the discussion landed on mathematical models, which attempt to provide tools to answer the question, "What if?" This question is the bedrock of science fiction, but science fiction writers' helpfulness has limits: they don't have to face bereaved people if they get it wrong; they can change reality to serve their sense of fictional truth; and they optimize for the best stories, rather than the best outcomes. Beware.
In the case of covid, humanity had experience in combating pandemics, but not covid, which turned out to be unlike the first known virus family people grabbed for: flu. Imperial College epidemiologist Neil Ferguson became a national figure when it became known that his 2006 influenza model suggesting that inaction could lead to 500,000 deaths had influenced the UK government's delayed decision to impose a national lockdown. Ferguson remains controversial; Scotland's The Ferret offers a fact check suggesting that many critics failed to understand the difference between projection and prediction and the importance of the caveat "if nothing is done". Models offer possible futures, but not immutable ones.
As Erica Thompson writes in her new book, Escape From Model Land: How Mathematical Models Can Lead Us Astray and What We Can Do About It, models also have limits that we ignore at our peril. Chief among them is the fact that the model is always an abstracted version of reality. If it weren't, our computers couldn't calculate them any more than they can calculate all the real world's variables. Thompson therefore asks: how can we use models effectively in decision making without becoming trapped inside the models' internal worlds, where their simplified assumptions are always true? More important, how can we use models to improve our decision making with respect to the many problems we face that are filled with uncertainties?
The science of covid - or of climate change - is only a small part of the factors a government must weigh in deciding how to respond; what science tells us must be balanced against the economic and social impacts of different approaches. In June 2020, Ferguson estimated that locking down a week earlier would have saved 20,000 lives. At the time, many people had already begun withdrawing from public life. And yet one reason the government delayed was the belief that the population would quickly give in to lockdown fatigue and resist restrictions, rendering an important tool unusable later, when it might be needed even more. This assumption turned out to be largely wrong, as was the assumption in Ferguson's 2006 model that 50% of the population would refuse to comply with voluntary quarantine. Thompson calls this misunderstanding of public reaction a "gigantic failure of the model".
What else is missing? she asks. Ferguson had to resign when he himself was caught breaking the lockdown rules. Would his misplaced belief that the population wouldn't comply have been corrected by a more diverse team?
Thompson began her career with a PhD in physics that led her to examine many models of North Atlantic storms. The work taught her more about the inferences we make from models than about storms, and it opened for her the question of how to use the information models provide without falling into the trap of failing to recognize the difference between the real world and Model Land - that is, the assumption-enclosed internal world of the models.
From that beginning, Thompson works through different aspects of how models work and where their flaws can be found. Like Cathy O'Neil's Weapons of Math Destruction, which illuminated the abuse of automated scoring systems, this is a clearly written and well thought-out book that makes a complex mathematical subject accessible to a general audience. Thompson's final chapter, which offers approaches to evaluating models and lists of questions to ask modelers, should be read by everyone in government.
Thompson's focus on the dangers of failing to appreciate the important factors models omit leads her to skepticism about today's "AI", which of course is trained on such models: "It seems to me that rather than AI developing towards the level of human intelligence, we are instead in danger of human intelligence descending to the level of AI by concreting inflexible decision criteria into institutional structures, leaving no room for the human strengths of empathy, compassion, a sense of fairness and so on." Later, she adds, "AI is fragile: it can work wonderfully in Model Land but, by definition, it does not have a relationship with the real world other than one mediated by the models that we endow it with."
In other words, AI works great if you can assume a spherical cow.
Illustrations: The spherical cow that mocks unrealistic scientific models drawn jumping over the moon by Ingrid Kallick for the 1996 meeting of the American Astronomical Association (via Wikimedia).
Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.
As noted here last week, it is definitely not as simple as "Twitter's loss is Mastodon's/Discord's/SomeOtherSite's gain".
The general sense of anxiety feels like a localized version of the years of the Trump presidency - that is, people logging in constantly to check, "What's he done now?" Only the "he" is of course new owner Elon Musk, and the "what" is stuff like a team being fired, someone crucial quitting, a new order to employees ("check this box by 5pm or you're fired!"), yet another change to the system of blue ticks that may or may not verify a person's identity, or the apparent disabling of two-factor authentication via SMS shortly after announcing the shutdown of "20% of microservices". This kind of thing makes everyone jumpy. Every tiny glitch could be the first sign that Twitter is crumbling around the edges before cascading into failure. Will the process look like HAL losing its marbles in the movie 2001: A Space Odyssey? Or will it just go black like the end of The Sopranos?
I have never felt so conscious of my data: 15 years of tweets and direct messages all held hostage inside a system with a renegade owner no one trusts. Deleting it feels like killing my past; leaving it in place teems with risks.
The risk level has been abruptly raised by the departure of various security and privacy personnel from Twitter's staff, which led Michael Veale to warn that the platform should be regarded as dangerously vulnerable and insecure. Veale went on to provide instructions for using the law (that is, the General Data Protection Regulation) rather than just Twitter's tools, to delete your data.
Some of my more cautious friends have been regularly deleting their data all along - at the end of every couple of weeks, or every six months, mostly to ensure they can't suddenly become a pariah for something they posted casually five years ago. (It turns out this is a function that Mastodon will automate through user settings.) But, as Veale asks, how do you know Twitter is really deleting the data? Hence his suggestion of applying the law: it gives your request teeth. But is there anyone left at Twitter to respond to legal requests?
The general sense of uncertainty is heightened by things like the reports I saw of strange behavior in response to requests to download account archives: instead of just asking for two-factor authentication before proceeding, the site sent these users to the help center and a form demanding government ID. There seem to be a number of these little weirdnesses, and they're raising users' overall distrust of the system and the sense that we're all just waiting for the thing to break and our data to become an asset in a fire sale - or for a major hack in which all our data gets auctioned on the dark web.
"If you're not paying for the product, you're the product," goes the saying (attribution uncertain). Right now, it feels like we're waiting to find out our product status.
Meanwhile, Apple has spent years now promoting its products by claiming they provide better privacy than the alternatives. It is currently helping destroy the revenue base of Meta (owner of Instagram, Facebook, and WhatsApp) by allowing users to opt to block third-party trackers on its devices. At The Drum, Chris Sutcliffe cites estimates that 62% of Apple users have done so; at Forbes, Daniel Newman reported in February that Meta projected the move would cost the company $10 billion in lost ad sales this year. The financial results it's announced since have been accordingly grim.
Part of the point of this is that Apple's promise appeared to be that the money its customers pay for hardware and services also buys them privacy. This week, Tom Germain reported at Gizmodo that Apple's own apps continue to harvest data about users' every move even when those users have - they thought - turned data collection off.
"Even if you're paying for the product, you're the product," Cory Doctorow wrote on discovering this. Double-dipping is familiar in other contexts. But here Apple has broken the pay-with-data bargain that made the web. It may live to regret this; collecting data to which it has exclusive access while shutting down competitors has attracted the attention of German antitrust regulators.
If that's where the commercial world is going, the appeal of something like Mastodon, where we are *not* the product, and where accounts can be moved to other interoperable servers at any time, is obvious. But, as I've written before about professional media, the money to pay for services and servers has to come from *somewhere*. If we're not going to pay with data, then...how?
Illustrations: Twitter flies upside down.
Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter or Mastodon.
Probably everyone who's been online for any length of time has had this experience. That site you visit every day, the one that's full of memories and familiar people, suddenly is no more. Usually, the problem is a new owner, who buys it and closes it down (Television Without Pity, Geocities) or alters it beyond recognition (CompuServe). Or its paradigm falls out of fashion and users leach away until the juice is gone, the fate of many of the early text-based systems.
As the world and all have been reporting - because so many journalists make their online homes there - Twitter is in trouble. A new owner with poor impulse control and a new idea every day - Twitter will be a financial service! (like WeChat?) Twitter will be the world's leading source of accurate information! (like Wikipedia?) Twitter can do multimedia! (like TikTok?) - is driving out what staff he hasn't fired.
The result, Chris Stokel-Walker predicts, will be escalating degradation of the infrastructure - and possibly, Mike Masnick writes, violations of the company's 2011 20-year consent decree with the US Federal Trade Commission, which could ultimately cost the company billions, in addition to the $13 billion in debt Musk added to the company's existing debt load in order to purchase it.
All of that - and the unfolding sequelae Maria Farrell details - will no doubt be a widely used case study at business schools someday.
For me, Twitter has been a fantastic resource. In the 15 years since I created my account, Twitter is where I've followed breaking news, connected with friends, found expert communities. Tight clusters are, Peter Coy finds at the New York Times, why Twitter has been unexpectedly resilient despite its lack of profitability.
But my use of Twitter has nothing in common with its use by those with millions of followers. At that level, it's a broadcast medium. My own experience of chatting with friends or responding randomly to strangers' queries is largely closed to them. Like traveling on the subway, they *can* do it, but not the way the rest of us can. For someone in that position, Twitter is a large audience that fortuitously includes journalists, politicians, and entertainers. The writer Stephen King had the right reaction to the suggestion that verified accounts should pay $20 a month (since reduced to $8) for the privilege: screw *that*. Though even average Twitter users will resist paying to be sold to the advertisers who ultimately fund the service.
Unusually, a number of alternative platforms are ready and waiting for disaffected Twitter users to experiment with. Chief among them is Mastodon, which looks enough like Twitter to suggest an easy learning curve. There are, however, profound differences, most of them good. Mastodon is a protocol, not a site; like the web, email, or Usenet, anyone can set up a server ("instance") using open source software and connect to other instances. You can form a community on a local instance - or you can use your account as merely a convenient address from which to access postings by users at dozens of other instances. One consequence of this is that hashtags are very much more important in helping people find each other and the postings they're interested in.
Over the last week, I've seen a lot of people trying to be considerate of the natives and their culture, most particularly that they are much more sensitive about content warnings. The reality remains, though, that Mastodon's user base has doubled in a week, and that level of influx will inevitably bring change - if they stay and post, and particularly if many of them adopt a bit of software that allows automated cross-posting between the two services.
All of this has happened without a commercial interest: no one owns Mastodon, it has no ads, and no one is recruiting Twitter users. But that right there may be the biggest problem: the huge influx of new users doesn't bring revenue or staff to help manage it. This will be a big, unplanned test of the system's resilience.
Many are now predicting Twitter's total demise, not least because new owner Elon Musk himself has told employees that the company may become bankrupt due to its burn rate (some of which is his own fault, as previously noted). Barring the system going offline, though, habit is a strong motivator, and it's more likely that many people will treat the new accounts they've set up as "in case of need".
But some will move, because unlike other such situations, whole communities can move together to Mastodon, aided by its ability to ingest lists. I'm seeing people compile lists of accounts in various academic fields, of journalists, of scientists. There are even tools that scan the bios of your Twitter contacts for Mastodon addresses and compile them into a personal list, which, again, can be easily imported.
If Mastodon works for Twitter's hundreds of millions, there is a big upside: communities don't have to depend for their existence on the grace and favor of a commercial owner. Ultimately, the reason Musk now owns Twitter is that he offered shareholders a lucrative exit. They didn't have to care about *us*. And they didn't.
Illustrations: Twitter versus lettuce (via Sheon Han on Twitter).
Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter or Mastodon.
]]>As we've observed before, it's only for the most privileged that *not* being online or *not* carrying a smartphone comes without cost.
Sonia Livingstone was speaking on a panel on digital inequalities at this week's UK IGF, an annual forum that mulls UK concerns over Internet governance in order to feed them into the larger global Internet Governance Forum (IGF). The panel highlighted the two groups most vulnerable to digital exclusion: old people and children.
According to Ofcom's 2022 Online Nation report, in 2021 6% of British over-18s did not have Internet access at home. That average is, however, heavily skewed by over-65s, 20% of whom don't have Internet access at home and another 7% of whom have Internet access at home but don't use it. In the other age groups, the percentage without home access starts at 1% for 18-24 and rises to 3% for 45-54. The gap across ages is startlingly larger than the gap across economic groups, although obviously there's overlap: Age UK estimated in 2021 that 2 million pensioners were living in poverty.
I know one of the people in that 20%. She is adamant that there is nothing the Internet has to offer that she could possibly want. (I feel this way about cryptocurrencies.) Because, fortunately, the social groups she's involved in are kind, tolerant, and small, the impact of this refusal probably falls more on them than on her: they have to make the phone calls and send the printed-out newsletters to ensure she's kept in the loop. And they do.
Another friend, whose acquaintance with the workings of his computer is so nodding that he gets his son round to delete some files when his hard drive fills up, would happily do without it - except that his failing mobility means that he finds entertainment by playing online poker. To him, the computer is a necessary, but despised, evil. In Ofcom's figures, he'd look all right - Internet access at home, uses it near-daily. But the reality is that despite his undeniable intelligence he's barely capable of doing much beyond reading his email and loading the poker site. Worse, he has no interest in learning anything more; he just hates all of it. Is that what we mean by "Internet access"?
These two are what people generally think of when they talk about the "digital divide".
As Sally West, policy manager for Age UK, noted, if you're not online it's becoming increasingly difficult to do mundane things like book a GP appointment or do any kind of banking. Worse, isolation during the pandemic led some to stop using the Internet because they didn't have their customary family support. In its report on older people and the Internet, Age UK found that about half a million over-65s have stopped using the Internet. And, West said, unlike riding a bike, Internet skills don't necessarily stay with you when you stop using them. Even if they do, they lose relevance as the technology changes.
For children, lack of access translates into educational disadvantage and severely constricted life opportunities. Despite the government's distribution of laptops, Nominet's Digital Youth Index finds that a quarter of young people lack access to one, and 16% rely primarily on mobile data. And, said Jess Barrett, children lack understanding of privacy and security yet are often expected to be their family's digital expert.
More significantly, the Ofcom report finds that 20% of people - and a *third* of those aged 25-34 - used only a smartphone to go online in 2021. That's *double* the number in 2020. Ofcom suggests that staying home for much of 2020 and newer smartphones' larger screens may be relevant factors. I'd guess that economic uncertainty played an important role, and that 2022's cost-of-living crisis will cause these numbers to rise again. There's also a generational aspect; today's 30-year-olds got their teenaged independence via smartphones.
To Old Net Curmudgeons, phone-only access isn't really *Internet* access; it's walled-garden apps. Where the open Internet promised that all of us could build and distribute things, apps limit us to consuming what the apps' developers allow. This is not petty snobbery; creating the next generation of technology pioneers requires learning as active users instead of lurkers.
This disenfranchisement led Lizzie Coles-Kemp to an approach that's rarely discussed: "We need to think how to design services for limited access, and we need to think what access means. It's not binary." This approach is essential as the mobile phone world's values risk overwhelming those of the open Internet.
In response, Livingstone mooted the idea of "meaningful access": the right device for the context and sufficient skills and knowledge that you can do what you need to.
The growing cost-of-living crisis, exacerbated this week by an interest rate rise, makes it easy to predict a marked further rise in households that jettison fixed-line broadband. This year may be the first since the Internet began in which online access in the UK shrinks.
"We are just highlighting two groups," Livingstone concluded. "But the big problem is poverty and exclusion. Solve those, and it fixes it."
Illustrations: UK IGF's panel on digital inequalities: Cliff Manning, Sally West, Sonia Livingstone, Lizzie Coles-Kemp, and Jess Barrett.
Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week on Twitter or @wendyg@mastodon.xyz.
]]>The British Airways gate attendant at Chicago's O'Hare airport tapped the screen and a big green checkmark appeared.
"Customs." That was all the explanation she offered. It had all happened so fast there was no opportunity to object.
Behind me was an unforgiving line of people waiting to board. Was this a good time to stop to ask:
- What is the specific purpose of collecting my image?
- What legal basis do you have for collecting it?
- Who will be storing the data?
- How long will they keep it?
- Who will they share it with?
- Who is the vendor that makes this system and what are its capabilities?
It was not.
I boarded, tamely, rather than argue with a gate attendant who certainly didn't make the decision to install the system and was unlikely to know much about its details. Plus, we were in the US, where the principles of data protection law don't really apply - and even if they did, they wouldn't apply at the border - even, it appears, in Illinois, the only US state whose biometric privacy law gives individuals the right to sue.
I *did* know that US Customs and Border Protection had been trialing facial recognition at selected airports since 2017. Long-time readers may remember a net.wars report from the 2013 Biometrics Conference about the MAGICAL [sic] airport, circa 2020, through which passengers flow unimpeded because their face unlocks all. Unless, of course, they're "bad people" who need to be kept out.
I think I even knew - thanks to Edward Hasbrouck's indefatigable reporting on travel privacy - that at various airports airlines are experimenting with biometric boarding. This process does away entirely with boarding cards; the airline captures biometrics at check-in and uses them to fully automate the "boarding process" (a bit of airline-speak the late comedian George Carlin loved to mock). The linked explanation claims this will be faster because you can have four! automated lanes instead of one human-operated lane. (Presumably the four lanes then merge into a giant pile-up in the single-lane jetway.)
It was nonetheless startling to be confronted with it in person - and with no warning. CBP proposed taking non-US citizens' images in 2020, when none of us were flying, and Hasbrouck wrote earlier this year about the system's use in Seattle. There was, he complained, no signage to explain the system despite the legal requirement to do so, and the airport's website incorrectly claimed that Congress mandated capturing biometrics to identify all arriving and departing international travelers.
According to Biometric Update, as of last February, 32 airports were using facial recognition on departure, and 199 airports were using facial recognition on arrival. In total, 48 million people had their biometrics taken and processed in this way in fiscal 2021. Since the program began in 2018, the number of alleged impostors caught: 46.
"Protecting our nation, one face at a time," CBP calls it.
On its website, British Airways says passengers always have the ability to opt out except where biometrics are required by law. As noted, it all happened too fast. I saw no indication on the ground that opting out was possible, even though notice is required under the Paperwork Reduction Act (1980).
As Hasbrouck says, though, travelers, especially international travelers and even more so international travelers outside their home countries, go through so many procedures at airports that they have little way to know which are required by law and which are optional, and arguing may get you grounded.
He also warns that the system I encountered is only the beginning. "There is an explicit intention worldwide that's already decided that this is the new normal. All new airports will be designed and built with facial recognition built into them for all airlines. It means that those who opt out will find it more and more difficult and more and more delaying."
Hasbrouck, who is probably the world's leading expert on travel privacy, sees this development as dangerous. Largely, he says, it's happening unopposed because the government's desire for increased surveillance serves the airlines' own desire to cut costs through automating their business processes - which include herding travelers onto planes.
"The integration of government and business is the under-noticed aspect of this. US airports are public entities but operate with the thinking of for-profit entities - state power merged with the profit motive. State *monopoly* power merged with the profit motive. Automation is the really problematic piece of this. Once the infrastructure is built it's hard for airline to decide to do the right thing." That would be the "right thing" in the sense of resisting the trend toward "pre-crime" prediction.
"The airline has an interest in implying to you that it's required by government because it pressures people into a business process automation that the airline wants to save them money and implicitly put the blame on the government for that," he says. "They don't want to say 'we're forcing you into this privacy-invasive surveillance technology'."
Illustrations: Edward Hasbrouck in 2017.
Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.
]]>Here in 2022, although Western countries believe the acute emergency phase of the pandemic is past, the reality is that covid is still killing thousands of people a week across the world, and there is no guarantee we're safe from new variants with vaccine escape. Nonetheless, the UK and US at least appear to accept this situation as if it were the same old "normal". Except: there's a European war, inflation, strikes, a cost of living crisis, energy shortages, and a load of workplace monitoring and other privacy invasions that would have been heavily resisted in previous times. (And, in the UK, a government that has lost its collective mind; as I type no one dares move the news cameras away from the doors of Number 10 Downing Street in case the lettuce wins.)
Laws last longer than pandemics, as the human rights lawyer Adam Wagner writes in his new book, Emergency State: How We Lost Our Freedoms in the Pandemic and Why It Matters. For the last couple of years, Wagner has been a constant presence in my Twitter feed, alongside numerous scientists and health experts posting and examining the latest new research. Wagner studies a different pathology: the gaps between what the laws actually said and what was merely guidance, and between overactive police enforcement and people's reasonable beliefs about what the laws should be.
In Emergency State, Wagner begins by outlining six characteristics of the emergency-empowered state's power: it is mighty, concentrated, ignorant, corrupt, self-reinforcing, and, crucially, we want it to happen. As a comparison, Wagner notes the surveillance laws and technologies rapidly adopted after 9/11. Much of the rest of the book investigates a seventh characteristic: these emergency-expanded states are hard to reverse. For an example that's frequently come up here, see Britain's World War II ID card, which took until 1952 to remove - and even then only after Harry Willcock refused to show his papers on demand and fought it in court.
Most of us remember the shock and sudden silence of the first lockdown. Wagner remembers something most of us either didn't know or forgot: when Boris Johnson announced the lockdown and listed the few exceptional circumstances under which we were allowed to leave home, there was as yet no law in place on which law enforcement could rely. That only came days later. The emergency to justify this was genuine: dying people were filling NHS hospital beds. And yet: the government response overturned the basis of Britain's laws, which traditionally presume that everything is permitted unless it's specifically forbidden. Suddenly, the opposite - everything is forbidden unless explicitly permitted - was the foundation of daily life. And it happened with no debate.
Wagner then works methodically through Britain's Emergency State, beginning by noting that the ethos of Boris Johnson's government, continuing the Conservatives' direction of travel, was coincidentally already disdainful of Parliamentary scrutiny (see also: the prorogation of Parliament) and ready to weaken both the Human Rights Act and the judiciary. As the pandemic wore on, Parliamentary attention to successive waves of incoming laws did not improve; sometimes the laws had already changed by the time they reached the chamber. In two years, Parliament failed to amend any of them. Meanwhile, Wagner notes, behind closed doors government members ignored the laws they made.
The press dubbed March 18, 2022 Freedom Day, to signify the withdrawal of all restrictions. And yet: if scientists' worst fears come true, we may need them again. Many covid interventions - masks, ventilation, social distancing, contact tracing - are centuries old, because they work. The novelty here was the comprehensive lockdowns and widespread business closures, which Wagner suggests may have come about because the first country to suffer and therefore to react was China, where this approach was more acceptable to its authoritarian government. Would things have gone differently had the virus surfaced in a democratic country? We will never know. Either way, the effects of the cruelest restrictions - the separation among families and friends, the isolation imposed on the elderly and dying - cannot be undone.
In Britain's case, Wagner points to flaws in the Public Health Act (1984) that made it too easy for a months-old prime minister with a distaste for formalities to bypass democratic scrutiny. He suggests four remedies: urgently amend the act to include safeguards; review all prosecutions and fines under the various covid laws; codify stronger human rights, either in a written constitution or a bill of rights; and place human rights at the heart of emergency decision making. I'd add: elect leaders who will transparently explain which scientific advice they have and haven't followed and why, and who will plan ahead. The Emergency State may be in abeyance, but current UK legislation in progress seeks to undermine our rights regardless.
Illustrations: The Daily Star's QE2 lettuce declaring victory as 44-day prime minister Liz Truss resigns.
Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.