January 20, 2023

New music

The news this week that "AI" "wrote" a song "in the style of" Nick Cave (who was scathing about the results) seemed to me about on a par with the news in the 1970s that the self-proclaimed medium Rosemary Brown was able to take dictation of "new works" by long-dead famous composers. In that: neither approach seems likely to break new artistic ground.

In Brown's case, musicologists, psychologists, and skeptics generally converged on the belief that she was channeling only her own subconscious. AI doesn't *have* a subconscious...but it does have historical inputs, just as Brown did. You can say "AI" wrote a set of "song lyrics" if you want, but that "AI" is humans all the way down: people devised the algorithms and wrote the computer code, created the historical archive of songs on which the "AI" was trained, and crafted the prompt that guided the "AI"'s text generation. But "the machine did it by itself" is a better headline.


Forty-two years after the first one, I have been recording a new CD (more details later). In the traditional folk world, which is all I know, getting good recordings is typically more about being practiced enough to play accurately while getting the emotional performance you want. It's also generally about very small budgets. And therefore, not coincidentally, a whole lot less about sound effects and multiple overdubs.

These particular 42 years are a long time in recording technology. In 1980, if you wanted to fix a mistake in the best performance you had by editing it in from a different take where the error didn't appear, you had to do it with actual reels of tape, an edit block, a razor blade, splicing tape...and it was generally quicker to rerecord unless the musician had died in the interim. Here in digital 2023, the studio engineer notes the time codes, slices off a bit of sound file, and drops it in. Result! Also: even for traditional folk music, post-production editing has a much bigger role.

Autotune, which has turned many a wavering tone into perfect pitch, was invented in 1997. The first time I heard about it - it alters the pitch of a note without altering the playback speed! - it sounded indistinguishable from magic. How was this possible? It sounded like artificial intelligence - but wasn't.

The big, new thing now, however, *is* "AI" (or what currently passes for it), and it's got nothing to do with outputting phrases. Instead, it's stem splitting - that is, the ability to take a music file that includes multiple instruments and/or voices, and separate out each one so each can be edited separately.

Traditionally, the way you do this sort of thing is you record each instrument and vocal separately, either laying them down one at a time or enclosing each musician/singer in their own soundproof booth, from where they can play together by listening to each other over headphones. For musicians who are used to singing and playing at the same time in live performance, it can be difficult to record separate tracks. But in recording them together, vocal and instrumental tracks tend to bleed into each other - especially when the instrument is something like an autoharp, where the instrument's soundboard is very close to the singer's mouth. Bleed means you can't fix a small vocal or instrumental error without messing up the other track.

With stem splitting, now you can. You run your music file through one of the many services that have sprung up, and suddenly you have two separated tracks to work with. It's being described to me as a "game changer" for recording. Again: sounds indistinguishable from magic.

This explanation makes it sound less glamorous. Vocals and instruments whose frequencies don't overlap can be split out using masking techniques. Where there is overlap, splitting relies on a model that has been trained on human-split tracks and that improves with further training. Still a black box, but now one that sounds like so many other applications of machine learning. Nonetheless, heard in action it's startling: I tried LALAL.AI on a couple of tracks, and the separation seemed perfect.
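A toy sketch of the masking half of that explanation: two sources confined to disjoint frequency bands can be pulled apart from a mix with a simple spectral mask. All the signal parameters here are invented for illustration; real stem splitters need the trained model precisely because voices and instruments *do* overlap in frequency.

```python
import numpy as np

# Two "sources" in disjoint frequency bands, mixed into one track.
sr = 8000                             # sample rate, Hz
t = np.arange(sr) / sr                # one second of audio
low = np.sin(2 * np.pi * 220 * t)     # "instrument": 220 Hz tone
high = np.sin(2 * np.pi * 1760 * t)   # "voice": 1760 Hz tone
mix = low + high

# Transform to the frequency domain and apply a binary mask.
spectrum = np.fft.rfft(mix)
freqs = np.fft.rfftfreq(len(mix), 1 / sr)
mask = freqs < 1000                   # everything below 1 kHz -> "instrument"

instrument = np.fft.irfft(np.where(mask, spectrum, 0), n=len(mix))
voice = np.fft.irfft(np.where(mask, 0, spectrum), n=len(mix))

# Each recovered track closely matches its source.
print(np.max(np.abs(instrument - low)) < 1e-6)   # True
print(np.max(np.abs(voice - high)) < 1e-6)       # True
```

The magic-seeming part is what happens where the bands collide, and that is exactly where the machine learning comes in.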

There are some obvious early applications of this. As the explanation linked above notes, stem splitting enables much finer sampling and remixing. A singer whose voice is failing - or who is unavailable - could nonetheless issue new recordings by laying their old vocal over a new instrumental track. And vice-versa: when, in 2002, Paul Justman wanted to recreate the Funk Brothers' hit-making session work for Standing in the Shadows of Motown, he had to rerecord from scratch to add new singers. Doing that had the benefit of highlighting those musicians' ability and getting them royalties - but it also meant finding replacements for the ones who had died in the intervening decades.

I'm far more impressed by the potential of this AI development than of any chatbot that can put words in a row so they look like lyrics. This is a real thing with real results that will open up a world of new musical possibilities. By contrast, "AI"-written song lyrics rely on humans' ability to perceive meaning where none exists. It's humans all the way up.

Illustrations: Nick Cave in 2013 (by Amelia Troubridge, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 5, 2023


For the last five years a laptop has been whining loudly in my living room. It hosts my mail server.

I know: who has their own mail server any more? Even major universities famed for their technological leadership now outsource to Google and Microsoft.

In 2003, when I originally set it up, lots of geeky friends had them. I wanted my email to come to the same domain as my website, which by then was already eight years old. I wanted better control of spam than I was getting with the email addresses I was using at the time. I wanted to consolidate the many email addresses I had accrued through years of technology reporting. And I wanted to be able to create multiple mailboxes at that domain for different purposes, so I could segregate the unreadable volume of press releases from personal email (and use a hidden, unknown address for sensitive stuff, like banking). At the time, I had that functionality via an address on the now-defunct Demon Internet, but Demon had become a large company in its ten years of existence, and you never knew...

In 2015, when Hillary Clinton came under fire for running her own mail server, I explained all this for Scientific American. The major benefit of doing it yourself, I seem to recall concluding at the time, was one Clinton's position barred her from gaining: the knowledge that if someone wants your complete historical archive they can't get it by cutting a secret deal with your technology supplier.

For about the first ten years, running my own mail server was a reasonably delightful experience. Being able to use IMAP to synchronize mail across multiple machines or log into webmail on my machine hanging at the end of my home broadband made me feel geekishly powerful, like I owned at least this tiny piece of the world. The price seemed relatively modest: two days of pain every couple of years to update and upgrade it. And the days of pain weren't that bad; I at least felt I was gaining useful experience in the process.

Around me, the technological world changed. Gmail and other services got really good at spam control. The same friends with mail servers first began using Gmail for mailing lists, and then, eventually, for most things.

And then somehow, probably around six or seven years ago, the manageable two days of pain crossed into "I don' wanna" territory. Part of the problem was deciding whether to stick with Windows as the operating system or shift to Linux. Shifting to Linux required a more complicated and less familiar installation process as well as some extra difficulty in transferring the old data files. Staying with Windows, however, meant either sticking with an old version heading for obsolescence or paying to upgrade to a new version I didn't really want and that seemed likely to bring its own problems. I dithered.

I dithered for a long time.

Meanwhile, dictionary attacks on that server became increasingly relentless. This is why the laptop is whining: its limited processing power can't keep up with each new barrage of some hacker script trying endless user names to find the valid ones.

There have been weirder attacks. One, whose details I have mercifully repressed, overwhelmed the server entirely; I was only able to stop it by barring a succession of Internet addresses.

Things broke and didn't get repaired, awaiting the upgrade that never happened. At some point, I lost the ability to log in remotely via the web. I'm fairly sure the cause was that I changed a setting and not some hacker attack, but I've never been able to locate and fix it. This added to my dithering over upgrading, as did the discovery that my server software appeared to have been bought by a Russian company.

Through all this, the outside world became more hostile to small servers, as part of efforts to improve spam blocking and security against attacks. Delaying upgrading the server has also meant not keeping up well enough with new protocols and protections as they've developed. Administrators I deal with began warning me about resulting incompatibilities. Gmail routinely dropped my email to friends into spam folders. I suspect this kind of concentration will be the future of the Mastodon Fediverse if it reaches mainstream use.
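Concretely (and purely as an illustration, with invented host names, addresses, and selectors), the protections administrators kept warning about are published as DNS TXT records: SPF to say which machines may send mail for a domain, DKIM to publish a signing key, and DMARC to tell recipients what to do with mail that fails both checks. For a hypothetical example.com they might look like:

```
; Illustrative DNS records for a hypothetical example.com mail domain
example.com.                   TXT  "v=spf1 ip4:203.0.113.7 -all"
sel1._domainkey.example.com.   TXT  "v=DKIM1; k=rsa; p=<base64 public key>"
_dmarc.example.com.            TXT  "v=DMARC1; p=quarantine; rua=mailto:reports@example.com"
```

A small server that hasn't kept records like these up to date is exactly the kind whose mail the big providers quietly route to spam.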

The warnings this fall that Britain might face power outages this winter broke the deadlock. I was going to have to switch to hosted email like everyone else. Another bit of unwiring.

I can see already that it will be a great relief not worrying about the increasingly fragile server any more. I can reformat and give away that old laptop and the less old one that was supposed to replace it. I will miss the sense of technological power that having it gave me, but if I'm honest I haven't had that in a long time now. In fact, the server itself seems to want to be put out of its misery: it stopped working a few days before Christmas, and I'm running on a hosted system as a failover. Call it my transitional server.

If I *really* miss it, I suppose I can always set up my own Mastodon instance. How hard can it be, right?

Illustrations: A still from Fritz Lang's 1927 classic, Metropolis, in celebration of its entry into the public domain.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Mastodon or Twitter.

The ad delusion

The fundamental lie underlying the advertising industry is that people can be made to like ads. People inside the industry sometimes believe this to a delusional degree - at an event some years ago, for example, I remember a Facebook representative suggesting that correctly targeted ads could be even more compelling to the site's users than *pictures of their grandchildren*. As if.

Apple's design change last year to bar apps from tracking its users unless said users specifically opted in has shown the reality of this. As of April 2022, only 25% have opted in. Meanwhile, Meta estimates that this decision cost it $10 billion in revenues in 2022.

Fair to remember, though, that Apple itself still appears to track users: the company is facing two class action suits after Gizmodo showed that Apple goes on tracking users even when their privacy settings are set to disable tracking completely.

This week, Ireland's Data Protection Commissioner issued Meta with a fine of €390 million and a ruling, forced on it by the European Data Protection Board, to the effect that the company cannot claim that requiring users to agree to its lengthy terms and conditions and including a clause allowing it to serve ads based on their personal data constitutes a "contract". The DPC, which wanted to rule in Meta's favor, is apparently appealing this ruling, but it's consistent with what most of us perceive to be a core principle of the General Data Protection Regulation - that is, that companies can't claim consent as a legal basis for using personal data if users haven't actively and specifically opted in.

This principle matters because of the crucial importance of defaults. As research has repeatedly shown, as many as 95% of users never change the default settings in the software and devices they use. Tech companies know and exploit this.

Meta has three months to bring its data processing operations into compliance. Its "data processing operations" are, of course, better known as Facebook, Instagram, and (presumably) WhatsApp. As a friend has often observed, how much less appealing they would sound if Meta called them that rather than use their names, and accurately described "adding a friend" as "adding a link in the database".

At the Guardian, Dan Milmo reports that European revenue accounts for 25% of Meta's total, or $19 billion in 2021. Meta says it will appeal against the decision, that in any case noyb's interpretation is wrong, and that the decision relates "only to which legal basis" Meta uses for "certain advertising". And, it said, carefully, "Advertisers can continue to use our platforms to reach potential customers, grow their business and create new markets." In other words, like the repeatedly failing efforts to stretch GDPR to enable data transfers between the EU and US, Meta thinks it can make a deal.

At the International Association of Privacy Professionals blog, Jennifer Bryant highlights the disagreement between the EDPB and the Irish DPC, which argued that Meta was not relying on user consent as the legal basis for processing personal data - the DPC was willing to accept advertising as part of the "personalized" service Instagram promises. The key question: can Meta find a different legal basis that will pass muster not only with GDPR but with the Digital Markets Act, which comes into force on May 2? Meta itself, in a blog post, describes personalized ads as a "necessary and essential part" of the personalized services Facebook and Instagram provide - and complains about regulatory uncertainty. Which, if they really wanted it, isn't so hard to achieve: comply with the most restrictive ruling and the most conservative interpretation of the law, and be done with it.

At Wired, Morgan Meaker argues that the threat to Meta's business model posed by the EDPB's ruling may be existential for more than just that one company. *Every* Silicon Valley company depends on the "contract" we all "sign" (that is, the terms and conditions we don't read) when we open our accounts as a legal basis for whatever they want to do with our data. If the business model is illegal for Meta, it's illegal for all of them. The death of surveillance capitalism has begun, the headline suggests optimistically.

The reality is most people's tolerance for ads is directly proportional to their ability to ignore them. We've all learned to accept some level of advertising as the price of "free" content. The question here is whether we have to accept being exploited as well. No amount of "relevance" lessens ads' intrusiveness for me. But that's a separate issue from the data exploitation none of us intentionally sign up for.

The "1984" Apple Super Bowl ad (YouTube) encapsulates the irony of our present situation: the price of viewing football at the time, it promised a new age in which information technology empowered us. Now we're in the ad's future, and what we got was an age in which information technology has become something that is done to us. This ruling is the next step in the battle to reverse that. It won't be enough by itself.


Illustrations: Image of Facebook logo.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Mastodon or Twitter.

December 9, 2022


Say you haven't moved (house) in 30 years without saying you haven't moved in 30 years. Or say you're over 50 without saying you're over 50. "I just pulled a big box of wires out of my house."

What began as a project to turn the attic (loft) into more usable space has metastasized all over the house (apartment), as every crowded corner gets reevaluated. Behind every piece of furniture, some being moved for the first time since 1991, lurk wires. Wires of all kinds. Speaker wire that ran to the amplifier down the hall. TV cables connecting various items - computer, DVD player, the VCR I can't throw out until all the tapes are gone. Ethernet cables, because wired connections are more stable. Telephone cables running to remote extensions that were replaced with DECT phones 15 years ago. A weird, extraordinarily thin wire for a device called a Rabbit that once connected the TV in my office to the cable box in the living room; an infrared sender even let you change channels. The cable box, the Rabbit, and the TV are all long gone, but the wire lives on because it runs behind furniture that has settled too deeply into the carpet to move. Even now, I haven't got it all out. And, because this apartment (flat) has just a single electrical outlet per room, multi-way extension cords and plugs *everywhere*.

The phone, stereo system, and TV cabling went in first. Layered on top of all that was an ethernet network that accreted over time to serve various computers in odd locations. There was an extra wifi router in the living room because the original one's wifi didn't reach the kitchen. And so on. So the box of pulled wiring also includes three network switches, which still leaves two in place. This in a four-room flat!

I still haven't touched the Giant Rat of Sumatra's nest behind my desk.

This is the result of 30 years of adding bits that were needed at the time but never subtracting them when their original purpose has gone. If you move frequently this sort of thing doesn't happen because you tear it all down and build back only what you need each time. I know, because between the ages of 17 and 27 I moved nine times. I got really good at packing books and LPs. (Say you're over 60 without saying you're over 60.)

Were I a 30-something modern renter, my entire life would lift out of each successive abode leaving no trace and requiring few boxes. My books, audio, and video would be computer files or streaming subscriptions. All my telecommunications connections would be wireless. And, for best results, any furniture I had would be either on 30-day free trial or inflatable. Having wires is like having a printer: modern people are app people. Wires need not apply. Wires are for old people. Wires...are a sign of privilege.

I now realize that accretion has led me to the equivalent of buying a tractor but continuing to feed and care for the Clydesdale horses it replaced without really noticing they're no longer doing anything useful. Or, in a higher-risk example, this sort of accretion leads older people into overly complex medication regimes as their doctors add new medications, often to control the side effects of the ones they're already on, without reconsidering the whole list; that situation is common enough to have bred a subspecialty of pharmacology to review and rationalize people's medications.

More technologically, there's the phenomenon consultants remark upon of finding ancient machines, even in banks, running mission-critical but ancient software no one dares touch because no one knows how it works. I suspect that as the time between computer replacements continues to lengthen, accretion of this type will be the fate of all computer systems. The reason is simple: adding things to patch localized problems without touching what's already in place will always feel safer than pulling an unlabeled plug and risking breaking the whole system because you didn't understand the complex dependencies. And there's little motivation. For the most part, everything works fine until one day the increasing complexity overwhelms the system and it all falls over - at which point tracing the fault is excruciatingly difficult, and fixing it will likely require a workaround that, like the one for the Y2K bug, has an expiration date when you'll have to trace and replace - or find another workaround.

There are lots of knock-on effects from accretion, most notably unnoticed security vulnerabilities. In her days running RISCS, Angela Sasse used to say that important solutions to endemic cybersecurity problems are often overlooked because they're not specifically technological fixes. Instead, she argued, reducing stress on employees by ensuring they're not overworked and have systems that make their work easier instead of harder pays dividends in fewer mistakes. Similarly, replacing old equipment with newer equipment that has better security and usability built in can solve many seemingly intractable problems, over time costing less than continuing to patch the old system.

In my own case, there was a small but definite cost in wasted electricity (those extra switches) and, I imagine, a slightly higher risk of fire (all those extension cords). Life, as Gilbert and Sullivan observed, is a closely complicated tangle.

Illustrations: The box of wires, with more to come.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Mastodon or Twitter.

December 2, 2022

Hearing loss

Some technologies fail because they aren't worth the trouble (3D movies). Some fail because the necessary infrastructure and underlying technologies aren't good enough yet (AI in the 1980s, pen computing in the 1990s). Some fail because the world goes another, simpler, more readily available way (Open Systems Interconnection). Some fail because they are beset with fraud (the fate that appears to be unfolding with respect to cryptocurrencies). And some fail even though they work as advertised and people want them and use them, because they make no money to sustain their development for their inventors and manufacturers.

The latter appears to be the situation with smart speakers, which in 2015 were going to take over the world, and today, in 2022, are installed in 75% of US homes. Despite this apparent success, they are losing money even for market leaders Amazon (third) and Google (second), as Business Insider reported this week. Amazon's Worldwide Digital division, which includes Prime Video as well as Echo smart speakers and Alexa voice technology, lost $3 billion in the first quarter of this year alone, primarily due to Alexa and other devices. The division will now be the biggest target for the layoffs the company announced last week.

The gist: they thought smart speakers would be like razors or inkjet printers, where you sell the hardware at or below cost and reap a steady income stream from selling razor blades or ink cartridges. Amazon thought people would buy their smart speakers, see something they liked, and order the speaker to put through the purchase. Instead, judging from the small sample I have observed personally, people use their smart speakers as timers, radios, and enhanced remote controls, and occasionally to get a quick answer from Wikipedia. And that's it. The friends I watched order their smart speaker to turn on the basement lights and manage their shopping list have, as far as I could tell on a recent visit, developed no new uses for their voice assistant in three years of being locked up at home with it.

The system has developed a new feature, though. It now routinely puts the shopping list items on the wrong shopping list. They don't know why.

In raising this topic at The Overspill, Charles Arthur referred back to a 2016 Wired article summarizing venture capitalist Mary Meeker's assessment in her annual Internet Trends report that voice was going to take over the world and the iPhone had peaked. In slides 115-133, Meeker outlined her argument: improving accuracy would be a game-changer.

Even without looking at recent figures, it's clear voice hasn't taken over. People do use speech when their hands are occupied, especially when driving or when the alternative is to type painfully into their smartphone - but keyboards still populate everyone's desks, and the only people I know who use speech for data entry are people for whom typing is exceptionally difficult.

One unforeseen deterrent may be that privacy emerged as a larger issue than early prognosticators expected. Repeated stories have raised awareness that the price of being able to use a voice assistant at will is that microphones in your home listen to everything you say, waiting for their cue to send your speech to a distant server to parse. Rising consciousness of the power of the big technology companies has made more of us aware that smart speakers are designed more to fulfill their manufacturers' desires to intermediate and monetize our lives than to help us.

The notion that consumers would want to use Amazon's Echo for shopping appears seriously deluded with hindsight. Even the most dedicated voice users I know want to see what they're buying. Years ago, I thought that as TV and the Internet converged we'd see a form of interactive product placement in which it would be possible to click to buy a copy of the shirt a football player was wearing during a game or the bed you liked in a sitcom. Obviously, this hasn't happened; instead a lot of TV has moved to streaming services without ads, and interactive broadcast TV is not a thing. But in *that* integrated world voice-activated shopping would work quite well, as in "Buy me that bed at the lowest price you can find", or "Send my brother the closest copy you can find of Novak Djokovic's dark red sweatshirt, size large, as soon as possible, all cotton if possible."

But that is not our world, and in our world we have to make those links and look up the details for ourselves. So voice does not work for shopping beyond adding items to lists. And if that doesn't work, what other options are there? As Ron Amadeo writes at Ars Technica, the queries where Alexa is frequently used can't be monetized, and customers showed little interest in using Alexa to interact with other companies such as Uber or Domino's Pizza. And even Google, which is also cutting investment in its voice assistant, can't risk alienating consumers by using its smart speaker to play ads. Only Apple appears unaffected.

"If you build it, they will come," has been the driving motto of a lot of technological development over the last 30 years. In this case, they built it, they came, and almost everyone lost money. At what point do they turn the servers off?

Illustrations: Amazon Echo Dot.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter and/or Mastodon.

November 18, 2022

Being the product

Twtter-bird-upside-down-370.jpgThe past week of Twitter has been marked by a general sense of waiting for the crash, heightened because no one knows when the bad thing will happen or what form it will take. On Twitter itself, I see everyone mourning the incoming loss and setting up camp elsewhere; on professional media, journalists are frantically trying to report on what's going on at HQ, where there is now no communications team and precious few engineers.

As noted here last week, it is definitely not so simple as Twitter's loss is Mastodon's/Discord's/SomeOtherSite's gain.

The general sense of anxiety feels like a localized version of the years of the Trump presidency - that is, people logging in constantly to check, "What's he done now?" Only the "he" is of course new owner Elon Musk, and the "what" is stuff like a team being fired, someone crucial quitting, a new order to employees ("check this box by 5pm or you're fired!"), yet another change to the system of blue ticks that may or may not verify a person's identity, or the apparent disabling of two-factor authentication via SMS shortly after announcing the shutdown of "20% of microservices". This kind of thing makes everyone jumpy. Every tiny glitch could be the first sign that Twitter is crumbling around the edges before cascading into failure. Will the process look like HAL losing its marbles in the movie 2001: A Space Odyssey? Or will it just go black like the end of The Sopranos?

I have never felt so conscious of my data: 15 years of tweets and direct messages all held hostage inside a system with a renegade owner no one trusts. Deleting it feels like killing my past; leaving it in place teems with risks.

The risk level has been abruptly raised by the departure of various security and privacy personnel from Twitter's staff, which led Michael Veale to warn that the platform should be regarded as dangerously vulnerable and insecure. Veale went on to provide instructions for using the law (that is, the General Data Protection Regulation) rather than just Twitter's tools, to delete your data.

Some of my more cautious friends have been regularly deleting their data all along - at the end of every couple of weeks, or every six months, mostly to ensure they can't suddenly become a pariah for something they posted casually five years ago. (It turns out this is a function that Mastodon will automate through user settings.) But, as Veale asks, how do you know Twitter is really deleting the data? Hence his suggestion of applying the law: it gives your request teeth. But is there anyone left at Twitter to respond to legal requests?

The general sense of uncertainty is heightened by things like the reports I saw of strange behavior in response to requests to download account archives: instead of just asking for two-factor authentication before proceeding, the site sent these users to the help center and a form demanding government ID. There seem to be a number of these little weirdnesses, and they're raising users' overall distrust of the system and the sense that we're all just waiting for the thing to break and our data to become an asset in a fire sale - or for a major hack in which all our data gets auctioned on the dark web.

"If you're not paying for the product, you're the product," goes the saying (attribution uncertain). Right now, it feels like we're waiting to find out our product status.

Meanwhile, Apple has spent years now promoting its products by claiming they provide better privacy than the alternatives. It is currently helping destroy the revenue base of Meta (owner of Instagram, Facebook, and WhatsApp) by allowing users to opt to block third-party trackers on its devices. At The Drum, Chris Sutcliffe cites estimates that 62% of Apple users have done so; at Forbes, Daniel Newman reported in February that Meta projected the move would cost the company $10 billion in lost ad sales this year. The financial results it's announced since have been accordingly grim.

Part of the point of this is that Apple's promise appeared to be that the money its customers pay for hardware and services also buys them privacy. This week, Tom Germain reported at Gizmodo that Apple's own apps continue to harvest data about users' every move even when those users have - they thought - turned data collection off.

"Even if you're paying for the product, you're the product," Cory Doctorow wrote on discovering this. Double-dipping is familiar in other contexts. But here Apple has broken the pay-with-data bargain that made the web. It may live to regret this; collecting data to which it has exclusive access while shutting down competitors has attracted the attention of German antitrust regulators.

If that's where the commercial world is going, the appeal of something like Mastodon, where we are *not* the product, and where accounts can be moved to other interoperable servers at any time, is obvious. But, as I've written before about professional media, the money to pay for services and servers has to come from *somewhere*. If we're not going to pay with data, we will have to pay with money.

Illustrations: Twitter flies upside down.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter and/or Mastodon.

November 11, 2022

Moving day

twitter-lettuce-FgnQBzFVIAEox-K-370.jpeg"On the Internet, your home always leaves you," someone observed on Twitter some months back.

Probably everyone who's been online for any length of time has had this experience. That site you visit every day, that's full of memories and familiar people, suddenly is no more. Usually the problem is a new owner, who buys it and closes it down (Television Without Pity, Geocities) or alters it beyond recognition (CompuServe). Or its paradigm falls out of fashion and users leach away until the juice is gone, the fate of many of the early text-based systems.

As the world and all have been reporting - because so many journalists make their online homes there - Twitter is in trouble, under a new owner with poor impulse control and a new idea every day - Twitter will be a financial service! (like WeChat?) Twitter will be the world's leading source of accurate information! (like Wikipedia?) Twitter can do multimedia! (like TikTok?) - who is driving out what staff he hasn't fired.

The result, Chris Stokel-Walker predicts, will be escalating degradation of the infrastructure - and possibly, Mike Masnick writes, violations of the company's 2011 20-year consent decree with the US Federal Trade Commission, which could ultimately cost the company billions, in addition to the $13 billion in debt Musk added to the company's existing debt load in order to purchase it.

All of that - and the unfolding sequelae Maria Farrell details - will no doubt be a widely used case study at business schools someday.

For me, Twitter has been a fantastic resource. In the 15 years since I created my account, Twitter is where I've followed breaking news, connected with friends, found expert communities. Tight clusters are, Peter Coy finds at the New York Times, why Twitter has been unexpectedly resilient despite its lack of profitability.

But my use of Twitter has nothing in common with its use by those with millions of followers. At that level, it's a broadcast medium. My own experience of chatting with friends or responding randomly to strangers' queries is largely closed to them. Like traveling on the subway, they *can* do it, but not the way the rest of us can. For someone in that position, Twitter is a large audience that fortuitously includes journalists, politicians, and entertainers. The writer Stephen King had the right reaction to the suggestion that verified accounts should pay $20 a month (since reduced to $8) for the privilege: screw *that*. Though even average Twitter users will resist paying to be sold to the advertisers who ultimately fund the service.

Unusually, a number of alternative platforms are ready and waiting for disaffected Twitter users to experiment with. Chief among them is Mastodon, which looks enough like Twitter to suggest an easy learning curve. There are, however, profound differences, most of them good. Mastodon is a protocol, not a site; like the web, email, or Usenet, anyone can set up a server ("instance") using open source software and connect to other instances. You can form a community on a local instance - or you can use your account as merely a convenient address from which to access postings by users at dozens of other instances. One consequence of this is that hashtags are very much more important in helping people find each other and the postings they're interested in.

Over the last week, I've seen a lot of people trying to be considerate of the natives and their culture, most particularly that they are much more sensitive about content warnings. The reality remains, though, that Mastodon's user base has doubled in a week, and that level of influx will inevitably bring change - if they stay and post, and particularly if many of them adopt a bit of software that allows automated cross-posting between the two services.

All of this has happened without a commercial interest: no one owns Mastodon, it has no ads, and no one is recruiting Twitter users. But that right there may be the biggest problem: the huge influx of new users doesn't bring revenue or staff to help manage it. This will be a big, unplanned test of the system's resilience.

Many are now predicting Twitter's total demise, not least because new owner Elon Musk himself has told employees that the company may become bankrupt due to its burn rate (some of which is his own fault, as previously noted). Barring the system going offline, though, habit is a strong motivator, and it's more likely that many people will treat the new accounts they've set up as "in case of need".

But some will move, because unlike other such situations, whole communities can move together to Mastodon, aided by its ability to ingest lists. I'm seeing people compile lists of accounts in various academic fields, of journalists, of scientists. There are even tools that scan the bios of your Twitter contacts for Mastodon addresses and compile them into a personal list, which, again, can be easily imported.

If Mastodon works for Twitter's hundreds of millions, there is a big upside: communities don't have to depend for their existence on the grace and favor of a commercial owner. Ultimately, the reason Musk now owns Twitter is he offered shareholders a lucrative exit. They didn't have to care about *us*. And they didn't.

Illustrations: Twitter versus lettuce (via Sheon Han on Twitter).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter or Mastodon

November 4, 2022

Meaningful access

Screenshot from 2022-11-04 12-56-46-370.jpg"We talk as if being online is a choice," Sonia Livingstone commented, "but we live in a context and all the decisions around us matter."

As we've observed before, it's only for the most privileged that *not* being online or *not* carrying a smartphone comes without cost.

Livingstone was speaking on a panel on digital inequalities at this week's UK IGF, an annual forum that mulls UK concerns over Internet governance in order to feed them into the larger global conversation on such matters (IGF). The panel highlighted two groups most vulnerable to digital exclusion: old people and children.

According to Ofcom's 2022 Online Nations report, in 2021 6% of British over-18s did not have Internet access at home. That average is, however, heavily skewed by over-65s, 20% of whom don't have Internet access at home and another 7% of whom have Internet access at home but don't use it. In the other age groups, the percentage without home access starts at 1% for 18-24 and rises to 3% for 45-54. The gap across ages is startlingly larger than the gap across economic groups, although obviously there's overlap: Age UK estimated in 2021 that 2 million pensioners were living in poverty.

I know one of the people in that 20%. She is adamant that there is nothing the Internet has to offer that she could possibly want. (I feel this way about cryptocurrencies.) Because, fortunately, the social groups she's involved in are kind, tolerant, and small, the impact of this refusal probably falls more on them than on her: they have to make the phone calls and send the printed-out newsletters to ensure she's kept in the loop. And they do.

Another friend, whose acquaintance with the workings of his computer is so nodding that he gets his son round to delete some files when his hard drive fills up, would happily do without it - except that his failing mobility means that he finds entertainment by playing online poker. To him, the computer is a necessary, but despised, evil. In Ofcom's figures, he'd look all right - Internet access at home, uses it near-daily. But the reality is that despite his undeniable intelligence he's barely capable of doing much beyond reading his email and loading the poker site. Worse, he has no interest in learning anything more; he just hates all of it. Is that what we mean by "Internet access"?

These two are what people generally think of when they talk about the "digital divide".

As Sally West, policy manager for Age UK, noted, if you're not online it's becoming increasingly difficult to do mundane things like book a GP appointment or do any kind of banking. Worse, isolation during the pandemic led some to stop using the Internet because they didn't have their customary family support. In its report on older people and the Internet, Age UK found that about half a million over-65s have stopped using the Internet. And, West said, unlike riding a bike, Internet skills don't necessarily stay with you when you stop using them. Even if they do, they lose relevance as the technology changes.

For children, lack of access translates into educational disadvantage and severely constricted life opportunities. Despite the government's distribution of laptops, Nominet's Digital Youth Index finds that a quarter of young people lack access to one, and 16% rely primarily on mobile data. And, said Jess Barrett, children lack understanding of privacy and security yet are often expected to be their family's digital expert.

More significantly, the Ofcom report finds that 20% of people - and a *third* of people aged 25-34 - used only a smartphone to go online in 2021. That's *double* the number in 2020. Ofcom suggests that staying home much of 2020 and newer smartphones' larger screens may be relevant factors. I'd guess that economic uncertainty played an important role and that 2022's cost-of-living crisis will cause these numbers to rise again. There's also a generational aspect; today's 30-year-olds got their teenaged independence via smartphones.

To Old Net Curmudgeons, phone-only access isn't really *Internet* access; it's walled-garden apps. Where the open Internet promised that all of us could build and distribute things, apps limit us to consuming what the apps' developers allow. This is not petty snobbery; creating the next generation of technology pioneers requires learning as active users instead of lurkers.

This disenfranchisement led Lizzie Coles-Kemp to an approach that's rarely discussed: "We need to think how to design services for limited access, and we need to think what access means. It's not binary." This approach is essential as the mobile phone world's values risk overwhelming those of the open Internet.

In response, Livingstone mooted the idea of "meaningful access": the right device for the context and sufficient skills and knowledge that you can do what you need to.

The growing cost-of-living crisis, exacerbated this week by an interest rate rise, makes it easy to predict a marked further rise in households that jettison fixed-line broadband. This year may be the first since the Internet began in which online access in the UK shrinks.

"We are just highlighting two groups," Livingstone concluded. "But the big problem is poverty and exclusion. Solve those, and it fixes it."

Illustrations: UK IGF's panel on digital inequalities: Cliff Manning, Sally West, Sonia Livingstone, Lizzie Coles-Kemp, Jess Barrett.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter or Mastodon.

August 26, 2022

Zero day

Tesla-crash-NYTimes-370.pngYears ago, an alarmist book about cybersecurity threats concluded with the suggestion that attackers' expertise at planting backdoors could result in a "zero day" when, at an attacker-specified time, all the world's computers could be shut down simultaneously.

That never seemed likely.

But if you *do* want to take down all of the computers in an area, the easiest way is to cut off the electricity supply. Which, if the worst predictions for this year's winter in Britain come true, is what could happen, no attacker required. All you need is a government that insists, despite expert warnings, that there will be plenty of very expensive energy to go round for those who can afford it - even while the BBC reports that in some areas of West London the power grid is so stretched by data centers' insatiable power demands that new homes can't be built.

Lack of electrical power is something even those rich enough not to have to choose between eating and heating can't ignore - particularly because they're also most likely to be dependent on broadband for remote working. But besides that: no power means no Internet: no way for kids to do their schoolwork or adults to access government sites to apply for whatever grants become available. Exponentially increasing energy prices already threaten small businesses, charities, care homes, child care centers, schools, food banks, hospitals, and libraries, as well as households. It won't be much consolation if we all wind up "saving" money because there's no power available to pay for.


In an earlier, analog, era, parents taking innocent nude photos of their kids were sometimes prosecuted when they tried to have them developed at the local photo shop. In the 2021 equivalent, Kashmir Hill reports at the New York Times, Google flagged pictures two fathers took of their young sons' genitalia in order to help doctors diagnose an infection, labeled them child sexual abuse material, ordered them deleted, suspended the fathers' accounts, and reported them to the police.

It's not surprising that Google has automated content moderation systems dedicated to identifying abuse images, which are illegal almost everywhere. What *has* taken people aback, however, was these fathers' complete inability to obtain redress, even after the police exonerated them. Most of us would expect Google to have a "human in the loop" review process to whom someone who's been wrongfully accused can appeal.

In reality, though, the result is more likely to be like what happened in the so-called Twitter joke trial. In that case, a frustrated would-be airline passenger trying to visit his girlfriend posted on Twitter that he might blow up the airport if he still couldn't get a flight. Everyone who saw the tweet, from the airport's security staff to police, agreed he was harmless - and yet no one was willing to be the person who took the risk of signing off on it, just in case. With suspected child abuse, the same applies: no one wants to risk being the person who wrongly signs off on dropping the accusations. Far easier to trust the machine, and if it sets off a cascade of referrals that cost an innocent parent their child (as well as all their back GMail, contacts list, and personal data), it's not your fault. This goes double for a company like Google, whose bottom line depends on providing as little customer service as possible.


Even though all around us are stories about the risks of trusting computers not to fail, last week saw a Twitter request for the loan of a child. For the purpose of: having it run in front of a Tesla operating on Full Self-Drive to prove the car would stop. At the Guardian, Arwa Mahdawi writes that said poster did find a volunteer, albeit with this caveat: "They just have to convince their wife." Apparently several wives were duly persuaded, and the children got to experience life as crash test dummies - er, beta testers. Fortunately, none were harmed.

Reportedly, Google/YouTube is acting promptly to get the resulting videos taken down, though it is not reporting the parents, who, as a friend quipped, are apparently unaware that the Darwin Award isn't meant to be aspirational.


The last five years of building pattern recognition systems - facial recognition, social scoring, and so on - have seen a lot of evidence-based pushback against claims that these systems are fairer because they eliminate human bias. In fact they codify it because they are trained on data with the historical effects of those biases already baked in.

This week saw a disturbing watershed: bias has become a selling point. An SFGate story by Joshua Bote (spotted at BoingBoing) highlights Sanas, a Bay Area startup that offers software intended to "whiten" call center workers' voices by altering their accents into "standard American English". Having them adopt obviously fake English pseudonyms apparently wasn't enough.

Such a system, as Bote points out, will reinforce existing biases. If it works, it's perfectly designed to expand prejudice and entitlement along the lines of "Why should I have to deal with anyone whose voice or demeanor I don't like?" It's worse than virtual reality, which is at least openly a fictional simulation; it puts a layer of fake over the real world and makes us all less tolerant. This idea needs to fail.

Illustrations: One of the Tesla crashes investigated in New York Times Presents, discussed here in June.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 19, 2022

Open connections

Better Call Saul - S06e13-.jpgIt's easy to say the future is hybrid. Much harder to say how to make it - and for "it" read "conferences and events" - hybrid and yet still a good experience for all involved.

I should preface this by stating the obvious: writers don't go to events the same way as other people. For one thing, our work is writing about what we find. So where a "native" (say, a lawyer at a conference on robots, policy, and law) will be looking to connect their work to the work others at the conference are doing, the writer is always thinking, "Is that worth writing about?" or "Why is everyone excited about this paper?" You're also always looking around: who would be interesting to schmooze over the next lunch break?

For writers, then - or at least, *this* writer - attending remotely can be unsatisfying, more like reviewing a TV show. After one remote event last year, I approached a fellow attendee on Twitter and suggested a Zoom lunch to hash over what we'd just witnessed. She thought it was weird. In person, wandering up to join her lunchtime conversation would have been unremarkable. The need to ask makes people self-conscious.

And yet, there is a big advantage in being able to access many more events than you could ever afford to fly to. So I want hybrid events to *work*.

In a recent editorial note, a group of academic researchers set out guidelines and considerations for hybrid conferences, the result of discussions held in July 2021 at the Dagstuhl seminar on Climate Friendly Internet Research. They divide hybrid conferences into four types: passive (in which the in-person conference is broadcast to the remote audience, who cannot present or comment); semi-passive (in which remote participants can ask questions but not present or act as panelists); true (in which both local and remote participants have full access and capabilities); and distributed (in which local groups form clusters or nodes, which link together to form the main event).

I have encountered the first three of these (although I think the fourth holds a lot of promise). My general rule: the more I can participate as a remote attendee the better I like it and the more I feel like the conference must be joined in real time. A particular bugaboo is organizers who disable the chat window. At one in-person-only event this year, several panels were composed solely of remote speakers, who needed a technician's help to get audience feedback.

As the Dagstuhl authors write, hybrid events are not new. One of the organizations I'm involved with has enabled remote participation in council meetings for more than 15 years. At pre-pandemic meetings a telephone hookup and conference speaker provided dial-in access. Alongside, two of us typed into a live chat channel updates that both became the meeting's minutes and helped clarify what was being said and who was speaking. Those two also monitored the chat for remote participants who needed help being heard.

Folk music gatherings have developed practices that might be more broadly useful. For one thing, they set up many more breakout "rooms" than seems needed at first glance. One becomes the "parking lot" - a room where participants can leave their computer logged in, mic and camera off, so they can resume the session at any time without having to log in again. There's usually a "kitchen" or some such where people can chat with new and old friends. Every music session has both a music host and a technical assistant who keeps things running smoothly. And there is always an empty period following each session, so people can linger and the next session has ample set-up time. A lobby is continuously staffed by a host who helps incomers find the sessions they want and provides a point of contact if something is going wrong.

As both these examples suggest, enabling remote attendees to be full participants requires a lot of on-site support. In a discussion about this on Twitter, Jon Crowcroft, one of the note's authors, said, for example, that each in-person participant should also have a Zoom (or whatever) login so they could interact fully with remote participants, including accessing the chat window. I would second this. At a multi-track workshop earlier this year, some of the event's tracks were inaccessible because the room's only camera and microphone were poorly placed, making it impossible to see or understand commenters. At the end of each session the conference split in two; those of us on Zoom chatted to each other, while the in-person attendees wandered off to the room where the refreshments were. Crowcroft's recommendation would have helped a lot.

It's a lot of effort, but there is a big reason to do it, which the Dagstuhl authors also discuss: embracing diversity. The last two years have enabled all of us to gain contact with people who could never muster the funding or logistics to travel to distant events. Treating remote participants as an add-on sends the message that we're back to exclusionary business as previous normal. In locking us down, the pandemic also opened up much more of the world to participation. It would be wrong to close it back down again.

Illustrations: The second shot of the final episode of Better Call Saul (because I couldn't think of anything).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 12, 2022

Nebraska story

Thumbnail image for Facebook-76536_640.pngThis week saw the arrest of a Nebraska teenager and her mother, who are charged with multiple felonies for terminating the 17-year-old's pregnancy at 28 weeks and burying (and, apparently, trying to burn) the fetus. Allegedly, this was a home-based medication abortion...and the reason the authorities found out is that following a tip-off the police got a search warrant for the pair's Facebook accounts. There, the investigators found messages suggesting the mother had bought the pills and instructed her daughter how to use them.

Cue kneejerk reactions. "Abortion" is a hot button. Facebook privacy is a hot button. Result: in reporting these gruesome events most media have chosen to blame this horror story on Facebook for turning over the data.

As much as I love a good reason to bash Facebook, this isn't the right take.

Meta - Facebook's parent - has responded to the stories with a "correction" that says the company turned over the women's data in response to valid legal warrants issued by the Nebraska court *before* the Supreme Court ruling. The company adds, "The warrants did not mention abortion at all."

What the PR folks have elided is that both the Supreme Court's Dobbs decision, which overturned Roe v. Wade, and the wording of the warrants are entirely irrelevant. It doesn't *matter* that this case was about an abortion. Meta/Facebook will *always* turn over user data in compliance with a valid legal warrant issued by a court, especially in the US, its home country. So will every other major technology company.

You may dispute the justice of Nebraska's 2019 Pain-Capable Unborn Child Act, under which abortion is illegal after 20 weeks from fertilization (22 weeks in normal medical parlance). But that's not Meta's concern. What Meta cares about is legal compliance and the technical validity of the warrant. Meta is a business, not a social justice organization, and while many want Mark Zuckerberg to use his personal judgment and clout to refuse to do business with oppressive regimes (by which they usually mean China, or Myanmar), do you really want him and his company to obey only laws they agree with?

There will be many much worse cases to come, because states will enact and enforce the vastly more restrictive abortion laws that Dobbs enables, and there will be many valid legal warrants forcing companies to hand data to police bent on prosecuting people in excruciating pregnancy-related situations - and in many more countries. Even in the UK, where (except for Northern Ireland) abortion has been mostly non-contentious for decades, lurking behind the 1967 law which legalized abortion until 24 weeks is an 1861 statute under which abortion is criminal. That law, as Shanti Das recently wrote at the Guardian, has been used to prosecute dozens of women and a few men in the last decade. (See also Skeptical Inquirer.)

So if you're going to be mad at Facebook, be mad that the platform hadn't turned on end-to-end encryption for its messaging. That, as security engineer Alec Muffett has been pointing out on Twitter, would have protected the messages against access both by the system itself and by law enforcement. At the Guardian, Johana Bhuiyan reports the company is now testing turning on end-to-end encryption by default. Doubtless, soon to be followed by law enforcement and governments demanding special access.

Others advocate switching to other encrypted messaging platforms that, like Signal, provide a setting that ensures messages automatically delete themselves after a specified number of days. Such systems retain no data that can be turned over.

It's good advice, up to a point. For one thing, it ignores most people's preference for using the familiar services their friends use. Adopting a second service just for, say, medical contacts adds complications; getting everyone you know to switch is almost impossible.

Second, it's also important to remember the power of metadata - data about data, which includes everything from email headers to search histories. "We kill people based on metadata," former NSA head Michael Hayden said in 2014 in a debate on the constitutionality of NSA surveillance. (But not, he hastened to add, metadata collected from *Americans*.)

Logs of who has connected to whom and how frequently are often more revealing than the content of the messages sent back and forth. For example: the message content may be essentially meaningless to an outsider ("I can make it on Monday at two") until the system logs tell you that the sender is a woman of childbearing age and the recipient is an abortion clinic. This is why so many governments have favored retaining Internet connection data. Governments cite the usual use cases - organized crime, drug dealers, child abusers, and terrorists - when pushing for data retention, and they are helped by the fact that most people instinctively quail at the thought of others reading the *content* of their messages but overlook metadata's significance. That blind spot has helped enable mass Internet surveillance.
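The point about connection records can be made concrete with a tiny sketch. Everything here - the numbers, the log entries, the function names - is invented for illustration; real retention systems are vastly larger, but the principle is the same: no message content is needed.

```python
# Hypothetical sketch: even with message content hidden, connection
# logs (metadata) can reveal sensitive relationships. All numbers and
# records here are invented for illustration.

SENSITIVE_RECIPIENTS = {"+1-555-0100"}  # e.g. a clinic's known number

call_log = [
    {"from": "+1-555-0199", "to": "+1-555-0123", "minutes": 2},
    {"from": "+1-555-0199", "to": "+1-555-0100", "minutes": 11},
    {"from": "+1-555-0199", "to": "+1-555-0100", "minutes": 6},
]

def flag_contacts(log, sensitive):
    """Count each caller's contacts with sensitive recipients -
    derived purely from who-called-whom, never from content."""
    counts = {}
    for record in log:
        if record["to"] in sensitive:
            counts[record["from"]] = counts.get(record["from"], 0) + 1
    return counts

print(flag_contacts(call_log, SENSITIVE_RECIPIENTS))
# {'+1-555-0199': 2}
```

Two calls to a known clinic number tell an investigator more than any amount of "I can make it on Monday at two" - which is exactly why retained connection data is so prized.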

The net result of all this is to make surveillance capitalism-driven technology services dangerous for the 65.5 million women of childbearing age in the US (2020). That's a fair chunk of their most profitable users, a direct economic casualty of Dobbs.

Illustrations: Facebook.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 29, 2022

On the Internet, they always knew you were a dog

RunJoshRun0106-370.pngThis week: short cuts.

Much excitement that Disney's copyright in the first Mickey Mouse film will expire in...2024. Traditionally, Disney would be lobbying to extend copyright terms - as it did in 1998, when the Copyright Term Extension Act lengthened it to life plus 70 years for authors and 95 years for corporations. In 1928, when Disney released Mickey's first cartoon, copyright lasted 28 years, renewable once. In 1955, Disney duly renewed it until 1984. The 1976 Copyright Act extended that until 2003, and the 1998 law pushed it through 2023. Other companies also profit from these extensions, but Disney is the most notorious.

The losers have been us: the acts froze the public domain for decades. In the interim, as both the Guardian and the Authors Alliance report, Disney has registered trademarks in the character, and even shorn of copyright Mickey remains protected.

The weird reason Disney is unlikely to get another extension *this* time is that the US Republican party is picking a fight with Disney over LGBTQ+ rights. US Senator Josh Hawley (R-MO) is pushing a copyright term *reduction* bill as a *punishment*. I want to laugh at the bonkersness of this, but can't because: sucks to be the humans whose rights are caught in this crossfire. But yay! public domain.


Airlines do it. Scalpers do it. Even educated algorithms do it. Which is how this week angry Bruce Springsteen fans complained that concert tickets hit $5,500. The reason: Ticketmaster's demand-driven dynamic pricing. Springsteen's manager, Jon Landau, called the *average* pricing of $200 "fair"; Ticketmaster says only 1% of tickets sold for over $1,000, and 18% sold for under $99.

A Ticketmaster option adjusts pricing to the perceived market. Those first in the queue when sales opened saw four-figure prices; waiting and searching would, reports suggest, have found other sites with more modest prices.

In the Internet's early days, many expected it to advantage consumers by making market information transparent. On eBay, this remains somewhat true. Elsewhere, corporate consolidation and automation have eliminated that insight. In the Springsteen case, as your hand hovers on the purchase button you have seconds to decide on the price in front of you. You aren't really paying for Springsteen, you're paying for *certainty*.


The 1998 copyright term extension coincided with the beginnings of the MIT Media Lab's Things That Think, which presaged today's "smart" Internet of Things. Couple that with the nascent software industry's move from purchase to subscription, and with the history of digital rights management, and limitations on ownership of *things* became imaginable.

This week, BMW offered British drivers this exact dystopia: it will charge £10 per month for heated seats in cars that didn't include them when new. That means all the necessary hardware is present in every car, and BMW activates a subscription by toggling a line of code to "true" - which makes paying extra all the more infuriating.
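As a hypothetical sketch (the class and names here are invented, not BMW's actual code), the whole business model reduces to a boolean sitting between you and hardware you already own:

```python
# Hypothetical sketch of subscription-gated hardware: the heating
# hardware ships in every car; software merely checks a flag.
# Invented for illustration - not BMW's actual implementation.

class Seat:
    def __init__(self):
        self.heating_installed = True     # hardware present in every car
        self.heating_subscribed = False   # the "line of code" payment toggles

    def heat(self):
        if self.heating_installed and self.heating_subscribed:
            return "seat warming"
        return "heating disabled (subscription required)"

seat = Seat()
print(seat.heat())               # heating disabled (subscription required)
seat.heating_subscribed = True   # payment received: flip the flag
print(seat.heat())               # seat warming
```

The £10 a month buys no new parts and no new software - just the second branch of that `if`.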


The shrinking company Meta is unhappy about leap seconds, joining a history of computer industry objections to celestial mechanics. For computer folks, leap seconds pose thorny synchronization problems (see also GPS); for astronomers and physicists, leap seconds crucially align human time with celestial time. When I first wrote about this in 2005, here and at Scientific American, proposals to eliminate them were already on the table at the International Telecommunications Union. That year's vote deferred the decision to the ITU's 2015 World Radiocommunication Conference - noted here in 2014 - which duly deferred it again to 2023. Hence the present revival.

Meta is pushing the idea of "smearing" the leap second over 17 hours, which sounds like the kind of magic technology that was supposed to solve the Northern Ireland-Brexit conundrum. Personally, I'm for the astronomers and physicists; as the pandemic, the climate, and the war remind, it's unwise to forget our dependence on the natural world. Prediction: the 2023 meeting will defer it again because the two sides will never agree. Different people need different kinds of time, and that's how it is.
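Meta's actual smearing algorithm isn't specified in the column; a minimal sketch of the general idea, assuming a simple linear smear across a 17-hour window, might look like this: instead of inserting one extra second at midnight, the reported clock runs fractionally slow for the whole window so the second is absorbed gradually.

```python
# Rough sketch of "smearing" a leap second: the clock absorbs one
# extra second gradually across a smear window instead of jumping.
# A linear smear over a 17-hour window is assumed here, following
# Meta's proposal; real implementations differ in window and curve.

WINDOW = 17 * 3600   # smear window in real (atomic) seconds: 61,200
LEAP = 1.0           # one positive leap second to absorb

def smeared_offset(elapsed):
    """Offset (seconds) applied to the reported clock after `elapsed`
    real seconds into the smear window, using a linear ramp."""
    if elapsed <= 0:
        return 0.0           # smear not yet started
    if elapsed >= WINDOW:
        return LEAP          # full second absorbed
    return LEAP * elapsed / WINDOW

print(smeared_offset(0))            # 0.0 - smear not started
print(smeared_offset(WINDOW / 2))   # 0.5 - halfway, half a second absorbed
print(smeared_offset(WINDOW))       # 1.0 - smear complete
```

Every machine in a data center can agree on this curve - which is precisely the synchronization headache the smear is meant to solve, and precisely what astronomers object to: for 17 hours, the smeared clock matches neither UTC nor the Earth.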


The problem with robots and AIs is that they expect consistency humans rarely provide. This week, a chess-playing robot broke a seven-year-old's finger during a game in the Moscow Chess Open when the boy began his move faster than it was programmed to expect. As Madeline Claire Elish predicted in 2016 in positing moral crumple zones, the tournament organizer seemed to blame the child for not giving the robot enough time. Autonomous vehicle, anyone?


And finally: remaining a meme almost 30 years after its first publication in The New Yorker is Peter Steiner's cartoon of a dog at a computer telling another dog, "On the Internet no one knows you're a dog". It's a wonderful wish-it-were-truth. But it was dubious even in 1993, when most online contacts were strangers who could, theoretically, safely assume fake identities. However, it's hard to lie consistently over a period of time, and even harder to disguise fundamental characteristics that shape life experience. Today's surveillance capitalism would spot the dog immediately - but its canine nature would be obvious anyway from its knee-level world view. On the Internet everyone always knew you were a dog - they just didn't use to care.

Illustrations: US Senator Josh Hawley (R-MO), running to expand the public domain.


July 8, 2022

Orphan consciousness

icelandverse.pngWhat if, Paul Bernal asked late in this year's Gikii, someone uploaded a consciousness and then we forgot where we got it from? Taking an analogy from copyrighted works whose owners are unknown - orphan works, an orphan consciousness. What rights would it have? Can it commit crimes? Is it murder to erase it? What if it met fellow orphan consciousness and together they created a third? Once it's up there without a link to humanity, then what?

These questions annoyed me less than proposals for robot rights, partly because they're more obviously a thought experiment, and partly because they specifically derived from Greg Daniels' science fiction series Upload, which inspired many of this year's gikii presentations. The gist: Nathan (Robbie Amell), whose lung is collapsing after an autonomous vehicle crash, is offered two choices: take his chances in the operating room, or have his consciousness uploaded into Lakeview, a corporately owned and run "paradise" where he can enjoy an afterlife in considerable comfort. His girlfriend, Ingrid (Allegra Edwards), begs him to take the afterlife, at her family's expense. As he's rushed into signing the terms and conditions, I briefly expected him to land at the waystation in Albert Brooks' 1991 film Defending Your Life.

Instead, he wakes in a very nice country club hotel where he struggles to find his footing among his fellow uploaded avatars and wrangle the power dynamics in his relationship with Ingrid. What is she willing to fund? What happens if she stops paying? (A Spartan 2GB per day, we find later.) And, as Bernal asked, what are his neurorights?

Fiction, as Gikii proves every year (2021), provides fully-formed use cases through which to explore the developing ethics and laws surrounding emergent technologies. For the current batch - the Digital Markets Act (EU, passed this week), the Digital Services Act (ditto), the Online Safety bill (UK, pending), the Platform Work Directive (proposed, EU), the platform-to-business regulations (in force 2020, EU and UK), and, especially, the AI Act (pending, EU) - Upload couldn't be more on point.

Side note: in-person attendees got to sample the Icelandverse, a metaverse of remarkable physical reality and persistence.

Upload underpinned discussions of deception and consent laws (Burkhard Schäfer and Chloë Kennedy), corporate objectification (Mauricio Figueroa), and property rights - English law bans perpetual trusts. Can uploads opt out? Can they be murdered? Maybe like copyright, give them death plus 70 years?

Much of this has direct relevance to the "metaverse", which Anna-Maria Piskopani called "just one new way to do surveillance capitalism". The show's perfect example: when sex fails to progress, Ingrid yells out, "Tech support!".

In life, Nora (Andy Allo), the "angel" who arrives to help, works in an open plan corporate dystopia where her co-workers gossip about the avatars they monitor. As in this year's other notable fictional world, Dan Erickson's Severance, the company is always watching, a real pandemic-accelerated trend. In our paper, Andelka Phillips and I noted that although the geofenced chip implanted in Severance's workers prevents their work selves ("innies") from knowing anything about their out-of-hours selves ("outies"), their employer has no such limitation. Modern companies increasingly expect omniscience.

Both series reflect the growing ability of cyber systems to effect change in the physical world. Lachlan Urquhart, Lilian Edwards, and Derek McAuley used the science fiction comedy film Ron's Gone Wrong to examine the effect of errors at scale. The film's damaged robot, Ron, is missing safety features and spreads its settings to its counterparts. Would the AI Act view Ron as high or low risk? It may be a distinction without a difference; McAuley reminded us there will always be failures in the field. "A one-bit change can make changes of orders of magnitude." Then that chip ships by the billion, and can be embedded in millions of devices before it's found. Rinse, repeat, and apply to autonomous vehicles.

In Japan, however, as Naomi Lindvedt explained, the design culture surrounding robots has been far more influenced by the rules written for Astro Boy in 1951 by creator Tezuka Osamu than by Asimov's Laws. These rules are more restrictive and prescriptive, and designers aim to create robots that integrate into society and are user-friendly.

In other quick highlights, Michael Veale noted the Deliveroo ads that show food moving by itself, as if there are no delivery riders, and noted that technology now enforces the exclusivity that used to be contractual, so that drivers never see customer names and contact information, and so can't easily make direct arrangements; Tima Otu Anwana and Paul Eberstaller examined the business relationship between OnlyFans and its creators; Sandra Schmitz-Berndt and Paula Contreras showed the difficulty of reporting cyber incidents given the multiple authorities and their inconsistent requirements; Adrian Aronsson-Storrier produced an extraordinary long-lost training video (Super-Betamax!) for a 500-year-old Swedish copyright cult; Helen Oliver discussed attitudes to privacy as revealed by years of UK high school students' entries for a competition to design fictional space stations; and Andy Phippen, based on his many discussions with kids, favors a harm reduction approach to online safety. "If the only horse in town is the Online Safety bill, nothing's going to change."

Illustrations: Image from the Icelandverse (by Inspired by Iceland).


July 1, 2022

Negative externalities

There are plenty of readily available reasons why everything is suddenly so much more expensive: pandemic-blighted supply chains, staff shortages, rising energy prices that push everything else up, the war in Ukraine, monopolistic consolidation that has created a "profits-inflation spiral", per Matt Stoller, and, in the UK, Brexit. But there's another factor also at work: the rising cost of capital.

Throughout the last 15 years of low interest rates, venture capitalists have, by pouring funding into money-losing technology-adjacent companies, been funding what some have called the "millennial lifestyle". I doubt it's limited to millennials; people of all ages have taken advantage of what has been an era of predatory loss-leading pricing intended to undercut the competition until it goes away and they can raise prices.

Amazon did not invent this tactic, but it may have been the first web company to really exploit it. It lost money the first five years it was a public company, and again at other times in its history. Cheap prices were an important part of getting people to use the site; Bezos famously chose its Seattle location to avoid sales taxes on the books it began with. As long ago as 2014, however, people had begun warning that it was now often the more expensive option. And, these days, its search results are full of clutter, ads, "sponsored products", and weird brand names.

I began using Amazon so early in its history that I have an insulated mug the company sent its customers one mid-1990s Christmas. These days, I sometimes go for months at a time without using it.

It's not easy because, as "honest broker" Ted Gioia points out, the long tail Chris Anderson touted in 2004, first in a Wired article and then in a book, doesn't really work. Instead of niche products dominating the market, we continue to have blockbusters and what Gioia calls the "short tail". Companies like Netflix and Amazon, who made their names selling the widest possible range, have since narrowed their offerings. (As Gioia doesn't say, in its early days Amazon didn't actually have warehouses full of every possible book title; it let the distributor Ingram do that, and sent runners over to collect copies of obscure titles when they were ordered. Now, the long tail is often handled by third-party merchants in its Marketplace.)

As Gioia concludes, the 80/20 rule won and kept winning - which also means that 20 percent of online retailers do 80 percent of the business, and occupy 80 percent of the search listings, and that 20 percent becomes harder and harder to find.

But back to the "millennial lifestyle". "If you wake up on a Casper mattress, work out with a Peloton before breakfast, Uber to your desk at a WeWork, order DoorDash for lunch, take a Lyft home, and get dinner through Postmates, you've interacted with seven companies that will collectively lose nearly $14 billion this year," Derek Thompson wrote at The Atlantic in 2019 just after the WeWork crash. Thompson went on to predict that WeWork's example was going to make venture capitalists much less willing to finance all that free living in future.

Last week, he published an update, noting that while the combination of spiking energy and labor costs is getting all the headlines, rising prices among the "millennial lifestyle" companies are also part of why life feels so much more expensive for urbanites. Tl;dr: those companies can't afford the subsidy any longer. Rising interest rates surely play a part, too, particularly for a company like Netflix, which used easy access to cheap money to acquire substantial debt with which to finance building its own content library. It didn't have much choice, since it was inevitable that eventually content producers like Disney and the legacy broadcast networks would want to reserve their content for their own streaming services. Now, however, with subscriber numbers under pressure from cost-of-living decisions, its prices are going up and it's adding an advertising-supported tier.

At the New York Times, Kevin Roose reports the same experience as Thompson: "For years, these subsidies allowed us to live Balenciaga lifestyles on Banana Republic budgets." Today...well, less $16 for an Uber ride across greater Los Angeles, more $250 to get from midtown Manhattan to JFK airport. (Pro tip: there's an express bus from just outside Grand Central station that runs every 30 minutes and gets you there in under an hour for $19.)

The startup extravagance Roose describes - his used car was delivered by a white-gloved valet and adorned with a giant bow - is utterly 1999, when startups recklessly burned through their all-too-easily-raised capital by installing in-office chefs and TGIF bartenders. We know what happened to that: market collapse, followed by more sensible burn rates. WeWork provided a similar, but much crazier, cautionary tale, which Stoller dubbed "counterfeit capitalism".

This approach was never going to be sustainable. So now these services - Stoller lists Bird, Lyft, and Uber (which transport industry expert Hubert Horan notes has lost $31 billion over its lifetime) - are being forced to adopt realistic pricing. In the long run, hopefully it will improve competition and be better for the workers in those industries. For right now, though, it's going to hurt.



June 10, 2022

Update needed

In public discussions of Internet governance, only two organizations feature much: the Internet Corporation for Assigned Names and Numbers, founded in 1998, and the Internet Governance Forum, set up in 2005. The former performs the crucial technical role of ensuring that the domain name system, which allows humans to enter a word-like Internet address and computers to translate and route it to a numbered device, continues to function correctly. The second...well, it hosts interesting conferences on Internet governance.

Neither is much known to average users, who would probably guess the Internet is run by one or more of the big technology companies. Yet they're the best-known of a clutch of engineering-led organizations that set standards and make decisions that affect all of us. In 2011, the Economist described the Internet as shambolically governed (yet concluded that multistakeholder "chaos" is preferable to the alternative of government control).

In a report for the Tony Blair Institute, journalist and longstanding ICANN critic Kieren McCarthy argues that much of Internet governance as currently practiced needs modernization. This is not about the application-layer debates such as content moderation and privacy that occupy the minds of rights activists and governments. Instead, McCarthy is considering the organizations that devised and manage the technical underpinnings that most people ignore. These things matter; the fact that any computer can join the Internet and set up a service without asking anyone's permission, or that a website posted in 1995 remains readable, is due to the efforts of organizations like the Internet Engineering Task Force, the Internet Architecture Board, the Internet Society, the World Wide Web Consortium (W3C), and so on. And those are just part of the constellation of governance organizations, well-known compared to the Regional Internet Registries or the tiny group of root server operators.

As unknown as these organizations are to most people (even W3C is vastly less famous than its founder, Tim Berners-Lee), they still have decisive power over the Internet's development. Shortly after February's Russian invasion, a Ukrainian minister asked ICANN to block Internet traffic to and from Russia. ICANN, prioritizing the openness, interconnectedness, and unity of the global network, correctly said no. But note: ICANN, whose last ties to the US government were severed in 2016, made its decision without consulting either governments or a United Nations committee.

McCarthy's main points: these legacy organizations do not coordinate their efforts; they lack strategy beyond maintaining and evolving the network as it stands; they are internally disorganized; and they are increasingly resistant to new ideas and new participants. They are "essential to maintaining a global, interoperable Internet" - yet McCarthy finds a growing list of increasingly contentious topics and emerging technologies that escape the current ecosystem: censorship, content moderation, AI, web3 and blockchain, and privacy and data protection. If these organizations don't rise to those occasions, governments will seek to fill the gap, most likely creating a more fragmented and less functional network. Even now this happens in small ways: four years after the EU's GDPR came into force many US media sites still block European readers rather than find a compliant way to serve us.

From the beginning, ensuring that the technical organizations remain narrowly focused has been seen as essential. See for example the critics who monitored ICANN's development during its first decade, suspicious that it might stray into enforcing government-mandated censorship.

The guiding principles of new governments are always based on a threat model. The writers of the US Constitution, for example, feared the installation of a king and takeover by a foreign country (England). Internet organizations' threat model also has two prongs: first, fragmentation, and second, takeover by governments, specifically the International Telecommunication Union, the United Nations agency that manages worldwide telecommunications and which regards itself as the Internet's natural governor. Internet pioneers still believe there could be no worse fate, citing decades of pre-Internet stagnation in the fully-controlled telephone networks.

The ITU has come sort-of-close several times: in 1997, when widespread opposition led instead to ICANN's creation; in the early 2000s, when the World Summit on the Information Society instead created the IGF; and in 2012, when a meeting to update the ITU's regulations led many, including the Trades Union Congress, to fear a coup. Currently, concern that governments will carve things up surrounds negotiations over cybersecurity.

The approach that created today's multistakeholder organizations is, however, just one of four that University of Southampton professors Wendy Hall and Kieron O'Hara examine in their 2021 book, The Four Internets, and find are being contested. Our legacy version they dub the "open Internet", and connect it with San Francisco and libertarian ideology. The other three: the "bourgeois Brussels" Internet that the EU is trying to regulate into being with laws like the Digital Services Act, the AI Act, and the Digital Markets Act; the commercial ("DC") Internet; and the "paternalistic" Internet of countries like China and Russia, who want to ringfence what their citizens can access. Any of them, singly or jointly, could lead to the long-feared "splinternet".

McCarthy concludes that the threat now is that Internet governance as practiced to date will fail through stagnation. His proposal is to create a new oversight body which he compares to a root server that provides coordination and authoritative information. Left for another time: who? And how?



May 13, 2022

False economy

Thumbnail image for coyote-roadrunner-cliff.pngThis week, every cryptocurrency was unhappy in its own way. It has not been a good year for cryptocurrency speculators in general, but Wednesday was a disaster: almost all "major" cryptocurrencies crashed by about 25%, and even venerable bitcoin dropped by 14% (although it is still twice its 2017 peak). Which sounds great until you realize that on November 10, 2021 people *bought* bitcoin for $68,789 and El Salvador has been "buying the dip" all year.

Especially notable were the losses among cryptocurrencies intended to stay pegged to the US dollar - "stablecoins" - which fell off a cliff, pricewise. One previously unfamiliar "stablecoin", Luna, dropped 99.7%, leading some posters in the Terraluna subReddit to post suicide helpline numbers.

Do not gloat. Heed Hamilton Nolan's warning at In These Times about the dangers when a class of young (mostly) men who hate government become angry, bitter, and hopeless.

First: what happened? You can value a company, as Warren Buffett does, by studying it: its business, market sector, competitors, financial stability, and prospects. There's always some element of uncertainty. New managers could derail the company (Boeing), new, well-funded competitors could enter the field (Netflix), new technology could overrun its business model, or it could be lying about its revenues - er, painting a rosier picture than is actually merited by the facts. If you have the mad skillz of Buffett (and his professor, Benjamin Graham), thinking through all that should lead you to a reasonable purchase price, and not overpaying allows you to profit from your investment at relatively modest risk.

However, a cryptocurrency is not a business, and it has no real-world usefulness. Like gold, which Buffett has never liked, it costs money to hold, it produces nothing, and, "You can fondle it, but it will not respond". But at least gold has some industrial uses. Cryptocurrencies have none; they are the currency equivalent of being famous for being famous, held aloft only through fear, greed, and mythology. In any crisis, toilet paper, chocolate, cigarettes, booze, or toothpaste are all more useful currencies.

Luna is the most interesting. Here's how Coindesk describes its collapse: "A change in market dynamics caused Luna prices to snap at a breakneck pace. Luna plummeted through several support levels as terraUSD (UST), a Terra-issued stablecoin that's meant to be priced 1:1 to the U.S. dollar, lost its peg."

Let's pick this apart. "Market dynamics" could simply mean "interest rates are going up", which drives money away from the riskiest assets, which sets off a cycle of selling.

"Support levels" is a term for a tealeaves-reading approach to stock market pricing called technical analysis. Proponents believe that the shapes of price charts over time have significance in and of themselves. It has nothing to do with underlying value, Effectively, the fundamental claim is that past performance predicts future results, the exact opposite of what every financial product is required to tell prospective buyers. It would be complete nonsense, *except* that so many people believe in it that those patterns really do move markets, at least short-term. So "breaking support levels" becomes "let's panic and sell, ferchrissake!"

HowToGeek tells us that UST is the stablecoin on the Terra blockchain. Terra is a company providing "programmable money for the Internet", and its blockchain "brings DeFi to the masses". DeFi is short for decentralized finance, and its appearance means we're entering web3 territory - the folks who want to reclaim the Internet through redecentralization. Let's leave that part aside for today.

Traditionally (!) what makes a stablecoin stable is that for every coin (for example, Tether, which also slipped, to $0.95) its issuer holds an actual $1 in its reserves. However, it turns out there is a *second* type of stablecoin, which is backed by an algorithm rather than an asset representing some government's full faith and credit.

So the UST "stablecoin" is pegged to Terra's Luna stablecoin, and the idea is that an algorithm - a smart contract - keeps them pegged to each other by buying, selling, and converting them so they both reliably stay at a value of about $1. This is the theory.

It *sounds* like a folie à deux - that is, a shared delusion in which the partners reinforce each other's belief but neither leads the other closer to any form of outside reality. Apparently enough people distrust governments so much that algorithm! seems appealing and five weeks ago Luna's market cap was $39 billion more than it is now. Yes, money is flowing away from stock market risk, too, but more slowly for the reasons outlined above. A chart at the Motley Fool shows clearly that cryptocurrencies aren't a useful hedge against this.

Bottom line: algorithms do not make a coin stable, and if you don't understand what you're buying, don't buy it.

None of this means cryptocurrencies are finished. It doesn't make them good "investments" to "buy on the dip", either. It's just one more piece of mess in an ongoing, expanding experiment that has been highly profitable for a few people, and rife with fraud and market manipulation for many more. Just say no.

Illustrations: Wile E. Coyote makes the mistake of looking down as he runs off the edge of a cliff.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

February 4, 2022

Consent spam

openRTB.pngThis week the system of adtech that constantly shoves banners in our face demanding consent to use tracking cookies was ruled illegal by the Belgian Data Protection Authority, leading 28 EU data protection authorities. The Internet Advertising Bureau, whose Transparency and Consent Framework formed the basis of the complaint that led to the decision, now has two months to redesign its system to bring it into compliance with the General Data Protection Regulation.

The ruling marks a new level of enforcement that could begin to see the law's potential fulfilled.

Ever since May 2018, when GDPR came into force, people have been complaining that so far all we've really gotten from it is bigger! worse! more annoying! cookie banners, while the invasiveness of the online advertising industry has done nothing but increase. In a May 2021 report, for example, Access Now examined the workings of GDPR and concluded that so far the law's potential had yet to be fulfilled and daily violations were going unpunished - and unchanged.

There have been fines, some of them eye-watering, such as Amazon's 2021 fine of $877 million for its failure to get proper consent for cookies. But even Austrian activist lawyer Max Schrems' repeated European court victories have so far failed to force structural change, despite requiring the US and EU to rethink the basis of allowing data transfers.

To "celebrate" last week's data protection day, Schrems documented the situation: since the first data protection laws were passed, enforcement has been rare. Schrems' NGO, noyb, has plenty of its own experience to draw on. Of the 51 individual cases noyb has filed in Europe since its founding in 2018, only 15% have been decided within a year, none of them pan-European. Four cases filed with the Irish DPA in May 2018, the day after GDPR came into force, have yet to be given a final decision.

Privacy International, which filed seven complaints against adtech companies in 2018, also has an enforcement timeline. Only one, against Experian, resulted in an investigation, and even in that case no action has been taken since Experian's appeal in 2021. A recent study of diet sites showed that they shared the sensitive information they collect with unspecified third parties, PI senior technologist Eliot Bendinelli told last week's Privacy Camp. PI's complaint is yet to be enforced, though it has led some companies to change their practices.

Bendinelli was speaking on a panel trying to learn from GDPR's enforcement issues in order to ensure better protection of fundamental rights from the EU's upcoming Digital Services Act. Among the complaints with respect to GDPR: the lack of deadlines to spur action and inconsistencies among the different national authorities.

The complaint at the heart of this week's judgment began in 2018, when Open Rights Group director Jim Killock, UCL researcher Michael Veale, and Irish Council for Civil Liberties senior fellow Johnny Ryan took the UK Information Commissioner's Office to court over the ICO's lack of action regarding real-time bidding, which the ICO itself had found illegal under the UK's Data Protection Act (2018), the UK's post-Brexit GDPR clone. In real-time bidding, your visit to a participating web page launches an instant mini-auction to find the advertiser willing to pay the most to fill the ad space you're about to see. Your value is determined by crunching all the data the site and its external sources have or can get about you.
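Stripped of the OpenRTB protocol details, the auction at the core of real-time bidding fits in a few lines of Python. This sketch uses a second-price design, one common auction form (the advertiser names and bid values are invented for illustration; real exchanges run structured bid requests and responses in around 100 milliseconds):

```python
# Minimal sketch of a real-time bidding auction (illustrative only; real
# OpenRTB exchanges use a far richer protocol than a dict of bids).

def run_auction(bids: dict) -> tuple:
    """Second-price auction: the highest bidder wins the ad slot
    but pays only the runner-up's bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, top_bid = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else top_bid
    return winner, price

# Each bid is computed from the profile data that advertiser holds about you.
bids = {"AdCo": 0.40, "TrackerAds": 0.65, "BrandX": 0.55}
winner, price = run_auction(bids)
print(winner, price)  # TrackerAds 0.55
```

The point of the complaint is not the auction itself but the personal data broadcast to every bidder so they can compute those numbers.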

If all this sounds like it oughtta be illegal under GDPR, well, yes. Enter the IAB's TCF, which extracts your permission via those cookie consent banners. With many of these, dark-pattern design makes "consent" instant and rejection painfully slow. The Big Tech sites, of course, handle all this by using logins; you agree to the terms and conditions when you create your account and then you helpfully forget how much they learn about you every time you use the site.

In December 2021, the UK's Upper Tribunal refused to require the ICO to reopen the complaint, though it did award Killock and Veale concessions they hope will make the ICO more accountable in future.

And so back to this week's judgment that the IAB's TCF, which is used on 80% of the European Internet, is illegal. The Irish DPA is also investigating Google's similar system, as well as Quantcast's consent management system. On Twitter, Ryan explained the gist: cookie-consent pop-ups don't give publishers adequate user consent, and everyone must delete all the data they've collected.

Ryan and the Open Rights Group also point out that the judgment spikes the UK government's claim that revamping data protection law is necessary to get rid of cookie banners (at the expense of some of the human rights enshrined in the law). Ryan points to DuckDuckGo as an example of the non-invasive alternative: contextual advertising. He also observed that all that "consent spam" makes GDPR into merely "compliance theater".

Meanwhile, other moves are also making their mark. Also this week, Facebook (Meta)'s latest earnings showed that Apple's new privacy controls, which let users opt out of tracking, will cost it $10 billion this year. Apparently 75% of Apple users opt out.

Moral: given the tools and a supportive legal environment, people will choose privacy.

Illustrations: Diagram of OpenRTB, from the Belgian decision.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 14, 2022

The visible computer

Windows_Xp_of_Medea.JPGI have a friend I would like to lend anyone who thinks computers have gotten easier in the last 30 years.

The other evening, he asked how to host a Zoom conference. At the time, we were *in* a Zoom call, and I've seen him on many others, so he seemed competent enough.

"Do you have a Zoom account?" I said.

"How do I get that?"

I directed him to the website. No, not the window with our faces; that's the client. "Open up - what web browser do you use?"

"Er...Windows 10?"

"That's the computer's operating system. What do you use to go to a website?"


Did he know how to press ALT-TAB to see the open windows on his system? He did not. Not even after instruction.

But eventually he found the browser, Zoom's website, and the "Join" menu item. He created a password. The password didn't work. (No idea.) He tried to reset the password. More trouble. He decided to finish it later...

To be fair, computers *have* gotten easier. On a 1992 computer, I would have had to write my friend a list of commands to install the software, and he'd have had to type them perfectly every time and learn new commands for each program's individual interface. But the comparative ease of use of today's machines is more than offset by the increased complexity of what we're doing with them. It would never have occurred to my friend even two years ago that he could garnish his computer with a webcam and host video chats around the world.

I was reminded of this during a talk on new threats to privacy that touched on ubiquitous computing and referenced the 1991 paper The Computer for the 21st Century, by Mark Weiser, then head of the famed Xerox PARC research lab.

Weiser imagined the computer would become invisible, a theme also picked up by Donald Norman in his 1998 book, The Invisible Computer. "Invisible" here means we stop seeing it, even though it's everywhere around us. Both Weiser and Norman cited electric motors, which began as large power devices to which you attached things, and then disappeared inside thousands of small and large appliances. When computers are everywhere, they will stop commanding our attention (except when they go wrong, of course). Out of sight, out of mind - but in constant sight also means out of mind because our brains filter out normal background conditions to focus on the exceptional.

Weiser's group built three examples, which they called tabs (inch-scale), pads (foot-scale), and boards (yard-scale). His tabs sound rather like today's tracking tags. Like the Active Badges at Olivetti Research in Cambridge they copied (the privacy implications of which horrified the press at the time), they could be used to track people and things, direct calls, automate diary-keeping, and make presentations and research portable throughout the networked area. In 2013, when British journalist Simon Bisson revisited this same paper, he read them more broadly as sensors and effectuators. Pads, in Weiser's conception, were computerized sheets of "scrap" paper to be grabbed and used anywhere and left behind for the next person. Weiser called them an "antidote to windows", in that instead of cramming all programs into a window you could spread dozens of pads across a full-sized desk (or floor) to work with. Boards were displays, more like bulletin boards, that could be written on with electronic "chalk" and shared across rooms.

"The real power of the concept comes not from any one of these devices; it emerges from the interaction of all of them," Weiser wrote.

In 2013, Bisson suggested Weiser's "embodied virtuality" was taking shape around us as sensors began enabling the Internet of Things and smartphones became the dominant interface to the Internet. But I like Weiser's imagined 21st century computing better than what we actually have. While cloud services can make our devices more or less interchangeable as long as we have the right credentials, that only works if broadband is uninterruptedly reliable. But even then, has anyone lost awareness of the computer - phone - in their hand or the laptop on their desk? Compare today to what Weiser thought would be the case 20 years later - which would have been 2011:

Most important, ubiquitous computers will help overcome the problem of information overload. There is more information available at our fingertips during a walk in the woods than in any computer system, yet people find a walk among trees relaxing and computers frustrating. Machines that fit the human environment, instead of forcing humans to enter theirs, will make using a computer as refreshing as taking a walk in the woods.

Who feels like that? Certainly not the friend we began with. Even my computer expert friends seem one and all convinced that their computers hate them. People in search of relaxation watch TV (granted, maybe on a computer), play guitar (even if badly), have a drink, hang with friends and family, play a game (again, maybe on a computer), work out, take a bath. In fact, the first thing people do when they want to relax is flee their computers and the prying interests that use them to spy on us. Worse, we no longer aspire to anything better. Those aspirations have all been lost to A/B testing to identify the most profitable design.

Illustrations: Windows XP's hillside wallpaper (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 7, 2022


Winnie-the-Pooh-north pole_143.pngWe start 2022 with some catch-ups.

On Tuesday, the verdict came down in the trial of Theranos founder Elizabeth Holmes: guilty on four counts of wire fraud, acquitted on four counts, jury hung on three. The judge said he would call a mistrial on those three, but given that Holmes will already go to prison, expectations are that there will be no retrial.

The sad fact is that the counts on which Holmes was acquitted were those regarding fraud against patients. While investment fraud should be punished, the patients were the people most harmed by Theranos' false claims to be able to perform multiple accurate tests on very small blood samples. The investors whose losses saw Holmes found guilty could by and large afford them (though that's no justification). I know the $350 million collectively lost by Trump education secretary Betsy DeVos, Rupert Murdoch, and the Cox family is a lot of money, but it's a vanishingly tiny percentage of their overall wealth (which may help explain DeVos family investment manager Lisa Peterson's startlingly casual approach to research). By contrast, for a woman who's already had three miscarriages, the distress of being told she's losing a fourth, despite the eventual happy ending, is vastly more significant.

I don't think this case by itself will make a massive difference in Silicon Valley's culture, despite Holmes's prison sentence - how much did bankers change after the 2008 financial crisis? Yet we really do need the case to make a substantial difference in how regulators approach diagnostic devices, as well as other cyber-physical hybrid offerings, so that future patients don't become experimental subjects for the unscrupulous.


On New Year's Eve, Mozilla, the most important browser that only 3% of the market uses, reminded people it accepts donations in cryptocurrencies through Bitpay. The message set off an immediate storm, not least among two of the organization's co-founders, one of whom, Jamie Zawinski, tweeted that everyone involved in the decision should be "witheringly ashamed". At The Register, Liam Proven points out that it's not new for Mozilla to accept cryptocurrencies; it's just changed payment providers.

One reason to pay attention to this little fiasco is that while Mozilla (and other Internet-related non-profits and open software projects) appeal greatly to the same people who care about the environment and believe that cryptocurrency mining is wasteful and energy-intensive and deplore the anti-government rhetoric of its most vocal libertarian promoters, the richest people willing to donate to such projects are often those libertarians. Trying to keep both onside is going to become increasingly difficult. Mozilla has now suspended its acceptance of cryptocurrencies to consider its position.


In 2010, fatally frustrated with Google, I went looking for a replacement search engine and found DuckDuckGo. It took me a little while to get the hang of formulating successful queries, but both it and I got better. It's a long time since I needed to direct a search elsewhere.

At the time, a lot of people thought it was bananas for a small startup to try to compete against Google. In an interview, founder Gabriel Weinberg explained that the decision had been driven by his own frustration with Google's results. Weinberg talked most about getting to the source you want more efficiently.

Even at that early stage, embracing privacy was part of his strategy. Nearly 12 years on from the company's founding, its 35.3 billion searches last year - up 46% from 2020 - remain a rounding error compared to the billions Google handles every day. But the company continues to offer things I actually want. I have its browser on my phone, and (despite still having a personal email server) have signed up for one of its email addresses because it promises to strip out the extensive tracking inserted into many email newsletters. And all without having to buy into Apple's ecosystem.

Privacy has long been a harder sell than most privacy advocates would like to admit, usually because it involves giving up a lot of convenience to get it. In this case, it's easy. So far.


Never doubt that tennis is where cultural clashes come home to roost. Tennis had the first transgender athlete; it was at the forefront of second-wave feminism. Now, as even people who *aren't* interested in tennis have seen, it is the foremost venue for science versus anti-science: the clash between vaccine mandates and anti-vaxx refuseniks. Result: the men's world number one, Serbian player Novak Djokovic (and, a day later, doubles specialist Renata Voracova), was diverted to a government quarantine hotel room like any non-famous immigrant awaiting deportation.

Every tennis watcher saw this coming months ago. On one side, Australian rules; on the other, a tennis tournament that apparently believed it could accommodate a star's balking at an immigration requirement as unyieldingly binary as pregnancy or the Northern Ireland protocol.

Djokovic is making visible to the world a reality that privacy advocates have been fighting to expose: you have no rights at borders. If you think Djokovic, with all his unique resources, should be getting better treatment, then demand better treatment for everyone, legal or illegal, at all borders, not just Australia's.

Illustrations: Winnie the Pooh, discovering the North Pole, by Ernest Howard Shepard, finally in the public domain (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

December 31, 2021

That was the year that wasn't

dumpster2020-doubleecreations.jpg"It's not the despair," John Cleese's character in Clockwise moans as he lies by the side of the road, frustrated. "I can handle the despair. It's the *hope*."

Two years ago at this time, we were seeing the first reports of a virus we hoped would not affect us. Last year at this time, after much grief and isolation, there was hope: vaccines! This year, many of us are vaccinated and we have some new treatments for this virus - but we are nonetheless facing a surge of a variant so contagious that normally-unflappable scientists sound frightened and holes are appearing in services we take for granted because so many people are either sick or isolating.

The result is that two years on we have vastly better tools and yet it feels like we've gotten nowhere after a fall when many had begun to believe it was nearly over. Last week, the Guardian reported that 4% of the UK population had covid. In the US, there were officially 344,000 new cases, a figure whose accuracy is hard to assess given the fragmented patchwork of US health care.

Yet we hope as we hoped last year: that this time *next* year, thanks to moves like the patent-free release and technology transfer of a new vaccine from Texas Children's Hospital, maybe the world will be far more widely vaccinated and maybe we'll be starting to see the end of this thing. (Someday a child in a history class, reading this and knowing what happened next, may laugh...)


The computers, freedom, and privacy story for 2021 has had a lot of similarities with the covid story. We have much better tools, in the form of a US Federal Trade Commission led by noted antitrust reformer Lina Khan; the EU's power to issue fines over violations of the General Data Protection Regulation that are large enough to feature in companies' annual reports; sites like The Markup that are producing clever, technically-informed journalism that imposes transparency on companies in ways they don't like; and hosts of disaffected employees within those companies who are unionizing, leaking documents, and blowing the whistle generally.

And yet, so far nothing has really changed in any structural way. We've had surface tweaks. Twitter has banned posting people's pictures without their consent. Facebook's Oversight Board began operations, appearing to be composed of good people who don't want to be used as plausible diversions but are limited in their power to effect change.

All of that is on top of the story you could tell most years: governments are increasingly pushing for censorship of various kinds. Outages that should have been contained to single companies turned out to have knock-on effects all over the place. And the biggest companies - especially but not only Facebook - are seeing an increasing drumbeat pushing toward regulation, taxation, reformed and increased antitrust enforcement. Worse (from their point of view), their own employees are increasingly leaking documents and telling the world that some of our worst paranoid fantasies about how they operate are true.

So far, the only concrete punishment has been large fines relating to violations of either privacy law or competition law. In September, the EU fined WhatsApp $267 million for a lack of transparency about how it shares user data with other Meta subsidiaries such as Facebook. In November, Google lost its appeal against the EU's 2017 eyewatering fine of $2.8 billion over illegally favoring its own sites in shopping recommendations. In July, Amazon's annual report revealed an EU fine of $877 million relating to cookie consent. In November, Italy fined Amazon ($77.4 million) and Apple ($151.3 million) for antitrust violations.

However, a new development: Russia has issued revenue-based fines against Google ($100 million) and Facebook ($27 million) for failing to remove banned content - chiefly apps, sites, posts, and videos relating to jailed opposition leader Alexei Navalny and his allegations of corruption at the top of Russian government. We've seen government censorship many times before; a fine this big seems to mark a new escalation.

This may be only the beginning; the UK's proposed Online Safety bill includes a provision for fines of up to £18 million or 10% of global turnover. Other new rules may be coming.

As we start 2022, the entertainment industry - or at least Snoop Dogg and Paris Hilton - appears to be colonizing the "metaverse", which still sounds to me like any of a dozen things we already have: Second Life, or any number of game worlds.

Similarly, I can't see non-fungible tokens as the revolutionary concept some people seem to believe, at least as they have been used to date. I believe that with very few exceptions they will not improve the economic lot of starving artists. Based on personal experience on the commercially-scorned folk scene, what matters is building and keeping an audience. I do not see how NFTs will help you do that.

But, as finance futurist Dave Birch pointed out in 2020, digital currencies are being explored by serious people such as the Bank of England and central banks in China, Mexico, India, and many more. That is going to matter.

Happy new year.

Illustrations: Etsy seller DoubleECreations' 2021 dumpster fire Christmas ornaments.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

December 17, 2021

Dependencies at scale

xkcd-dependency.pngIt's the complexity that's going to get us. (We're talking cyber system failures, not covid!)

In the 1990s and early 2000s Internet pundits used to have a fun game: what was going to kill the Internet? Or, what was going to kill the Internet *next*? The arrival of the web, which brought a much larger user base and data-hungry graphics (data-hungry compared to text; obviously much worse was to come), nearly did it for a bit, which is why a lot of us called it the "World Wide Wait".

Here's one example, a net.wars from 2002, based on a panel from the 1998 Computers, Freedom, and Privacy conference: 50 ways to crash the net. The then-recent crisis that had inspired the panel was a denial-of-service attack on the 13 root servers that form the heart of the domain name system. But also: the idea was partly suggested by a Wired article by Simson Garfinkel about how to crash the Internet, based on both the root server incident and another in which a construction crew in Virginia sliced through a crucial fiber optic cable. As early as that, Garfinkel blamed centralization and corporatization; the "Internet" that was built to withstand a bomb outage was the old military Internet, not the commercial one built on its bones.

But that's not what's going to get us. People learn! People fix things! In fact, experts tell me, the engineering that underlies the Internet is nothing like it was even ten years ago. "The Internet" as an engineer would talk about it is remarkably solid and robust. When the rest of us sloppily complain about "the Internet" what we mean is buggy software, underfunded open source projects that depend on one or a few overworked people but underpin software used by billions, human error, database leaks, sloppy security policies, corporate malfeasance, criminal attacks, failures of content moderation on Facebook, and power outages. When these factors come into play and connections break, "the Internet" is actually still fine. The average user, however, unable to reach Netflix and finding many other sites also unreachable, interprets the situation as "the Internet is out". It's a mental model issue.

A few months ago, we noted the brittleness of today's "Internet" after an incident in which one person made a perfectly ordinary configuration change that should have done nothing more than alter the settings on their account and instead set off a cascade of effects that knocked out a load of other Internet services. Also right around then, a ransomware attack using a leaked password and a disused VPN account led to corporate anxiety that shut down the Colonial Pipeline, leading to gas shortages up and down the US east coast. These were not outages of "the Internet", but without the Internet they would not have happened.

This year is ending with more such issues. Last week, Amazon Web Services had a service outage in which "unexpected behavior" created a feedback loop of increasing congestion that might as well have been a denial-of-service attack. What followed was an eight-hour lesson in service dependence. Blocked during that time: parts of Amazon's own retail and delivery operations, including Whole Foods; Disney+; Netflix; Internet of Things devices including Amazon Ring doorbells, Roomba vacuum cleaners, and connected cat litter boxes; and the teaching platform Canvas.

Separately but almost simultaneously, a vulnerability now dubbed Log4Shell was reported to the Apache Foundation, which notified the world at large on December 9. The vulnerability is one of a classic type in which a program - in this case popular logging software Log4j - interprets an input data string as an instruction to execute. In this case, as Dan Goodin explains at Ars Technica, the upshot is that attackers can execute any Java code they like on the affected computer. The vulnerability, which has been present since 2013, is all over the place, embedded in systems that run...everything. Within a few days 44% of corporate networks had been probed and more than 60 exploit variants had been developed, with some attacks coming from state actors and criminal hacking groups. As Goodin explains, your best hope is that your bank, brokerage, and favorite online shops are patching their systems right now.
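The class of bug is easy to illustrate without Java: a logger that expands lookup directives found inside the data it is logging. In this Python toy, the `${env:...}` syntax mirrors one real Log4j lookup, but the `naive_log` function itself is invented for illustration; it merely leaks an environment variable, whereas the real Log4Shell `${jndi:...}` lookup went much further, fetching and executing remote code:

```python
import os

def naive_log(message: str) -> str:
    """A toy logger that, like vulnerable Log4j, expands ${env:...} lookups
    wherever they appear in the message - including attacker-supplied fields."""
    while "${env:" in message:
        start = message.index("${env:")
        end = message.index("}", start)
        name = message[start + len("${env:"):end]
        message = message[:start] + os.environ.get(name, "") + message[end + 1:]
    return message

# An attacker puts the lookup in any field the server logs, e.g. a User-Agent.
os.environ["SECRET_TOKEN"] = "hunter2"
print(naive_log("login attempt from ${env:SECRET_TOKEN}"))  # leaks the secret
```

The fix, in Log4j as in this sketch, is to stop treating logged data as anything but inert text.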

The point about all this is that greater complexity breeds more, and more difficult to find and fix, errors. Even many technical experts had never heard of Log4j until this bug appeared. Few would expect a bug in a logging utility to be so broadly dangerous, just as few could predict which major businesses would be taken out by an AWS outage. As Kurt Marko writes at Diginomica, the two incidents show the hidden and unexpected dependencies lurking on today's "Internet". The same permissionlessness that allowed large businesses to start with nothing and scale up means dependencies no one has found (yet). In 2014, shortly after Heartbleed reminded everyone of the dangers of infrastructure dependence on software maintained by one or two volunteers, Farhad Manjoo warned at the New York Times about the risks of just this complexity.

Complexity and size bring dependencies at scale - harder to predict than the weather, in part because software is forever. Humans are not good at understanding scale.

Illustrations: XKCD's classic cartoon, "Dependency".

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

December 9, 2021


rotated-birch-contactlessmonopoly-ttf2016.jpgA few weeks ago, digital rights activist Amie Stepanovich was in the news for making a T-shirt objecting to the new abuse of "crypto" to mean "cryptocurrencies". As Stepanovich correctly says, "crypto" has meant "cryptography" for at least 30 years and old-timers do not appreciate its appropriation. I am enough of an old-timer to agree with her, but fear she's fighting a losing battle. For decades "hackers" meant clever people who bent hardware and software systems to their will. Hackers built the first computers. Hackers made the Internet. "Hacker" was a term of honor, applied by others. And what happened circa the mid-1990s? It was repurposed for petty criminals running scripts to break into websites. Real hackers were furious. Did anyone respond sympathetically? They did not. Hackers are now criminals. So: "Crypto" is doomed. Exhibit A: Jeff John Roberts' 2020 history of Coinbase, Kings of Crypto.

This week, anti-monopolist author Matt Stoller unleashed a rant about "crypto", calling the whole shebang - which for him includes the non-fungible token (NFT) craze, cryptocurrencies, and the blockchain, as well as web3, which we tried to make sense of a couple of weeks ago - "a bunch of bullshit". The only use cases Stoller could find were speculation and money laundering; the tools that exist he dismissed as "don't work". He attributes its anti-monopoly zeitgeist to cryptocurrencies' emergence "out of the financial crisis", adding on Twitter that they were "invented about the same time as the iPhone".

This is when I realized: this use of "crypto" is less evolving language, more loss of culture. We all think the world started when we discovered it.


"Crypto", as in cryptography, is probably as old as humanity, basically because every time someone figures out how to protect a secret someone else tries to crack it. For that history read Simon Singh's The Code Book. The development of the specific type of cryptography the nascent Internet needed, public key cryptography, is thoroughly documented in Steven Levy's Crypto. For cryptography in military communications try David Kahn's The Codebreakers.

Cryptocurrencies, as a digital equivalent of cash, are usually traced to 1991, when David Chaum described ecash in Scientific American. In the mid-1990s, Chaum attempted to commercialize ecash via his company, Digicash.

Nothing was ready. Commercial traffic on the Internet began in 1994, soon followed by the first ecommerce companies: eBay, Amazon, and PayPal. Graphical web browsers were slow and bare-bones. People were afraid to use *credit cards* online. Yet Chaum hoped they would opt to turn their familiar, hard-earned money into his incomprehensible mathematical thing and bet they could find somewhere to buy something with it. The web was too small, the user base was too small, and it was all so strange and clever, way too soon. Chaum was not the only one to discover this sad reality.

This timing was due to the unexpected democratization of cryptography, which began in 1976, when Martin Hellman and Whitfield Diffie published the basis of public key cryptography (later, it emerged that the UK spy agency GCHQ had already developed it, but the mathematicians couldn't tell anybody). Besides allowing strangers to communicate spontaneously in a trustworthy way, Diffie and Hellman's work pulled cryptography out of the spy agencies into entirely new communities. By 1991, a single programmer in his home with a personal computer was able to write a piece of powerful encryption software that anyone could use to protect their data and communications, setting off 30 years of crypto debates. Phil Zimmermann's program, PGP, is still in use today, having withstood the tests cryptanalysts have thrown at it.
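The core of Diffie and Hellman's idea fits in a few lines of arithmetic. Here is a toy sketch with tiny numbers; real parameters are thousands of bits long, and every value below is purely illustrative:

```python
# Toy Diffie-Hellman key exchange. The modulus, generator, and secrets
# here are illustrative only; real deployments use huge primes.
p, g = 23, 5                  # public: prime modulus and generator
a, b = 6, 15                  # private: each party's secret exponent

A = pow(g, a, p)              # Alice publishes g^a mod p
B = pow(g, b, p)              # Bob publishes g^b mod p

# Each side combines the other's public value with its own secret;
# both arrive at the same shared key, which never crosses the wire.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
```

An eavesdropper sees p, g, A, and B, but recovering a or b from them is the discrete logarithm problem, which is what makes the scheme work for strangers who have never met.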

These technical developments inspired the beginnings of the movement and the anti-government motivations that Stoller identifies. To many of this crowd, finding easier and more efficient ways to move money around was only part of the appeal. Many embraced the idea of being able to bypass banks, governments, tax collectors, and all the other trappings of the regulated world by using encryption to create untraceable forms of money. In her 1997 book, Close to the Machine, Ellen Ullman tells the story of her close encounters with one of the 1990s movement's leaders, and their inability to understand each other's world.

Throughout the 1990s these ideas were swapped back and forth on the Cypherpunks mailing list. You can get the gist from this CryptoInsider tribute to Timothy C. May or May's Cyphernomicon. At Computers, Freedom, and Privacy 1997, May outlined BlackNet, an anonymous market for everything from assassinations to government secrets, all enabled by untraceable digital cash. May's information market is so like early Wikileaks that at Wikileaks' inception I failed to take it seriously (Julian Assange has said he read the Cypherpunks list).

However: blockchain-based cryptocurrencies are not untraceable. The 1997 Internet was awash in libertarian predictions, too - and what got built and who's profiting? Sure, some cryptocurrency nuts want to bypass banks and play anti-regulatory games. But some of today's experimenters with cryptocurrencies are central banks, governments, and credit card companies, as fintech expert Dave Birch writes in his book The Cryptocurrency Cold War. If there are winners, they will be the ones claiming most of the spoils. Unless Web3 works out?

Illustrations: Dave Birch, trying to figure out how to play contactless Monopoly.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 29, 2021

Majority report

Frari_(Venice)_nave_left_-_Monument_to_Doge_Giovanni_Pesaro_-_Statue_of_the_Doge.jpgHow do democracy and algorithmic governance live together? This was the central question of a workshop this week on computational governance. This is only partly about the Internet; many new tools for governance are appearing all the time: smart contracts, for example, and AI-powered predictive systems. Many of these are being built with little idea of how they can go wrong.

The workshop asked three questions:

- What can technologists learn from other systems of governance?
- What advances in computer science would be required for computational systems to be useful in important affairs like human governance?
- Conversely, are there technologies that policy makers can use to improve existing systems?

Implied is this: who gets to decide? On the early Internet, for example, decisions were reached by consensus among engineers who all knew each other, funded by hopeful governments. Mass adoption, not legal mandate, helped the Internet's TCP/IP protocols dominate over many other 1990s networking systems: it was free, it worked well enough, and it was *there*. The same factors applied to other familiar protocols and applications: the web, email, communications between routers and other pieces of infrastructure. Proposals circulated as Requests for Comments, and those that found the greatest acceptance were adopted. In those early days, as I was told in a nostalgic moment at a conference in 1998, anyone pushing a proposal because it was good for their company would have been booed off the stage. It couldn't last; incoming new stakeholders demanded a voice.

If you're designing an automated governance system, the fundamental question is this: how do you deal with dissenting minorities? In some contexts - most obviously the US Supreme Court - dissenting views stay on the record alongside the majority opinion. In the long run of legal reasoning, it's important to know how judgments were reached and what issues were considered. You must show your work. In other contexts where only the consensus is recorded, minority dissent is disappeared - AI systems, for example, where the labelling that's adopted is the result of human votes we never see.

In one intriguing example, a panel of judges may rule a defendant is guilty or not guilty depending on whether you add up votes by premise - the defendant must have both committed the crime and possessed criminal intent - or by conclusion, in which each judge casts a final vote and only these are counted. In a small-scale human system the discrepancy is obvious. In a large-scale automated system, which type of aggregation do you choose, and what are the consequences, and for whom?
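The discrepancy is easy to demonstrate. Here is a minimal sketch, assuming three judges and the two premises described above (the votes are illustrative, chosen to show the paradox):

```python
# The "doctrinal paradox": the same judges' votes yield opposite
# verdicts depending on how they are aggregated.
# Each judge votes on two premises: (committed_act, had_intent).
votes = [
    (True, True),    # judge 1: yes on both premises -> guilty
    (True, False),   # judge 2: act yes, intent no   -> not guilty
    (False, True),   # judge 3: act no, intent yes   -> not guilty
]

def majority(bools):
    return sum(bools) > len(bools) / 2

# By premise: take a majority on each premise, then apply the rule.
# Each premise passes 2-1, so the aggregate verdict is guilty.
by_premise = majority([v[0] for v in votes]) and majority([v[1] for v in votes])

# By conclusion: each judge applies the rule, then take a majority.
# Only judge 1 concludes guilty, so the verdict is not guilty.
by_conclusion = majority([v[0] and v[1] for v in votes])

print(by_premise, by_conclusion)   # True False
```

The same inputs, two defensible aggregation rules, two different defendants' fates - which is exactly the choice an automated system's designer makes silently.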

Decentralization poses a similarly knotty conundrum. We talk about the Internet's decentralized origins, but its design fundamentally does not prevent consolidation. Centralized layers such as the domain name system and anti-spam blocking lists are single points of control and potential failure. If decentralization is your goal, the Internet's design has proven to be fundamentally flawed. Lots of us have argued that we should redecentralize the Internet, but if you adopt a truly decentralized system, where do you seek redress? In a financial system running on blockchains and smart contracts, this is a crucial point.

Yet this fundamental flaw in the Internet's design means that over time we have increasingly become second-class citizens on the Internet, all without ever agreeing to any of it. Some US newspapers are still, three and a half years on, ghosting Europeans for fear of GDPR; videos posted to web forums may be geoblocked from playing in other regions. Deeper down the stack, design decisions have enabled surveillance and control by exposing routing metadata - who connects to whom. Efforts to superimpose security have led to a dysfunctional system of digital certificates that average users either don't know is there or don't know how to use to protect themselves. Efforts to cut down on attacks and network abuse have spawned a handful of gatekeepers like Google, Akamai, Cloudflare, and SORBS that get to decide what traffic gets to go where. Few realize how much Internet citizenship we've lost over the last 25 years; in many of our heads, the old cooperative Internet is just a few steps back. As if.

As Jon Crowcroft and I concluded in our paper on leaky networks for this year's Gikii, "leaky" designs can be useful to speed development early on even though they pose problems later, when issues like security become important. The Internet was built by people who trusted each other and did not sufficiently imagine it being used by people who didn't, shouldn't, and couldn't. You could say it this way: in the technology world, everything starts as an experiment and by the time there are problems it's lawless.

So this was the main point of the workshop: how do you structure automated governance to protect the rights of minorities? Slowing decisions to consider the minority report impedes response in emergencies. If you limit Internet metadata exposure, security people lose some ability to debug problems and trace attacks.

We considered possible role models: British corporate governance; smart contracts; and, presented by Miranda Mowbray, the wacky system by which Venice elected a new Doge. It could not work today: it's crazily complex, and impossible to scale. But you could certainly code it.
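Indeed you could. A toy sketch of the 1268 protocol's alternating lot/election stages follows; the stage sizes are the historical ones, but the council size and the election rule (random choice standing in for real balloting) are simplifications of mine:

```python
import random

# The Venetian Doge election alternated drawing lots ("lot") with
# rounds of voting ("elect"): 30 by lot, cut to 9, who elect 40,
# cut to 12, who elect 25, cut to 9, who elect 45, cut to 11,
# who elect the final 41 electors of the Doge.
STAGES = [(30, "lot"), (9, "lot"), (40, "elect"), (12, "lot"),
          (25, "elect"), (9, "lot"), (45, "elect"), (11, "lot"),
          (41, "elect")]

def run_protocol(council, rng=random):
    pool = list(council)
    for size, kind in STAGES:
        if kind == "lot":
            pool = rng.sample(pool, size)    # reduce the pool by lot
        else:
            # Simplification: electors pick a fresh pool at random,
            # standing in for the real nomination-and-approval voting.
            pool = rng.sample(list(council), size)
    return rng.choice(pool)                  # the final 41 choose the Doge

doge = run_protocol(range(500))              # illustrative council size
```

The alternation is the point: repeated lotteries made it prohibitively expensive for any faction to bribe or pack every stage.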

Illustrations: Monument to the Doge Giovanni Pesaro (via Didier Descouens at Wikimedia).


October 15, 2021

The future is hybrid

grosser-somebody.JPGEvery longstanding annual event-turned-virtual these days has a certain tension.

"Next year, we'll be able to see each other in person!" says the host, beaming with hope. Excited nods in the Zoom windows and exclamation points in the chat.

Unnoticed, about a third of the attendees wince. They're the folks in Alaska, New Zealand, or Israel, who in normal times would struggle to attend this event in Miami, Washington DC, or London because of costs or logistics.

"We'll be able to hug!" the hosts say longingly.

Those of us who are otherwhere hear, "It was nice having you visit. Hope the rest of your life goes well."

When those hosts are reminded of this geographical disability, they immediately say how much they'd hate to lose the new international connections all these virtual events have fostered and the networks they have built. Of course they do. And they mean it.

"We're thinking about how to do a hybrid event," they say, still hopefully.

At one recent event, however, it was clear that hybrid won't be possible without considerable alterations to the event as it's historically been conducted - at a rural retreat, with wifi available only in the facility's main building. With concurrent sessions in probably six different rooms and only one with the basic capability to support remote participants, it's clear that there's a problem. No one wants to abandon the place they've used every year for decades. So: what then? Hybrid in just that one room? Push the facility whose selling point is its woodsy distance from modern life to upgrade its broadband connections? Bring a load of routers and repeaters and rig up a system for the weekend? Create clusters of attendees in different locations and do node-to-node Zoom calls? Send each remote participant a hugging pillow and a note saying, "Wish you were here"?

I am convinced that the future is hybrid events, if only because businesses sound so reluctant to resume paying for so much international travel, but the how is going to take a lot of thought, collaboration, and customization.


Recent events suggest that the technology companies' own employees are a bigger threat to business-as-usual than impending regulation and legislation. Facebook has had two major whistleblowers - Sophie Zhang and Frances Haugen - in the last year, and basically everyone wants to fix the site's governance. But Facebook is not alone...

At Uber, a California court ruled in August that drivers are employees; a black British driver has filed a legal action complaining that Uber's driver identification face-matching algorithm is racist; and Kenyan drivers are suing over contract changes they say have cut their takehome pay to unsustainably low levels.

Meanwhile, at Google and Amazon, workers are demanding the companies pull out of contracts with the Israeli military. At Amazon India, a whistleblower has handed Reuters documents showing the company has exploited internal data to copy marketplace sellers' products and rig its search engine to display its own versions first. *And* Amazon's warehouse workers continue to consider unionizing - and some cities back them.

Unfortunately, the bigger threat of the legislation being proposed in the US, UK, New Zealand, and Canada is *also* less to the big technology companies than to the rest of the Internet. For example, in reading the US legislation Mike Masnick finds intractable First Amendment problems. Last week I liked the idea of focusing on the content social media companies' algorithms amplify, but Masnick persuasively argues it's not so simple, citing Daphne Keller, who has thought more critically about the First Amendment problems that will arise in implementing that idea.


The governor of Missouri, Mike Parson, has accused Josh Renaud, a journalist with the St Louis Post-Dispatch, of hacking into a government website to view several teachers' social security numbers. From the governor's description, it sounds like Renaud hit either Ctrl-U or F12, looked at the HTML code, saw startlingly personal data, and decided correctly that the security flaw was newsworthy. (He also responsibly didn't publish his article until he had notified the website administrators and they had fixed the issue.)

Parson disagrees about the legitimacy of all this, and has called for a criminal investigation into this incident of "hacking" (see also scraping). The ability to view the code that makes up a web page and tells the browser how to display it is a crucial building block of the web; when it was young and there were no instruction manuals, that was how you learned to make your own page by copying. A few years ago, the Guardian even posted technical job ads in its pages' HTML code, where the right applicants would see them. No password, purloined or otherwise, is required. The code is just sitting there in plain sight on a publicly accessible server. If it weren't, your web page would not display.

Twenty-five years ago, I believed that by now governments would be filled with 30-somethings who grew up with computers and the 2000-era exploding Internet and could restrain this sort of overreaction. I am very unhappy to be wrong about this. And it's only going to get worse: today's teens are growing up with tablets, phones, and closed apps, not the open web that was designed to encourage every person to roll their own.

Illustrations: Exhibit from Ben Grosser's "Software for Less, reimagining Facebook alerts, at the Arebyte Gallery until end October.


August 20, 2021


Thumbnail image for Jacinda_Ardern_at_the_University_of_Auckland_(cropped).jpg"One case!" railed a computer industry-adjacent US libertarian on his mailing list recently. He was scathing about the authoritarianism he thought implicit in prime minister Jacinda Ardern's decision to lock down New Zealand because one covid-positive case had been found in Auckland.

You would think that an intelligent guy whose life has been defined by the exponential growth of Moore's Law would understand by now. One *identified* case of unknown origin means a likely couple of dozen others who are all unknowingly going to restaurants, bars, concerts, and supermarkets and infecting other people. Put together the highly-transmissible Delta variant, which has ravaged India, caused huge spikes in the UK and Israel despite relatively high vaccination levels, and is vacuuming up ICU beds in vaccine-resistant US states, and the fact that under 20% of New Zealanders are vaccinated. Ardern, whose covid leadership has been widely admired all along, has absorbed the lessons of elsewhere. Locking down for a few days with so few cases buys time to do forward and backward contact tracing; the payoff is 26 deaths, not tens of thousands, and an unstressed health care system. New Zealand has had months of normality punctuated by days of lockdown instead of, as elsewhere, months of lockdown punctuated by days of nervous attempts at socializing. Her country agrees with her. What more do you want?

The case was found Tuesday; lockdown began Wednesday. By Thursday, the known case count was 21, with models predicting that the number of infected people was probably around 100. If all those people were walking around, that one case - imported, it now appears, from Australia - would be instigating thousands. Ardern has, you should excuse the expression, balls - and a touch of grace. I can't think of any other national leader who's taken the trouble to *thank* the index case for coming forward to get tested and thereby saving countless fellow citizens' lives.

Long ago - March 2020 - Ardern's public messaging included the advice "Be kind". This message could usefully be copied elsewhere - for example, the US, where anti-maskers are disrupting school board meetings and classrooms, and anti-vaccination protests have left a man stabbed in Los Angeles. On Twitter and in other media, some states' medical staff report that among their hospitals' 97%-unvaccinated covid caseloads are some who express regret, too late. Timothy Bella reports at the Washington Post that a Mobile, Alabama doctor has told patients that as of October 1 he won't treat anyone who is not vaccinated against covid. Alabama's vaccination rate, 36%, is the lowest in the US, the state is reporting nearly 4,000 new cases per day, and its hospitals have run out of ICU beds. His reaction is understandable. Useful motto for 2021: everyone is entitled to be anxious about the pandemic however they want.

Twitter has several "more of this, please"-type reactions. Tempting: there's the risk to other patients in the waiting room; the desire to push people to get vaccinated; the human reluctance to help people who won't help themselves to avoid dying of a preventable illness; the awareness of the frustration, burn-out, stress, and despair of hospital-based counterparts. And yet. This doctor isn't required by lack of resources to do triage. He just doesn't want to invest in treating people and be forced to watch their miserable, preventable deaths. I understand. But it's dangerous when doctors pick and choose whom they treat. Yes, barring medical contraindications, refusing covid vaccinations is generally a mistake. But being wrong isn't a reason to deny health care.

Ardern has - as she says - the advantage of being last. Working with less information, countries scrambling earlier to cope with new variants will inevitably make more mistakes. At the Atlantic, Howard Markel argues that we need to stop looking back to 1918 for clues to handling this one.

It's certainly true that the 1918 model has led us astray in significant ways, chiefly consequences of confusing covid with flu. In the UK, that confusion led the government to focus on washing hands and cleaning surfaces and ignore ventilation, a mistake it still hasn't fully rectified 18 months later. In the US, "it's a mild flu" is many people's excuse for refusing masks, vaccines, and other cautions. The 1918 example was, however, valuable as a warning of how devastating a pandemic can be without modern tools to control it. Even with today's larger population, 100 million deaths is too significant to ignore. For them, masks, ventilation, and lockdowns were the only really available tools. For us, they bought time for science to create better ones - vaccines. What we lack, however, is societal and political trust (whether or not you blame the Internet) and the will to spread manufacturing across the world. In 1918, the future, post-pandemic and post-war, was a "roaring" decade of celebration. Our post-pandemic future is more pandemics unless we pay attention to public health and building pandemic resistance, especially as climate change brings new microbes into direct contact with humans.

Markel is a professor at the University of Michigan, and his uncomfortable message is this: we are in uncharted territory. No wonder we cling to the idea that the pandemic of 2020-present is kinda-sorta 1918: without that precedent we are facing conditions of radical uncertainty. Be kind.

Illustrations: New Zealand prime minister Jacinda Ardern campaigning in 2017 (Brigitte Neuschwander-Kasselordner, via Wikimedia).


June 18, 2021

Libera me

A man walks into his bar and no one is there.

OK, so the "man" was me, and the "bar" was a Reddit-descended IRC channel devoted to tennis...but the shock of emptiness was the same. Because tennis is a global sport, this channel hosts people from Syracuse NY, Britain, Indonesia, the Netherlands. There is always someone commenting, checking the weather wherever tennis is playing, checking scores, or shooting (or befriending) the channel's frequent flying ducks.

Not now: blank, empty void, like John Oliver's background for the last, no-audience year. Those eight listed users are there in nickname only.

A year ago at this time, this channel's users were comparing pandemic restrictions. In our lockdowns, I liked knowing there was always someone in another time zone to type to in real time. So: slight panic. Where *are* they?

IRC dates to the old cooperative Internet. It's a protocol, not a service, so anyone can run an IRC server, and many people do, even though the mainstream, especially the younger mainstream, long since moved on through instant messaging and on to Twitter, WhatsApp groups, Telegram channels, Slack, and Discord. All of these undoubtedly look prettier and are easier to use, but the base functionality hasn't changed all that much.

IRC's enduring appeal is that it's all plain text and therefore bandwidth-light, it can host any size of conversation from a two-person secret channel to a public channel of thousands, multiple clients are available on every platform, and it's free. Genuinely free, not pay-with-data free - no ads! Accordingly, it's still widely used in the open source community. Individual channels largely set their own standards and community norms...and their own games. Circa 2003, I played silly trivia quizzes on a TV-related channel. On this one...ducks. A sample:

゜゜・。 ​ 。・゜゜\_o​< FLAP​ FLAP!
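A message like that duck travels as one plain-text line in IRC's simple framing (RFC 1459): an optional ":prefix", a command, space-separated parameters, and an optional ":trailing" parameter that may contain spaces. Parsing it takes only a few lines; the function name and sample messages below are mine:

```python
def parse_irc_line(line):
    """Split one raw IRC line into (prefix, command, params)."""
    prefix = None
    if line.startswith(":"):                 # optional source prefix
        prefix, line = line[1:].split(" ", 1)
    trailing = None
    if " :" in line:                         # trailing param may hold spaces
        line, trailing = line.split(" :", 1)
    parts = line.split()
    command, params = parts[0], parts[1:]
    if trailing is not None:
        params.append(trailing)
    return prefix, command, params

print(parse_irc_line(":duck!bot@irc PRIVMSG #tennis :FLAP FLAP!"))
# -> ('duck!bot@irc', 'PRIVMSG', ['#tennis', 'FLAP FLAP!'])
```

That the whole protocol is this legible is a large part of why anyone can write a client or bot for it in an afternoon.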

However, the fact that anyone *can* run their own server doesn't mean that everyone *does*, and like other Internet services (see also: open web, email), IRC gravitated towards larger networks that enable discovery. If you host your own server, strangers can only find it if you let them; on a large network users can search for channels, find topics they're interested in, and connect to the nearest server. While many IRC networks still survive, in recent years by far the biggest, according to Netsplit, is Freenode, largely because of its importance in providing connections and support for the open source community. Freenode is also where the missing tennis channel was hosted until about Tuesday, three days before I noticed it was silent. As you'll see in the Netsplit image above, that was when Freenode traffic plummeted, countered by a near-vertical rise in traffic on Libera Chat. That is where my channel turned out to be restored to its usual bustling self.

What happened is both complicated and pretty simple: ownership changed hands without anyone's quite realizing what it was going to mean. To say that IRC is free to use does not mean there are no costs: besides computers and bandwidth, the owners of IRC servers must defend their networks against attacks. Freenode, Wikipedia explains, began as a Linux support channel on another network run by four people, who went on to set up their own network, which eventually became the largest support network for the open source community. A series of ownership changes led from a California charity through a couple of steps to today's owner, the UK-based private company Freenode Ltd, which is owned by Andrew Lee, a technology entrepreneur and founder of the Private Internet Access VPN. No one appears to have thought much about this until last month, when 20 to 30 of the volunteers who run Freenode ("staff") resigned, accusing Lee of executing a hostile takeover. Some of them promptly set up Libera as an alternative.

What makes this story about a somewhat arcane piece of the old Internet interesting - aside from the book that demands to be written about IRC's rich history, culture, and significance - is that this is the second time in the last 18 months that a significant piece of the non-profit infrastructure has been targeted for private ownership. The other was the .org top-level domain. These underpinnings need better protection.

On the day traffic plummeted, Lee made deciding to move really easy: as part of changing the network's underlying software, he decided to remove the entire database of registered names and channels - committing suicide, some called it. Because, really: if you're going to have to reregister and reconstruct everything anyway, the barrier to moving to that identical new network over there with all the familiar staff and none of the new owner mishegoss is gone. Hence the mass exodus.

This is why IRC never spawned a technology giant: no lock-in. Normally when you move a conversation it dies. In this case, the entire channel, with its scripts and games and familiar interface, could be recreated at speed and resume as if nothing had happened. All they had to do was tell people. Five minutes after I posted a plaintive query on Reddit, someone came to retrieve me.

So, now: a woman logs into an IRC channel and finds all the old regulars. A duck flaps past. I have forgotten the ".bang" command. I type ".bef" instead. The duck is saved.

Illustrations: Netsplit's graph of IRC network traffic from June 2021.


May 7, 2021

Decision not decision

Screenshot from 2021-01-07 13-17-20.pngIt is the best of decisions, it is the worst of decisions.

For some, this week's decision by Facebook's Oversight Board in the matter of "the former guy" Donald J. Trump is a deliberate PR attempt at distraction. For many, it's a stalling tactic. For a few, it is a first, experimental stab at calling the company to account.

It can be all these things at once.

But first, some error correction. Nothing the Facebook Oversight Board does or doesn't do tells us anything much about governing the Internet. Although there are countries where zero-rating deals with telcos make Facebook effectively the only online access most people have, Facebook is not the Internet and it's not the web. Facebook is a commercial company's walled garden that is reached over the Internet and via both the web and apps that bypass the web entirely. Governing Facebook is about how we regulate and govern commercial companies that use the Internet to achieve global reach. Like Trump, Facebook has no exact peer, so it is difficult to generalize from decisions about either to reach wider principles of content moderation.

It's also important to recognize that Trump used/uses different social media sites in different ways. Facebook was important to Trump for organizing campaigns and advertising, as well as getting his various messages amplified and spread by supporters. But there's little doubt that personally he'd rather have Twitter back; its public nature and instant response made it his id-to-fingers direct connection to the media. Twitter fed him the world's attention. Those were the postings that had everyone waking up in the middle of the night panicked in case he had abruptly declared war on North Korea. After his ban, the service was full of tweets expressing relief at the silence.

The board's decision has several parts. First, it says the company was right to suspend Trump's account. However, it goes on to say, the company erred in applying an "indeterminate and standardless penalty of indefinite suspension". It goes on to tell Facebook to develop "clear, necessary, and proportionate policies that promote public safety and freedom of expression". The board's charter requires Facebook to make an initial response within 30 days, and the decision itself orders Facebook to review the case to "determine and justify a proportionate response that is consistent with the rules that are applied to other users of its platform". It appears that the board is at least trying not to let itself be used as a shield.

At the New York Times, Kara Swisher calls the non-decision kind of perfect. At the Washington Post, Margaret Sullivan calls the board a high-priced fig leaf. At Lawfare, Evelyn Douek believes the decision shows promise but deplores the board's reluctance to constrain Facebook. On Wednesday's episode of Ben Wittes's and Kate Klonick's In Lieu of Fun, panelists speculated about what indicators would show the board was achieving legitimacy. Carole Cadwalladr, who broke the Cambridge Analytica story in 2016, calls Facebook, simply, cancer and views the oversight board as a "dangerous distraction".

When the board first began issuing decisions, Jeremy Lewin commented that the only way the board - "a dangerous sham" - could show independence was to reverse Facebook's decisions, which in all cases would mean restoring deleted posts, since the board has no role in evaluating decisions to retain posts. It turns out that's not true. In the Trump decision, the board found a third way: calling out Facebook for refusing to answer its questions, failing to establish and follow clear procedures, and punting on its responsibilities.

However, despite the decision's legalish language, the Oversight Board is not a court, and Facebook's management is not a government. For both good and bad: as Orin Kerr reminds us, Facebook can't fine, jail, or kill its users; as many others will note, as a commercial company its goals are profits and happy shareholders, not fairness, transparency, or a commitment to uphold democracy. If it adopts any of those latter goals, it's because the company has calculated that it will cost more not to. Therefore, *every* bit of governance it attempts is a PR exercise. In pushing the ultimate decision back to Facebook and demanding that the company write and publish clear rules, the board is trying to make itself more than that. We will know soon whether it has any hope of success.

But even if the board succeeds in pushing Facebook into clarifying its approach to this case, "success" will be constrained. Here's the board's mission: "The purpose of the board is to protect free expression by making principled, independent decisions about important pieces of content and by issuing policy advisory opinions on Facebook's content policies." Nothing there permits the board to raise its own cases, examine structural defects, or query the company's business model. There is also no option for the board to survey Trump's case and the January 6 Capitol invasion and place it in the context of evidence on Facebook's use to incite violence in other countries - Myanmar, Sri Lanka, India, Indonesia, Mexico, Germany, and Ethiopia. In other words, the board can consider individual cases when it is assigned them, but not the patterns of behavior that Facebook facilitates and that are in greatest need of disruption. That will take governments and governance.

Illustrations: The January 6 invasion of the US Capitol.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 23, 2021

Fast, free, and frictionless

Sinan-Aral-20210422_224835.jpg"I want solutions," Sinan Aral challenged at yesterday's Social Media Summit, "not a restatement of the problems". Don't we all? How many person-millennia have we spent laying out the issues of misinformation, disinformation, harassment, polarization, platform power, monopoly, algorithms, accountability, and transparency? Most of these have been debated for decades. The big additions of the last decade are the privatization of public speech via monopolistic social media platforms, the vastly increased scale, and the transmigration from purely virtual into physical-world crises like the January 6 Capitol Hill invasion and people refusing vaccinations in the middle of a pandemic.

Aral, who leads the MIT Initiative on the Digital Economy and is author of the new book The Hype Machine, chose his panelists well enough that some actually did offer actionable ideas.

The issues, as Aral said, are all interlinked (see also 20 years of net.wars). Maria Ressa connected the spread of misinformation to system design that enables distribution and amplification at scale. These systems are entirely opaque to us even while we are open books to them, as Guardian journalist Carole Cadwalladr noted, adding that while US press outrage is the only pressure that moves Facebook to respond, it no longer even acknowledges questions from anyone at her newspaper. Cadwalladr also highlighted the Securities and Exchange Commission's complaint, which says clearly: Facebook misled journalists and investors. This dismissive attitude also shows in the leaked email in which Facebook plans to "normalize" the leak of 533 million users' data.

This level of arrogance is the result of concentrated power, and countering it will require antitrust action. That in turn leads back to questions of design and free speech: what can we constrain while respecting the First Amendment? Where is the demarcation line between free speech and speech that, like crying "Fire!" in a crowded theater, can reasonably be regulated? "In technology, design precedes everything," Roger McNamee said; real change for platforms at global or national scale means putting policy first. His Exhibit A of the level of cultural change that's needed was February's fad, Clubhouse: "It's a brand-new product that replicates the worst of everything."

In his book, Aral opposes breaking up social media companies as was done in cases such as Standard Oil and AT&T. Zephyr Teachout agreed, seeing breakup, whether horizontal (Facebook divests WhatsApp and Instagram, for example) or vertical (Google forced to sell Maps), as just one tool.

The question, as Joshua Gans said, is, what is the desired outcome? As Federal Trade Commission nominee Lina Khan wrote in 2017, assessing competition by the effect on consumer pricing is not applicable to today's "pay-with-data-but-not-cash" services. Gans favors interoperability, saying it's crucial to restoring consumers' lost choice. Lock-in is your inability to get others to follow when you want to leave a service, a problem interoperability solves. Yes, platforms say interoperability is too difficult and expensive - but so did the railways and telephone companies, once. Break-ups were a better option, Albert Wenger added, when infrastructures varied; today's universal computers and data mean copying is always an option.

Unwinding Facebook's acquisition of WhatsApp and Instagram sounds simple, but do we want three data hogs instead of one, like cutting off one of the Lernean Hydra's heads? One idea that emerged repeatedly is slowing down "fast, free, and frictionless"; Yael Eisenstat wondered why we allow experimental technology to launch at global scale while demanding that policy be painfully perfected before it is tried.

MEP Marietje Schaake (Democrats 66-NL) explained the EU's proposed Digital Markets Act, which aims to improve fairness by setting rules and responsibilities up front, preempting the too-long process of punishing bad behavior after the fact. Current proposals would bar platforms from combining user data from multiple sources without permission, self-preferencing, and spying (say, Amazon exploiting marketplace sellers' data), and would require data portability and interoperability for ancillary services such as third-party payments.

The difficulty with data portability, as Ian Brown said recently, is that even services that let you download your data offer no way to use data you upload. I can't add the downloaded data from my current electric utility account to the one I switch to, or send my Twitter feed to my Facebook account. Teachout finds that interoperability isn't enough because "You still have acquire, copy, kill" and lock-in via existing contracts. Wenger argued that the real goal is not interoperability but programmability, citing open banking as a working example. That is also the open web, where a third party can write an ad blocker for my browser, but Facebook, Google, and Apple built walled gardens. As Jared Sine told this week's antitrust hearing, "They have taken the Internet and moved it into the app stores."

Real change will require all four of the levers Aral discusses in his book - money, code, norms, and laws, which Lawrence Lessig's 1999 book, Code and Other Laws of Cyberspace, called market, software architecture, norms, and laws - pulling together. The national commission on democracy and technology Aral is calling for will have to be very broadly constituted in terms of disciplines and national representation. As Safiya Noble said, diversifying the engineers in development teams is important, but not enough: we need "people who know society and the implications of technologies" at the design stage.

Illustrations: Sinan Aral, hosting the summit.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 26, 2021

Curating the curators

Zuck-congress-20210325_212525.jpgOne of the longest-running conflicts on the Internet surrounds whether and what restrictions should be applied to the content people post. These days, those rules are known as "platform governance", and this week saw the first conference by that name. In the background, three of the big four CEOs returned to Congress for more questioning; the EU is planning the Digital Services Act; the US looks serious about antitrust action; debate about revising Section 230 of the Communications Decency Act continues even though few understand what it does; and the UK continues to push "online harms".

The most interesting thing about the Platform Governance conference is how narrow it makes those debates look. The second-most interesting thing: it was not a law conference!

For one thing, which platforms? Twitter may be the most-studied, partly because journalists and academics use it themselves and data is more available; YouTube, Facebook, and subsidiaries WhatsApp and Instagram are the most complained-about. The discussion here included not only those three but less "platformy" things like Reddit, Tumblr, Amazon's livestreaming subsidiary Twitch, games, Roblox, India's ShareChat, labor platforms UpWork and Fiverr, edX, and even VPN apps. It's unlikely that the problems of Facebook, YouTube, and Twitter that governments obsess over are limited to them; they're just the most visible and, especially, the most *here*. Granting differences in local culture, business model, purpose, and platform design, human behavior doesn't vary that much.

For example, Jenny Domino reminded - again - that the behaviors now sparking debates in the West are not new or unique to this part of the world. What most agree *almost* happened in the US on January 6 *actually* happened in Myanmar with far less scrutiny despite a 2018 UN fact-finding mission that highlighted Facebook's role in spreading hate. We've heard this sort of story before, regarding Cambridge Analytica. In Myanmar and, as Sandeep Mertia said, India, the Internet of the 1990s never existed. Facebook is the only "Internet". Mertia's "next billion users" won't use email or the web; they'll go straight to WhatsApp or a local or newer equivalent, and stay there.

Mehitabel Glenhaber, whose focus was Twitch, used it to illustrate another way our usual discussions are too limited: "Moderation can escape all up and down the stack," she said. Near the bottom of the "stack" of layers of service, after the January 6 Capitol invasion Amazon denied hosting services to the right-wing chat app Parler; higher up the stack, Apple and Google removed Parler's app from their app stores. On Twitch, Glenhaber found a conflict between the site's moderation decision and the handling of that decision by two browser extensions that replace text with graphics, one of which honored the site's ruling and one of which overturned it. I had never thought of ad blockers as content moderators before, but of course they are, and few of us examine them in detail.

Separately, in a recent lecture on the impact of low-cost technical infrastructure, Cambridge security engineer Ross Anderson also brought up the importance of the power to exclude. Most often, he said, social exclusion matters more than technical exclusion; taking out a scammer's email address and disrupting their whole social network is more effective than taking down their more easily replaced website. If we look at misinformation as a form of cybersecurity challenge - as we should - that's an important principle.

One recurring frustration is our general lack of access to the insider view of what's actually happening. Alice Marwick is finding from interviews that members of Trust and Safety teams at various companies have a better and broader view of online abuse than even those who experience it. Their data suggests that rather than being gender-specific, harassment affects all groups of people; in niche groups, the forms disagreements take can be obscure to outsiders. Most important, each platform's affordances are different; you cannot generalize from a peer-to-peer site like Facebook or Twitter to Twitch or YouTube, where the site's relationships are less equal and more creator-fan.

A final limitation in how we think about platforms and abuse is that the options are so limited: a user is banned or not, content stays up or is taken down. We never think, Sarita Schoenebeck said, about other mechanisms or alternatives to criminal justice such as reparative or restorative justice. "Who has been harmed?" she asked. "What do they need? Whose obligation is it to meet that need?" And, she added later, who is in power in platform governance, and what harms have they overlooked and how?

In considering that sort of issue, Bharath Ganesh found three separate logics in his tour through platform racism and the governance of extremism: platform, social media, and free speech. Mark Zuckerberg offers a prime example of the latter, the Silicon Valley libertarian insistence that the marketplace of ideas will solve any problems and that sees the First Amendment freedom of expression as an absolute right, not one that must be balanced against others - such as "freedom from fear". Following the end of the conference by watching the end of yesterday's Congressional hearings, you couldn't help thinking about that as Zuckerberg embarked on yet another pile of self-serving "Congressman..." preambles rather than the simple "yes or no" he was asked to deliver.

Illustrations: Mark Zuckerberg, testifying in Congress on March 25, 2021.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 5, 2021

Covid's children

LSE-Livingstone-panel-2021-03.pngI wonder a lot about how the baby downstairs will develop differently because of his September 2020 birth date. In his first five months, the only humans who have been in close contact are his two parents, a smattering of doctors and nurses, and a stray neighbor who occasionally takes him for walks. Walks, I might add, in which he never gets out of his stroller but in which he exhibits real talent for staring contests (though less for intelligent conversation). His grandparents he only knows through video calls. His parents think he's grasped that they're real, though not present, people. But it's hard to be sure.

The effects of the pandemic are likely to be clear a lot sooner for the older children and young people whose lives and education have been disrupted over the past year. This week, as part of the LSE Post-Covid World Festival, Sonia Livingstone (for whose project I wrote some book reviews a few years ago) led a panel to discuss those effects.

Few researchers in the UK - Livingstone, along with Andy Phippen, is one of the exceptions, as is, less formally, filmmaker and House of Lords member Beeban Kidron, whose 2013 film InRealLife explores teens' use of the Internet - ever bother to consult children to find out what their online experiences and concerns really are. Instead, the agenda shaped by politicians and policy makers centers on adults' fears, particularly those that can be parlayed into electoral success. The same people who fret that social media is posing entirely new problems today's adults never encountered as children refuse to find out what those problems look like to the people actually experiencing them. Worse, the focus is narrow: protecting children from pornography, grooming, and radicalization is everywhere, but protecting them from data exploitation is barely discussed. In the UK, as Jen Persson, founder of defenddigitalme, keeps reminding us, collecting children's data is endemic in education.

This was why the panel was interesting: all four speakers are involved in projects aimed at understanding and amplifying children's and young people's own concerns. From that experience, all four - Konstantinos Papachristou, the youth lead for the #CovidUnder19 project, Maya Götz, who researches children, youth, and television, Patricio Cuevas-Parra, who is part of a survey of 10,000 children and young people, and Laurie Day - highlighted similar issues of lack of access and inequality - not just to the Internet but also to vaccines and good information.

In all countries, the shift to remote learning has been abrupt, exposing infrastructure issues that were always urgent, but never quite urgent enough to fix. Götz noted that in some Asian countries and Chile she's seeing older technologies being pressed into service to remedy some of this - technologies like broadcast TV and radio; even in the UK, after the first lockdown showed how many low-income families could not afford sufficient data plans, the BBC began broadcasting curriculum-based programming.

"Going back to normal," Day said, "needs a rethink of what support is needed." Yet for some students the move to online learning has been liberating, lightening social and academic pressures and giving space to think about their values and the opportunity to be creative. We don't hear so much about that; British media focus on depression and loss.

By the time the baby downstairs reaches school age, the pandemic will be over, but its footprint will be all over how his education proceeds.

Persson, who focuses on the state's use of data in education, says that one consequence of the pandemic is that Microsoft and Google have entrenched themselves much more deeply into the UK's education infrastructure.

"With or without covid, schools are dependent on them for their core infrastructure now, and that's through platforms joining up their core personal data about students and staff - email addresses, phone numbers, names, organizational data - and joining all that up," she says. Parents are encouraged to link to their children's accounts, and there is, for the children concerned, effectively, "no privacy". The software, she adds, was really designed for business and incompletely adapted for education. For example, while there are controls schools can use for privacy protection, the defaults, as always, are towards open sharing. In her own children's school, which has 2,000 students, the software was set up so every user could see everyone else's email address.

"It's a huge contrast to [the concern about] online harms, child safety, and the protection mantra that we have to watch everything because the world is so unsafe," she says. Partly, this is also a matter of perception: policy makers tend to focus on "stranger danger" and limiting online content rather than ID theft, privacy, and how all this collected data may be used in the future. The European Digital Rights Initiative (EDRi) highlights the similar thinking behind European Commission proposals to require the platforms to scan private communications as part of combating child sexual abuse online.

All this awaits the baby downstairs. The other day, an 18-month-old girl ran up to him, entranced. Her mother pulled her back before she could touch him or the toys tied to his stroller. For now, he, like other pandemic babies, is surrounded by an invisible barrier. We won't know for several decades what the long-term effect will be.

Illustrations: Sonia Livingstone's LSE panel.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

February 5, 2021

Dead cat trampoline

If you want to understand the story of why a bunch of Redditors have used Gamestop shares to squeeze a load of profit out of a couple of hedge funds, you could do worse than to read the 1994 Wired article about the time the Usenet newsgroup alt.tasteless invaded rec.pets.cats, by Josh Quittner. A horde of "little guys" invading the protected territory of a handful of stodgy, entitled billionaires and hedge fund managers could be the Internet's origin story. For example: bitcoin.

In brief, for those who've missed the breathless coverage: the "troubled" retail chain Gamestop, whose share price opened 2021 at around $17 and which dropped as low as $2.57 during 2020, suddenly spiked (briefly) last week to $483. The technical explanation is that this is an extreme version of a short squeeze, a vicious spiral in which a company's rising share price forces traders who have bet that it will go down to scramble to cover their losses before they can escalate further.

Rule of thumb: when your get-rich-quick strategy appears on CNBC, it's time to cash out.

Calling Gamestop "troubled" is polite. Offline retail in general and particularly malls, where Gamestop outlets are located, are struggling. The company's revenues slid badly in 2019. That December - still 2019 - the best suggestions for recovery were to leverage the company's 5,600 physical locations to create experiences that can't be replicated online and to build its own line of products while it waited for the launch of new game consoles to goose its business. *Then* came the pandemic and its shutdowns to accelerate the spiral downwards. The company seems unlikely to be able to mount a comeback. Terrible for its employees, terrible for the malls and towns that depended on sales and other taxes, terrible for other local dependent businesses, but an opportunity for short sellers who get their timing exactly right.

In January, a Reddit group (the subreddit WallStreetBets) spotted that short sellers' commitments amounted to more than double the number of outstanding Gamestop shares and correctly recognized that they were looking at a spring-loaded slingshot. Ordinary retail investors can't, individually, buy enough to set a squeeze in motion, but a crowdsourced effort, coordinated through an online forum, could indeed move the needle. The Redditors were also aided by 2019's industry-wide elimination of commissions on retail stock trades, which makes very small trades newly viable. The persistence of friction-inducing costs is why the Reddit scenario is unlikely to be replicated in the UK: British brokers still charge commissions on trades and the government adds stamp duty.

"The markets are broken," short seller Carson Block tells Julia LaRoche at Yahoo Finance, in response to this incident. Like many over the last four years, he notes the widening gap between fundamental value and market pricing, between the real economy in which millions of Americans were struggling to afford rent even before the pandemic and the market, where 84% of the value is held by 10% of Americans, a level of inequality seen in England in 1966. This is not good news. WallStreetBets may be a messenger telling us that things are worse than we thought, but decades of underlying trends have fueled today's overpriced market: the extraordinarily low interest rates since 2008, the lack of alternatives for small, ongoing savings, the decades of replacing pensions with shares-filled 401(k) plans, and most recently Trump's tax cuts. The result is distorting the entire economy and robbing working Americans of a decent living.

Much of the Reddit action centered on Robinhood, a brokerage that markets itself as democratizing finance. At Slate, Alex Kershner says no: Robinhood's retail investors are the product, and Robinhood's real customers are Wall Street's market makers, who pay for the privilege of executing its stock orders. This arcane subject is best explained by Michael Lewis in Flash Boys. Because of the way it reduced friction for small-time retail traders - free commissions, instant access to deposited money, margin trading - Robinhood contributed to the volatility, but it's not really the story by itself. It is merely the last stop on a decades-old journey toward making it possible for retail investors to take risks previously limited to people who could provably afford the losses. The good side of that approach is to protect ordinary people from losing their homes; the bad side is to reserve the biggest profits for people who don't really need them.

If past decades are any guide, breaking those protections will hurt people. On Monday, February 1, Gamestop dropped 75%; on Tuesday it dropped 60%. On Wednesday, it rose slightly - about 2.5%, in what experienced investors would call a "dead cat bounce", as Thursday saw it drop another 42%. Price Thursday night: $53.33. No one who bought at $483 will get their money back. As Farhad Manjoo warns at the New York Times, in the end the house always wins. In the long term, fundamentals *should* matter, because the value of having the market in the first place isn't to make people rich but to help channel investment to viable businesses. If it doesn't fulfill that function, it's time for real reform.

Illustrations: Chart of Gamestop's share price for the three months ending close of business February 4, 2021 (from BigCharts).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 22, 2021

In the balance

Thumbnail image for 800px-Netherlands-4589_-_Lady_of_Justice_&_William_of_Orange_Coat-o-Arms_(12171086413).jpgAs the year gets going, two conflicts look like setting precedents for Internet regulation: Australia's push to require platforms to pay license fees for linking to their articles; and Facebook's pending decision whether to make former president Donald Trump's ban permanent, as Twitter already has.

Facebook has referred Trump's case to its new Oversight Board and asked it to make policy recommendations for political leaders. The Board says it will consider whether Trump's content violated Facebook community standards and "values", and whether its removal respected human rights standards. It expects to report within 90 days; the decision will be binding on Facebook.

On Twitter, Kate Klonick, an assistant professor at St. John's University School of Law, who has been following the Oversight Board's creation and development in detail, says the important aspect is not the inevitably polarizing decision itself, but the creation of what she hopes will be a "transparent global process to adjudicate these human rights issues of speech". In a Yale Law Journal article documenting the board's history so far, she suggests that it could set a precedent for collaborative governance of private platforms.

Or - and this seems more likely - it could become the place where Facebook dumps the controversial cases where making its own decision gains the company nothing. Trump is arguably one of these. No matter how much money Trump's presidential campaign (which seems unlikely to have any future) netted the company, it surely must be a drop in the ocean of its overall revenues. With antitrust suits pending and a politically controversial decision, why *wouldn't* Facebook want to hand it off? Would the company do the same in a case where the company's business model was at stake, though? If it does and the decision goes against Facebook's immediate business interests, will shareholders sue?

Those questions won't be answered for some years. Meanwhile, this initial case will be a milestone in Internet history, as Klonick says. If the board does not create durable principles that can be applied across other countries and political systems, it will have failed. The larger question, however, which is the circulation of deliberate lies and misinformation, is more complex.

For that, letters sent this week by US Congress members Anna Eshoo (D-CA) and Tom Malinowski (D-NJ) may be more germane: they have asked the CEOs of Facebook, Google, YouTube, and Twitter to alter their algorithms to stop promoting conspiracy theories at scale. Facebook has been able to ignore previous complaints it was inciting violence in markets less essential to its bottom line and of less personal significance.

The Australian case is smaller, and kind of a rerun, but still interesting. We noted in September that the Australian government had announced the draft News Media Bargaining Code, a law requiring Google and Facebook (to start with) to negotiate license fees for displaying snippets of news articles. By including YouTube, user postings, and search engine results, Australia hoped to ensure the companies could not avoid the law by shutting down, which was what happened in 2014 when Spain enacted a similar law that caught only Google News. Early reports indicated that its withdrawal resulted in a dramatic loss of traffic to publishers' sites.

However, by 2015, Spain's Association of Newspaper Editors was saying members were reporting just a 12% loss of traffic, and a 2019 assessment argues that in fact the closure (which persists) made little long-term difference to publishers. If this is true, it's unarguably better for publishers not to be dependent on a third-party company to send them traffic out of the goodness of their hearts. The more likely underlying reality, however, is that people have learned to use generic search engines and social media to find news stories - in which case the Australian law could still be damaging to publishers' revenues.

It is, as journalist Michael West points out, exceptionally difficult to tease out what portion of Google's or Facebook's revenues are attributable to news content. West argues that a better solution to those companies' rise is regulating their power and taxing them appropriately; neither Google nor Facebook is in the business of reporting the news, nor is either in direct competition with the traditional publishers - the biggest of which, in Australia, are owned by Rupert Murdoch and so filled with climate change denial that Murdoch's own son left the company because of it.

In December, Google and Facebook won a compromise that will allow Google to include in the negotiations the value it brings in the form of traffic; limit the data it has to share with publishers; and lower the requirement for platforms to share algorithm changes with the publishers. Prediction: the publishers aren't going to wind up getting much out of this.

For the rest of us, though, the notion that users could be stopped from sharing news links (as Facebook is threatening) should be alarming; open, royalty-free linking, as web inventor Tim Berners-Lee told Bloomberg, is the fundamental characteristic of the web. We take the web so much for granted now that it's easy to forget that the biggest decision Berners-Lee made, with the backing of his employers at CERN, was to make it open instead of proprietary. The Australian law is the latest attempt to modify that decision. I wish I could say it will never catch on.

Illustrations: Justitia outside the Delft Town Hall, the Netherlands (via Dennis Jarvis at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 15, 2021

One thousand

net.wars-the-book.gifIn many ways, this 1,000th net.wars column is much like the first (the count is somewhat artificial, since net.wars began as a 1998 book, itself presaged by four years of news analysis pieces for the Daily Telegraph, and followed by another book in 2001...and a lot of my other writing also fits under "computers, freedom, and privacy"; *however*). That November 2001 column was sparked by former Home Office minister Jack Straw's smug assertion that after 9/11 those of us who had defended access to strong cryptography must be feeling "naive". Here, just over a week after the Capitol invasion, three long-running issues are pertinent: censorship; security and the intelligence failures that enabled the attack; and human rights when demands for increased surveillance capabilities surface, as they surely will.

Censorship first. The US First Amendment only applies to US governments (a point that apparently requires repeating). Under US law, private companies can impose their own terms of service. Most people expected Twitter would suspend Donald Trump's account approximately one second after he ceased being a world leader. Trump's incitement of the invasion moved that up, and led Facebook (including its subsidiaries Instagram and WhatsApp), Snapchat, and, a week after the others, YouTube to follow suit. Less noticeably, a Salesforce-owned email marketing company ceased distributing emails from the Republican National Committee.

None of these social media sites is a "public square", especially outside the US, where they've often ignored local concerns. They are effectively shopping malls, and ejecting Trump is the same as throwing out any other troll. Trump's special status kept him active when many others were unjustly banned, but ultimately the most we can demand from these services is clearly stated rules, fairly and impartially enforced. This is a tough proposition, especially when you are dependent on social media-driven engagement.

Last week's insurrection was planned on numerous openly accessible sites, many of which are still live. After Twitter suspended 70,000 accounts linked to QAnon, numerous Republicans complaining they had lost followers seemed to be heading to Parler, a relatively new and rising alt-right Twitterish site backed by Rebekah Mercer, among others. Moving elsewhere is an obvious outcome of these bans, but in this crisis short-term disruption may be helpful. The cost will be longer-term adoption of channels that are harder to monitor.

By January 9 Apple was removing Parler from the App Store, quickly followed by Google's Play Store (albeit less comprehensively, since Android allows side-loading). Amazon then kicked Parler off its host, Amazon Web Services. It is unknown when, if ever, the site will return.

Parler promptly sued Amazon claiming an antitrust violation. AWS retaliated with a crisp brief that detailed examples of the kinds of comments the site felt it was under no obligation to host and noted previous warnings.

Whether or not you think Parler should be squashed - stipulating that the imminent inauguration requires an emergency response - three large Silicon Valley platforms have combined to destroy a social media company. This is, as Jillian C. York, Corynne McSherry, and Danny O'Brien write at EFF, a more serious issue. The "free speech stack", they write, requires the cooperation of numerous layers of service providers and other companies. Twitter's decision to ban one - or 70,000 - accounts has limited impact; companies lower down the stack can ban whole populations. If you were disturbed in 2010, when, shortly after the diplomatic cables release, PayPal effectively defunded WikiLeaks after Amazon booted it off its servers, then you should be disturbed now. These decisions are made at obscure layers of the Internet where we have little influence. As the Internet continues to centralize, we do not want just these few oligarchs making these globally significant decisions.

Security. Previous attacks - 9/11 in particular - led to profound damage to the sense of ownership with which people regard their cities. In the UK, the early 1990s saw the ease of walking into an office building vanish, replaced by demands for identification and appointments. The same happened in New York and some other US cities after 9/11. Meanwhile, CCTV monitoring proliferated. Within a year of 9/11, the US passed the PATRIOT Act, and the UK had put in place a series of expansions to surveillance powers.

Currently, residents report that Washington, DC is filled with troops and fences. Clearly, it can't stay that way permanently. But DC is highly unlikely to return to the openness of just ten days ago. There will be profound and permanent changes, starting with decreased access to government buildings. This will be Trump's most visible legacy.

Which leads to human rights. Among the videos of insurrectionists shocked to discover that the laws do apply to them were several in which prospective airline passengers discovered they'd been placed preemptively on the controversial no-fly list. Many others who congregated at the Capitol were on a (separate) terrorism watch list. If the post-9/11 period is any guide, the fact that the security agencies failed to connect any of the dots available to them into actionable intelligence will be elided in favor of insisting that they need more surveillance powers. Just remember: eventually, those powers will be used to surveil all the wrong people.

Illustrations: net.wars, the book at the beginning.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 7, 2021

The most dangerous game

Screenshot from 2021-01-07 13-17-20.pngThe chaos is the point.

Among all the things to note about Wednesday's four-hour occupation of the US Capitol Building - the astoundingly ineffective blue line of police, the attacks on journalists, the haphazard mix of US, Trump, Confederate, and Nazi costumes and flags, the chilling in a hotel lobby - is this: no one seemed very clear about the plan. In accounts and images, once inside, some of the mob snap pictures, go oh, look! emails!, and grab mementos like dangerous and destructive tourists. Let's not glorify them and their fantasies of heroism; they are vandals, they are criminals, they are incipient felons, they are thugs. They are certainly not patriots.

One reason, of course, is that their leader, having urged them to storm the Capitol, went home to his protective Secret Service and the warmth of watching the wreckage on TV inside one of the most secure buildings on the planet. Trump is notoriously petty and vengeful against anyone who has crossed him. Why wouldn't he push the grievance-filled conspiracy theorists whose anger he harnessed for personal gain to destroy the country that dared to reject him? The festering anger that Trump's street-bully smarts (and those of his detonator, Roger Stone) correctly spotted as a political opportunity was perfectly poised for Trump's favorite chaos creation game: "Let's you and him fight".

"We love you," and "You are very special," Trump told the rioters to close out the video clip he issued to tell them to go home, as if this were a Hollywood movie and with a bit of sprinkled praise his special effects crew could cage the Kraken until he next wanted it.

The someday child studying this period in history class will marvel at our willful blindness to white violence openly fomented while applying maximum deterrence to Black Lives Matter.

Our greatest ire should be reserved for the cynically exploitative, opportunistic Trump; for supporting senators Josh Hawley (R-MO) and Ted Cruz (R-TX), whom George F. Will says will permanently wear a scarlet "S" for "seditionist"; and for Trump's many other politicians and enablers who consciously lied, a list to which Marcy Wheeler adds senator Tommy Tuberville (R-AL). It's fashionable to despise former Trump fixer-lawyer Michael Cohen, but we should listen to him; his book, Disloyal, is an addict's fourth and fifth steps (moral inventory and admitting wrongs) that unflinchingly lays bare his collaboration in Trump's bullying exploitation.

The invasion perversely hastened Biden/Harris's final anointing; Republicans dropped most challenges in the interests of Constitutional honor (read: survival). Mitch McConnell (R-KY), who as Senate Majority Leader has personally made governance impossible, sounded like a man abruptly defibrillated into sanity, and Senator Lindsey Graham's (R-SC) careening wait-for-his-laugh "That's it! I'm done!" speech led some on Twitter to surmise he was drunk. Only Hawley (R-MO), earlier seen fist-pumping the rioters-in-waiting, seemed undeterred.

High-level Trump administration members - those who can afford health insurance - are fleeing. Apparently we have finally found the line they won't cross, though it may not be the violence but the prospect of having to vote on invoking the 25th Amendment.

An under-discussed aspect of the gap between politics - Beltway or Westminster - and life as ordinary people know it is that for many politicians and media, making preposterous claims they don't really believe is a game. Playing exhibitionist contrarian for provocation is a staple of British journalism. Boris Johnson famously wrote pre-referendum columns arguing both Leave and Remain before choosing Leave for its personal opportunities. They appear to care little for the consequences, measured in covid deaths, food bank use, deportations, and shattered lives.

All these posturers score against each other from comfortable berths and comfortably assume they are beyond repercussions. It's the same dynamic as the one at work among the advocates of letting the virus rip through the population at large, as if infection is for the little people and our desperately overstressed, traumatized health care workers are replaceable parts rather than a precious resource.

Perhaps the most extraordinary aspect is that this entire thing was planned out in the open. There was no need to backdoor encryption. They had merch; Trump repeatedly tweeted his intentions; planning was on public forums. In September, the Department of Homeland Security warned that white supremacy is the "most lethal threat" to the US. On Tuesday, Bellingcat warned that a dangerous meld of numerous right-wing constituencies was setting out for DC. Talia Lavin's 2020 book, Culture Warlords, thoroughly documented the online hate growing into real-world violence.

Wednesday also saw myriad mostly peaceful statehouse protests: Texas, Utah, Michigan, California, Oregon, Arizona, Arkansas, Kansas, Wisconsin, Nevada (with a second protest in Las Vegas), Florida, and Georgia. Pause to remember Wednesday's opener: Democrats Jon Ossoff and Raphael Warnock won Georgia's Senate seats.

Trump has 12 more days. Twitter and Facebook, which CNN reporter Donie O'Sullivan calls complicit, have locked Trump's accounts; Shopify has closed his shops. The far-right forums are considering the results while the FBI makes arrests and Biden builds his administration.

The someday child will know the next part faster than we will.

Illustrations: Screenshot of Wednesday's riot in progress.


December 31, 2020

Build back

New_Years_2014_Fireworks_-_London_Eye-WM.jpgIn my lifetime there has never been a New Year that has looked so bleak. At 11pm last night, Big Ben tolled the final severance of the UK's participation in the European Union. For the last few days, as details of the newly agreed trade deal have become known, Twitter has been filling up with graphics and text explaining the new bureaucracy that will directly or indirectly affect every UK resident, and the life complications still facing the 3 million EU citizens resident in the UK and the UK expatriates in the EU. Those who have pushed for this outcome for many years will, I'm sure, rejoice, but for many of us it's a sad, sad moment and we fear the outcome.

The bright spot of the arriving vaccines is already being tarnished by what appears to be a panic response pushing to up-end the conditions under which they were granted an emergency license. Case numbers are rising out of control, and Twitter is filled with distress signals from exhausted, overwhelmed health care workers. With Brexit completed and Trump almost gone, 2021 will be a year of - we hope - renewed sanity and sober remediation, not just of the specific damage done this year but of the accrued societal and infrastructural technical debt that made everything in 2020 so much worse. It is already clear that the cost of this pandemic will be greater than all the savings ever made by cuts to public health and social welfare systems.

Still, it *is* a new year (because of human-made calendars), and because we love round numbers - defining "round" as the number of digits our hands happen to have - there's a certain amount of "that was the decade" about it. There is oddly less chatter about the twenty years since the turn of the millennium, which surprises me a bit: we've completed two-fifths of the 21st century!

Even the pre-pandemic change was phenomenal. Ten years ago - 2010 - was when smartphones really took off, pouring accelerant on Facebook, Twitter, and other social media, which were over-credited for 2011's "Arab Spring" ("useful but not sufficient", the linked report concludes). At Gikii 2019, Andres Guadamuz described this moment as "peak cyber-utopia". In fact, it was probably the second peak, the first having been circa 1999, but who's counting? Both waves of cyber-utopianism seem quaint now, in the face of pandemic-fueled social and economic disruption. We may - we do - look to social media for information, but we've remembered we need governments for public health measures, economic support, and leadership. The deliberate thinning of the institutions we now need to save us in countries like the US and UK is one legacy of the last 30 years of technology-fueled neoliberalism. Ronald Reagan, US president from 1981 to 1989, liked to say that the most frightening words in the English language were "I'm from the government and I'm here to help". Far more frightening is the reality of a government that can't, won't, or chooses not to help.

Twenty years ago - 2000 - was the year of the dot-com peak, when AOL disastrously merged with Time-Warner. The crash was well underway when 9/11 happened and ushered in 20 years of increasing surveillance: first an explosion of CCTV cameras in the physical world and, on the Internet, data retention and interception, and finally, in the last year or so, the inescapability of automated facial recognition, rolled out without debate or permission.

Despite having argued against all these technologies as they've come along, I wish I could report that investing in surveillance instead of public health had paid dividends in the Year of Our Pandemic 2020. Contact tracing apps, which we heard so much about earlier in the year, have added plenty of surveillance capabilities and requirements to our phones and lives, but appear to have played little part in reducing infection rates. Meanwhile, the pandemic is fueling the push to adopt the sort of MAGIC flowthrough travel industry execs have imagined since 2013. Airports and our desire to travel will lead the way to normalizing pervasive facial recognition, fever-scanning cameras, and, soon, proof of vaccination.

This summer, many human rights activists noted the ethical issues surrounding immunity passports. Early in the year this was easy pickings because the implementations were in China. Now, however, anyone traveling from the UK to countries like Canada and the US must be able to show a negative covid test taken within 72 hours before traveling. Demand for vaccination certificates is inevitable. Privacy International has taken the view that "Until everyone has access to an effective vaccine, any system requiring a passport for entry or service will be unfair." Being careful about this is essential, because unfairness entrenched while we rebuild will be *very* hard to dislodge.

So, two big things to work towards in 2021. The first is to ensure that new forms of unfairness do not become the new normal. The second, which will take a lot of luck, even more diligence, and a massive scientific effort, is to ensure that one item on the Mindset list of 2040's 18-year-olds will be "There has never been a pandemic."

Happy new year.

Illustrations: New year's eve fireworks in London, 2014 (via Clarence Ji).


December 25, 2020

Year out

Katalin_Kariko.jpgSometime five or 15 years from now, I imagine someone will look back and see that the seeds of some wonderful new technology were sown off-camera during this year and be surprised we never noticed. But the reality is that from March onwards the coronavirus swallowed up the news, challenged only - and only in the UK - by the awful crawl to Brexit.

Even the advance in AI - or what passes for it - represented by DeepMind's having solved protein folding only occupied the news for a day or so, then sank under the unrelenting sameness of watching the latest case numbers and getting by, a day at a time (and that was the *privileged* version of life in the pandemic). In retrospect, the overwhelming information technology trend was the culmination of years of rising awareness of the many adverse consequences of the things net.wars complains about: consolidation, centralization, and users' loss of privacy and autonomy.

The giant exception to both the general inattention and technological discontent was the collaborative scientific muscle on display in biotech, from the first rapid sequencing of the novel coronavirus's genome to the successful, cavalry-to-the-rescue arrival of the new mRNA vaccine platform that has been in the making for 20-odd years. In this case, the Internet delivered as promised, from enabling scientists to exchange preprint research and collaborate across the globe to giving individuals direct access to solid science, to providing a safe and necessary alternative to high-risk in-person action.

Three big technology stories did achieve traction:

- The new and aggressive push in the US to rein in the four biggest technology companies. Forty-six states, plus Guam and Washington, DC, and the Federal Trade Commission have filed antitrust suits against Facebook, which elsewhere is being described as a Doomsday Machine that may wipe out the planet. Ten states and the Department of Justice have filed suits against Google. Amazon, already subject to antitrust action in the EU, surely won't be far behind. Apple, the last of the four whom Congress summoned last summer, won't escape even if it's never sued directly, because the Google suit targets the $8 to $12 billion Google pays Apple every year to make its search engine the default.

- The discovery that Russia has mounted a long and successful cyber attack on US federal agencies, with slowly-emerging ramifications for countries and companies all over the world.

- The speed with which both governments and industry jumped on surveillance technologies in response to the health crisis. Some of it is not bad. Wastewater epidemiology, a polite term for surveilling sewage for early warnings of virus outbreaks, isn't personal and is a longstanding public health technique, although one can conceive of unfair and intrusive implementations. Many other technologies - immunity passports, fever scanning, and contact tracing apps most obviously, but also automated facial recognition - have yet to fully take hold, but it seems likely that despite warnings about unfairness and intrusion they will be too tempting for governments to resist in the name of safety, particularly for travel. All of this will be hard to dislodge later. The UK in particular has ignored expert advice to take advantage of the person-centuries of contact tracing experience in local authorities, instead paying billions to cronies and companies like Serco. Palantir in particular appears to be embedding itself for the longer term.

Everything else is dithering.

Prominent among the dithering is Section 230 of the Communications Decency Act, which Jeff Kosseff, the law's biographer, has explained all year on Twitter. Every content moderation discontent is being blamed on this short law limiting intermediary liability. With the antitrust suits pending and so many other crises - and with repeal-happy Donald Trump's departure from power - it's hard to believe that this law will change in 2021.

In the UK, the last-second Brexit deal leaves data protection and the online harms legislation lurking in wait.

The big lessons of this tortured year:

- Basic research can pay off in unexpected ways. As Charles Arthur has noted, the speed of the novel coronavirus's genetic sequencing was a result of the Human Genome Project, whose value at the time was purely speculative. The carrot was personalized medicine, which, with a few exceptions, has yet to fulfill its imagined promise. DNA sequencing did, however, spawn an industry of genealogical sites and services promising to use DNA for everything from finding your soul mate to predicting your medical future; I'm not a fan of either, for both privacy and scientific validity reasons. But that blue-sky project is now saving both our individual lives and our civilization.

- It really is, as Bruce Schneier writes, long past time to stop imagining that "we" "good guys" deserve exceptional access to the rest of the world's computers. It. Does. Not. Work. As I keep writing, a hole is a hole. Neither the coronavirus nor the hole cares about race, wealth, class, or perceived virtue. This applies as much to the long-running battle over requiring backdoors in encryption as to a nation's broader cybersecurity. Politicians and PR people take the view that the best defense is a good offense; in this case, the best offense is a good defense.

Merry Christmas. Only one more week before 2021.

Illustrations: Katalin Karikó, the Hungarian biochemist behind the mRNA vaccines.


December 11, 2020

Facebook in review

parliament-whereszuck.jpgLed by New York attorney general Letitia James, this week 46 US states, plus Guam and Washington, DC, and, separately, the Federal Trade Commission filed suits against Facebook alleging that it has maintained an illegal monopoly while simultaneously reducing privacy protections and services to boost its bottom line. The four missing states: Alabama, Georgia, South Carolina, and South Dakota.

As they say, we've had this date from the beginning.

It's seemed likely for months that legal action against Facebook was on the way. There were the we-mean-business Congressional hearings and the subsequent committee report, followed by the suit against Google the Department of Justice filed in October.

Facebook seems peculiarly deserving. It began in 2004 as a Harvard-only network, using its snob appeal to expand to the other Ivy League schools, then thousands of universities and high schools, and finally the general public. Mass market adoption grew in tandem with the post-2009 explosion of smartphones. By then, Facebook had frequently tweaked its privacy settings and repeatedly annoyed users with new privacy-invasive features in the arrogant (and sadly correct) belief they'd never leave. By 2010, Zuckerberg was claiming that "privacy is no longer a social norm", adding that were he starting then he would make everything public by default, like Twitter.

It's hard to pick Facebook's creepiest moments out of so many, but here are a few: in 2011 it began auto-recognizing user photographs, in 2012 it dallied with in-network "democracy" - a forerunner of today's unsatisfactory oversight board - and in 2014 it tested emotionally manipulating its users.

In 2011 - based on the rise and fall of earlier services like CompuServe, AOL, Geocities, LiveJournal, and MySpace, you can practically carbon-date people by their choice of social media - some of us wrongly surmised that perhaps Facebook had peaked. "The [online] party keeps moving" is certainly true; what was different was that Zuckerberg knew it and launched his program of aggressive and defensive acquisitions.

The 2012 $1 billion acquisition of Instagram and 2014 $19 billion purchase of WhatsApp are the heart of the suits. The lawsuits suggest that without Facebook's intervention we'd have social media successfully competing on privacy. In his summary, Matt Stoller credits this idea to Dina Srinivasan, who argued in 2019 that Facebook saw off then-dominant MySpace by presenting itself as "privacy-centered" at a time when the press was claiming that MySpace's openness made it unsafe for children. Once in pole position, Facebook began gradually pushing greater openness on its users - bait and switch, I called it in 2010.

I'm less convinced that MySpace's continued existence could have curbed Facebook's privacy invasion. In 2004, the year of Facebook's birth, Australian privacy activist Roger Clarke surveyed the earliest social networks - chiefly Plaxo - and predicted that all social networks would inevitably exploit their users. "The only logical business model is the value of consumers' data," he told me for the Independent (TXT). I think, therefore, that the privacy-destructive race to the bottom-of-the-business-model was inevitable given the US's regulatory desert. Google began heading that way soon after its 2004 IPO; by 2006 privacy advocates were already warning of its danger.

Srinivasan details Facebook's progressive privacy invasion: the cooption of millions of third parties via logins and the Like button to propagandize its service and to collect and leverage vast amounts of personal data, while it became a vector for the unscrupulous to hack elections. This is all without considering non-US issues such as Free Basics, which has made Facebook effectively the only Internet service in parts of the world. Facebook also had Silicon Valley's venture capital ethos at its back, as well as a share structure that awards Zuckerberg full and permanent control.

In a useful paper on nascent competitors, Tim Wu and C. Scott Hemphill discuss how to spot anticompetitive acquisitions. As I recall, though, many - notably the ever-prescient Jeff Chester - protested the WhatsApp and Instagram acquisitions at the time; the EU only agreed because Facebook promised not to merge the user databases, and issued a €110 million fine when it realized the company lied. Last year Facebook announced it would merge the databases, which critics saw as a preemptive move to block a potential breakup. Allowing the mergers to go ahead seems less dumb, however, if you remember that it took until 2017 and Lina Khan to realize that the era of two guys in a garage up-ending entrenched monopolists was over.

The suits ask the court to find Facebook guilty under Section 2 of the Sherman Act (which is a felony) and Section 7 of the Clayton Act, block it from making further acquisitions valued at $10 million or above, and require it to divest or restructure illegally acquired companies or current Facebook assets or business lines. Restoring some competition to the Internet ecosystem in general and social media in particular seems within reach of this action - though there are many other cases that also need attention. It won't be enough to fix the damage to democracy and privacy, but perhaps the change in attitude it represents will ensure the next Facebook doesn't become a monster.

Illustrations: Mark Zuckerberg's empty chair at last year's Grand Committee hearing.


November 27, 2020

Data protection in review

Thumbnail image for 2015_Max_Schrems_(17227117226).jpg"A tax on small businesses," a disgusted techie called data protection, circa 1993. The Data Protection Directive became EU law in 1995, and came into force in the UK in 1998.

The narrow data protection story of the last 25 years, like that of copyright, falls into three parts: legislation, government bypasses to facilitate trade, and enforcement. The broader story, however, includes a power struggle between citizens and both public and private sector organizations; a brewing trade war; and the difficulty of balancing conflicting human rights.

Like free software licenses, data protection laws seed themselves across the world by requiring forward compliance. Adopting this approach therefore set the EU on a collision course with the US, where the data-driven economy was already taking shape.

Ironically, privacy law began in the US, with the Fair Credit Reporting Act (1970), which gives Americans the right to view and correct the credit files that determine their life prospects. It was joined by the Privacy Act (1974), which covers personally identifiable information held by federal agencies, and the Electronic Communications Privacy Act (1986), which restricts government wiretaps on transmitted and stored electronic data. Finally, the 1996 Health Insurance Portability and Accountability Act protects health data (with now-exploding exceptions). In other words, the US's consumer protection-based approach leaves huge unregulated swathes of the economy. The EU's approach, by contrast, grew out of the clear historical harms of the Nazis' use of IBM's tabulation software and the Stasi's endemic spying on the population, and regulates data use regardless of sector or actor, minus a few exceptions for member state national security and airline passenger data. Little surprise that the results are not compatible.

In 1999, writing for Scientific American (TXT), Simon Davies saw this as impossible to solve: "They still think that because they're American they can cut a deal, even though they've been told by every privacy commissioner in Europe that Safe Harbor is inadequate...They fail to understand that what has happened in Europe is a legal, constitutional thing, and they can no more cut a deal with the Europeans than the Europeans can cut a deal with your First Amendment." In 2000, he looked wrong: the compromise Safe Harbor agreement enabled EU-US data flows.

In 2008, the EU began discussing an update to encompass the vastly changed data ecosystem brought by Facebook, YouTube, and Twitter, the smartphone explosion, new types of personally identifiable information, and the rise and fall of what Andres Guadamuz last year called "peak cyber-utopianism". By early 2013, it appeared that reforms might weaken the law, not strengthen it. Then came Snowden, whose revelations reanimated privacy protection. In 2016, the upgraded General Data Protection Regulation was passed despite a massive opposing lobbying operation. It came into force in 2018, but even now many US sites still block European visitors rather than adapt, because "you are very important to us".

Everyone might have been able to go on pretending the fundamental incompatibility didn't exist but for two things. The first is the 2014 European Court of Justice decision requiring Google to honor "right to be forgotten" requests (aka Costeja). Americans still see Costeja as a terrible abrogation of free speech; Europeans more often see it as a balance between conflicting rights and a curb on the power of large multinational companies to determine your life.

The second is Austrian lawyer Max Schrems. While still a student, Schrems saw that Snowden's revelations utterly up-ended the Safe Harbor agreement. He filed a legal case - and won it, in 2015, just as GDPR was being finalized. The EU and US promptly negotiated a replacement, Privacy Shield. Schrems challenged again. And won again, this year. "There must be no Schrems III!", EU politicians said in September. In other words: some framework must be found to facilitate transfers that passes muster within the law. The US's approach appears to be trying to get data protection and localization laws barred via trade agreements despite domestic opposition. One of the Trump administration's first acts was to require federal agencies to exempt foreigners from Privacy Act protections.

No country is more affected by this than the UK, which as a new non-member can't trade without an adequacy decision and no longer gets the member-state exception for its surveillance regime. This dangerous high-wire moment for the UK traps it in that EU-US gap.

Last year, I started hearing complaints that "GDPR has failed". The problem, in fact, is enforcement. Schrems took action because the Irish Data Protection Regulator, in pole position because companies like Facebook have sited their European headquarters there, was failing to act. The UK's Information Commissioner's Office was under-resourced from the beginning. This month, the Open Rights Group sued the ICO to force it to act on the systemic breaches of the GDPR it acknowledged in a June 2019 report (PDF) on adtech.

Equally problematic are the emerging limitations of GDPR and consent, which are entirely unsuited to protecting privacy in the onrushing "smart" world in which you are at the mercy of others' Internet of Things. The new masses of data that our cities and infrastructure will generate will need a new approach.

Illustrations: Max Schrems in 2015.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 20, 2020

Open access in review

Edward_Jenner._Oil_painting._Wellcome_V0023503.jpgLast week's review of 30 years of writing about the Internet and copyright focused on rightsholders' efforts to protect a business model developed for physical media and geographical restrictions in the face of new, global, digital media. Of the counter-efforts, mainstream attention has focused on the illegal ones; I squeezed in links to most of my past writing on "pirate" sites, although I missed pieces on The Pirate Bay, BitTorrent, and new business models. I also missed discussing large-scale appropriation by companies that are apparently too big to sue, such as Google Books and the more recent fuss over the Internet Archive's Controlled Digital Lending and National Emergency Library.

More interesting, however, are the new modes of access the Internet clearly could open up to niche material and frustrated artists, creators, and collaborators. At the MIT Media Lab's 1994 open day (TXT), a remarkable collection of Hollywood producers and creative artists predicted that the Internet would unlock a flood of (American) creativity that previously had no outlet (although Penn Jillette doubted the appeal of interactive storytelling).

Lots of this has actually happened. Writers have developed mainstream audiences through self-publishing; web-based publishing enabled generations of cartoonists; and YouTube and TikTok offer options that would never fit into a TV schedule. Mass collaboration has also flourished: Wikipedia, much despised in some quarters 15 years ago, has ripened into an invaluable resource (despite its flaws that need fixing), as has OpenStreetMap, which was outed this week as a crucial piece of infrastructure for Facebook, Apple, Amazon, and Microsoft.

Developing new forms of copyright law has been a critical element in all this, beginning with the idea of copyleft, first used in 1976 and fleshed out in more detail by Richard Stallman in 1985. Traditionally, either you copyrighted the work and claimed all rights or you put the work into the public domain for everyone to use for free, as the satirist Tom Lehrer has recently done.

Stallman, however, wanted to ensure that corporate interests couldn't appropriate the work of volunteers, and realized that he could write a copyright license that dictates those terms, paving the way for today's open source community. In 2001, Lawrence Lessig, Hal Abelson, and Eric Eldred founded Creative Commons to make it easy for people posting new material to the web to specify whether and how others can use it. It's easy to forget now how big an undertaking it was to create licenses that comply with so many legal systems. I would argue that it's this, rather than digital rights management, that has enabled widespread Internet creative publishing.

The third piece of this story has played a crucial role in this pandemic year of A.D. 2020. In the halls of a mid-1990s Amsterdam conference on copyright, a guy named Christopher Zielinski made this pitch: a serious problem was brewing around early paywall experiments. How were people in poorer countries going to gain access to essential scientific and medical information? He had worked for the WHO, I think; in a later email I remember a phrase about information moving through disadvantaged countries in "armored trucks".

Zielinski was prescient. In 2015, the Ebola virus killed 10,000 people in Liberia, Sierra Leone, and Guinea, in part because received wisdom held that Ebola was not present in West Africa, slowing the initial response. It was only later that three members of a team drafting Liberia's Ebola recovery plan discovered that scientific researchers had written articles establishing its presence as long ago as 1982. None of the papers were co-written with Liberian scientists, and they were published in European journals, which African researchers cannot afford. In this case, as writers Bernice Dahn, Vera Mussah, and Cameron Nutt laid out, closed access cost lives: "Equity must be an indispensable goal in protecting from threats like Ebola, and in the quality of care delivered when prevention fails."

Meanwhile, as early as 1991 others saw the potential of using the Internet to speed up scientific publishing and peer review, leading Paul Ginsparg to respond by creating the arXiv repository to share preprints of physics journal articles. Numerous similar repositories for other fields followed. In 2003, leading research, scientific, and cultural institutions created and signed the Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities laying out steps to promote the Internet as a medium for disseminating global knowledge. By 2006, the six-year-old Public Library of Science had set up PLOS ONE, the first peer-reviewed open access scientific journal for primary research in science and medicine.

While there are certainly issues to be solved, such as the proliferation of fake journals, improving peer review, and countering enduring prejudice that ties promotions and prestige to traditional proprietary journals, open access continues to grow. Those who believe that the Internet is going to destroy science are likely to be wrong, and publishers who don't plan for this future are likely to crater.

The global distribution accessible to artists and creators is valuable, but openness is critical to the scientific method of building knowledge. The open approach has been critical during the pandemic. As vaccine candidates prepare for takeoff, we can thank the Internet and the open access movement that it's taken a year, not decades.

Illustrations: Edward Jenner, who created the first vaccine, for smallpox (from the Wellcome images collection, via Wikimedia).


October 2, 2020

Searching for context

skyler-gundason-social-dilemma.pngIt's meant, I think, to be a horror movie. Unfortunately, Jeff Orlowski's The Social Dilemma comes across as too impressed with itself to scare as thoroughly as it would like.

The plot, such as it is: a group of Silicon Valley techies who have worked on Google, Facebook, Instagram, Palm (!), and so on present mea culpas. "I was co-inventor...of the Like button," Tristan Harris says by way of introduction. It seems such a small thing to include. I'm sure it wasn't that easy, but Slashdot was upvoting messages when Mark Zuckerberg was 14. The techies' thoughts are interspersed with those of outside critics. Intermittently, the film inserts illustrative scenarios using actors, a technique better handled in The Big Short. In these, Vincent Kartheiser plays a multiplicity of evil algorithmic masterminds doing their best to exploit their target, a fictional teenage boy (Skyler Gisondo) who has accepted the challenge of giving up his phone for a week with the predictable results of an addiction film. As he becomes paler and sweatier, you expect him to crash out in a grotty public toilet, like Julia Ormond's character in Traffik. Instead, he face-plants when the police arrest him at Charlottesville.

The first half of the movie is predominantly a compilation of favorite social media nightmares: teens are increasingly suffering from depression and other mental health issues; phone addiction is a serious problem; we are losing human connection; and so on. As so often, causality is unclear. The fact that these Silicon Valley types consciously sought to build addictive personal tracking and data crunching systems and change the world does not automatically tie every social problem to their products.

I say this because so much of this has a long history the movie needs for context. The too-much-screen-time of my childhood was TV, though my (older) parents worried far more about the intelligence-drainage perpetrated by comic books. Girls who now seek cosmetic surgery in order to look more like filter-enhanced Instagram images were preceded by girls who starved themselves to look like air-brushed, perfect models in teen magazines. Today's depressed girls could have been those profiled in Mary Pipher's 1994 Reviving Ophelia, and she, too, had forerunners. Claims about Internet addiction go back more than 20 years, and until very recently were focused on gaming. Finally, though data does show that teens are going out less, are less interested in learning to drive, and are having less sex and using fewer drugs, is social media the cause or the compensation for a coincidental overall loss of physical freedom? Even pre-covid they were growing up into a precarious job market and a badly damaged planet; depression might just be the sane response.

In the second half the film moves on to consider social media divisions as an assault on democracy. Here, it's on firmer ground, but really only because the much better film The Great Hack has already exposed how Facebook (in particular) was used to spark violence and sway elections even before 2016. And then it wraps up: people are trapped, the companies have no incentive to change, and (says Jaron Lanier) the planet will die. As solutions, the film's many spokespeople suggest familiar ideas: regulation, taxation, withdrawal. Shoshana Zuboff is the most radical: outlaw them. (Please don't take Twitter! I learn so much from Twitter!)

"We are allowing technologists to frame this as a problem that they are equipped to solve," says data scientist Cathy O'Neil. "That's a lie." She goes on to say that AI can't distinguish truth. Even if it could, truth is not part of the owners' business model.

Fair enough, but remove Facebook and YouTube, and you still have Fox News, OANN, and the Daily Mail inciting anger and division with expertise honed over a century of journalistic training - and amoral world leaders. This week, a study from Cornell University found that Donald Trump is implicated in 38% of the coronavirus misinformation circulating in online and traditional media. Knock out a few social media sites...and that still won't change because his pulpit is too powerful.

Most of the film's speakers eventually close by recommending we delete our social media accounts. It seems a weak response, in part because the movie does a poor job of disentangling the dangers of algorithmic manipulation from the myriad different reasons why people use phones and social media: they listen to music, watch TV, connect with their friends, play games, take pictures, and navigate unfamiliar locations. It's absurd to ask them to give that up without suggesting alternatives for fulfilling those functions.

A better answer may be that offered this week by the 25-odd experts who have formed an independent Facebook oversight board (the actual oversight board Facebook announced months ago is still being set up and won't begin meeting until after the US presidential election). The expertise assembled is truly impressive, and I hope that, like the Independent SAGE group of scientists who have been pressuring the UK government into doing a better job on coronavirus, they will have a mind-focusing effect on our Facebook overlords, perhaps later to be copied for other sites. The problem - an aspect also omitted from The Social Dilemma - is that under the company's shareholder structure Zuckerberg is under no requirement to listen.

Illustrations: Skyler Gisondo as Ben, in The Social Dilemma.


September 11, 2020


sfo-fires-hasbrouck.jpegA new complaint surfaced on Twitter this week. Anthony Ryan may have captured it best: "In San Francisco everyone is trying unsuccessfully to capture the hellish pall that we're waking up to this morning but our phone cameras desperately want everything to be normal." california-fires-sffdpio.jpegIn other words: as in these pictures, the wildfires have turned the Bay Area sky dark orange ("like dusk on Mars," says one friend), but people attempting to capture it on their phone cameras are finding that the automated white balance correction algorithms recalibrate the color to wash out the orange in favor of grey daylight.
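Why the cameras fight back: many auto white balance algorithms lean on the "gray-world" assumption that a scene's average color is neutral, and rescale the color channels until it is. A uniformly orange sky violates that assumption, so the correction dutifully "fixes" the orange away. A minimal sketch (the function name and sample values here are invented for illustration; real phone pipelines are far more sophisticated and proprietary):

```python
import numpy as np

def gray_world_balance(image: np.ndarray) -> np.ndarray:
    """Gray-world white balance: scale each RGB channel so its mean
    matches the overall mean, pulling the scene toward neutral gray.
    image: float array of shape (H, W, 3), values in [0, 1]."""
    channel_means = image.reshape(-1, 3).mean(axis=0)
    gray = channel_means.mean()
    balanced = image * (gray / channel_means)
    return np.clip(balanced, 0.0, 1.0)

# A uniformly orange "sky": heavy red and green, very little blue.
orange_sky = np.full((4, 4, 3), [0.9, 0.5, 0.1])
corrected = gray_world_balance(orange_sky)
# Every pixel is rescaled toward the same neutral gray,
# washing out the orange cast - exactly the photographers' complaint.
```

Run on that frame, the correction drives all three channels to the same gray value: the algorithm cannot tell an orange color cast from a genuinely orange sky.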

At least that's something the computer is actually doing, even if it's counter-productive. Also this week, the Guardian ran an editorial that it boasted had been "entirely" written by OpenAI's language generator, GPT-3. Here's what they mean by "written" and "entirely": the AI was given a word length, a theme, and the introduction, from which it produced eight unique essays, which the Guardian editors chopped up and pieced together into a single essay, which they then edited in the usual way, cutting lines and rearranging paragraphs as they saw fit. Trust me, human writers don't get to submit eight versions of anything; we'd be fired when the first one failed. But even if we did, editing, as any professional writer will tell you, is the most important part of writing anything. As I commented on Twitter, the whole thing sounds like a celebrity airily claiming she's written her new book herself, with "just some help with the organizing". I'd advise that celebrity (name withheld) to have a fire extinguisher ready for when her ghostwriter reads that and thinks of all the weeks they spent desperately rearranging giant piles of rambling tape transcripts into a (hopefully) compelling story.

The Twitter discussion of this little foray into "AI" briefly touched on copyright. It seems to me hard to argue that the AI is the author given the editors' recombination of its eight separately-generated pieces (which likely took longer than if one of them had simply written the piece). Perhaps you could say - if you're willing to overlook the humans who created, coded, and trained the AI - that the AI is the author of the eight pieces that became raw material for the essay. As things are, however, it seems clear that the Guardian is the copyright owner, just as it would be if the piece had been wholly staff-written (by humans).

Meanwhile, the fallout from Max Schrems' latest win continues to develop. The Irish Data Protection Authority has already issued a preliminary order to suspend data transfers to the US; Facebook is appealing. The Swiss data protection authority has issued a notice that the Swiss-US Privacy Shield is also void. During a September 3 hearing before the European Parliament Committee on Civil Liberties, Justice, and Home Affairs, MEP Sophie in't Veld said that by bringing the issue to the courts Schrems is doing the job data protection authorities should be doing themselves. All agreed that a workable - but this time "Schrems-proof" - solution must be found to the fundamental problem, which Gwendolyn Delbos-Corfield summed up as "how to make trade with a country that has decided to put mass surveillance as a rule in part of its business world". In't Veld appeared to sum up the entire group's feelings when she said, "There must be no Schrems III."

Of course we all knew that the UK was going to get caught in the middle between being able to trade with the EU, which requires a compatible data protection regime (either the continuation of the EU's GDPR or a regime that is ruled equal), and the US, which wants data to be free-flowing and which has been trying to use trade agreements to undermine the spread of data protection laws around the world (latest newcomer: Brazil). What I hadn't quite focused on (although it's been known for a while) is that, just like the US surveillance system, the UK's own surveillance regime could disqualify it from the adequacy ruling it needs to allow data to go on flowing. When the UK was an EU member state, this didn't arise as an issue because EU data protection law permits member states to claim exceptions for national security. Now that the UK is out, that exception no longer applies. It was a perk of being in the club.

Finally, the US Senate, not content with blocking literally hundreds of bills passed by the House of Representatives over the last few years, has followed up July's antitrust hearings with the GAFA CEOs by producing a bill that's apparently intended to answer Republican complaints that conservative voices are being silenced on social media. This is, as Eric Goldman points out in disgust, one of several dozen bits of legislation intended to modify various pieces of S230 or scrap it altogether. On Twitter, Tarleton Gillespie analyzes the silliness of this latest entrant into the fray. While modifying S230 is probably not the way to go about it, right now curbing online misinformation seems like a necessary move - especially since Facebook CEO Mark Zuckerberg has stated outright that Facebook won't remove anti-vaccine posts. Even in a pandemic.

Illustrations: The San Francisco sky on Wednesday ("full sun, no clouds, only smoke"), by Edward Hasbrouck; accurate color comparison from the San Francisco Fire Department.


September 4, 2020

The Internet as we know it

Internet_map_1024-2005-the Opte Project.jpgIt's another of those moments when people to whom the Internet is still a distinctive and beloved medium fret that it's about to be violently changed into everything they were glad it wasn't when it began. That this group is a minority is in itself a sign. Circa 1994, almost every Internet user was its defender. Today, for most people, the Internet just *is* and ever has been - until someone comes along and wants to delete their favorite service.

Fears of splintering the Internet are as old as the network itself. Different people have focused on different mechanisms: TV and radio-style corporate takeover (see for example Robert McChesney's work); incompatible censorship and data protection regimes; technical incompatibilities born of corporate overreach; and so on. In 2013, five seemed significant: copyright, localizing data storage (data protection), censorship, losing network neutrality, and splitting the addressing system.

Then, the biggest threats appeared to be structural censorship and losing network neutrality. Both are still growing. In 2019, Access Now says, 213 Internet shutdowns in 33 countries collectively disrupted 1,706 days of Internet access. No one imagined this in the 1990s, when all countries vied to reap the benefits of getting their citizens online. More conceivable were government regulation, shifting technological standards, corporate ownership, copyright laws, and unequal access...but we never expected the impact of the eventual convergence with the mobile world, a clash of cultures that got serious after 2010, when social media and smartphones began mutually supercharging.

A couple of weeks ago, James Ball introduced a new threat, writing disapprovingly about US president Donald Trump's executive order declaring the video-sharing app TikTok a national emergency. Ball rightly calls this ban "generational vandalism", but then writes that banning an app solely because of the nationality of its owner "could be an existential threat to the Internet as we know it".

If that's true, then the Internet is already not "the Internet as we know it". So much depends on when your ideas of "the Internet" were formed and where you live. As Ball himself acknowledges in his new book, The System: Who Owns the Internet and How It Owns Us, in some countries Facebook is synonymous with the Internet because of the zero-rating deals the company has struck with mobile phone operators. In China, "the Internet", contrary to what most people believed was possible in the 1990s, is a giant, firewalled, nationally controlled space. TikTok, as primarily a mobile phone app, lives in a highly curated "the Internet" of app stores. Finally, even though "the Internet" in the 1990s sense is still with us in that people can still build their new ideas, most people's "the Internet" is now confined to the same few sites that exercise extraordinary control over what is read, seen, and heard.

The Australian Competition and Consumer Commission's new draft News Media Bargaining Code provides an example. It requires Google and Facebook (and, eventually, others) to negotiate in good faith to pay news media companies for use of their content when users share links and snippets. Unlike Spain's previous similar attempt, Google can't escape by shutting down its news service because it also serves up news through its search engine and YouTube. Facebook has said it will block Australian users from sharing local or international news on Facebook and Instagram if the code becomes mandatory. But, as Alex Hern writes, the problem is that "One of the big ways that Facebook and Google have been bad for the news industry has been by becoming indispensable to the news industry". Australia can push this code into force, but when it does Google won't pay publishers *and* publishers will lose most of their traffic, exactly as happened in Spain and Germany. But misinformation will flourish.

This is still an upper network layer problem, albeit simplified by corporate monopoly. On the 1995-2010 web, there would be too many site owners to contend with, just as banning apps (see also India) is vastly simplified by needing to negotiate with just two app store owners. Censoring the open Internet required China to build a national firewall and hire maintainers while millions of new sites and services arrived every day. When they started, no one believed it could even be done.

The mobile world is not and never has been "the Internet as we know it", built to facilitate openness for scientists. Telephone companies have always been happiest with controlled systems and walled gardens, and before 2006, manufacturers like Nokia, Motorola, and Psion had to tailor their offerings to telco specifications. The iPhone didn't just change the design and capabilities of the slab in your hand; it also changed the makeup and power structures of the industry as profoundly as the PC had changed computing before it.

But these are still upper layers. Far more alarming, as Milton Mueller writes at the Internet Governance Project, is Trump's policy of excluding Chinese businesses from Internet infrastructure - and China's ideas for "new IP". This is a crucial threat to the interoperable bedrock of "the network of all networks". As the Internet Society explains, it is that cooperative architecture "with no central authority" that made the Internet so successful. This is the first principle that built the Internet as we know it.

Illustrations: Map of the Internet circa 2005 (via The Opte Project at Wikimedia Commons).


July 24, 2020

The invisible Internet

1964 world's fair-RCA_Pavilion-Doug Coldwell.jpgThe final session of this week's US Internet Governance Forum asked this question: what do you think Internet governance will look like five, ten, and 25 years from now?

Danny Weitzner, who was assigned 25 years, started out by looking back 25 years to 1995, and noted that by and large we have the same networks, and he therefore thinks we will have largely the same networks in 2045. He might have - but didn't - point out how many of the US-IGF topics were the same ones we were discussing in 1995: encryption and law enforcement access, control of online content, privacy, and cyber security. The encryption panel was particularly nostalgic; it actually featured three of the same speakers I recall from the mid-1990s on the same topic. The online content one owed its entertainment value to the presence of one of the original authors of Section 230, the liability shield written into the 1996 Communications Decency Act. There were newcomers: 5G; AI, machine learning, and big data; and some things to do with the impact of the pandemic.

As Laura DeNardis then said, looking back to the past helps when thinking about the future, if only to understand how much change can happen in that time. Through that lens, although the Internet has changed enormously in 25 years in many ways the *debates* and *issues* have barely altered - they're just reframed. But here's your historical reality: 25 years ago we were reading Usenet newsgroups to find interesting websites and deploring the sight of the first online ads.

This is a game anyone can play, and so we will. We will try to avoid seeing the November US presidential election as a hinge.

The big change of the last ten years is the transformation of every Internet debate into a debate about a few huge companies, none of which were players in the mid-1990s. The rise of the mobile Internet was predicted by 2000, but it wasn't until 2007 and the arrival of the iPhone that it became a mass-market reality and began the merger of the physical and online worlds, followed by machine learning and AI as the next big wave. Now, as DeNardis correctly said, we're beginning to see the Internet moving into the biological world. She predicted, therefore, that the Internet will be both very small (the biological cellular level) and very large (Vint Cerf's galactic Internet). "The Internet will have to move out of communications issues and into environmental policy, consumer safety, and health," she said. Meanwhile, Danny Weitzner suggested that data scientists will become the new priests - almost certainly true, because if we do nothing to rein in technology they will be the people whose algorithms determine how decisions are made.

But will we really take no control? The present trend is toward three computing power blocs: China, the United States, and the EU. Chinese companies are beginning to move into the West, either by operating (such as TikTok, which US president Donald Trump has mooted banning) or by using their financial clout to push Westerners to conform to their values. The EU is only 28 years old (dating from the Maastricht Treaty), but in that time has emerged as the only power willing to punish US companies by making them pay taxes, respect privacy law, or accept limits on acquisitions. Will it be as willing to take on Chinese companies if they become equally dominant in the West and equally willing to violate the fundamental rights enshrined in data protection law?

In his 1998 book, The Invisible Computer, usability pioneer Donald Norman predicted that computers would become invisible, embedded inside all sorts of devices, like electric motors before them. Yesterday, Brenda Leong made a similar prediction by asking the AI session how we will think about robots when they've become indistinguishable. Her analogy: the Internet itself, which in the 1990s was something you had to "go to" by dialing up and waiting for modems to connect, but somewhere around 2010 began to simply be wherever you go, there you are.

So my prediction for 25 years from now is that there will effectively be no such thing as today's "Internet governance"; it will have disappeared into every other type of governance, though engineering and standards bodies will still work to ensure that the technical underpinnings remain robust and reliable. I'd like to think that increasingly technical standards will be dominated by climate change, so that emerging technologies that, like cryptocurrencies, use more energy than entire countries, will be sent back to the drawing board because someone will do the math at the design stage.

Today's debates will merge with their offline counterparts, just as data protection law no longer differentiates between paper-based and electronic data. As the biological implants DeNardis mentioned - and Andrea Matwyshyn has been writing about since 2016 - come into widespread use, they will be regulated as health care. We will regulate Internet *companies*, but regulating Facebook (in Western countries) is not governing the Internet.

Many conflicts will persist. Matwyshyn's Internet of Bodies is the perfect example, as copyright laws written for the entertainment industry are invoked by medical device manufacturers. A final prediction, therefore: net.wars is unlikely to run out of subjects in my lifetime.

Illustrations: A piece of the future as seen at the 1964 New York World's Fair (by Doug Coldwell).


May 29, 2020


sbisson-parrot-49487515926_0c97364f80_o.jpgAnyone who's ever run an online forum has at some point grappled with a prolific poster who deliberately spreads division, takes over every thread of conversation, and aims for outraged attention. When your forum is a few hundred people, one alcohol-soaked obsessive bent on suggesting that anyone arguing with him should have their shoes filled with cement before being dropped into the nearest river is enormously disruptive, but the decision you make about whether to ban, admonish, or delete their postings matters only to you and your forum members. When you are a public company, your forum is several hundred million people, and the poster is a world leader...oy.

Some US Democrats have been calling Donald Trump's outrage this week over having two tweets labeled with a fact-check an attempt to distract us all from the terrible death toll of the pandemic under his watch. While this may be true, it's also true that the tweets Trump is so fiercely defending form part of a sustained effort to spread misinformation that effectively acts as voter suppression for the upcoming November election. In the 12 hours since I wrote this column, Trump has signed an Executive Order to "prevent online censorship", and Twitter has hidden, for "glorifying violence", Trump tweets suggesting shooting protesters in Minneapolis. It's clear this situation will escalate over the coming week. Twitter has a difficult balance to maintain: it's important not to hide the US president's thoughts from the public, but it's equally important to hold the US president to the same standards that apply to everyone else. Of course he feels unfairly picked on.

Rewind to Tuesday. Twitter applied its recently-updated rules regarding election integrity by marking two of Donald Trump's tweets. The tweets claimed that conducting the November presidential election via postal ballots would inevitably mean electoral fraud. Trump, who moved his legal residence to Florida last year, voted by mail in the last election. So did I. Twitter added a small, blue line to the bottom of each tweet: "! Get the facts about mail-in ballots". The link leads to numerous articles debunking Trump's claim. At OneZero, Will Oremus explains Twitter's decision making process. By Wednesday, Trump was threatening to "shut them down" and sign an Executive Order on Thursday.

Thursday morning, a leaked draft of the proposed executive order had surfaced, and Daphne Keller had color-coded it to show which bits matter. In a fact-check for Vox of what power Trump actually has, Shirin Ghaffary quotes a tweet from Laurence Tribe, who calls Trump's threat "legally illiterate". Unlike Facebook, Twitter doesn't accept political ads that Trump can threaten to withdraw, and unlike Facebook and Google, Twitter is too small for an antitrust action. Plus, Trump is addicted to it. At the Washington Post, Tribe adds that Trump himself *is* violating the First Amendment by continuing to block people who criticize his views, a direct violation of a 2019 court order.

What Trump *can* do - and what he appears to intend to do - is push the FTC and Congress to tinker with Section 230 of the Communications Decency Act (1996), which protects online platforms from liability for third-party postings spreading lies and defamation. S230 is widely credited with having helped create the giant Internet businesses we have today; without liability protection, it's generally believed that everything from web comment boards to big social media platforms would become non-viable.

On Twitter, US Senator Ron Wyden (D-OR), one of S230's authors, explains what the law does and does not do. At the New York Times, Peter Baker and Daisuke Wakabayashi argue, I think correctly, that the person a Trump move to weaken S230 will hurt most is...Trump himself. Last month, the Washington Post put the count of Trump's "false or misleading claims" while in office at 18,000 - and the rate has grown over time. Probably most of them have been published on Twitter.

As the lawyer Carrie A. Goldberg points out on Twitter, there are two very different sets of issues surrounding S230. The victims she represents cannot sue the platforms where they met the serial rapists who preyed on them, or that continue to tolerate the revenge porn their exes have posted. Compare that very real damage to the victimhood conservatives are claiming: that the social media platforms are biased against them and disproportionately censor their posts. Goldberg wants access to justice for the victims she represents, who are genuinely harmed, and warns against altering S230 for purposes such as "to protect the right to spread misinformation and conspiracy theory".

However, while Goldberg's focus on her own clients is understandable, Trump's desire to tweet unimpeded about mail-in ballots or shooting protesters is not trivial. We are going to need to separate the issue of how and whether S230 should be updated from Trump's personal behavior and his clearly escalating war with the social medium that helped raise him from joke to viable presidential candidate. The S230 question and how it's handled in Congress is important. Calling out Trump when he flouts clearly stated rules is important. Trump's attempt to wield his power for a personal grudge is important. Trump versus Twitter, which unfortunately is much easier to write about, is a sideshow.

Illustrations: Drunk parrot in a Putney garden (by Simon Bisson; used by permission).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 22, 2020

The pod exclusion

This week it became plain that another bit of the Internet is moving toward the kind of commercialization and control the Internet was supposed to make difficult in the first place: podcasts. The announcement that one of the two most popular podcasts, the Joe Rogan Experience, will move both new episodes and its 11-year back catalogue to Spotify exclusively in a $100 million multiyear deal is clearly a step change. Spotify has also been buying up podcast networks, and at the Verge, Ashley Carman suggests the podcast world will bifurcate into twin ecosystems, Spotify versus Everyone Else.

Like a few hundred million other people, I am an occasional Rogan listener, my interest piqued by a web forum mention of his interview with Jeff Novitzky, the investigator in the BALCO doping scandal. Other worth-the-time interviews from his prolific output include Lawrence Lessig, epidemiologist Michael Osterholm (particularly valuable because of its early March timing), Andrew Yang, and Bernie Sanders. Parts of Twitter despise him; Rogan certainly likes to book people (usually, but not always, men - for example Roseanne Barr) who are being pilloried in the news and jointly chew over their situation. Even his highest-profile interviewees rarely find, anywhere else, the two to three hours Rogan spends letting them talk quietly about their thinking. He draws them out by not challenging them much, and his predilection for conspiracy theories and interest in unproven ideas about nutrition make it advisable to be selective and look for countervailing critiques.

It's about 20 years since I first read about Dave Winer's early experiments in "audio blogging", renamed "podcast" after the 2001 release of the iPod eclipsed all previously existing MP3 players. The earliest podcasts tended to be the typical early-stage is-this-thing-on? that leads the unimaginative to dismiss the potential. But people with skills honed in radio were obviously going to do better, and within a few years (to take one niche example) the skeptical world was seeing weekly podcasts like Skepchick (beginning 2005) and The Pod Delusion (2009-2014). By 2014, podcast networks were forming, and an estimated 20% of Americans were listening to podcasts at least once a month.

That era's podcasts, although high-quality, were - and in some cases still are - produced by people seeking to educate or promote a cause, and were not generally money-making enterprises in their own right. The change seems to have begun around 2010, as the accelerating rise of smartphones made podcasts as accessible as radio for mobile listening. I didn't notice until late 2016, when the veteran screenwriter and former radio announcer and DJ Ken Levine announced on his daily 11-year-old blog that he was starting up Hollywood & Levine, and I discovered the ongoing influx of professional comedians, actors, and journalists into podcasting. Notably, they all carried ads for the same companies - at the minimum, SquareSpace and Blue Apron. Like old-time radio, these minimal-production ads were read by the host, sometimes making the whole affair feel uncomfortably fake. Per the Wall Street Journal, US advertising revenue from podcasting was $678.7 million last year, up 42% over 2018.

No wonder advertisers like podcasts: users can block ads on a website or read blog postings via RSS, but no matter how you listen to a podcast the ads remain in place, and if you, like most people, listen to podcasts (like radio) when your hands are occupied, you can't easily skip past them. For professional communicators, podcasts therefore provide direct access to revenues that blogging had begun to offer before it was subsumed by social media and targeted advertising.

The Rogan deal seems a watershed moment that will take all this to a new level. The key element really isn't the money, as impressive as it sounds at first glance; it's the exclusive licensing. Rogan built his massive audience by publishing his podcast in both video and audio formats widely on multiple platforms, primarily his own websites and YouTube; go to any streaming site and you're likely to find it listed. Now, his audience is big enough that Spotify apparently thinks that paying for exclusivity will net the company new subscribers. If you prefer downloads to streaming, however, you'll need a premium subscription. Rogan himself apparently thinks he will lose no control over his show; he distrusts YouTube's censorship.

At his blog on corporate competition, Matt Stoller proclaims that the Rogan deal means the death of independent podcasting. While I agree that podcasts circa 2017-2020 are in a state similar to the web in the 2000s, I don't agree this means the death of all independent podcasting - but it will be much harder for their creators to find audiences and revenues as Spotify becomes the primary gatekeeper. This is what happened with blogs between 2008 and 2015 as social media took over.

Both Carman's and Stoller's predictions are grim: that podcasts will go the way of today's web and become a vector for data collection and targeted advertising. Carman, however, imagines some survival for a privacy-protecting, open ecosystem of podcasts. I want to believe this. But, like blogging now, that ecosystem will likely have to find a new business model.

Illustrations: 1930s vacuum tube radio (via Joe Haupt).


May 15, 2020

Quincunx
In the last few weeks, unlike any other period in the 965 (!) previous weeks of net.wars columns, there were *five* pieces of (relatively) good news in the (relatively) restricted domain of computers, freedom, and privacy.

One: Google sibling Sidewalk Labs has pulled out of the development it had planned with Waterfront Toronto. This project has been contentious ever since the contract was signed in 2017 to turn a 12-acre section of Toronto's waterfront into a data-driven, sensor-laden futuristic city. In 2018, leading Canadian privacy pioneer Ann Cavoukian quit the project after Sidewalk Labs admitted that, rather than ensuring the data it collected would not be identifiable, it would grant third parties access to it. At a panel on smart city governance at Computers, Privacy, and Data Protection 2019, David Murakami Wood gave the local back story (go to 43:30) on the public consultations and the hubris on display. Now, blaming the pandemic-related economic conditions, Sidewalk Labs has abandoned the plan altogether; its public opponents believe the scheme was really never viable in the first place. This is good news, because although technology can help solve some of urban centers' many problems, it should always be in the service of the public, not an opportunity for a private company to seize control.

Two: The Internet Corporation for Assigned Names and Numbers has rejected the Internet Society's proposal to sell PIR, the owner of the .org generic top-level domain, to the newly created private equity firm Ethos Capital, Timothy B. Lee reports at Ars Technica. Among its concerns, ICANN cited the $360 million in debt that PIR would have been required to take on, Ethos' lack of qualifications to run such a large gTLD, and the lack of transparency around the whole thing. The decision follows an epistolary intervention by California's Attorney General, who warned ICANN that it thought that the deal "puts profit above the public interest" and that ICANN was "abandoning its core duty to protect the public interest". As overseer of both ICANN (a non-profit) and the sale, the AG was in a position to make its opinion hurt. At the time when the sale was announced, the Internet Society claimed there were other suitors. Perhaps now we'll find out who those were.

Three: The textbook publishers Cengage and McGraw-Hill have abandoned their plan to merge, saying that antitrust enforcers' requirements that they divest their overlapping businesses made the merger uneconomical. The plan had attracted pushback from students, consumer groups, libraries, universities, and bookstores, as well as lawmakers and antitrust authorities.

Four: Following a similar ruling from the UK Intellectual Property Office, the US Patent and Trademark Office has rejected two patents listing the Dabus AI system as the inventor. The patent offices argue that innovations must be attributed to humans in order to avoid the complications that would arise from recognizing corporations as inventors. There's been enough of a surge in such applications that the World Intellectual Property Organization held a public consultation on this issue that closed in February. Here again my inner biological supremacist asserts itself: I'd argue that the credit for anything an AI creates belongs with the people who built the AI. It's humans all the way down.

Five: The US Supreme Court has narrowly upheld the right to freely share the official legal code of the state of Georgia. Carl Malamud, who's been liberating it-ought-to-be-public data for decades - he was the one who first got Securities and Exchange Commission company reports online in the 1990s, and on and on - had published the Official Code of Georgia Annotated. The annotations in question, which include summaries of judicial opinions, citations, and other information about the law, are produced by LexisNexis under contract to the state of Georgia. No one claimed the law itself could be copyrighted, but the state argued it owned copyright in the annotations, with LexisNexis as its contracted commercial publisher. The state makes no other official version of its code available, meaning that someone consulting the non-annotated free version LexisNexis does make available would be unaware of later court decisions rejecting parts of some of the laws the legislature passed. So Malamud paid the hundreds of dollars to buy a full copy of the official annotated version, and published it in full on his website for free access. The state sued. Public.Resource lost in the lower courts but won on appeal - and, in a risky move, urged the Supreme Court to take the case and set the precedent. The vote went five to four. The impact will be substantial. Twenty-two other states publish their legal code under similar arrangements with LexisNexis. They will now have to rethink.

All these developments offer wins for the public in one way or another. None should be cause for complacence. Sidewalk Labs and other "surveillance city" purveyors will try again elsewhere with less well-developed privacy standards - and cities still have huge problems to solve. The future of .org, the online home for the world's non-profits and NGOs, is still uncertain. Textbook publishing is still disturbingly consolidated. The owners of AIs will go on seeking ways to own their output. And ensuring that copyright does not impede access to the law that governs those 23 American states does not make those laws any more just. But, for a brief moment, it's good.




May 1, 2020

A life in three lockdowns

"For most people it's their first lockdown," my friend Eva said casually a couple of weeks ago. "It's my third."

Third? Third?!

Eva is Eva Pascoe, whose colorful life story so far includes founding London's first cybercafe in 1994, setting up Cybersalon as a promulgator of ideas and provocations, and running a consultancy for retailers. She drops hints of other activities: mining cryptocurrencies in Scandinavia using renewable energy, for example. I'm fairly sure it's all true.

So: three lockdowns.

Eva's first lockdown was in 1981, when the Communist Party in her home country, Poland, decided to preempt Russian intervention against the Solidarity workers' movement and declared martial law. One night the country's single TV channel went blank; the next morning Poles woke up to sirens and General Wojciech Jaruzelski banning public gatherings and instituting a countrywide curfew under which no one could leave their house after 6pm. Those restrictions still left everyone going to work every day and, as it turned out, crucially, kept the churches open for business.

Her second was in 1987, and was unofficial. On April 26, 1986, her nuclear physics student flatmate noticed that the Geiger counter in his lab at Warsaw's Nuclear Institute was showing extreme - and consistent - levels of radiation. The Russian Communist Party was saying nothing, and the rest of Poland wouldn't find out until four days later, but Chernobyl had blown up. Physicists knew and spread the news by word of mouth. The drills they'd had in Polish schools told them what to do: shelter indoors, close all windows, admit no fresh air. Harder was getting others to trust their warnings at a time without mobile phones and digital cameras to show the Geiger counter's readings.

Those two lockdowns had similarities. First, they were abrupt, arriving overnight with no time to prepare. That posed a particular difficulty in the second lockdown, when outside food couldn't be trusted because of radioactive fallout, and it wasn't clear whether the water in the taps was safe. "As in COVID-19," she wrote in a rough account I asked her to create, "we had to protect against an invisible enemy with no clear knowledge of the surface risks already in the flat, and no ability to be sure when the danger passes." After 14 days, with no sick pay available, they had to re-emerge and go to work. With the Communist Party still suggesting the radiation was mostly harmless, "In the absence of honest government information, many myths about cures for fallout circulated, some looking more promising than others."

Their biggest asset in both lockdowns was the basement tunnels that connect Warsaw's ten-story blocks of flats, each equipped with six to ten entrances leading to separate staircases. A short run through these corridors enabled inhabitants to connect with the hundreds of other people in the same block when it was too dangerous to go outside. Even under martial law, with deaths and thousands of arrests on the streets, mostly of Solidarity activists, those basement corridors enabled parties featuring home-brewed beer and vodka, pickled cabbage, mushrooms, and herring, and "sausages smuggled in from Grandma's house in the countryside". Most important was the vodka.

The goal of martial law was to stop the spread of ideas, in this case, the Polish freedom movement. The connections made in those basement corridors - and the churches - ensured it failed. After 18 months, the lockdown ended because it was unsustainable. Communist rule ended in 1989, as in many other eastern European countries.

Chernobyl's effects were harder to shake. When the government eventually admitted the explosion had taken place, it downplayed the danger, suggesting that vegetables would be safe to eat if scrubbed with hot water, that the level of radiation was about the same as radon - at the time, thought to be safe - and insisted the population should participate in the May 1 Labor Day marches. Eventually, Polish leaders broke ranks, advised people to stay at home and stop eating food from the affected 40% of Poland, and organized supplies of Lugol's iodine for young people to try to mitigate the effects of the radioactive iodine Chernobyl had spread. Eva, a few years too old to qualify, calls her Hashimoto's thyroiditis "a lifelong reminder of why we must not blindly trust government health advice during large-scale medical emergencies".

Eva's lessons: always have a month's supply of food stocks; make friends with virologists, as this will not be our last pandemic; buy a gas mask and make sure everyone knows how to put it on. Most important, buy home-brew equipment. "It not only helps to pass time, but alcohol becomes a currency when the value of money disappears."

This lockdown gave us advance notice; if you were paying attention, you could see it forming on the horizon a month out. Anyone who was stocked for a no-deal Brexit was already prepared. But ironically, the thing that provided safety, society, and survival during Eva's first two lockdowns would be lethal if applied in this one, which finds her in a comfortable London house with a partner and two children. Basement tunnels connecting households would be spreading disease and death, not ideas and safety in which to hatch them. Our tunnels are the Internet and social media; our personal connections are strengthening, even with hugs on pause.

Illustrations: Sign posted on the front door of a local shop that had to close temporarily.


April 17, 2020

Anywhere but here

The international comparisons that feature in every chart of infection curves are creating a new habit. Expatriates are unusually prone to this sort of thing anyway, as I've written before, but right now almost everyone appears to have some form of leader envy. Eventually, history will judge, but for now the unquestioned leader on the leader leaderboard is New Zealand prime minister Jacinda Ardern, who this week followed up her decisive and undeniably effective early action by taking a 20% pay cut in solidarity with her country's workers. Also much admired this week - even subtitled! - is Germany's Angela Merkel, whose press conference explaining that small margins in infection rates make huge differences when translated into hospital beds over time, was widely circulated for its honest clarity. Late yesterday New York state governor Andrew Cuomo appeared to have copied it for his own presentation.

Cuomo's daily briefings have become must-see-TV for many of us with less forthcoming leaders; they start with facts, follow with frank interpretation, and end with rambling empathy. Cuomo's rise - which has led many to wonder why he wasn't a presidential candidate - is greeted more cautiously among New York state residents and by those who note the effectiveness of governors Jay Inslee (Washington) and Gavin Newsom (California). On Sunday's edition of Last Week Tonight, John Oliver said, "I never really liked Andrew Cuomo before this, but I will admit he's doing admirably well, and I can't wait to get to the other side of this when I can go back to being irritated by him again." He may already have his chance: yesterday evening Cuomo announced he'd signed up McKinsey to plan a strategy for ending the lockdown. Meanwhile, in a tiny unrepresentative sample of local contacts "what world leader do you wish you had in this crisis?", the only British leader mentioned was Scottish first minister Nicola Sturgeon. Only the US federal vacuum can make us feel better about our present government.


One unexpected entertainment in this unfolding disaster is the peeks inside people's homes afforded by their appearances on TV or Zoom. I am finally getting to browse at least a small portion of the bookshelves and artwork or admire the ceiling cornices belonging to people I've known for decades but have never had the chance to visit. How TV commentators set themselves up is revealing, too. Adam Schiff, unfortunately, appears to dress his broadcast corner like a stage set. And one MSNBC commentator sits in an immaculate kitchen, the expanse of whiteness broken only by a pink dishtowel whose movements are fun to chart. Presumably, right before broadcast someone goes through frantically cleaning.


This year appears to be the Year of New York. Even before the pandemic, the first Democratic presidential primaries were (however briefly) dominated by three 70-something New Yorkers: Michael Bloomberg, an aristocrat from Manhattan's Upper East Side (even if he was nominally born in Boston), whose campaign ads were expensive but entertaining; Bernie Sanders, whom no amount of Vermont-washing can change from an unmistakable Brooklyn Jew; and Donald Trump, the kid from Queens. In the Washington Post in February - so long ago! - Howard Fineman highlighted this inter-borough dispute and concluded: "The civil way to settle this is to put Trump, Sanders, and Bloomberg on a Broadway park bench and let them argue politics while they feed the pigeons." Two months on, the most visible emerging US leaders in the pandemic are Fauci, Brooklyn-born of Italian descent; Cuomo, Queens-born, also of Italian descent; and Trump.

Fauci was already a familiar name to readers of what a friend calls "plague books". He has been director of the National Institute of Allergy and Infectious Diseases since 1984, and played a crucial role in the AIDS crisis (see Randy Shilts' 1987 book, And the Band Played On) and ebola epidemic (see Laurie Garrett's 1995 title, The Coming Plague), and on and on to today. When he emerged as a member of the White House task force, the natural reaction was, "Of course" and "Thank God". And then: "How old is he, anyway?" He is 79 and looks incredibly fit. Still, one frets. Does he have to be kept standing there mute for two hours? He could be sleeping. He could be working. He could be...well, doing almost anything else, more usefully. We are all incredibly lucky to have him and he should be treated as a precious resource.


The loss of things to go to that provoke ideas for things to write about has me scrambling around the Internet looking for virtual stand-ins. For those interested in net.wars-type issues (and why else would you be here?), the Open Rights Group is hosting a weekly discussion group on Fridays at 16:30 London time (that is BST, or GMT+1), and ORG offshoots such as ORG Glasgow are also holding virtual events. I can also recommend the Meetup group London Futurists, which is hosting regular discussions that sound crazier than they actually are. Further afield, I'm sampling events in New York at Data & Society, and in California, at UC Berkeley's Center for Law & Technology. Why not? Anything with live humans trying to think about hard problems, and I'm there. Virtually.

Illustrations: New Zealand prime minister Jacinda Ardern campaigning in 2017 (Brigitte Neuschwander-Kasselordner, via Wikimedia).


March 20, 2020

The beginning of the world as we don't know it

Oddly, the most immediately frightening message of my week was the one from the World Future Society, subject line "URGENT MESSAGE - NOT A DRILL". The text began, "The World Future Society over its 60 years has been preparing for a moment of crisis like this..."

The message caused immediate flashbacks to every post-disaster TV show and movie, from The Leftovers (in which 2% of the world's population mysteriously vanishes) to The Last Man on Earth (in which everyone who isn't in the main cast has died of a virus). In my case, it also unfortunately recalls the very detailed scenarios I saw posted in the late 1990s to a Usenet newsgroup in which survivalists were certain that the Millennium Bug would cause the collapse of society. In one scenario I recall, that collapse was supposed to begin with the banks failing, pass through food riots and cities burning, and end with four-fifths of the world's population dead: the end of the world as we know it (TEOTWAWKI). So what I "heard" in the World Future Society's tone was that the "preppers", who built bunkers and stored sacks of beans, rice, dried meat, and guns, were finally right and this was their chance to prove it.

Naturally, they meant no such thing. What they *did* mean was that futurists have long thought about the impact of various types of existential risks, and that what they want is for as many people as possible to join their effort to 1) protect local government and health authorities, 2) "co-create back-up plans for advanced collaboration in case of societal collapse", and 3) collaborate on possible better futures post-pandemic. Number two still brings those flashbacks, but I like the first goal very much, and the third is on many people's minds. If you want to see more, it's here.

It was one of the notable aspects of the early Internet that everyone looked at what appeared to be a green field for development and sought to fashion it in their own desired image. Some people got what they wanted: China, for example, defying Western pundits who claimed it was impossible, successfully built a controlled national intranet. Facebook, while coming along much later, through zero rating deals with local telcos for its Free Basics, is basically all the Internet people know in countries like Ghana and the Philippines, a phenomenon Global Voices calls "digital colonialism". Something like that mine-to-shape thinking is visible here.

I don't think WFS meant to be scary; what they were saying is in fact what a lot of others are saying, which is that when we start to rebuild after the crisis we have a chance - and a need - to do things differently. At Wired, epidemiologist Larry Brilliant tells Steven Levy he hopes the crisis will "cause us to reexamine what has caused the fractional division we have in [the US]".

At Singularity University's virtual summit on COVID-19 this week, similar optimism was on display (some of it probably unrealistic, like James Ehrlich's land-intensive sustainable villages). More usefully, Jamie Metzl compared the present moment to 1941, when US president Franklin Delano Roosevelt began, in the Atlantic Charter, to imagine how the world might be reshaped after the war ended. Today, Metzl said, "We are the beneficiaries of that process." Therefore, like FDR we should start now to think about how we want to shape our upcoming different geopolitical and technological future. Like net.wars last week and John Naughton at the Guardian, Metzl is worried that the emergency powers we grant today will be hard to dislodge later. Opportunism is open to all.

I would guess that the people who think it's better to bail out businesses than support struggling people also fear permanence will become true of the emergency support measures being passed in multiple countries. One of the most surreal aspects of a surreal time is that in the space of a few weeks actions that a month ago were considered too radical to live are suddenly happening: universal basic income, grounding something like 80% of aviation, even support for *some* limited free health care and paid sick leave in the US.

The crisis is also exposing a profound shift in national capabilities. China could build hospitals in ten days; the US, which used to be able to do that sort of thing, is instead the object of charity from Chinese billionaire Alibaba founder Jack Ma, who sent over half a million test kits and 1 million face masks.

Meanwhile, all of us, with a few billionaire exceptions, are turning to the governments we held in so little regard a few months ago to lead, provide support, and solve problems. Libertarians who want to tear governments down and replace all their functions with free-market interests are exposed as a luxury none of us can afford. Not that we ever could; read Paulina Borsook's 1996 Mother Jones article Cyberselfish if you doubt this.

"It will change almost everything going forward," New York State governor Andrew Cuomo said of the current crisis yesterday. Cuomo, who is emerging as one of the best leaders the US has in an emergency, and his counterparts are undoubtedly too busy trying to manage the present to plan what that future might be like. That is up to us to think about while we're sequestered in our homes.

Illustrations: A local magnolia tree, because it *is* spring.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 10, 2020

The forever bug

Y2K is back, and this time it's giggling at us.

For the past few years, there's been a growing drumbeat on social media and elsewhere to the effect that Y2K - "the year 2000 bug" - never happened. It was a nothingburger. It was hyped then, and anyone saying now it was a real thing is like, ok boomer.

Be careful what old averted messes you dismiss; they may come back to fuck with you.

Having lived through it, we can tell you the truth: Y2K *was* hyped. It was also a real thing that was wildly underestimated for years before it was taken as seriously as it needed to be. When it finally registered as a genuine and massive problem, millions of person-hours were spent remediating software, replacing or isolating systems that couldn't be fixed, and making contingency and management plans. Lots of things broke, but, because of all that work, nothing significant on a societal scale. Locally, though, anyone using a computer at the time likely has a personal Y2K example. In my own case, an instance of Quicken continued to function but stopped autofilling dates correctly. For years I entered dates manually before finally switching to GnuCash.

The story, parts of which Chris Stokel-Walker recounts at New Scientist, began in 1971, when Bob Bemer published a warning about the "Millennium Bug", having realized years earlier that the common practice of saving memory space by using two digits instead of four to indicate the year was storing up trouble. He was largely ignored, in part, it appeared, because no one really believed the software they were writing would still be in use decades later.

It was the mid-1990s before the industry began to take the problem seriously, and when they did the mainstream coverage broke open. In writing a 1997 Daily Telegraph article, I discovered that mechanical devices had problems, too.

We had both nay-sayers, who called Y2K a boondoggle whose sole purpose was to boost the computer industry's bottom line, and doommongers, who predicted everything from planes falling out of the sky to total societal collapse. As Damian Thompson told me for a 1998 Scientific American piece (paywalled), the Millennium Bug gave apocalyptic types a *mechanism* by which the crash would happen. In the Usenet newsgroup, I found a projected timetable: bank systems would fail early, and by April 1999 the cities would start to burn... When I wrote that society would likely survive because most people wanted it to, some newsgroup members called me irresponsible, and emailed the editor demanding he "fire this dizzy broad". Reconvening ten years later, they apologized.

Also at the extreme end of the panic spectrum was Ed Yardeni, then chief economist at Deutsche Bank, who repeatedly predicted that Y2K would cause a worldwide recession; it took him until 2002 to admit his mistake, crediting the industry's hard work.

It was still a real problem, and with some workarounds and a lot of work most of the effects were contained, if not eliminated. Reporters spent New Year's Eve at empty airports, in case there was a crash. Air travel that night, for sure, *was* a nothingburger. In that limited sense, nothing happened.

Some of those fixes, however, were not so much fixes as workarounds. One of these finessed the rollover problem by creating a "window" and telling systems that two-digit years fell between 1920 and 2020, rather than 1900 and 2000. As the characters on How I Met Your Mother might say: "It's a problem for Future Ted and Future Marshall. Let's let those guys handle it."
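The windowing trick is simple enough to sketch in a few lines of Python (illustrative only - the real fixes lived in decades-old COBOL and firmware, and pivot values varied by system):

```python
# Sketch of the Y2K "windowing" workaround: rather than widening two-digit
# year fields to four digits, systems interpreted them against a pivot.
# With a pivot of 20, the window runs 1920-2019 -- so in 2020 a year
# written "20" rolls back around to 1920, which is the bug resurfacing.

def window_year(two_digit_year: int, pivot: int = 20) -> int:
    """Map a two-digit year into the window [1900 + pivot, 2000 + pivot)."""
    if two_digit_year >= pivot:
        return 1900 + two_digit_year   # 20..99 -> 1920..1999
    return 2000 + two_digit_year       # 00..19 -> 2000..2019

print(window_year(99))  # 1999
print(window_year(5))   # 2005
print(window_year(20))  # 1920, not 2020 -- Future Ted's problem
```

Moving the pivot forward just moves the cliff; only widening the field actually fixes it.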

So, it's 2020, we've hit the upper end of the window, the bug is back, and Future Ted and Future Marshall are complaining about Past Ted and Past Marshall, who should have planned better. But even if they had...the underlying issue is temporary thinking that leads people to still - still, after all these decades - believe that today's software will be long gone 20 years from now and therefore they need only worry about the short term of making it work today.

Instead, the reality is, as we wrote in 2014, that software is forever.

That said, the reality is also that Y2K is forever, because if the software couldn't be rewritten to take a four-digit year field in 1999 it probably can't be today, either. Everyone stresses the need to patch and update software, but a lot - for an increasing value of "a lot" as Internet of Things devices come on the market with no real idea of how long they will be in service - of things can't be updated for one reason or another. Maybe the system can't be allowed to go down; maybe it's a bespoke but crucial system whose maintainers are long gone; maybe the software is just too fragile and poorly documented to change; maybe old versions propagated all over the place and are laboring on in places where they've simply been forgotten. All of that is also a reason why it's not entirely fair for Stokel-Walker to call the old work "a lazy fix". In a fair percentage of cases, creating and moving the window may have been the only option.

But fret ye not. We will get through this. And then we can look forward to 2038, when 32-bit Unix clocks run out. Future Ted and Future Marshall will handle it.
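The 2038 cliff is easy to demonstrate: Unix systems count seconds from January 1, 1970, and a signed 32-bit counter maxes out early on January 19, 2038. A sketch in Python, whose arbitrary-precision integers let us simulate the wraparound:

```python
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)
INT32_MAX = 2**31 - 1  # largest value a signed 32-bit time_t can hold

rollover = EPOCH + timedelta(seconds=INT32_MAX)
print(rollover)  # 2038-01-19 03:14:07+00:00

# One second later, a signed 32-bit counter wraps to its most negative
# value, landing back in 1901.
wrapped = EPOCH + timedelta(seconds=-2**31)
print(wrapped)  # 1901-12-13 20:45:52+00:00
```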

Illustrations: Millennium Bug manifested at a French school (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 3, 2020

Chronocentric circles

We wrapped up 2018 with a friend's observation that there was no excitement around technology any more; we conclude the Year of the Bedbug with the regularly heard complaint that the Internet isn't *fun* any more. The writer of this last piece, Brian Koerber, is at least a generation later in arriving online than I was, and he's not alone: where once the Internet was a venue for exploring the weird and unexpected and imagining a hopeful future, increasingly it's a hamster wheel of the same few, mostly commercial, sites and services, which may be entertaining but do not produce any sense of wonder in their quest to exploit us all. Phillip Maciak expands on the trend by mourning the death of innovative web publishing, while Abid Omar calls today's web an unusable, user-hostile wasteland. In September, Andres Guadamuz wondered if boredom would kill the Internet; we figure it's a tossup between that and the outrageous energy consumption.

The feeling of sameness is exacerbated by the fact that so many of this year's stories have been mutatis mutandis variations on those of previous years. Smut-detecting automated bureaucrats continue to blame perfectly good names for their own deficiencies, 25 years after AOL barred users from living in Scunthorpe; the latest is Lyft. Less amusingly, for the ninth year in a row, Freedom House finds that global Internet freedom has declined; of the 65 countries it surveys, only 16 have seen improvement, and that only marginal.
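For anyone who missed the original incident: the "Scunthorpe problem" is what happens when a filter scans for a banned substring anywhere in a name. A minimal sketch, with a hypothetical one-entry blocklist:

```python
# Naive substring filtering: any name that merely *contains* a banned
# string is rejected, no matter how innocent the whole word is.
BANNED = {"cunt"}  # hypothetical blocklist entry

def naive_filter(name: str) -> bool:
    """Return True if the name trips the filter."""
    lowered = name.lower()
    return any(bad in lowered for bad in BANNED)

print(naive_filter("Scunthorpe"))  # True -- still rejected, 25 years on
print(naive_filter("Smith"))       # False
```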

Worse, the year closed with the announcement of perhaps the most evil invention of recent years, the toilet designed to deter lingering. "Most evil", because the meanness is intentional, rather than the result of a gradual drift away from founding values.

Meanwhile, the EU passed a widely disliked copyright-tightening bill. The struggle to change it from threat to opportunity burned out yet another copyright warrior: now-former MEP Julia Reda. It appears increasingly impossible to convince national governments that there is no such thing as a hole - in a wall or in encryption software - that only "good guys" can use (and still less that "good guys" is entirely in the eyes of the beholder). After four years of effort to invent mechanisms for it, age verification may have died...or it may come back as a "duty of care" in whatever legislation builds upon the Online Harms white paper - or in the EU's Audiovisual Media Services Directive. And, nearly three years on, US sites are still ghosting EU residents for fear of GDPR and its potentially massive fines. With the January 1 entry into force of the California Consumer Privacy Act, the US west coast seems set to join us. Hot times for corporate lawyers!

The most noticeable end-of-year trend, however, has been the return of the decade as a significant timeframe and the future as ahead of us. In 2010, the beginning of a decade in which people went from boasting about their smartphones to boasting about how little they used them, no one mentioned the end-of-decade, perhaps because we were all still too startled to be living in the third millennium and the 21st century, known as "the future" for the first decades of my life. Alternatively, perhaps, as a friend suggests, it's because the last couple of years have been so exhausting and depressing that people are clinging to anything that suggests we might now be in for something new.

At Vanity Fair, Nick Bilton has a particularly disturbing view of 2030, and he doesn't even consider climate change, water supplies, the rise of commercial podcasts or cybersecurity.

I would highlight instead a couple of small green shoots of optimism. The profligate wastage exposed by the WeWork IPO appears to be sparking a very real change in both the Silicon Valley venture capital funding ethos (good) and the cost basis of millennial lifestyles (more difficult), or "counterfeit capitalism", as Matt Stoller calls it. Even Wired is suggesting that the formerly godlike technology company founder is endangered. Couple that with 2019's dramatic and continuing rise in employee activism within technology companies and increasing regulatory pressure, particularly on Uber and Airbnb, and there might be some cause to hope for change. Even though company founders like Mark Zuckerberg and Sergey Brin and Larry Page have made themselves untouchable by controlling the majority of voting shares in their companies, they won't *have* companies if they can't retain the talent. The death of the droit de genius ethos that the Jeffrey Epstein case exposed can't come soon enough.

I also note the sudden rebirth of personal and organizational online forums, based on technology such as Mastodon and Soapbox. Some want to focus on specific topics and restrict members to trusted colleagues; some want a lifeboat (paywall) in case of a Twitter ban; WT Social wants to change the game away from data exploitation. Whether any of these will have staying power is an open question; a decade ago, when Diaspora tried to decentralize social media, it failed to gain traction. This time round, with greater consciousness of the true price of pay-with-data "free" services, these return-to-local efforts may have better luck.

Happy new year.

Illustrations: Roborovski hamster (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 21, 2019

The choices of others

For the last 30 years, I've lived in the same apartment on a small London street. So small, in fact, that even though London now has so many CCTV cameras - an estimated 627,707 - that the average citizen is captured on camera 300 times a day, it remains free of these devices. Camera surveillance and automated facial recognition are things that happen when I go out to other places.

Until now.

It no longer requires state-level resources to put a camera in place to watch your front door. This is a function that has been wholly democratized. And so it is that my downstairs neighbors, whose front door is side by side with mine, have inserted surveillance into the alleyway we share via an Amazon Ring doorbell.

Now, I understand there are far worse things, both as neighbors go and as intrusions go. My neighbors are mostly quiet. We take in each other's packages. They would never dream of blocking up the alleyway with stray furniture. And yet it never occurred to them that a 180-degree camera watching their door is, given the workings of physics and geography, also inevitably watching mine. And it never occurred to them to ask me whether I minded.

I do mind.

I have nothing to hide, and I mind.

Privacy advocates have talked and written for years about the many ways that our own privacy is limited by the choices of others. I use Facebook very little - but less-restrained friends nonetheless tag me in photographs, and in posts about shared activities. My sister's decision to submit a DNA sample to a consumer DNA testing service in order to get one of those unreliable analyses of our ancestry inevitably means that if I ever want to do the same thing the system will find the similarity and identify us as relatives, even though it may think she's my aunt.

We have yet to develop social norms around these choices. Worse, most people don't even see there's a problem. My neighbor is happy and enthusiastic about the convenience of being able to remotely negotiate with package-bearing couriers and be alerted to possible thieves. "My office has one," he said, explaining that they got it to help monitor the premises after being burgled several times.

We live down an alleyway so out of the way that both we and couriers routinely leave packages on our doorsteps all day.

I do not want to fight with my neighbor. We live in a house with just two flats, one up, one down, on a street with just 20 households. There is no possible benefit to be had from being on bad terms. And yet.

I sent him an email: would he mind walking me through the camera's app so I can see what it sees? In response, he sent a short video; the image above, taken from it, shows clearly that the camera sees all the way down the alleyway in both directions.

So I have questions: what does Amazon say about what data it keeps and for how long? If the camera and microphone are triggered by random noises and movements, how can I tell whether they're on and if they're recording?

Obviously, I can read the terms and conditions for myself, but I find them spectacularly unclear. Plus, I didn't buy this device or agree to any of this. The document does make mention of being intended for monitoring a single-family residence, but I don't think this means Amazon is concerned that people will surveil their neighbors; I think it means they want to make sure they sell a separate doorbell to every home.

Examination of the video and the product description reveals that camera, microphone, and recording are triggered by movement next to his - and therefore also next to my - door. So it seems likely that anyone with access to his account can monitor every time I come or go, and all my visitors. Will my privacy advocate friends ever visit me again? How do my neighbors not see why I think this is creepy?

Even more disturbing is the cozy relationship Amazon has been developing with police, especially in the US, where the company has promoted the doorbells by donating units for neighborhood watch purposes, effectively allowing police to build private surveillance networks with no public oversight. The Sun reports similar moves by UK police forces.

I don't like the idea of the police being able to demand copies of recordings of innocent people - couriers, friends, repairfolk - walking down our alleyway. I don't want surveillance-by-default. But as far as I can tell, this is precisely what this doorbell is delivering.

A lawyer friend corrects my impression that GDPR does not apply. The Information Commissioner's Office is clear that cameras should not be pointed at other people's property or shared spaces, and under GDPR my neighbor is now a data controller. My friends can make subject access requests. Even so: do I want to pick a fight with people who can make my life unpleasant? All over the country, millions of people are up against the reality that no matter how carefully they think through their privacy choices, they are exposed by the insouciance of other people and robbed of agency not by police or government action but by their intimate connections - their neighbors, friends, and family.

Yes, I mind. And unless my neighbor chooses to care, there's nothing I can practically do about it.

Illustrations: Ring camera shot of alleyway.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 18, 2019

I never paid for it in my life

So Jaron Lanier is back, arguing that we should be paid for our data. He was last seen in net.wars two years back, arguing that if people had started by charging for email we would not now be the battery fuel for "behavior modification empires". In a 2018 TED talk, he continued that we should pay for Facebook and Google in order to "fix the Internet".

Lanier's latest disquisition goes like this: the big companies are making billions from our data. We should have some of it. That way lies human dignity and the feeling that our lives are meaningful. And fixing Facebook!

The first problem is that fixing Facebook is not the same as fixing the Internet, a distinction Lanier surely understands. The Internet is a telecommunications network; Facebook is a business. You can profoundly change a business by changing who pays for its services and how, but changing a telecommunications network that underpins millions of organizations and billions of people in hundreds of countries is a wholly different proposition. If you mean, as Lanier seems to, that what you want to change is people's belief that content on the Internet should be free, then what you want to "fix" is the people, not the network. And "fixing" people at scale is insanely hard. Just ask health professionals or teachers. We'd need new incentives.

Paying for our data is not one of those incentives. Instead of encouraging people to think more carefully about privacy, being paid to post to Facebook would encourage people to indiscriminately upload more data. It would add payment intermediaries to today's merry band of people profiting from our online activities, thereby creating a whole new class of metadata for law enforcement to claim it must be able to access.

A bigger issue is that even economists struggle to understand how to price data; as Diane Coyle asked last year, "Does data age like fish or like wine?" Google's recent announcement that it would allow users to set their browser histories to auto-delete after three or 12 months has been met by the response that such data isn't worth much three months on, though the privacy damage may still be incalculable. We already do have a class of people - "influencers" - who get paid for their social media postings, and as Chris Stokel-Walker portrays some of their lives, it ain't fun. Basically, while paying us all for our postings would put a serious dent into the revenues of companies like Google and Facebook, it would also turn our hobbies into jobs.

So a significant issue is that we would be selling our data with no concept of its true value or what we were actually selling to companies that at least know how much they can make from it. Financial experts call this "information asymmetry". Even if you assume that Lanier's proposed "MID" intermediaries that would broker such sales will rapidly amass sufficient understanding to reverse that, the reality remains that we can't know what we're selling. No one happily posting their kids' photos to Flickr 14 years ago thought that in 2014 Yahoo, which owned the site from 2005 to 2015, was going to scrape the photos into a database and offer it to researchers to train their AI systems that would then be used to track protesters, spy on the public, and help China surveil its Uighur population.

Which leads to this question: what fire sales might a struggling company with significant "data assets" consider? Lanier's argument is entirely US-centric: data as commodity. This kind of thinking has already led Google to pay homeless people in Atlanta to scan their faces in order to create a more diverse training dataset (a valid goal, but oh, the execution).

In a paywalled paper for Harvard Business Review, Lanier apparently argues for viewing data not as a commodity but as labor. That view, he claims, opens the way to collective bargaining via "data labor unions" and mass strikes.

Lanier's examples, however, are all drawn from active data creation: uploading and tagging photos, writing postings. Yet much of the data the technology companies trade in is stuff we unconsciously create - "data exhaust" - as we go through our online lives: trails of web browsing histories, payment records, mouse movements. At Tech Liberation, Will Rinehart critiques Lanier's estimates, both the amount (Lanier suggests a four-person household could gain $20,000 a year) and the failure to consider the differences between and interactions among the three classes of volunteered, observed, and inferred data. It's the inferences that Facebook and Google really get paid for. I'd also add the difference between data we can opt to emit (I don't *have* to type postings directly into Facebook knowing the company is saving every character) and data we have no choice about (passport information to airlines, tax data to governments). The difference matters: you can revise, rethink, or take back a posting; you have no idea what your unconscious mouse movements reveal and no ability to edit them. You cannot know what you have sold.

Outside the US, the growing consensus is that data protection is a fundamental human right. There's an analogy to be made here between bodily integrity and personal integrity more broadly. Even in the US, you can't sell your kidney. Isn't your data just as intimate a part of you?

Illustrations: Jaron Lanier in 2017 with Luke Robert Mason (photo by Eva Pascoe).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 11, 2019

The China syndrome

About five years ago, a friend commented that despite the early belief - promulgated by, among others, then-US president Bill Clinton and vice-president Al Gore - that the Internet would spread democracy around the world, so far the opposite seemed to be the case. I suggested perhaps it's like the rising sea level, where local results don't give the full picture.

Much longer ago, I remember wondering how Americans would react when large parts of the Internet were in Chinese. My friend shrugged. Why should they care? They don't have to read them.

This week's news shows that we may both have been wrong. The reality, as the veteran technology journalist Charles Arthur suggested in the Wednesday and Thursday editions of his weekday news digest, The Overspill, is that the Hong Kong protests are exposing and enabling the collision between China's censorship controls and Western standards for free speech, aided by companies anxious to access the Chinese market. We may have thought we were exporting the First Amendment, but it doesn't apply to non-government entities.

It's only relatively recently that it's become generally acknowledged that governments can harness the Internet themselves. In 2008, the New York Times thought there was a significant domestic backlash against China's censors; by 2018, the Times was admitting China's success, first in walling off its own edited version of the Internet, and second in building rival giant technology companies and speeding past the US in areas such as AI, smartphone payments, and media creation.

So, this week. On Saturday, Demos researcher Carl Miller documented an ongoing edit war at Wikipedia: 1,600 "tendentious" edits across 22 articles on topics such as Taiwan, Tiananmen Square, and the Dalai Lama to "systematically correct what [officials and academics from within China] argue are serious anti-Chinese biases endemic across Wikipedia".

On Sunday, the general manager of the Houston Rockets, an American professional basketball team, withdrew a tweet supporting the Hong Kong protesters after it caused an outcry in China. Who knew China was the largest international market for the National Basketball Association? On Tuesday, China responded that it wouldn't show NBA pre-season games, and Chinese fans may boycott the games scheduled for Shanghai. The NBA commissioner eventually released a statement saying the organization would not regulate what players or managers say. The Americanness of basketball: restored.

Also on Tuesday, Activision Blizzard suspended Chung Ng Wai, a professional player of the company's digital card game, Hearthstone, after he expressed support for the Hong Kong protesters in an official post-win interview; the company also fired the interviewers. Chung's suspension is set to last for a year, and includes forfeiting his thousands of dollars of 2019 prize money. A group of the company's employees walked out in protest, and the gamer backlash against the company was such that the moderators briefly took the Blizzard subreddit private in order to control the flood of angry posts (it was reopened within a day). By Wednesday, EU-based Hearthstone gamers were beginning to consider mounting a denial-of-service attack against Blizzard by sending so many subject access requests under the General Data Protection Regulation that complying with the legal requirement to fulfill them would swamp the company's resources.

On Wednesday, numerous media outlets reported that in its latest iOS update Apple has removed the Taiwan flag emoji from the keyboard for users who have set their location to Hong Kong or Macau - you can still use the emoji, but the procedure for doing so is more elaborate. (We will save the rant about the uselessness of these unreadable blobs for another time.)

More seriously, also on Wednesday, the New York Times reported that Apple has withdrawn the app that Hong Kong protesters were using to track police, after China's state media accused the company of abetting and protecting the protesters.

Local versus global is a long-standing variety of net.war, dating back to the 1991 Amateur Action bulletin board case. At Stratechery, Ben Thompson discusses the China-US cultural clash, with particular reference to TikTok, the first Chinese company to reach a global market; a couple of weeks ago, the Guardian revealed the site's censorship policies.

Thompson argues that, "Attempts by China to leverage market access into self-censorship by U.S. companies should also be treated as trade violations that are subject to retaliation." Maybe. But American companies can't win at this game.

In her recent book, The Big Nine, Amy Webb discusses China's AI advantage as it pours resources and, above all, data into becoming the world leader via Baidu, Alibaba, and Tencent, which have grown to rival Google, Amazon, and Facebook without ever needing to leave home. Beyond that, China has been spreading its influence by funding telecommunications infrastructure. The Belt and Road initiative has projects in 152 countries. In this, China is taking advantage of the present US administration's inward turn and worldwide loss of trust.

After reviewing the NBA's ultimate decision, Thompson writes, "I am increasingly convinced this is the point every company dealing with China will reach: what matters more, money or values?" The answer will always be money; whose values count will depend on which market they can least afford to alienate. This week is just a coincidental concatenation of early skirmishes; just wait for the Internet of Things.

Illustrations: The Great Wall of China (by Hao Wei, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 23, 2019


For many reasons, I've never wanted to use my mobile phone for banking. For one thing, I have a desktop machine with three 24-inch monitors and a full-size functioning keyboard; why do I want to poke at a small screen with one finger?

Even if I did, the corollary is that mobile phones suck for typing passwords. For banking, you typically want the longest and most random password you can generate. For mobile phone use, you want something short, easy to remember and type. There is no obvious way to resolve this conflict, particularly in UK banking, where you're typically asked to type in three characters chosen from your password. It is amazingly easy to make mistakes counting when you're asked to type in letter 18 of a 25-character random string. (Although: I do admire the literacy optimism one UK bank displays when it asks for the "antepenultimate" character in your password. It's hard to imagine an American bank using this term.)
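The partial-password challenge can be sketched in a few lines (positions and password length here are hypothetical; banks vary). Note that the code finds character 18 instantly - it's the human at the keyboard who has to count:

```python
import secrets
import string

# A 25-character random password of the kind security advice recommends.
password = "".join(
    secrets.choice(string.ascii_letters + string.digits) for _ in range(25)
)

def challenge(pw: str, positions: list[int]) -> list[str]:
    """Return the characters at the 1-indexed positions the bank asks for."""
    return [pw[p - 1] for p in positions]

# "Antepenultimate" = third from the end, i.e. position len(pw) - 2.
asked = [3, 18, len(password) - 2]
print(challenge(password, asked))
```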

Beyond that, mobile phones scare me for sensitive applications in general; they seem far too vulnerable to hacking, built-in vulnerabilities, SIM swapping, and, in the course of wandering the streets of London, loss, breakage, or theft. So mine is not apped up for social media, ecommerce, or anything financial. I accept that two-factor authentication is a huge step forward in terms of security, but does it have to be on my phone? In this, I am, of course, vastly out of step with the bulk of the population, who are saying instead: "Can't it be on my phone?" What I want, however, is a 2FA device I can turn off and stash out of harm's way in a drawer at home. That approach would also mean not having to give my phone number to an entity that might, like Facebook has in the past, coopt it into their marketing plans.

So, it is with great unhappiness that I discover that the combination of the incoming Payment Services Directive 2 and the long-standing effort to get rid of cheques are combining to force me to install a mobile banking app.

PSD2 may perhaps prove to be the antepenultimate gift from the EU28. At Wired, Laurie Clark explains the result of the directive's implementation, which is that ecommerce sites, as well as banks, must implement two-factor authentication (2FA) by September 14. Under this new regime, transactions above £30 (about $36.50, but shrinking by the Brexit-approaching day) will require customers to prove at least two of the traditional three security factors: something they have (a gadget such as a smart phone, a specific browser on a specific machine, or a secure key), something they know (passwords and the answers to secondary questions), and something they are (biometrics, facial recognition). As Clark says, retailers are not going to love this, because anything that adds friction costs them sales and customers.

My guess is that these new requirements will benefit larger retailers and centralized services at the expense of smaller ones. Paypal, Amazon, and eBay already hold plenty of knowledge about their customers that they can exploit to be confident of a customer's identity. Requiring 2FA will similarly privilege existing relationships over new ones.

So far, retail sites don't seem to be discussing their plans. UK banking sites, however, began adopting 2FA some years ago, mostly in the form of secure keys that they issued and replaced as needed - credit card-sized electronic one-time pads. Those sites are now simply dropping the option of logging on with limited functionality without the key. These keys have their problems - especially non-inclusive design with small, fiddly buttons and hard-to-read LCD screens - but I liked the option.
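Devices like these, and most other "something you have" factors, generate short-lived one-time codes. As a purely illustrative sketch - this is the widely documented HOTP/TOTP scheme of RFCs 4226 and 6238, not necessarily what any particular bank's key implements - the whole algorithm fits in a few lines of Python:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the big-endian 8-byte counter (RFC 4226)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: the low 4 bits of the last byte pick an offset
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
    # Time-based variant: the counter is the number of elapsed periods (RFC 6238)
    return hotp(secret, int(time.time()) // period, digits)
```

The bank's server holds the same shared secret and accepts any code within a small window of time steps, which is why the codes tolerate a little clock drift - and why the device can work offline, stashed in a drawer at home.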

Ideally, this would be a market defined by standards, so people could choose among different options - such as the Yubikey. Where the banks all want to go, though, is to individual mobile phone apps that they can also use for marketing and upselling. Because of the broader context outlined above, I do not want this.

One bank I use is not interested in my broader context, only its own. It has ruled: must download app. My first thought was to load the app onto my backup, second-to-last phone, figuring that its unpatched vulnerabilities would be mitigated by its being turned off, stuck in a drawer, and used for nothing else. Not an option: its version of Android is two decimal places too old. No app for *you*!

At Bentham's Gaze, Steven Murdoch highlights a recent Which? study that found that those who can't afford, can't use, or don't want smartphones or who live with patchy network coverage will be shut out of financial services.

Murdoch, an expert on cryptography and banking security, argues that by relying on mobile apps banks are outsourcing their security to customers and telephone networks, which he predicts will fail to protect against criminals who infiltrate the phone companies and other threats. An additional crucial anti-consumer aspect is the refusal of phone manufacturers to support ongoing upgrades, forcing obsolescence on a captive audience, as we've complained before. This can only get worse as smartphones are less frequently replaced while being pressed into use for increasingly sensitive functions.

In the meantime, this move has had so little press that many people are being caught by surprise. There may be trouble ahead...


Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 2, 2019

Unfortunately recurring phenomena

JI-sunrise--2-20190107_071706.jpgIt's summer, and the current comprehensively bad news is all stuff we can do nothing about. So we're sweating the smaller stuff.

It's hard to know how seriously to take it, but US Senator Josh Hawley (R-MO) has introduced the Social Media Addiction Reduction Technology (SMART) Act, intended as a disruptor to the addictive aspects of social media design. *Deceptive* design - which figured in last week's widely criticized $5 billion FTC settlement with Facebook - is definitely wrong, and the dark patterns site has long provided a helpful guide to those practices. But the bill is too feature-specific (ban infinite scroll and autoplay) and fails to recognize that one size of addiction disruption cannot possibly fit all. Spending more than 30 minutes at a stretch reading Twitter may be a dangerous pastime for some but a business necessity for journalists, PR people - and Congressional aides.

A better approach might be to require sites to replay the first video someone chooses at regular intervals until they get sick of it and turn off the feed. That is about how I feel about the latest regular reiteration of the demand for back doors in encrypted messaging. The fact that every new home secretary - in this case, Priti Patel - calls for this suggests there's an ancient infestation in their office walls that needs to be found and doused with mathematics. Don't Patel and the rest of the Five Eyes realize the security services already have bulk device hacking?

Ever since Microsoft announced it was acquiring the software repository Github, it should have been obvious the community would soon be forced to change. And here it is: Microsoft is blocking developers in countries subject to US trade sanctions. The formerly seamless site supporting global collaboration and open source software is being fractured at the expense of individual PhD students, open source developers, and others who trusted it, and everyone who relies on the software they produce.

It's probably wrong to blame Microsoft alone; save some for the present US administration. Still, throughout Internet history, the communities bought by corporate owners have wound up destroyed: CompuServe, Geocities, Television Without Pity, and endless others. More recently, Verizon, which bought Yahoo and AOL for its Oath subsidiary (now Verizon Media), de-porned Tumblr. People! Whenever the online community you call home gets sold to a large company, it is time *right then* to begin building your own replacement. Large companies do not care about the community you built, and this is never gonna change.

Also never gonna change: software is forever, as I wrote in 2014, when Microsoft turned off life support for Windows XP. The future is living with old software installations that can't, or won't, be replaced. The truth of this resurfaced recently, when a survey by Spiceworks (PDF) found that a third of all businesses' networks include at least one computer running XP and 79% of all businesses are still running Windows 7, which dies in January. In the 1990s the installed base updated regularly because hardware was upgraded so rapidly. Now, a computer's lifespan exceeds the length of a software generation, and the accretion of applications and customization makes updating hazardous. If Microsoft refuses to support its old software, it should at least open it to third parties who will. Now, *that* would be a law we could use.

The last few years have seen repeated news about the many ways that machine learning and AI discriminate against those with non-white skin, typically because of the biased datasets they rely on. The latest such story is startling: Wearables are less reliable in detecting the heart rate of people with darker skin. This is a "huh?" until you read that the devices use colored light and optical sensors to measure the volume of your blood in the vessels at your wrist. Hospital-grade monitors use infrared. Cheaper devices use green light, which melanin tends to absorb. I know it's not easy for people to keep up with everything, but the research on this dates to 1985. Can we stop doing the default white thing now?

Meanwhile, at the Barbican exhibit AI: More than Human...In a video, a small, medium-brown poodle turns his head toward the camera with a - you should excuse the anthropomorphism - distinct expression of "What the hell is this?" Then he turns back to the immediate provocation and tries again. This time, the Sony Aibo he's trying to interact with wags its tail, and the dog jumps back. The dog clearly knows the Aibo is not a real dog: it has no dog smell, and although it attempts a play bow and moves its head in vaguely canine fashion, it makes no attempt to smell his butt. The researcher begins gently stroking the Aibo's back. The dog jumps in the way. Even without a thought bubble you can see the injustice forming, "Hey! Real dog here! Pet *me*!"

In these two short minutes the dog perfectly models the human reaction to AI development: 1) what is that?; 2) will it play with me?; 3) this thing doesn't behave right; 4) it's taking my job!

Later, I see the Aibo slumped, apparently catatonic. Soon, a staffer strides through the crowd clutching a woke replacement.

If the dog could talk, it would be saying "#Fail".

Illustrations: Sunrise from the 30th floor.


July 19, 2019

The Internet that wasn't

Bambi-forest.jpgThis week on Twitter, writer and Georgia Tech professor Ian Bogost asked this: "There's a belief that the internet was once great but then we ruined it, but I'm struggling to remember the era of incontrovertible greatness. Lots of arguing from the start. Software piracy. Barnfuls of pornography. Why is the fall from grace story so persistent and credible?"

My reply: "Mostly because most of the people who are all nostalgic either weren't there, have bad memories, or were comfortable with it. Flaming has existed in every online medium that's ever been invented. The big difference: GAFA weren't profiting from it."

Let's expand on that here. Not only was there never a period of peace and tranquility on the Internet, there was never a period of peace and tranquility on the older, smaller, more contained systems that proliferated in the period when you had to dial up and wait through the modems' mating calls. I only got online in 1991, but those 1980s systems - primarily CIX (still going), the WELL (still going), and CompuServe (bought by AOL) - hosted myriad "flame wars". The small CompuServe UK journalism forum I co-managed had to repeatedly eject a highly abusive real-life Fleet Street photographer who obsessively returned with new name, same behavior. CompuServe finally blocked his credit card, an option unavailable to pay-with-data TWIFYS (Twitter-WhatsApp-Instagram-Facebook-YouTube-Snapchat). The only real answer to containing abuse and abusers was and is human moderators.

The quick-trigger abuse endemic on Twitter has persisted since the beginning, as Sara Kiesler and Lee Sproull documented in their 1992 book, Connections, based on years of studies of mailing lists within large organizations. Even people using their real names and job descriptions within a professional context displayed online behavior they would never display offline. The distancing effect appears inherent to the medium and the privacy in which we experience it. Meanwhile, urgency of response rises with each generation. The etiquette books of my childhood recommended rereading angry letters after a day or two before sending; who has the attention span for that now?

Three documented examples of early cyberbullying provide perspective. In Josh Quittner's 1994 Wired story about Usenet, the rec.pets.cats newsgroup successfully repelled invaders from alt.tasteless when a long-time poster and software engineer taught the others her tools; when she began getting death threats, a phone call to the ringleader's ISP made him back down for fear of losing his Internet access. In Julian Dibbell's A Rape in Cyberspace, "Mr. Bungle" took over another user's avatar in the virtual game space LambdaMOO and forced it into virtual sex. After inconclusive community consideration, a single administrator quietly expelled Bungle. Finally, in my own piece about Scientology's early approach to the Internet, disputes over disclosing secret scriptures in the newsgroup alt.religion.scientology led to police raids, court cases, and attempts to smother the newsgroup with floods of pro-Scientology postings, also countered by a mix of community practices and purpose-built tools. Nonetheless, even in 1997 people complained that tolerating abuse shouldn't be the price of participation.

Software "piracy" was born right alongside the commercial software business. In 1976, a year after Bill Gates and Paul Allen launched Microsoft's first product, a BASIC language interpreter for the early Altair computer, Gates published an open letter to hobbyists begging them to make the new industry viable by buying the software rather than circulating copies. The tug of war over copyrighted material, unauthorized copies, and business models has continued ever since, in a straight line from Gates's open letter through Napster to today's battles over the right to repair. The shift that moved once freely modifiable software under copyright control was the spark that set Richard Stallman to building GNU, the bulk of "Linux".

"Barnfuls of pornography" is slightly exaggerated, especially before search engines simplified finding it. Still, pornography producers are adept at colonizing new technology, from cave paintings to videocassettes, and the Internet was no exception. It was certainly popular: the University of Delft took down its pornography archive because the traffic swamped its bandwidth. In 1994, students protested when Carnegie-Mellon removed sexually explicit newsgroups, and conflicting US states' standards landed Robert and Carleen Thomas in jail.

Some of the Internet's steamy reputation was undeserved. Time magazine's shock-horror 1995 Cyberporn cover story was based on a fraudulent study. That sloppy reporting's fallout included the 1996 passage of the Communications Decency Act, antecedent of today's online harms proposals and age verification.

So why does the myth persist? First, anyone under 35 probably wasn't there. Second, the early Internet was more homogeneous and more open, and you lost less by abandoning a community to create a new one when you mostly interacted with strangers. As previously noted, 1980s online forums did not profit from abuse; today, ramping up "engagement" to fuel ad-bearing traffic is TWIFYS' business model. More important, these scaled-up, closed systems do not offer us the ability to create and deploy tools or enforce our own fine-grained rules.

Crucially, the early Internet seemed *ours* - no expanding privacy policies or data collection. The first spammers, hackers, and virus writers were *amateurs*. Today, as Craig Silverman pointed out on Twitter, "There are tens of thousands of people whose entire job it is to push out spam on Facebook." We were free to imagine this new technology would bring a better world, however dumb that seemed even at the time. The Internet was *magic*.

Tl;dr: human behavior hasn't changed. The Internet hasn't changed. It's just not magic any more.

Illustrations: Bambi, before Man enters the forest.


April 26, 2019

This house

2001-hal.pngThis house may be spying on me.

I know it listens. Its owners say, "Google, set the timer for one minute," and a male voice sounds: "Setting the timer for one minute."

I think, one minute? You need a timer for one minute? Does everyone now cook that precisely?

They say, "Google, turn on the lamp in the family room." The voice sounds: "Turning on the lamp in the family room." The lamp is literally sitting on the table right next to the person issuing the order.

I think, "Arm, hand, switch, flick. No?"

This happens every night because the lamp is programmed to turn off before we go to bed.

I do not feel I am visiting the future. Instead, I feel I am visiting an experiment that years from now people will look back on and say, "Why did they do that?"

I know by feel how long a minute is. A child growing up in this house would not. That child may not even know how to operate a light switch, even though one of the house's owners is a technical support guy who knows how to build and dismember computers, write code, and wire circuits. Later, this house's owner tells me, "I just wanted a reminder."

It's 16 years since I visited Microsoft's and IBM's visions of the smart homes they thought we might be living in by now. IBM imagined voice commands; Microsoft imagined fashion advice-giving closets. The better parts of the vision - IBM's dashboard with a tick-box so your lawn watering system would observe the latest municipal watering restrictions - are sadly unavailable. The worse parts - living in constant near-darkness so the ubiquitous projections are readable - are sadly closer. Neither envisioned giant competitors whose interests are served by installing in-house microphones on constant alert.

This house inaudibly alerts its owner's phones whenever anyone approaches the front door. From my perspective, new people mysteriously appear in the kitchen without warning.

This house has smartish thermostats that display little wifi icons to indicate that they're online. This house's owners tell me these are Ecobee Linux thermostats; the wifi connection lets them control the heating from their phones. The thermostats are not connected to Google.

None of this is obviously intrusive. This house looks basically like a normal house. The pile of electronics in the basement is just a pile of electronics. Pay no attention to the small blue flashing lights behind the black fascia.

One of this house's owners tells me he has deliberately chosen a male voice for the smart speaker so as not to suggest that women are or should be subservient to men. Both owners are answered by the same male voice. I can imagine personalized voices might be useful for distinguishing who asked what, particularly in a shared house or a company, and ensuring only the right people got to issue orders. Google says its speakers can be trained to recognize six unique voices - a feature I can see would be valuable to the company as a vector for gathering more detailed information about each user's personality and profile. And, yes, it would serve users better.

Right now, I could come down in the middle of the night and say, "Google, turn on the lights in the master bedroom." I actually did something like this once by accident years ago in a friend's apartment that was wirelessed up with X10 controls. I know this system would allow it because I used the word "Google" carelessly in a sentence while standing next to a digital photo frame, and the unexpected speaker inside it woke up to say, "I don't understand". This house's owner stared: "It's not supposed to do that when Google is not the first word in the sentence". The photo frame stayed silent.

I think it was just marking its territory.

Turning off the fan in their bedroom would be more subtle. They would wake up more slowly, and would probably just think the fan had broken. This house will need reprogramming to protect itself from children. Once that happens, guests will be unable to do anything for themselves.

This house's owners tell me there are many upgrades they could implement, and they will - but managing them takes skill and thought, to segment and secure the network and implement local data storage. Keeping Google and Amazon at bay requires an expert.

This house's owners do not get their news from their smart speakers, but it may be only a matter of time. At a recent Hacks/Hackers, Nic Newman gave the findings of a recent Reuters Institute study: smart speakers are growing faster than smartphones did at the same stage, they are replacing radios, and they "will kill the remote control". So far, only 46% use them to get news updates. What was alarming was the gatekeeper control providers have: on a computer, the web could offer 20 links; on a smartphone there's room for seven; a smart speaker gives just one answer to, "What's the latest news on the US presidential race?"

At OpenTech in 2017, Tom Steinberg observed that now that his house was equipped with an Amazon Echo, homes without one seemed "broken". He predicted that this would become such a fundamental technology that "only billionaires will be able to opt out". Yet really, the biggest advance since the beginning of remote controls is that now your garage door opener can collect your data and send it to Google.

My house can stay "broken".

Illustrations: HAL (what else?).


March 29, 2019


Philip Seymour Hoffman in Doubt.jpgA few months ago, Max Read published an article at New York Magazine commenting that, increasingly, most of the Internet is fake. He starts with Twitter bots and fake news and winds up contemplating the inauthenticity of self. Fake traffic, automatically generated content, and advertising fraud are not necessarily lethal. What *is* lethal is the constant nagging doubt about what you're seeing or, as Read puts it, "the uncanny sense that what you encounter online is not 'real' but is also undeniably not 'fake', and indeed may be both at once, or in succession, as you turn it over in your head". It's good not to take everything you read at face value. But the loss of confidence can be poisonous. Navigating that is exhausting.

And it's pervasive. Last weekend, a couple of friends began arguing about whether the epetition to revoke Article 50 was full of fraud. At the time, 40,000 to 50,000 people were signing per hour and the total had just passed 4 million. The petition is hosted on a government site, whose managers have transparently explained their handling of the twin challenges of scale and fraud. JJ Patrick's independent analysis concurs: very little bot participation. Even Theresa May defended its validity during Wednesday's Parliamentary debate, though her government went on to reject it.

Doubt is very easily spread, sometimes correctly. One of the most alarming things about the Boeing 737 MAX 8 crashes is the discovery that the Federal Aviation Administration is allowing the company to self-certify the safety of its planes. Even if the risk there is only perception, it's incredibly dangerous for an industry that has displayed enormous intelligence in cooperating to make air travel safe.

Another example: last year Nextdoor, a San Francisco-based neighborhood-focused social networking service, sent postcards inviting my area to join. Think of it as a corporately-owned chain of community bulletin boards. Across the tracks from me, one street gossips on a private bulletin board one of the residents has set up, where I'm told they have animated discussions of their micro-issues. By comparison, Nextdoor sounded synthetic; still, I signed up.

Most postings ask for or recommend window cleaners and piano tuners, builders and babysitters. Yet the site's daily email persistently gives crime and safety the top spot: two guys on a moped peering into car windows reacted aggressively to challenges; car broken into at Tesco; stolen bike; knife attack; police notices. I can't tell whether this emphasis is how the site promotes "engagement", or whether its origin is deliberate strategy or algorithmic accident. But I do note the rising anxiety in people's responses, and while crime is rising in the UK, likely attributable to police cuts, my neighborhood remains a safer place than Nextdoor suggests...I think. What is certain is that I doubt my neighbors more; you can easily imagine facing their hostile inquisition as a perfectly innocent hoodie-wearing young male on a moped using a flashlight to look for dropped keys.

Years ago, some of us skeptics considered mounting a hoax - a fake UFO to be found in someone's garden, for example - to chart its progress through the ranks of believers and the media. In the end, we decided it was a bad idea, because such things never die, and then you have to spend the rest of your life debunking them. There are plenty of examples; David Langford's UFO hoax, published as an account written by one William Robert Loosley and found in his attic, still circulates as true, goosed since then by a mention in a best-selling book by Whitley Strieber. As the Internet now proves every day, once a false story is embedded, you can never fully dig it out again. Worse, even when you don't believe it, repeated encounters can provoke doubt despite yourself.

Andrew Wakefield is a fine case in point. Years after the British Medical Journal retracted his paper and called it a hoax, the damage continues to escalate. Recently, the World Health Organization called vaccine hesitancy a top-ten threat to health worldwide. Hesitancy is right. I am an oddity in having been born in 1954 but having somehow escaped all the childhood diseases. "You probably just don't remember you had them," the nurse said when I inquired about getting the MMR, now that every week you read of a measles outbreak somewhere. True, I *don't* remember having them. But I *do* remember, clearly, my mother asking me, "Were you *close* to them?" every time the note came home from school that another kid had one of them. We made an appointment for the shot.

I then realized that years of exposure to anti-vaccination arguments have had their effect. Hundreds of millions of people have had these vaccines with little to no ill effects, and yet: what if? How stupid would I feel if I broke my own health? "It's just fear of illness," someone carped on Twitter, trying to convince me that vaccines were not safety-tested and only a fool would get one. Well, yes, and some illnesses *should* be feared, particularly as you age.

"I know," the nurse said, when I commented that spreading doubt has been a terrible effect of all this. She pushed the plunger. Two days later, a friend living in north London emailed: she has mumps. It feels like I had a close call.

Illustrations: Philip Seymour Hoffman in John Patrick Shanley's DOUBT.


February 14, 2019


Anti-copyright.svg.pngJust a couple of weeks ago it looked like the EU's proposed reform of the Copyright Directive, last updated in 2001, was going to run out of time. In the last three days, it's revived, and it's heading straight for us. As Joe McNamee, the outgoing director of European Digital Rights (EDRi), said last year, the EU seems bent on regulating Facebook and Google by creating an Internet in which *only* Facebook and Google can operate.

We'll start with copyright. As previously noted, the EU's proposed reforms include two particularly contentious clauses: Article 11, the "link tax", which would require anyone using more than one or two words to link to a news article elsewhere to get a license, and Article 13, the "upload filter", which requires any site older than three years *or* earning more than €10,000,000 a year in revenue to ensure that no user posts anything that violates copyright; sites that allow user-generated content must make "best efforts" to buy licenses for anything users might post. So even a tiny site - like net.wars, which is 13 years old - that hosted comments would logically be required to license all copyrighted content in the known universe, just in case. Reviewing the situation at Techdirt, Mike Masnick writes, "If this becomes law, I'm not sure Techdirt can continue publishing in the EU." Article 13, he continues, makes hosting comments impossible, and Article 11 makes their own posts untenable. What's left?

Julia Reda-wg-2016-06-24-cropped.jpgTo these known evils, the German Pirate Party MEP Julia Reda finds that the final text adds two more: limitations on text and data mining that allow rights holders to opt out under most circumstances, and - wouldn't you know it? - the removal of provisions that would have granted authors the right to proportionate remuneration (that is, royalties) instead of continuing to allow all-rights buy-out contracts. Many younger writers, particularly in journalism, now have no idea that as recently as 1990 limited contracts were the norm; the ability to resell and exploit their own past work was one reason the writers of the mid-20th century made much better livings than their counterparts do now. Communia, an association of digital rights organizations, writes that at least this final text can't get any *worse*.

Well, I can hear Brexiteers cry, what do you care? We'll be out soon. No, we won't - at least, we won't be out from under the Copyright Directive. For one thing, the final plenary vote is expected in March or April - before the May European Parliament general election. The good side of this is that UK MEPs will have a vote, and can be lobbied to use that vote wisely; from all accounts the present agreed final text settled differences between France and Germany, against which the UK could provide some balance. The bad side is that the UK, which relies heavily on exports of intellectual property, has rarely shown any signs of favoring either Internet users or creators against the demands of rights holders. The ugly side is that presuming this thing is passed before the UK brexits - assuming that happens - it will be the law of the land until or unless the British Parliament can be persuaded to amend it. And the direction of travel in copyright law for the last 50 years has very much been toward "harmonization".

Plus, the UK never seems to be satisfied with the amount of material its various systems are blocking, as the Open Rights Group documented this week. As if the blocks in place weren't enough, Rebecca Hill writes at the Register: under the just-passed Counter-Terrorism and Border Security Act, clicking on a link to information likely to be useful to a person committing or preparing an act of terrorism is an offense. It seems to me that could be almost anything - automotive listings on eBay, chemistry textbooks, a *dictionary*.

What's infuriating about the copyright situation in particular is that no one appears to be asking the question that really matters, which is: what is the problem we're trying to solve? If the problem is how the news media will survive, this week's Cairncross Review, intended to study that exact problem, makes some suggestions. Like them or loathe them, they involve oversight and funding; none involve changing copyright law or closing down the Internet.

Similarly, if the problem is market dominance, try anti-competition law. If the problem is the increasing difficulty of making a living as an author or creator, improve their rights under contract law - the very provisions that Reda notes have been removed. And, finally, if the problem is the future of democracy in a world where two companies are responsible for poisoning politics, then delving into campaign finances, voter rights, and systemic social inequality pays dividends. None of the many problems we have with Facebook and Google are actually issues that tightening copyright law solves - nor is their role in spreading anti-science, such as this, just in from Twitter, anti-vaccination ads targeted at pregnant women.

All of those are problems we really do need to work on. Instead, the only problem copyright reform appears to be trying to solve is, "How can we make rights holders happier?" That may be *a* problem, but it's not one nearly so worth solving.

Illustrations: Anti-copyright symbol (via Wikimedia); Julia Reda MEP in 2016.


February 8, 2019

Doing without

Over at Gizmodo, Kashmir Hill has conducted a fascinating experiment: cutting, in turn, Amazon, Facebook, Google, Microsoft, and Apple, culminating with a week without all of them. Unlike the many fatuous articles in which privileged folks boast about disconnecting, Hill is investigating a serious question: how deeply have these companies penetrated into our lives? As we'll see, this question encompasses the entire modern world.

For that reason, it's important. Besides, as Hill writes, it's wrong to answer objections to GAFAM's business practices - or their privacy policies - with, "Well, don't use them, then." It may be possible to buy from smaller sites and local suppliers, delete Facebook, run Linux, switch to AskJeeves and OpenStreetMap, and dump the iPhone, but doing so requires a substantial rethink of many tasks. As regulators consider curbing GAFAM's power, Hill's experiment shows where to direct our attention.

Online, Amazon is the hardest to avoid. As Lina M. Khan documented last year, Amazon underpins an ever-increasing amount of Internet infrastructure. Netflix, Signal, the WELL, and Gizmodo itself all run on top of Amazon's cloud services, AWS. To ensure she blocked all of them, Hill got a technical expert to set up a VPN that blocked all IP addresses owned by each company and monitored attempted connections. Even that, however, was complicated by the use of content delivery networks, which mask the origin of network traffic.
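By way of illustration - not Hill's actual setup - the core of such a per-company block can be sketched with Python's standard ipaddress module. The prefixes below are a tiny hypothetical sample; real published lists (AWS's ip-ranges.json, for instance) run to thousands of entries and change constantly:

```python
import ipaddress

# Hypothetical sample of company-owned address ranges; a real block list
# would be generated from the provider's published ranges, not hand-typed.
AMAZON_PREFIXES = [
    ipaddress.ip_network("52.0.0.0/11"),
    ipaddress.ip_network("54.160.0.0/12"),
]

def is_blocked(addr: str, prefixes=AMAZON_PREFIXES) -> bool:
    """Return True if addr falls inside any of the blocked prefixes."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in prefixes)

print(is_blocked("52.31.4.10"))  # True - inside 52.0.0.0/11
print(is_blocked("8.8.8.8"))     # False - not in the sample Amazon list
```

The hard part, as Hill found, isn't the membership test but assembling and maintaining the list - and content delivery networks still hide who is really behind a given address.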

Barring Facebook also means dumping Instagram and WhatsApp, and, as Hill notes, changing the signin procedure for any website where you've used your Facebook ID. Even if you are a privacy-conscious net.wars reader who would never grant Facebook that pole position, the social media buttons on most websites and ubiquitous trackers also have to go.

For Hill, blocking Apple - which seems easy to us non-Apple users - was "devastating". But this is largely a matter of habit, and habits can be re-educated. The killer was the apps: because iMessage reroutes texts to its own system, some of Hill's correspondents' replies never arrived, and she couldn't FaceTime her friends. Her conclusion: "It's harder to get out of Apple's ecosystem than Google's." However, once out she found it easy to stay that way - as long as she could resist her friends pulling her back in.

Google proved easier than expected despite her dependence on its services - Maps, calendar, browser. Here the big problem was email. The amount of stored information made it impossible to simply move and delete the account; now we know why Google provides so much "free" storage space. As with Amazon, the bigger issue was all the services Google underpins - trackers, analytics, and, especially, Maps, on which Uber, Lyft, and Yelp depend. Hill should be grateful she didn't have a Nest thermostat and doesn't live in Minnesota. The most surprising bit is that so many sites load Google *fonts*. Also, like Facebook, Google has spread logins across the web, and Hill had to find an alternative to Dropbox, which uses Google to verify users.

In our minds, Microsoft is like Apple. Don't like Windows? Get a Mac or use Linux. Ah, but: I have seen the Windows Blue Screen of Death on scheduling systems on both the London Underground and Philadelphia's SEPTA. How many businesses that I interact with depend on Microsoft products? PCs, Office, and Windows servers and point-of-sale systems are everywhere. A VPN can block LinkedIn, Skype, and (sadly) GitHub - but it can't block any of those offline systems, or the back-office systems at your bank. You can sell your Xbox, but even the local film society shows movies using VLC on Windows.

Hill's final episode, in which she eliminates all five simultaneously, posted just last night. As expected, she struggles to find alternative ways to accomplish many tasks she hasn't had to think about before. Ironically, this is easier if you're an Old Net Curmudgeon: as soon as she says large file, can't email, I go, "FTP!" while various web services all turn out to be hosted on AWS, and she eventually lands on "command line". It's a definite advantage if you remember how you did stuff *before* the Internet - cash can pay the babysitter (or write a check!), and old laptops can be repurposed to run Linux. Even so, complete avoidance is really only realistic for a US Congressman. The hardest for me personally would be giving up my constant companion, DuckDuckGo, which is hosted on...AWS.

Several things need to happen to change this - and we *should* change it because otherwise we're letting them pwn us, as in Dave Eggers' The Circle. The first is making the tradeoffs visible, so that we understand who we're really benefiting and harming with our clicks. The second is regulatory: Lina Khan described in 2017 how to rethink antitrust law to curb Amazon. Facebook, as Marc Rotenberg told CNBC last week, should be required to divest Instagram and WhatsApp. Both Facebook and Google should spin off or discontinue their identity verification and web-wide login systems into separate companies. Third, we should encourage alternatives by using them.

But the last thing is the hardest: we must convince all our friends that it's worth putting up with some inconvenience. As a lifelong non-drinker living in pub-culture Britain, I can only say: good luck with that.

Illustrations: Kashmir Hill and her new technology.


January 25, 2019

Reversal of fortunes

It may seem unfair to keep debunking the Internet's origin myths, but documenting what happens to the beliefs surrounding the beginning of a new technology may help foster more rational thinking next time.

Today's two cherished early-Internet beliefs: 1) the Internet was designed to withstand a bomb outage; 2) the Internet is impossible to censor. The first of these is true - the history books are clear on this - but it was taken to mean that the Internet could withstand all damage. That's just not true; it can certainly be badly disrupted on a national or regional basis.

While the Internet was new, a favorite route to overload was introducing a new application - the web, for example. Around 1996, Peter Dawe, the founder of one of Britain's first two ISPs, predicted that video would kill the Internet. For "kill" read "slow down horribly". Bear in mind that this was BB - before broadband - so an 11MB video file took hours to trickle in. Stream? Ha!

In 1995, Bob Metcalfe, the co-inventor of Ethernet, predicted that the Internet would start to collapse in 1996. In 1997, he literally ate his column as penance for being wrong.

It was weird: with one part of their brains people were staking their livelihoods on online businesses, yet with another part they believed the Internet was perpetually vulnerable. My favorite was Simson Garfinkel, writing "Fifty Ways to Kill the Internet" for Wired in 1997, who nailed the best killswitch: "Buy ten backhoes." Underneath all the rhetoric about virtuality, the Internet remains a physical network of cables. You'd probably need more than ten backhoes today, but it's still a finite number.

People have given up these worries even though parts of the Internet are actually being blacked out - by governments. In the acute form either access providers (ISPs, mobile networks) are ordered to shut down, or the government orders blocks on widely-used social media that people use to distribute news (and false news) and coordinate action, such as Twitter, Facebook, or WhatsApp.

In 2018, governments shutting down "the Internet" became an increasingly frequent fixture of the fortnightly Open Society Foundation Information Program News Digest. The list for 2018 is long, as Access Now says. At New America, Justin Sherman predicts that 2019 will see a rise in Internet blackouts - and I doubt he'll have to eat his pixels. The Democratic Republic of Congo was first, on January 1, soon followed by Zimbabwe.

There's general agreement that Internet shutdowns are bad for both democracy and the economy. In a 2016 study, the Brookings Institution estimated that Internet shutdowns cost countries $2.4 billion in 2015 (PDF), an amount that surely rises as the Internet becomes more deeply embedded in our infrastructure.

But the less-worse thing about the acute form is that it's visible to both internal and external actors. The chronic form, the second of our "things they thought couldn't be done in 1993", is long-term and less visible, and for that reason is the more dangerous of the two. The notion that censoring the Internet is impossible was best expressed by EFF co-founder John Gilmore in 1993: "The Internet perceives censorship as damage and routes around it". This was never a happy anthropomorphization of a computer network; more correctly, *people* on the Internet perceive censorship as damage and route around it. Even today, ejected Twitterers head to Gab; disaffected 4chan users create 8chan. But "routing around the damage" only works as long as open protocols permit anyone to build a new service. No one suggests that *Facebook* regards censorship as damage and routes around it; instead, Facebook applies unaccountable censorship we don't see or understand. The shift from hundreds of dial-up ISPs to a handful of broadband providers is part of this problem: centralization.

The country that has most publicly and comprehensively defied Gilmore's aphorism is China; in the New York Times, Raymond Zhong recently traced its strategy. At Technology Review, James Griffiths reports that the country is beginning to export its censorship via malware infestations and DDoS attacks, while Abdi Latif Dahir writes at Quartz that it is also exporting digital surveillance to African countries such as Morocco, Egypt, and Libya inside the infrastructure it's helping them build as part of its digital Silk Road.

The Guardian offers a guide to what Internet use is like in Russia, Cuba, India, and China. Additional insight comes from Chinese professor Bai Tongdong, who complains in the South China Morning Post that Westerners opposing Google's Dragonfly censored search engine project do not understand the "paternalism" they are displaying in "deciding the fate of Chinese Internet users" without considering their opinion.

Mini-shutdowns are endemic in democratic countries: unfair copyright takedowns, the UK's web blocking, and EU law limiting hate speech. "From being the colonizers of cyberspace, Americans are now being colonized by the standards adopted in Brussels and Berlin," Jacob Mchangama complains at Quillette.

In the mid-1990s, Americans could believe they were exporting the First Amendment. Another EFF co-founder, John Perry Barlow, was more right than he'd have liked when, in a January 1992 column for Communications of the ACM, he called the US First Amendment "a local ordinance". That is much less true of the control being built into our infrastructure now.

Illustrations: The old threat model: Seabees remove corroded zinc anodes from an undersea cable (via Wikimedia, from the US Navy site.)


January 17, 2019


"It's amazing. We're all just sitting here having lunch like nothing's happening, but..." This was on Tuesday, as the British Parliament was getting ready to vote down the Brexit deal. This is definitely a form of privilege, but it's hard to say whether it's confidence born of knowing your nation's democracy is 900 years old, or aristocrats-on-the-verge denial as when World War I or the US Civil War was breaking out.

Either way, it's a reminder that for many people historical events proceed in the background while they're trying to get lunch or take the kids to school. This despite the fact that all of us in the UK and the US are currently hostages to a paralyzed government. The only winner in either case is the politics of disgust, and the resulting damage will be felt for decades. Meanwhile, everything else is overshadowed.

One of the more interesting developments of the past digital week is the European advocate general's preliminary opinion that the right to be forgotten, part of data protection law, should not be enforceable outside the EU. In other words, Google, which brought the case, should not have to prevent access to material to those mounting searches from the rest of the world. The European Court of Justice - one of the things British prime minister Theresa May has most wanted the UK to leave behind since her days as Home Secretary - typically follows these preliminary opinions.

The right to be forgotten is one piece of a wider dispute that one could characterize as the Internet versus national jurisdiction. The broader debate includes who gets access to data stored in another country, who gets to crack crypto, and who gets to spy on whose citizens.

This particular story began in France, where the Commission Nationale de l'Informatique et des Libertés (CNIL), the French data protection regulator, fined Google €100,000 for selectively removing a particular person's name from its search results on just its French site. CNIL argued that instead the company should delink it worldwide. You can see their point: otherwise, anyone can bypass the removal by switching to .com or another country's version of the site. On the other hand, following that logic imposes EU law on other countries, overriding their own protections, such as the US First Amendment. Americans in particular tend to regard the right to be forgotten with the sort of angry horror of Lady Bracknell contemplating a handbag. Google applied to the European Court of Justice to override CNIL and vacate the fine.

A group of eight digital rights NGOs, led by Article 19 and including Derechos Digitales, the Center for Democracy and Technology, the Clinique d'intérêt public et de politique d'Internet du Canada (CIPPIC), the Electronic Frontier Foundation, Human Rights Watch, Open Net Korea, and Pen International, welcomed the ruling. Many others would certainly agree.

The arguments about jurisdiction and censorship were, like so much else, foreseen early. By 1991 or thereabouts, the question of whether the Internet would be open everywhere or devolve to lowest-common-denominator censorship was frequently debated, particularly after the United States v. Thomas case that featured a clash of community standards between Tennessee and California. If you say that every country has the right to impose its standards on the rest of the world, it's unclear what would be left other than a few Disney characters and some cat videos.

France has figured in several of these disputes: in (I think) the first international case of this kind, in 2000, it was a French court that ruled that the sale of Nazi memorabilia on Yahoo!'s site was illegal; after trying to argue that France was trying to rule over something it could not control, Yahoo! banned the sales on its French auction site and then, eventually, worldwide.

Data protection law gave these debates a new and practical twist. The origins of this particular case go back to 2014, when the European Court of Justice ruled in Google Spain v AEPD and Mario Costeja González that search engines must remove links to web pages that turn up in a name search and contain information that is irrelevant, inadequate, or out of date. This ruling arguably sought to weigh redressing the imbalance of power between individuals and the corporations publishing information about them against free expression. Finding this kind of difficult balance, the law scholar Judith Rauhofer argued at that year's Computers, Freedom, and Privacy, is what courts *do*. The court required search engines to remove from the results that show up in a *name* search the link to the original material; it did not require the original websites to remove the material entirely or require the link's removal from other search results. The ruling removed, if you like, a specific type of power amplification, but not the signal.

How far the search engines have to go is the question the ECJ is now trying to settle. This is one of those cases where no one gets everything they want because the perfect is the enemy of the good. The people who want their past histories delinked from their names don't get a complete solution, and no one country gets to decide what people in other countries can see. Unfortunately, the real winner appears to be geofencing, which everyone hates.



January 10, 2019

Secret funhouse mirror room

"Here," I said, handing them an old pocket watch. "This is your great-grandfather's watch." They seemed a little stunned.

As you would. A few weeks earlier, one of them had gotten a phone call from a state trooper. A cousin they'd never heard of had died, and they might be the next of kin.

"In this day and age," one of them told me apologetically, "I thought it must be a scam."

It wasn't. Through the combined offices of a 1940 divorce and a lifetime habit of taciturnity on personal subjects, a friend I'd known for 45 years managed to die without ever realizing his father had an extensive tree of living relatives. They would have liked each other, I think.

So they came to the funeral and met their cousin through our memories and the family memorabilia we found in his house. And then they went home bearing the watch, understandably leaving us to work out the rest.

Whenever someone dies, someone else inherits a full-time job. In our time, that full-time job is located at the intersection of security, privacy - and secrecy, the latter a complication rarely discussed. In the eight years since I was last close to the process of closing out someone's life, very much more of the official world has moved online. This is both help and hindrance. I was impressed with the credit card company whose death department looked online for obits to verify what I was saying instead of demanding an original death certificate (New York state charges $15 per copy). I was also impressed with - although a little creeped out by - the credit card company that said, "Oh, yes, we already know." (It had been three weeks, two of them Christmas and New Year's.)

But those, like the watch, were easy, accounts with physical embodiments - that is, paper statements. It's the web that's hard. All those privacy and security settings that we advocate for live someones fall apart when they die without disclosing their passwords. We found eight laptops, the most recent an actively hostile mid-2015 MacBook Pro. Sure, reset the password, but doing so won't grant access to any other stored passwords. If FileVault is turned on, a beneficent fairy - or a frustrated friend trying to honor your stated wishes that you never had witnessed or notarized - is screwed. I'd suggest an "owner deceased" mode, but how do you protect *that* for a human rights worker or a journalist in a war zone holding details of at-risk contacts? Or when criminals arrive knowing how to unlock it? Privacy and security are essential, but when someone dies they turn into secrecy that - I seem to recall predicting in 1997 - means your intended beneficiaries *don't* inherit because they can't unlock your accounts.

It's a genuinely hard problem, not least because most people don't want to plan for their own death. Personal computers operate in binary mode: protect everything, or nothing, and protect it all the same way even though exposing a secret not-so-bad shame is a different threat model from securing a bank account. But most people do not think, "After I'm dead, what do I care?" Instead, they think, "I want people to remember me the way I want and this thing I'm ashamed of they must never, ever know, or they'll think less of me." It takes a long time in life to arrive at, "People think of me the way they think of me, and I can't control that. They're still here in my life, and that must count for something." And some people never realize that they might feel more secure in their relationships if they hid less.

So, the human right to privacy bequeaths a problem: how do you find your friend's long-lost step-sibling, who is now their next of kin, when you only know their first name and your friend's address book is encrypted on a hard drive and not written, however crabbily, in a nice, easily viewed paper notebook?

If there's going to be an answer, I imagine it lies in moving away from binary mode. It's imaginable that a computer operating system could have a "personal rescue mode" that would unlock some aspects of the computer and not others, an extension of the existing facilities for multiple accounts and permissions, though these are geared to share resources, not personal files. The owner of such a system would have to take some care which information went in which bucket, but with a system like that they could give a prospective executor a password that would open the more important parts.
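As a purely hypothetical sketch of what those buckets might look like - no such OS facility exists, as the next paragraph says - one could derive separate keys from separate passwords and encrypt each bucket under its own key, so a "rescue" password given to a prospective executor opens one bucket and nothing else. This example uses Python's standard hashlib plus the third-party cryptography package:

```python
import base64
import hashlib
from cryptography.fernet import Fernet

def key_from_password(password: str, salt: bytes) -> bytes:
    # Derive a 32-byte key from a password and encode it as Fernet expects.
    raw = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return base64.urlsafe_b64encode(raw)

SALT = b"per-device-random-salt"  # in practice random, stored with the data

# Two buckets, each locked with a different password: the owner knows both,
# the executor is given only the rescue password.
everyday = Fernet(key_from_password("owner-only-password", SALT))
rescue = Fernet(key_from_password("executor-password", SALT))

diary = everyday.encrypt(b"private journal entries")
contacts = rescue.encrypt(b"address book, account list, will")

# The executor can open the rescue bucket...
print(rescue.decrypt(contacts).decode())
# ...but the everyday bucket stays sealed: decrypting it with the rescue
# key raises cryptography.fernet.InvalidToken.
```

The owner would, as noted above, have to take some care about which information went in which bucket - the scheme is only as good as the sorting.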

No such thing exists, of course, and some people wouldn't use it even if it did. Instead, the key turned out to be the modest-sized-town people network, which was and is amazing. It was through human connections that we finally understood the invoices we found for a storage unit. Without ever mentioning it, my friend had, for years, at considerable expense, been storing a mirror room from an amusement park funhouse. His love of amusement parks was no surprise. But if we'd known, the mirror room would now be someone's beloved possession instead of broken up in a scrapyard because a few months before he died my friend had stopped paying his bills - also without telling anyone.

Illustrations: The Lost City Fun House (via Wikimedia).


December 21, 2018

Behind you!

For one reason or another - increasing surveillance powers, increasing awareness of the extent to which online activities are tracked by myriad data hogs, Edward Snowden - crypto parties have come somewhat back into vogue over the last few years after a 20-plus-year hiatus. The idea behind crypto parties is that you get a bunch of people together and they all sign each other's keys. Fun! For some value of fun.

This is all part of the web of trust that is supposed to accrue when you use public key cryptography software like PGP or GPG: each new signature on a person's public key strengthens the trust you can have that the key truly belongs to that person. In practice, the web of trust - the decentralized alternative to hierarchical, certificate-based public key infrastructure - does not scale well, and the early 1990s excitement about at least the PGP version of the idea died relatively quickly.
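For the curious, the mechanics of key signing can be sketched in a few lines. This illustration uses the Python cryptography library and raw RSA signatures rather than the OpenPGP format PGP and GPG actually use, but the idea is the same: Alice signs the bytes of Bob's public key, and anyone who trusts Alice's key can check her endorsement:

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes, serialization

# Two parties at the crypto party: Alice will vouch for Bob's key.
alice = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bob = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Serialize Bob's public key; this is the material Alice attests to.
bob_pub_bytes = bob.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)

# Alice signs Bob's key, in effect saying "I checked: this key is Bob's."
pss = padding.PSS(
    mgf=padding.MGF1(hashes.SHA256()),
    salt_length=padding.PSS.MAX_LENGTH,
)
signature = alice.sign(bob_pub_bytes, pss, hashes.SHA256())

# Anyone holding Alice's public key can verify her endorsement;
# verify() raises InvalidSignature on failure, so no exception means success.
alice.public_key().verify(signature, bob_pub_bytes, pss, hashes.SHA256())
print("Alice's signature on Bob's key verifies")
```

Each additional signer repeats the same step with their own key, which is exactly the accumulation of signatures the web of trust depends on.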

A few weeks ago, ORG Norwich held such a meeting, and I went along to help run a workshop on when and how you want to use crypto. Like any security mechanism, encrypting email has its limits. Accordingly, before installing PGP and saying, "Secure now!" a little threat modeling is a fine thing. As bad as it can be to operate insecurely, it is much, much worse to operate under the false belief that you are more secure than you are because the measures you've taken don't fit the risks you face.

For one thing, PGP does nothing to obscure metadata - that is, the record of who sent email to whom. Newer versions offer the option to encrypt the subject line, but then the question arises: how do you get busy people to read the message?

For another thing, even if you meticulously encrypt your email, check that the recipient's public key is correctly signed, and make no other mistakes, you are still dependent on your correspondent to take appropriate care of their archive of messages and not copy your message into a new email and send it out in plain text. The same is true of any other encrypted messaging program such as Signal; you depend on your correspondents to keep their database encrypted and either password-protect their phone and other devices or keep them inaccessible. And then, too, even the most meticulous correspondent can be persuaded to disclose their password.

For that reason, in some situations it may in fact be safer not to use encryption and remain conscious that anything you send may be copied and read. I've never believed that teenagers are innately better at using technology than their elders, but in this particular case they may provide role models: research has found that they are quite adept at using codes only they understand. To their grown-ups, it just looks like idle Facebook chatter.

Those who want to improve their own and others' protection against privacy invasion therefore need to think through what exactly they're trying to achieve.

Some obvious questions, partly derived from Steve Bellovin's book Thinking Security, are:

- Who might want to attack you?
- What do they want?
- Are you a random target, the specific target, or a stepping stone to mount attacks on others?
- What do you want to protect?
- From whom do you want to protect it?
- What opportunities do they have?
- When are you most vulnerable?
- What are their resources?
- What are *your* resources?
- Who else's security do you have to depend on whose decisions are out of your control?

At first glance, the simple answer to the first of those is "anyone and everyone". This helpful threat pyramid shows the tradeoff between the complexity of the attack and the number of people who can execute it. If you are the target of a well-funded nation-state that wants to get you, just you, and nobody else but you, you're probably hosed. Unless you're a crack Andromedan hacker unit (Bellovin's favorite arch-attacker), the imbalance of available resources will probably be insurmountable. If that's your situation, you want expert help - for example, from Citizen Lab.

Most of us are not in that situation. Most of us are random targets; beyond a raw bigger-is-better principle, few criminals care whose bank account they raid or which database they copy credit card details from. Today's highly interconnected world means that even a small random target may bring down other, much larger entities when an attacker leverages a foothold on our insignificant network to access the much larger ones that trust us. Recognizing who else you put at risk is an important part of thinking this through.

Conversely, the point about risks that are out of your control is important. Forcing everyone to use strong, well-designed passwords will not matter if the site they're used on stores them with inadequate protections.
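The minimum such a site should be doing - sketched here with only Python's standard library - is storing a salted, deliberately slow hash rather than the password itself, so a stolen database doesn't immediately yield every user's credentials:

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # deliberately slow, to frustrate brute-force guessing

def store_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest); the site stores these, never the password."""
    salt = os.urandom(16)  # a fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = store_password("correct horse battery staple")
print(check_password("correct horse battery staple", salt, digest))  # True
print(check_password("guess", salt, digest))                         # False
```

None of which helps the user whose strong password is stored, unsalted and unhashed, by a site they cannot audit - which is the point.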

The key point that most people forget: think about the individuals involved. Security is about practice, not just technology; as Bruce Schneier likes to say, it's a process, not a product. If the policy you implement makes life hard for other people, they will eventually adopt workarounds that make their lives more manageable. They won't tell you what they've done, and no one will shout to warn you where the risk is lurking.

Illustrations: Aladdin pantomime at Nottingham Playhouse, 2008 (via Wikimedia).


October 25, 2018

The Rochdale hypothesis

First, open a shop. Thus the pioneers of Rochdale, Lancashire, began the process of building their town. Faced with the loss of jobs and income brought by the Industrial Revolution, a group of 28 people, about half of them weavers, drew up the set of Rochdale principles and set about finding £1 each to create a cooperative that sold a few basics. Ten years later, Wikipedia tells us, Britain was home to thousands of imitators: cooperatives became a movement.

Could Rochdale form the template for building a public service internet?

This was the endpoint of a day-long discussion held as part of MozFest and led by a rogue band from the BBC. Not bad, considering that it took us half the day to arrive at three key questions: What is public? What is service? What is internet?


To some extent, the question's phrasing derives from the BBC's remit as a public service broadcaster. "Public service" is the BBC's actual mandate; broadcasting, the activity it's usually identified with, is only the means by which it fulfills that mission. There might be - are - other choices. To educate, to inform, to entertain: that is its mandate. None of those words says radio or TV.

Probably most of the BBC's many global admirers don't realize how broadly the BBC has interpreted that. In the 1980s, it commissioned a computer - the BBC Micro, built by Acorn, the company that spawned ARM, whose chips today power smartphones - and a series of TV programs to teach the nation about computing. In the early 1990s, it created a dial-up Internet Service Provider to help people get online. Some ten or 15 years ago, I contributed to an online guide to the web for an audience with little computer literacy. This kind of thing goes way beyond what most people - for example, Americans - mean by "public broadcasting".

But, as Bill Thompson explained in kicking things off, although 98% of the public has some exposure to the BBC every week, the way people watch TV is changing. Two days later, the Guardian reported that the broadcasting regulator, Ofcom, believes the BBC is facing an "existential crisis" because the younger generation watches significantly less television. An eighth of young people "consume no BBC content" in any given week. When everyone can access the best of TV's back catalogue on a growing array of streaming services, and technology giants like Netflix and Amazon are spending billions to achieve worldwide dominance, the BBC must change to find new relevance.

So: the public service Internet might be a solution. Not, as Thompson went on to say, the Internet to make broadcasting better, but the Internet to make *society* better. Few other organizations in the world could adopt such a mission, but it would fit the BBC's particular history.

Few of us are happy with the Internet as it is today. Mozilla's 2018 Internet Health Report catalogues problems: walled gardens, constant surveillance to exploit us by analyzing our data, widespread insecurity, and increasing censorship.

So, again: what does a public service Internet look like? What do people need? How do you avoid the same outcome?

"Code is law," said Thompson, citing Lawrence Lessig's first book. Most people learned from that book that software architecture could determine human behaviour. He took a different lesson: "We built the network, and we can change it. It's just a piece of engineering."

Language, someone said, has its limits when you're moving from rhetoric to tangible service. Canada, they said, renamed the Internet "basic service" - but it changed nothing. "It's still concentrated and expensive."

Also: how far down the stack do we go? Do we rewrite TCP/IP? Throw out the web? Or start from outside and try to blow up capitalism? Who decides?

At this point an important question surfaced: who isn't in the room? (All but about 30 of the world's population, but don't get snippy.) Last week, the Guardian reported that the growth of Internet access is slowing - a lot. UN data to be published next month by the Web Foundation shows growth dropped from 19% in 2007 to less than 6% in 2017. The report estimates that it will be 2019, two years later than expected, before half the world is online, and large numbers may never get affordable access. Most of the 3.8 billion unconnected are rural poor, largely women, and they are increasingly marginalized.

The Guardian notes that many see no point in access. There's your possible starting point. What would make the Internet valuable to them? What can we help them build that will benefit them and their communities?

Last week, the New York Times suggested that conflicting regulations and norms are dividing the Internet into three: Chinese, European, and American. They're thinking small. Reversing the Internet's increasing concentration and centralization can't be done by blowing up the center, because the center will fight back. But decentralizing by building cooperatively at the edges...that is a perfectly possible future consonant with the Internet's past, even if we can't really force clumps of hipsters to build infrastructure in former industrial towns by luring them there with cheap housing. Cue Thompson again: he thought of this before, and he can prove it: here's his 2000 manifesto on e-mutualism.

Building public networks in the many parts of Britain where access is a struggle...that sounds like a public service remit to me.

Illustrations: The Unity sculpture, commemorating the 150th anniversary of the Rochdale Pioneers (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 21, 2018

Facts are screwed

vlad-enemies-impaled.gif"Fake news uses the best means of the time," Paul Bernal said at last week's gikii conference, an annual mingling of law, pop culture, and technology. Among his examples of old media turned to propaganda purposes: hand-printed woodcut leaflets, street singers, plays, and pamphlets stuck in cracks in buildings. The big difference today is data mining, profiling, targeting, and the real-time ability to see what works and improve it.

Bernal's most interesting point, however, is that like a magician's plausible diversion the surface fantasy story may stand in front of an earlier fake news story that is never questioned. His primary example was Vlad the Impaler, the historical figure thought to have inspired Dracula. Vlad's fame as a vicious and profligate killer derives from those woodcut leaflets. Bernal suggests the reasons: a) Vlad had many enemies who wrote against him, some of it true, most of it false; b) most of the stories were published ten to 20 years after he died; and c) there was a whole complicated thing about the rights to Transylvanian territory.

"Today, people can see through the vampire to the historical figure, but not past that," he said.

His main point was that governments' focus on content to defeat fake news is relatively useless. A more effective approach would have us stop getting our news from Facebook. Easy for me personally, but hard to turn into public policy.

Soon afterwards, Judith Rauhofer outlined a related problem: because Russian bots are aimed at exacerbating existing divisions, almost anyone can fall for one of the fake messages. Spurred on by a message from the Tumblr powers that be advising that she had shared a small number of messages that were traced to now-closed Russian accounts, Rauhofer investigated. In all, she had shared 18 posts - and these had been reblogged 2.7 million times, and are still being recirculated. The focus on paid ads means there is relatively little research on organic and viral sharing of influential political messages. Yet these reach vastly bigger audiences and are far more trusted, especially because people believe they are not being influenced by them.

In the particular case Rauhofer studied, "There are a lot of minority groups under attack in the US, the UK, Germany, and so on. If they all united in their voting behavior and political activity they would have a chance, but if they're fighting each other that's unlikely to happen." Divide and conquer, in other words, works as well as it ever has.

The worst part of the whole thing, she said, is that looking over those 18 posts, she would absolutely share them again and for the same reason: she agreed with them.

Rauhofer's conclusion was that the combination of prioritization - that is, the ordering of what you see according to what the site believes you're interested in - and targeting form "a fail-safe way of creating an environment where we are set against each other."

So in Bernal's example, an obvious fantasy masks an equally untrue - or at least wildly exaggerated - story, while in Rauhofer's the things you actually believe can be turned into weapons of mass division. Both scenarios require much more nuance and, as we've argued here before, many more disciplines to solve than are currently being deployed.

Andrea Matwyshyn provided five mini-fables as a way of illustrating five problems to consider when designing AI - or, as she put it, five stories of "future AI failure". These were:

- "AI inside" a product can mean sophisticated machine learning algorithms or a simple regression analysis; you cannot tell from the outside what is real and what's just hype, and the specifics of design matter. When Google's algorithm tagged black people as "gorillas", the company "fixed" the algorithm by removing "gorilla" from its list of possible labels. The algorithm itself wasn't improved.

- "Pseudo-AI" has humans doing the work of bots. Lots of historical examples for this one, most notably the mechanical Turk; Matwyshyn chose the fake autonomaton the Digesting Duck.

- Decisions that bring short-term wins may also bring long-term losses in the form of unintended negative consequences that haven't been thought through. Among Matwyshyn's examples were a number of cases where human interaction changed the analysis, such as the failure of Google Flu Trends and Microsoft's Tay bot.

- Minute variations or errors in implementation or deployment can produce very different results than intended. Matwyshyn's prime example was a pair of electronic hamsters she thought could be set up to repeat each other's words to form a recursive loop. Perhaps responding to harmonics less audible to humans, they instead screeched unintelligibly at each other. "I thought it was a controlled experiment," she said, "and it wasn't."

- There will always be system vulnerabilities and unforeseen attacks. Her example was squirrels that eat power lines, but the backhoe is the traditional example.

To prevent these situations, Matwyshyn emphasized disclosure about code; verification in the form of third-party audits; substantiation in the form of evidence to back up the claims that are made; anticipation - that is, liability and good corporate governance; and remediation - again a function of good corporate governance.

"Fail well," she concluded. Words for our time.

Illustrations: Woodcut of Vlad, with impaled enemies.

September 14, 2018

Hide by default

Beeban-Kidron-Dubai-2016.jpgLast week, defenddigitalme, a group that campaigns for children's data privacy and other digital rights, and Sonia Livingstone's group at the London School of Economics assembled a discussion of the Information Commissioner's Office's consultation on age-appropriate design for information society services, which is open for submissions until September 19. The eventual code will be used by the Information Commissioner when she considers regulatory action, may be used as evidence in court, and is intended to guide website design. It must take into account both the child-related provisions of the General Data Protection Regulation and the United Nations Convention on the Rights of the Child.

There are some baseline principles: data minimization, comprehensible terms and conditions and privacy policies. The last is a design question: since most adults either can't understand or can't bear to read terms and conditions and privacy policies, what hope of making them comprehensible to children? The summer's crop of GDPR notices is not a good sign.

There are other practical questions: when is a child not a child any more? Do age bands make sense when the capabilities of one eight-year-old may be very different from those of another? Capacity might be a better approach - but would we want Instagram making these assessments? Also, while we talk most about the data aggregated by commercial companies, government and schools collect much more, including biometrics.

Most important, what is the threat model? What you implement and how is very different if you're trying to protect children's spaces from ingress by abusers than if you're trying to protect children from commercial data aggregation or content deemed harmful. Lacking a threat model, "freedom", "privacy", and "security" are abstract concepts with no practical meaning.

There is no formal threat model - as the Yes, Minister episode "The Challenge" (series 3, episode 2) would predict, setting one would come too close to establishing "failure standards". The lack is particularly dangerous here, because "protecting children" means such different things to different people.

The other significant gap is research. We've commented here before on the stratification of social media demographics: you can practically carbon-date someone by the medium they prefer. This poses a particular problem for academics, in that research from just five years ago is barely relevant. What children know about data collection has markedly changed, and the services du jour have different affordances. Against that, new devices have greater spying capabilities, and, the Norwegian Consumer Council finds (PDF), Silicon Valley pays top-class psychologists to deceive us with dark patterns.

Seeking to fill the research gap are Sonia Livingstone and Mariya Stoilova. In their preliminary work, they are finding that children generally care deeply about their privacy and the data they share, but often have little agency and think primarily in interpersonal terms. The Cambridge Analytica scandal has helped inform them about the corporate aggregation that's taking place, but they may, through familiarity, come to trust people such as their favorite YouTubers and constantly available things like Alexa in ways their adults dislike. The focus on Internet safety has left many thinking that's what privacy means. In real-world safety, younger children are typically more at risk than older ones; online, the situation is often reversed because older children are less supervised, explore further, and take more risks.

The breath of passionate fresh air in all this is Beeban Kidron, an independent - that is, appointed - member of the House of Lords who first came to my attention by saying intelligent and measured things during the post-referendum debate on Brexit. She refuses to accept the idea that oh, well, that's the Internet, there's nothing we can do. However, she *also* genuinely seems to want to find solutions that preserve the Internet's benefits and incorporate the often-overlooked child's right to develop and make mistakes. But she wants services to incorporate the idea of childhood: if all users are equal, then children are treated as adults, a "category error". Why should children have to be resilient against systemic abuse and indifference?

Kidron, who is a filmmaker, began by doing her native form of research: in 2013 she made the full-length documentary InRealLife, which studied a number of teens using the Internet. While the film concludes on a positive note, many of the stories depressingly confirm some parents' worst fears. Even so it's a fine piece of work because it's clear she was able to gain the trust of even the most alienated of the young people she profiles.

Kidron's 5Rights framework proposes five essential rights children should have: remove, know, safety and support, informed and conscious use, and digital literacy. To implement these, she proposes that the industry should reverse its current pattern of defaults, which, as is widely known, 95% of users never change (while 98% never read terms and conditions). Companies know this, and keep resetting the defaults in their favor. Why shouldn't it be "hide by default"?

This approach sparked ideas. A light that tells a child they're being tracked or recorded so they can check who's doing it? Collective redress is essential: what 12-year-old can bring their own court case?

The industry will almost certainly resist. Giving children the transparency and tools with which to protect themselves, resetting the defaults to "hide"...aren't these things adults want, too?

Illustrations: Beeban Kidron (via Wikimedia)

August 30, 2018


GDPR-LATimes.pngThree months after Europe's General Data Protection Regulation came into force, Nieman Lab finds that more than 1,000 US newspapers are still blocking EU visitors.

"We are engaged on the issue", says the placard that blocks access to even the front pages of the New York Daily News and the Chicago Tribune, both owned by Tronc, as well as the Los Angeles Times, which was owned by Tronc until very recently. Ironically, Wikipedia tells us that the silly-sounding name "Tronc" was derived from "Tribune Online Content"; you'd think a company whoe name includes "online" would grasp the illogic of blocking 500 million literate readers. Nieman Lab also notes that Tronc is for sale, so I guess the company has more urgent problems.

Also apparently unable to cope with remediating its systems, despite years of notice, is Lee Enterprises, which owns numerous newspapers including the Carlisle, PA Sentinel and the Arizona Daily Star; these return "Error 451: Unavailable due to legal reasons", and blame GDPR as the reason "access cannot be granted at this time". Even the giant retail chain Williams-Sonoma has decided GDPR is just too hard, redirecting would-be shoppers to a UK partner site that is almost, but not quite, entirely unlike Williams-Sonoma - and useless if you want to ship a gift to someone in the US.

If you're reading this in the US, and you want to see what we see, try any of those URLs in a free proxy such as Hide Me, setting your location to Amsterdam. Fun!

Less humorously, shortly after GDPR came into force a major publisher issued new freelance contracts that shift the liability for violations onto freelances. That is, if I do something that gets the company sued for GDPR violations, in their world I indemnify them.

And then there are the absurd and continuing shenanigans of ICANN, which is supposed to be a global multi-stakeholder organization modeling a new type of international governance, but seems so unable to shake its American origins that it can't conceive of laws it can't bend to its will.

Years ago, I recall that the New York Times, which now embraces being global, paywalled non-US readers because we were of no interest to their advertisers. For that reason, it seems likely that Tronc and the others see little profit in a European audience. They're struggling already; it may be hard to justify the expenditure on changing their systems for a group of foreign deadbeats. At the same time, though, their subscribers are annoyed that they can't access their home paper while traveling.

On the good news side, the 144 local daily newspapers and hundreds of other publications belonging to GateHouse Media seem to function perfectly well. The most fun was NPR, which briefly offered two alternatives: accept cookies or view in plain text. As someone commented on Twitter, it was like time-traveling back to 1996.

One intended consequence has been to change a lot of data practices. The Reuters Institute finds that the use of third-party cookies is down 22% on European news sites in the three months GDPR has been in force - and 45% on UK news sites. A couple of days after GDPR came into force, web developer Marcel Freinbichler did a US-vs-EU comparison on USA Today: load time dropped from 45 seconds to three, from 124 JavaScript files to zero, and from more than 500 requests to 34.

gdpr-unbalanced-cookingsite.jpgBut many (and not just US sites) are still not getting the message, or are mangling it. For example, numerous sites now display boxes listing the many types of cookies they use and offering chances to opt in or out. A very few of these are actually well-designed, so you can quickly opt out of whole classes of cookies (advertising, tracking...) and get on with reading whatever you came to the site for. Others are clearly designed to make it as difficult as possible to opt out; these sites want you to visit a half-dozen other sites to set controls. Still others say that if you click the button or continue using the site your consent will be presumed. Another group say here's the policy ("we collect your data"), click to continue, and offer no alternative other than to go away. Not a lawyer - but sites are supposed to obtain explicit consent for collecting data on an opt-in basis, not assume consent on an opt-out basis while making it onerous to object.

The reality is that it is far, far easier to install ad blockers - such as EFF's Privacy Badger - than to navigate these terrible user interfaces. In six months, I expect to see surveys coming from American site owners saying that most people agree to accept advertising tracking, and what they will mean is that people clicked OK, trusting their ad blockers would protect them.

None of this is what GDPR was meant to do. The intended consequence is to protect citizens and redress the balance of power; exposing exploitative advertising practices and companies' dependence on "surveillance capitalism" is a good thing. Unfortunately, many Americans seem to be taking the view that if they just refuse service the law will go away. That approach hasn't worked since Usenet.

Illustrations: Personally collected screenshots.

August 9, 2018


On Wednesday evening, Joanna Geary posted a challenge to all and sundry on Twitter.

To wit:

"OK, choose your own adventure! It is an early summer morning. You wake up to the sound of constant buzzing on your phone. Reading many messages you discover your name has been picked in a compulsory lottery. You are now Lord High Ruler of the Internet. What do you do next?"


Observation: she capitalized "Internet". So nostalgic. We all should, but the Associated Press style book is against us.

I tried answering: "Convene a wise council."

Geary: "You have chosen to convene a wise council. Your first task is to produce a list of names for the council and justify why they are the wisest for the job. Who do you choose?"

Me (muttering, "That should be *whom*"): "Depends. What's the job? And what would *you* do? (Yes, I know it's your game.)"

Geary: "You have chosen to ask host. ... ... Cannot contact the specified host. The host may not be available on the network or to keep consistency this host may not be responding."

Right. I know where I am now. It's a text-based online game. We who came before the generation who grew up on graphical games remember these things from the 1980s, when Richard Bartle and Roy Trubshaw's MUD launched a genre and Douglas Adams tortured many people with the game version of Hitchhiker's Guide to the Galaxy, a time-sink series of frustrating puzzles. How Geary spent her childhood is now clear.

I reply: "*Go north." In response, she sends me to Canada and repeats: "What do you want to do next?" (She's got me on the wrong continent, but I digress.)

Geary, a former journalist and director of curation at Twitter, is also founder of London's edition of Hacks/Hackers, a gathering that smashes together journalists and computer hackers for mutual benefit.

Geary's main question is the same one everyone has been struggling with ever since 1980-something, when John Connolly reportedly demanded, "Who's in charge?" of a roomful of engineers. Connolly had a right to ask: he was the guy at the National Science Foundation who funded what eventually became the Internet backbone. We're no nearer an answer now.

The history of the Internet is littered with "wise councils" working to solve things for the greater good. In the early days, they were mostly engineers: IETF, ISOC, Jon Postel. It was Postel and his group who allocated the country code domain name registries, deciding to adopt the ISO list to determine whether the UK should be .uk or .gb, for example.

"The IANA [Internet Assigned Numbers Authority] is not in the business of deciding what is and is not a country," Postel wrote in RFC 1591 in 1994. "The selection of the ISO 3166 list as a basis for country code top-level domain names was made with the knowledge that ISO has a procedure for determining which entities should be and should not be on that list."

In that statement, Postel, the nearest thing to Geary's Lord High Ruler the Internet has ever had, provided a useful model for Internet governance: he set limits; deferred to established processes and the knowledgeable experts who created them after serious study; and published the reasoning. The name "RFC" - "request for comments" - was deliberately chosen to invite collaboration. Before the big money came in, someone told me at a policy conference circa 1998, anyone pushing a proposal's adoption because it would be good for their company would have been booed off the stage. Today, even the Tim Berners-Lee-led W3C struggles to resist corporate influence.

By the mid-1990s, it was clear Internet governance needed more disciplines: civil society, international relations, economists, security practitioners...lawyers. Just one lawyer present when the domain name system was created, Michael Froomkin said at We Robot 2015, could have averted decades of disputes. Postel himself was replaced in 1998 by ICANN, which is currently proving that despite its multistakeholder model it's so resolutely American that it thinks it can cut a deal with GDPR.

In Code and Other Laws of Cyberspace, Lawrence Lessig argued that there are four means of regulation: market, code/architecture, social norms, and law. The market beloved of libertarians is failing in all sorts of ways, most notably privacy; law struggles to cross international borders; much code has become pwned by the largest Internet companies; and no one can agree on norms.

In a coda, John Perry Barlow added, "In the absence of law, ethics and responsibility is [sic] what you have to have." But again: who defines the ethics and the responsibility?

So we are back to Geary's challenge, having eliminated most avenues of approach. Grumpily lifting my crown, I think I would start with access, primarily through municipal and cooperatively built networks. In Cybersalon's unscientific poll at the 2015 Web We Want Festival it was the number one complaint, even in some areas of London. Improving access will continue to enlarge the tranche of existing problems: security, privacy, literacy, education, insufficient competition, network neutrality, centralization, public and private investment, law enforcement, protection for democratic processes, moderating toxic human behavior, governance, control, censorship, bullying and intimidation, and technical development. All that will need many wise councils.

But first: let there be light.

Illustrations: Screenshot of Geary's Twitter post.

August 3, 2018

JAQing off

benjaminfranklin-pd.jpgYears ago, when I used to be called in to do various types of discussion TV and radio programs as the token skeptic, a friend said I should turn these invitations down. He had done so himself, on the basis that absent specialists to provide "the other side" of the debate, the programs would drop the item.

I was less sure, because before The Skeptic was founded, those programs did still run - with the opposition provided by a different type of anti-science. Programs featuring mediums and ghost-seers would debate religious representatives who saw these activities and apparitions as evil, but didn't doubt their existence or discuss the psychology of belief. So I thought the more likely outcome was that the programs would run anyway, but be more damaging to the public perception of science.

I did, however, argue (I think in a piece for New Scientist) that matters of fact should not be fodder for "debate" on TV. "Balance" was much-misused even in the early 1990s, and I felt that if you defined it as "presenting two opposing points of view" then every space science story would require a quote from the Flat Earth Society. Fortunately, no one has gone that far in demanding "balance". Yet.

Deborah Lipstadt opened her book Denying the Holocaust with an argument like my friend's. She refuses to grant Holocaust deniers the apparent legitimacy of a platform. This is ultimately an economic question: if the producers want spectacular mysteries, then the skeptic is there partly as a decorative element and partly to absolve the producers of the accusation that they're promoting nonsense. The program runs if you decline. If they want Deborah Lipstadt as their centerpiece, then she is in a position to demand that her fact-based work not be undermined by some jackass presenting "an alternative view".

Maybe that should be JAQass. A couple of weeks ago, Slate ran a piece by AskHistorians subreddit volunteer moderator Johannes Breit. He and his fellow moderators, who sound like they come from the Television without Pity school of moderation, keep AskHistorians high-signal, low-noise by ruthlessly stamping on speculation, abuse, and anything that smacks of denial of established fact. The Holocaust and Nazi Germany are popular topics, so Holocaust denial is a particular target of their efforts.

"Conversation is impossible if one side refuses to acknowledge the basic premise that facts are facts," he writes. And then: "Holocaust denial is a form of political agitation in the service of bigotry, racism, and anti-Semitism." For these reasons, he argues that Mark Zuckerberg's announced plan to control anti-Semitic hate speech on Facebook is to remove only postings that advocate violence will not work: In Breit's view, "Any attempt to make Nazism palatable again is a call for violence." Accordingly, the AskHistorians moderators have a zero-tolerance policy even for "just asking questions" - or JAQing, a term I hadn't noticed before - which in their experience is not innocent questioning at all, but deliberate efforts to sow doubt in the audiences' minds.

"Just asking questions" was apparently also Gwyneth Paltrow's excuse for not wanting to comply with Conde Nast's old-fangled rules about fact checking. It appears in HIV denial (that is, early 1990s Sunday Times-style refusal to accept the scientific consensus about the cause of AIDS).

One reason the AskHistorians moderators are so unforgiving, Breit writes, is that the subreddit shares a host - Reddit - with myriad other subcommunities that are "notorious for their toxicity". I'd argue this is a feature as well as a bug: AskHistorians' regime would be vastly harder to maintain if there weren't other places where people can blow off steam and vent their favorite anti-matter. As much as I loathe a business that promotes dangerous and unhealthy practices in the name of "wellness", I'm still a free speech advocate - actual free speech, not persecuted-conservative-mythology free speech.

I agree with Breit that Zuckerberg's planned approach for Facebook won't work. But Breit's approach isn't applicable either because of scale: AskHistorians, with a clearly defined mission and real expert commenters, has 37 moderators. I can't begin to guess how many that would translate to for Facebook, where groups are defined but the communities that form around each individual poster are not. That said, if you agree with Breit about the purpose of JAQ, his approach is close to the one I've always favored: distinguishing between content and behavior.

Mostly, we need principles. Without them, we have a patchwork of reactions but no standards to debate. We need not to confuse Google and Facebook with the internet. And we need to think about the readers as well as posters. Finally, we need to understand the tradeoffs. History teaches us a lot about the price of abrogating free speech. The events of the last two years have taught us that our democracies can be undermined by hostile actors turning social media to their own purposes.

My suspicion is that it's the economic incentives underlying these businesses that have to be realigned, and that the solution to today's problems is less about limiting speech than about changing business models to favor meaningful connection rather than "engagement" (aka, outrage). That probably won't be enough by itself, but it's the part of the puzzle that is currently not getting enough attention.

Illustrations: Benjamin Franklin, who said, "Whoever would overthrow the liberty of a nation must begin by subduing the freeness of speech."

July 6, 2018

This is us

Thumbnail image for ACTA_Protest_Crowd_in_London.JPGAfter months of anxiety among digital rights campaigners such as the Open Rights Group and the Electronic Frontier Foundation, the European Parliament has voted 318-278 against fast-tracking a particularly damaging set of proposed changes to copyright law.

There will be a further vote on September 10, so as a number of commentators are reminding us on Twitter, it's not over yet.

The details of the European Commission's alarmingly wrong-headed approach have been thoroughly hashed out for the last year by Glyn Moody. The two main bones of contention are euphoniously known as Article 11 and Article 13. Article 11 (the "link tax") would give publishers the right to require licenses (that is, payment) for the text accompanying links shared on social media, and Article 13 (the "upload filter") would require sites hosting user content to block uploads of copyrighted material.

In a Billboard interview with MEP Helga Trüpel, Muffett quite rightly points out the astonishing characterization of the objections to Articles 11 and 13 as "pro-Google". There's a sudden outburst of people making a similar error: Even the Guardian's initial report saw the vote as letting tech giants (specifically, YouTube) off the hook for sharing their revenues. Paul McCartney's last-minute plea hasn't helped this perception. What was an argument about the open internet is now being characterized as a tussle over revenue share between a much-loved billionaire singer/songwriter and a greedy tech giant that exploits artists.

Yet the opposition was never about Google. In fact, probably most of the active opponents to this expansion of copyright and liability would be lobbying *against* Google on subjects like privacy, data protection, tax avoidance, and market power. We just happen to agree with Google on this particular topic because we are aware that forcing all sites to assume liability for the content their users post will damage the internet for everyone *else*. Google - and its YouTube subsidiary - has both the technology and the financing to play the licensing game.

But licensing and royalties are a separate issue from mandating that all sites block unauthorized uploads. The former is about sharing revenues; the latter is about copyright enforcement, and conflating them helps no one. The preventive "copyright filter" that appears essential for compliance with Article 13 would fail the "prior restraint" test of the US First Amendment - not that the EU needs to care about that. As copyright-and-technology consultant Bill Rosenblatt writes, licensing is a mess that this law will do nothing to fix. If artists and their rights holders want a better share of revenues, they could make it a *lot* easier for people to license their work. This is a problem they have to fix themselves, rather than requiring lawmakers to solve it for them by placing the burden on the rest of us. The laws are what they are because for generations rights holders made them.

Article 11, which is or is not a link tax depending on who you listen to, is another matter. Germany (2013) and Spain (2014) have already tried something similar, and in both cases it was widely acknowledged to have been a mistake. So much so that one of the opponents to this new attempt is the Spanish newspaper El País.

My guess is that those who want these laws passed are focusing on Google's role in lobbying against them - for example, Digital Music News reports that Google spent more than $36 million on opposing Article 13 - in preparation for the next round in September. Google and Facebook are increasingly the targets people focus on when they're thinking about internet regulation. Therefore, the thinking goes, if you can recast the battle as one between deserving artists and a couple of greedy American big businesses, it will be an easier sell to legislators.

But there are two of them and billions of us, and the opposition to Articles 11 and 13 was never about them. The 2012 SOPA and PIPA protests and the street protests against ACTA were certainly not about protecting Google or any other large technology company. No one goes out on the street or dresses up their website in protest banners in order to advocate for *Google*. They do it because what's been proposed threatens to affect them personally.

There's even a sound economic argument: had these proposed laws been in place in 1998, when Sergey Brin and Larry Page were meeting in dorm rooms, Google would not exist. Nor would thousands of other big businesses. Granted, most of these have not originated in the EU, but that's not a reason to wreck the open internet. Instead, that's a reason to find ways to make the internet hospitable to newcomers with bright ideas.

This debate is about the rest of us and our access to the internet. We - for some definition of "we" - were against these kinds of measures when they first surfaced in the early 1990s, when there were no tech giants to oppose them, and for the same reasons: the internet should be open to all of us.

Let the amendments begin.

Illustrations: Protesters against ACTA in London, 2012 (via Wikimedia)


April 6, 2018


Well, what's 37 million or 2 billion scraped accounts more or less among friends? The exploding hairball of the Facebook/Cambridge Analytica scandal keeps getting bigger. And, as Rana Dasgupta writes in the Guardian, we are complaining now because it's happening to us, but we did not notice when these techniques were tried out first in third-world countries. Dasgupta has much to say about how nation-states will have to adapt to these conditions.

Given that we will probably never pin down every detail of how much data and where it went, it's safest to assume that all of us have been compromised in some way. The smug "I've never used Facebook" population should remember that they almost certainly exist in the dataset, by either reference (your sister posts pictures of "my brother's birthday") or inference (like deducing the existence, size, and orbit of an unseen planet based on its gravitational pull on already-known objects).

Downloading our archives tells us far less than people recognize. My own archive had no real surprises (my account dates to 2007, but I post little and adblock the hell out of everything). The shock many people have experienced at seeing years of messages and photographs laid out in front of them, plus the SMS messages and call records that Facebook shouldn't have been retaining in the first place, hides the fact that these archives are a very limited picture of what Facebook knows about us. They show us nothing about information posted about us by others, photos others have posted and tagged, or comments made in response to things we've posted.

The "me-ness" of the way Facebook and other social media present themselves was called out by Christian Fuchs in launching his book Digital Demagogue: Authoritarian Capitalism in the Age of Trump and Twitter. "Twitter is a me-centred medium. 'Social media' is the wrong term, because it's actually anti-social, Me media. It's all about individual profiles, accumulating reputation, followers, likes, and so on."

Saying that, however, plays into Facebook's own public mythology about itself. Facebook's actual and most significant holdings about us are far more extensive, and the company derives its real power from the complex social graphs it has built and the insights that can be gleaned from them. None of that is clear from the long list of friends. Even more significant is how Facebook matches up user profiles to other public records and social media services and with other brokers' datasets - but the archives give us no sense of that either. Facebook's knowledge of you is also greatly enhanced - as is its ability to lock you in as a user - if you, like many people, have opted to use Facebook credentials to log into third-party sites. Undoing that is about as easy and as much fun as undoing all your direct debit payments in order to move your bank account.

Facebook and the other tech companies are only the beginning. There are a few people out there trying to suggest Google is better, but Zeynep Tufekci discovered it had gone on retaining her YouTube history even though she had withdrawn permission to do so. As Tufekci then writes, if a person with a technical background whose job it is to study such things could fail to protect her data, how could others hope to do so?

But what about publishers and the others dependent on that same ecosystem? As Doc Searls writes, the investigative outrage on display in many media outlets glosses over the fact that they, too, are compromised. Third party trackers, social media buttons, Google analytics, and so on all deliver up readers to advertisers in increasing detail, feeding the business plans of thousands of companies all aimed at improving precision and targeting.

And why stop with publishers? At least they have the defense of needing to make a living. Government sites, libraries, and other public services do the same thing, without that justification. The Richmond Council website shows no ads - but it still uses Google Analytics, which means sending a steady stream of user data Google's way. Eventbrite, which everyone now uses for event sign-ups, is constantly exhorting me to post my attendance to Facebook. What benefit does Eventbrite get from my complying? It never says.

Meanwhile, every club, member organization, and creative endeavor begs its adherents to "like my page on Facebook" or "follow me on Twitter". While they see that as building audience and engagement, the reality is that they are acting as propagandists for those companies. When you try to argue against doing this, people will say they know, but then shrug helplessly and say they have to go where the audience is. If the audience is on Facebook, and it takes page likes to make Facebook highlight your existence, then what choice is there? Very few people are willing to contemplate the hard work of building community without shortcuts, and many seem to have come to believe that social media engagement as measured in ticks of approval is community, like Mark Zuckerberg tried to say last year.

For all these reasons, it's not enough to "fix Facebook". We must undo its leverage.

Illustrations: Facebook logo.


March 30, 2018

Conventional wisdom

One of the problems the internet was always likely to face as a global medium was the conflict over who gets to make the rules and whose rules get to matter. So far, it's been possible to kick the can down the road for Future Government to figure out while each country makes its own rules. It's clear, though, that this is not a workable long-term strategy, if only because the longer we go on without equitable ways of solving conflicts, the more entrenched the ad hoc workarounds and because-we-can approaches will become. We've been fighting the same battles for nearly 30 years now.

I didn't realize how much I longed for a change of battleground until last week's Internet Law Works-in-Progress paper workshop, when for the first time I heard an approach that sounded like it might move the conversation beyond the crypto wars, the censorship battles, and the what-did-Facebook-do-to-our-democracy anguish. The paper was presented by Asaf Lubin, a Yale JSD candidate whose background includes a fellowship at Privacy International. In it, he suggested that while each of the many cases of international legal clash has been considered separately by the courts, the reality is that together they all form a pattern.

The cases Lubin is talking about include the obvious ones, such as United States v. Microsoft, currently under consideration in the US Supreme Court, and Apple v. FBI. But they also include the prehistoric cases that created the legal environment we've lived with for the last 25 years: 1996's US v. Thomas, the first jurisdictional dispute, which pitted the community standards of California against those of Tennessee (making it a toss-up whether the US would export the First Amendment or Puritanism); 1995's Stratton Oakmont v. Prodigy, which established that online services could be held liable for the content their users posted; and 1991's Cubby v. CompuServe, which ruled that CompuServe was a distributor, not a publisher, and could not be held liable for user-posted content. The difference in those last two cases: Prodigy exercised some editorial control over postings; CompuServe did not. In the UK, notice-and-takedown rules were codified after Godfrey v. Demon Internet extended defamation law to the internet.

Both access to data - whether encrypted or not - and online content were always likely to repeatedly hit jurisdictional boundaries, and so it's proved. Google is arguing with France over whether right-to-be-forgotten requests should be deindexed worldwide or just in France or the EU. The UK is still planning to require age verification for pornography sites serving UK residents later this year, and is pondering what sort of regulation should be applied to internet platforms in the wake of the last two weeks of Facebook/Cambridge Analytica scandals.

The biggest jurisdictional case, United States v. Microsoft, may have been rendered moot in the last couple of weeks by the highly divisive Clarifying Lawful Overseas Use of Data (CLOUD) Act. Divisive because the technology companies seem to like it; EFF and CDT argue that it's an erosion of privacy laws because it lowers the standard of review for issuing warrants; and Peter Swire and Jennifer Daskal think it will improve privacy by setting up a mechanism by which the US can review what foreign governments do with the data they're given. Swire and Daskal also believe it will serve us all better than if the Supreme Court rules in favor of the Department of Justice (which they consider likely).

Looking at this landscape, "They're being argued in a siloed approach," Lubin said, going on to imagine the thought process of the litigants involved. "I'm only interested in speech...or I'm a Mutual Legal Assistance person and only interested in law enforcement getting data. There are no conversations across fields and no recognition that the problems are the same." In conversation at conferences, he's catalogued reasons for this. Most cases are brought against companies too small to engage in too-complex litigation and who fear antagonizing the judge. Larger companies are strategic about which cases they argue and in front of whom; they seek to avoid having "sticky precedents" issued by judges who don't understand the conflicts or the unanticipated consequences. Courts, he said, may not even be the right forums for debating these issues.

The result, he went on to say, is that these debates conflate first-order rules, such as the right balance on privacy and freedom of expression, with second-order rules, such as the right procedures to follow when there's a conflict of laws. To solve the first-order rules, we'd need something like a Geneva Convention, which Lubin thought unlikely to happen.

To reach agreement on the second-order rules, however, he proposes a Hague Convention, which he described as "private international law treaties" that could address the problem of agreeing the rules to follow when laws conflict. To me, as neither a lawyer nor a policy wonk, the idea sounded plausible and interesting: these are not debates that should be solved by either "Our lawyers are bigger and more expensive than your lawyers" or "We have bigger bombs." (Cue Tom Lehrer: "But might makes right...") I have no idea whether such an idea can work or be made to happen. But it's the first constructive new suggestion I've heard anyone make for changing the conversation in a long, long time.

Illustrations: The Hague's Grote Markt (via Wikimedia); Asaf Lubin.


March 9, 2018

Signaling intelligence

Last month, the British Home Office announced that it had a tool that can automatically detect 94% of Daesh propaganda with 99.995% accuracy. Sophos summarizes the press release to say that only 50 out of 1 million videos would require human review.

"It works by spotting subtle patterns in the extremist videos that distinguish them from normal content..." Mark Warner, CEO of London-based ASI Data Science, the company that developed the classifier, told Buzzfeed.

Yesterday, ASI, which numbers Skype co-founder Jaan Tallinn among its investors, presented its latest demo day in front of a packed house. Most of the lightning presentations focused on various projects its Fellows have led using its tools in collaboration with outside organizations such as Rolls Royce and the Financial Conduct Authority. Warner gave a short presentation of the Home Office extremism project that included little more detail than the press reports a month ago, to which my first reaction was: it sounds impossible.

That reaction is partly due to the many problems with AI, machine learning, and big data that have surfaced over the last couple of years. Either there are hidden biases, or the media reports are badly flawed, or the system appears to be telling us only things we already know.

Plus, it's so easy - and so much fun! - to mock the flawed technology. This week, for example, neural network trainer Janelle Shane showed off the results of some of her pranks. After confusing image classifiers with sheep that don't exist, goats in trees (birds! or giraffes!) and sheep painted orange (flowers!), she concludes, "...even top-notch algorithms are relying on probability and luck." Even more than humans, it appears that automated classifiers decide what they see based on what they expect to see and apply probability. If a human is holding it, it's probably a cat or dog; if it's in a tree it's not going to be a goat. And so on. The experience leads Shane to surmise that surrealism might be the way to sneak something past a neural net.

ASI's classifier probably takes some of this approach too (we were shown no details). As Sophos suggests, a lot of the signals ASI's algorithm is likely to use have nothing to do with the computer "seeing" or "interpreting" the images. Instead, it likely looks for known elements such as logos and facial images matched against known terrorism photos or videos. In addition, it can assess the cluster of friends surrounding the account that posted the video and look for profile information showing that the source has been known to post such material in the past. And some of it will be based on analyzing the language used in the video. From what ASI was saying, the claim the company is making is fairly specific: the algorithm is supposed to detect (specifically) Daesh videos, with a false positive rate of 0.005% and a true positive rate of 94%.

These numbers - assuming they're not artifacts of computerish misunderstanding about what it's looking for - of course represent tradeoffs, as Patrick Ball explained to us last year. Do we want the algorithm to block all possible Daesh videos? Or are we willing to allow some through in the interests of honoring the value of freedom of expression and not blocking masses of perfectly legal and innocent material? That policy decision is not ASI's job.

What was more confusing in the original reports is that the training dataset was said to have been "over 1,000 videos". That seems an incredibly small sample for testing a classifier that's going to be turned loose on a dataset of millions. At the demonstration, Warner's one new piece of information is that because that training set was indeed small, the project developed "synthetic data" to enlarge the training set to sufficient size. As gaming-the-system as that sounds, creating synthetic data to augment training data is a known technique. Without knowing more about the techniques ASI used to create its synthetic data it's hard to assess that work.

We would feel a lot more certain of all of these claims if the classifier had been through an independent peer review. The sensitivity of the material involved makes this tricky, and if there has been an outside review we haven't been told about it.

But beyond that, the project to remove this material rests on certain assumptions. As speakers noted at the first conference run by VOX-Pol, an academic research network studying violent online political extremism, the "lone wolf" theory posits that individuals can be radicalized at home by viewing material on the internet. The assumption that this is true underpins the UK's censorship efforts. Yet this theory is contested: humans are highly social animals. Radicalization seems unlikely to take place in a vacuum. What - if any - is the pathway from viewing Daesh videos to becoming a terrorist attacker?

All these questions are beyond ASI's purview to answer. They'd probably be the first to say: they're only a hill of technology beans being asked to solve a mountain of social problems.

Illustrations: Slides from the demonstration (Sam Smith).


February 9, 2018

RIP John Perry Barlow (1947-2018)

There's a certain irony about the fact that John Perry Barlow, who styled himself "cognitive dissident" and whose early 1990s writings set the tone of so much discourse about the internet and inspired so many thousands of activists, has died in the same week that Conde Nast has put up a paywall around Wired, the magazine of record of that era. If you haven't crossed the free limit, I can recommend Steven Levy's obit.

I first encountered Barlow when I began writing about computer crime, around 1990, and called the office of the newly formed Electronic Frontier Foundation, which Barlow co-founded with John Gilmore and Mitch Kapor. A chat with Mike Godwin produced, soon afterwards, a fat paper folder containing Barlow's founding documents, "Crime and Puzzlement", parts one and two, along with a Harper's Forum discussion of computer hacking and the disproportionate law enforcement response. The first Computers, Freedom, and Privacy conference, convened to get hackers and law enforcement talking to each other, soon followed. I finally met the man himself at my first CFP in 1994.

Barlow's ideas are everywhere in modern internet activism. The EFF itself became a role model for dozens of other digital rights organizations across the world, including Britain's Open Rights Group, which was originally pitched as "a British EFF". The Economy of Ideas: Selling Wine Without Bottles, written in 1992-1993, discusses the "crisis in intellectual property" and how creators will make a living, issues still with us today. EFF has a helpful archive of his internet-related writing, and all of it is worth reading whether or not you agree with him or think, as Barlow claimed Kapor did, that he needed a hyperbolectomy.

His most famous piece, A Declaration of the Independence of Cyberspace, met with embarrassment from many of us when he wrote it in 1996. Yet of everything he wrote it's the one that is still the most widely cited, critiqued, and discussed. To many of us at the time the notion that government had no role to play in cyberspace was either naive or too libertarian for words. In a contemporaneous critique, Reilly Jones (PDF) said Barlow's vision would lead inexorably to universal tyranny. It was clear in conversation with Barlow that he thought the internet was creating libertarians by the million, but I thought government regulation would be an inevitable consequence of ecommerce, and that people would be quick to welcome it to protect them from fraud, theft, and other crimes.

It was clear to anyone who'd talked with him, though, that the ideas he expressed in A Declaration were not the work of a moment's anger at the passage of the Communications Decency Act as part of the 1996 Telecommunications Act. In April 1995, in an interview for the Guardian, he told me, "Cyberspace is naturally sovereign for a variety of reasons...If the terms and conditions of the place are so different from the terms and conditions of the colonial power, sooner or later it becomes obvious that it makes better sense for it to be self-ordering or self-governing." His example was the British Empire: "One of the things that happened quite frequently with the British empire is that Britain realized that from a purely economic standpoint its self-interest was better served by a more or less equal relationship with the former colony as a member of the Commonwealth rather than having it as being an ungovernable, restless, and angry colony. And that analogy applies very well in this instance, because the citizens of cyberspace are going to become more restless and intractable as time goes on, and less willing to be governed by terrestrial principles."

So it's no surprise that 20 years later, Barlow told Wired he stood by its central concept: that cyberspace has a "natural immunity" to nation-state interference. Around the same time he called Wikileaks a "foreign power".

The world he wrote about has both changed and stayed the same. "Cyberspace" dates his views terribly: it's an increasingly meaningless concept to those who've never had to wait to connect, and for whom everything they do online is inextricably entangled with their physical lives. Many younger people are not, as they're so often called, "digital natives", but people to whom the internet has always been a giant surveillance platform delivering cat videos and homework. Yet the battles he wrote about - the right to use encryption, copyright, privacy, openness - are all still being hammered out all around us. So is the key piece of the reason to found the EFF, which he expressed in Crime and Puzzlement, part 2, as "to ensure that the Constitution will continue to apply to digital media". Politicians have long been fond of saying that what is illegal offline should be illegal online, but are less fond of saying the equally important converse: what is legal offline should be legal online.

In his obit for TechDirt, Godwin suggests that in dissecting Barlow's A Declaration we all missed the point. Barlow, he writes, "was writing to inspire activism, not to prescribe a new world order, and his goal was to be lyrical and aspirational, not legislative." In that, Barlow certainly succeeded.

Illustrations: John Perry Barlow.


December 29, 2017


The end of the year seems a good moment for complaints. The Guardian has gone all out with 2017 as the year the world turned against Silicon Valley. This is a more modest bitchfest.

First is the distressing tendency for journalists - who should know better - to anthropomorphize robots, both physical and virtual, by assigning them "gender". Pepper looks cute, but it's still just a fancy hammer. Plus, as came out at We Robot in 2016, while Westerners see Pepper as female, Asians identify it as male. I don't care how curvaceous or evil that robot looks, or what kind of voice it has, it's an it.


At the fifth birthday party for the Open Data Institute, co-founder Tim Berners-Lee gave an uncharacteristically depressed speech, in which he said, "We can't go on as though nothing's changed". A couple of weeks earlier, interviewed in the Guardian, he spoke despairingly about the reversal of network neutrality, the issues surrounding fake news and propaganda, ad revenues and clickbait, and manipulative "dark ads" that bypass democratic accountability.

Yet some would argue that Berners-Lee and his W3C have added to the mess by accepting digital rights management into the web's standards without requiring rights holders to promise not to use copyright law to attack people who bypass DRM for legal reasons (such as accessibility and security research). For such a long-term advocate of the internet as a permissionless space, it's a startling decision.


The web continues to embrace unreadability in the form of skinny, grey, low-contrast fonts, and the problem is spreading to new devices. Maybe it's not young geeks that are at fault, but all those video autoplay bandits, who would vastly prefer it if we didn't read anything.


Returning to robots, in August Edward Hasbrouck saw one behaving badly on a San Francisco Caltrain platform. In all those robot panics, no one considered how to share physical public space. Now, congestion on already-clogged sidewalks has led the City of San Francisco to require permits and human chaperones for the city's thousands (so far) of delivery robots, and also to restrict where they may roam. I would expect many cities - such as London, where so many sidewalks are cramped, medieval affairs - to follow suit. At least, until the robots have learned some manners.


Sometime around mid-November, I became conscious of a new and frequently-heard sound: Ding!

Inquiry established that it is the noise made by current iPhones when they receive messages. It dings so often and so unpredictably that I now hear the sound when others can't. It's so deeply embedded in my head that I may have begun to hallucinate it, just as, half-waking at 7:30 AM, I used to "hear" non-existent doorbells.


I imagine the circumstances of this sound's birth. A design team roved northern California looking for inspiring landscapes. Eventually, they chose a piece of land and built a Japanese garden in which they locked themselves. There, they were taught to reflect on the qualities of the desired sound: quietness, elegance, simplicity, peace, the best qualities of the best-trained and most deferential butlers. They imagined the sound that would capture these qualities: melodious, brief, undemanding yet piercingly sweet, intended to let you know you had a message without adding stress or harassment to your already-busy life, the equivalent of holding out a handwritten calling card on an exquisitely engraved and polished silver tray.

Googlers would have picked two dozen frequencies and conducted A/B tests to determine which of the candidate sounds came closest to getting people to open their messages. But this is Apple, so instead they discussed the sounds as if they were fine wines. They used the words "minimalist" and "custom frequencies" a lot.


Finally, one was chosen and deployed: a crystalline, clear, not-quite-pure tone that conveys the sense of a glistening drop of water falling into a placid pool, leaving behind the slightest of expanding circles.

I have never hated a sound more. The harsh, penetrating, high-pitched tone of the doors on Southwest trains and London Underground elevators, or the needle through the occipital bone that accompanies the dentist's ultrasonic cleaner are far more horrible noises, but you know when you are going to hear them. Apple's noise is pure Chinese water torture in its unpredictability, because a) you have no idea when other people might receive messages or who in your vicinity owns one of these devices, and b) it's designed to be so mellifluous and charming that no iPhone owner can imagine it could ever be offensive and so they do not set their phones to silent.


The reason it's so enraging, of course, is that its apparent politeness - "so sorry to disturb you but it's only for a split second and I sound beautiful, and then I'll leave you alone" - is utterly meretricious. Its real purpose is to say, "HEY, YOU, PAY ATTENTION TO ME." And you, the iPhone refusenik, can do nothing about it, because the designers don't care about not-their-customer you. You can divorce Google, but you can't divorce other people's phones.

Happy New Year.

Illustrations: Metropolis's Maria at the Science Museum (via Matt Brown), Water Drops (via Sander van der Wel).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

December 22, 2017

Hidden figures

In the last half of this year, I've become aware of one of the less obvious casualties of the decline in local journalism: talking newspapers.

Probably most people don't know these exist; I certainly didn't. But around the UK there is a string of these things, which exist to give blind and visually impaired people access to their local newspaper.

This is the kind of thing the existence of the internet tends to lead us to dismiss as no longer necessary. Surely screen readers and the web have provided such access? But as I learned years ago, wandering the web with a screen reader is not easy. A talking newspaper provides uninterrupted content; screen readers can't always distinguish between content, navigational aids, and other clutter. As information density goes, the talking newspaper wins.

The Richmond one, which began in 1979, works like this: four people meet on a Friday evening and spread out copies of the Richmond and Twickenham Times. They look over the paper and share out the stories. Then they take turns reading them out into microphones and a recorder running software designed for the purpose. A production engineer oversees the recording. Later, others oversee the business of assembling the files and making copies onto USB sticks. Separately, a couple of administrators extract returned sticks from their envelopes and prepare envelopes to send out the new batch. (They do modern times, too, and you can download the output from the website.)

Here's the thing: The Richmond and Twickenham Times is about half the size it was ten years ago, and it's not getting any bigger. A single edition no longer yields enough material to fill a recording.

The team are coping by getting permission to use some material from the BBC, and there are some obvious untapped sources of content in the form of newsletters and articles from local churches, historical societies, arts groups, and that glossy magazine that plonks through the door that you don't know what it's for that's mostly advertising and the printed equivalent of infomercials but there's probably an article or two per issue that's usable.

To those of us who spend our time mulling the doings of largely automated services that count their users by the billion, a labor-intensive service like this that counts them in dozens seems like a rounding error. Surely it's the wrong scale for the 21st century? And for a huge city like London? Yet Europe's first megacity, though it's generally treated in the media as an amorphous whole, in reality is made up of quite distinct neighborhoods. The area surrounding Richmond is made up of many formerly separate small towns, each with its own town center, shops, and community life. What's left of newspapers and newsletters still provides connective tissue within these subcommunities.

To some extent, this is a story about older people and isolation. When lessening physical ability shrinks your world, remaining connected in other ways rises in importance. A friend who has long worked on local newspapers tells me their older readers are their most dedicated - although by and large most advertisers don't care much about them. It's not easy to develop and learn new sources of connection after reading has become difficult; ideally, you need to prepare in advance, but no one does.

At the other end of the scale, this week Pro Publica caught Facebook and a bunch of other companies (including Amazon and Verizon) in a new form of age discrimination: using Facebook's advertising platform to target job ads at specific age groups. Under present US, UK, and EU law, a newspaper ad could never say, "Must be 25 to 36", but on Facebook it doesn't need to. All an advertiser has to do is tick the appropriate boxes, and anyone outside of the desired age group simply won't ever see the ad.

It's fair to say that one of the difficulties with new technology is that you just don't know how people are going to use it to bypass the law. However, Facebook's reply to this claim has been to compare the practice to running ads in magazines that target specific age groups, and argue that age-based targeting for recruitment ads is "standard industry practice". Yet there is an obvious difference: anyone can pick up a magazine and read it whether they fit the target demographic or not, but on Facebook you can only see what the system decides to show you. Be 37, and that job does not exist in your reality.

The company's response brings to mind the TV show Better Off Ted and the dysfunctional company in which it is set, Veridian Dynamics. Specifically, the moment in Season 1 episode 4, "Racial Sensitivity", when Veronica (Portia de Rossi) explains to Ted (Jay Harrington) why the building control system, which automates everything from water fountains and elevators to lighting and doors by responding to reflections of light off the skin, is not racist. "The company's position is that it's actually the opposite of racist because it's not targeting black people, it's just ignoring them."

Illustrations: Malcolm Barrett, trying to be seen by the motion sensors in Better Off Ted.


December 8, 2017

Pastures of plenty

It was while I was listening to Isabella Henriques talk about children and consumerism at this week's Children's Global Media Summit that it occurred to me that where most people see life happening, advertisers see empty space.

Henriques, like Kathryn Montgomery earlier this year, is concerned about abusive advertising practices aimed at children. So much UK rhetoric around children and the internet focuses on pornography and extremism - see, for example, this week's Digital Childhood report calling for a digital environment that is "fit for childhood" - that it's refreshing to hear someone talk about other harms. Such as: teaching kids "consumerism". Under 12, Henriques said, children do not understand the persuasiveness and complexity of advertising. Under six, they don't identify ads (like the toddler who watched 12 minutes of Geico commercials). And even things that are *effectively* ads aren't necessarily easily identifiable as such, even by adults: unboxing videos, product placement, YouTube kids playing with branded toys, and in-app "opportunities" to buy stuff. Henriques' research finds that children influence family purchases by up to 80%. That's not a baby you're expecting; it's a sales promoter.

When we talk about the advertising arms race, we usually mean the expanding presence and intrusiveness of ads in places where we're already used to seeing them. That escalation has been astonishing.

To take one example: a half-hour sitcom episode on US network television in 1965 - specifically, the deservedly famous Coast to Coast Big Mouth episode of The Dick Van Dyke Show - was 25:30 minutes long. A 2017 episode of the top-rated US comedy, The Big Bang Theory, barely ekes out 18. That's nearly a third less content, more than double the share of the half hour given over to ads - or simply seven and a half extra minutes of commercials. No wonder people realized automatic ad marking and fast-forwarding would sell.
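The arithmetic is small enough to check. A quick back-of-the-envelope sketch, assuming a 30-minute network slot and the episode lengths quoted above:

```python
# Sanity-check of the sitcom ad-time arithmetic: a 30-minute slot,
# 25:30 of content in 1965 versus roughly 18:00 in 2017.
SLOT = 30.0            # minutes in a half-hour network slot
content_1965 = 25.5    # "Coast to Coast Big Mouth", The Dick Van Dyke Show
content_2017 = 18.0    # a 2017 episode of The Big Bang Theory

ads_1965 = SLOT - content_1965   # 4.5 minutes of ads
ads_2017 = SLOT - content_2017   # 12.0 minutes of ads

content_drop = (content_1965 - content_2017) / content_1965   # ~29% less content
ad_share_1965 = ads_1965 / SLOT                               # 15% of the slot
ad_share_2017 = ads_2017 / SLOT                               # 40% of the slot

print(f"{content_drop:.0%} less content; ad share {ad_share_1965:.0%} -> {ad_share_2017:.0%}")
```

Seven and a half extra minutes of commercials per half hour: the ads' share of the slot goes from 15% to 40%.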

The internet kicked this into high gear. The lack of regulation and the uncertainty about business models led to legitimate experimentation. But it also led to today's complaints, both about maximally intrusive and attention-demanding ads and the data mining advertisers and their agencies use to target us, and also to increasingly powerful ad blockers - and ad blocker blockers.

The second, more subtle version of the arms race is the one where advertisers see every open space where people congregate as theirs to target. This was summed up for me once at a lunchtime seminar run by the UK's Internet Advertising Bureau in 2003, when a speaker gave an enthusiastic tutorial on marketing via viral email: "It gets us into the office. We've never been able to go there before." You could immediately see what office inboxes looked like to them: vast green fields just waiting to be cultivated. You know, the space we thought of as "work". And we were going to be grateful.

Childhood, as listening to Henriques, Montgomery, and the Campaign for a Commercial-Free Childhood makes plain, is one of those green fields advertisers have long fought to cultivate. On broadcast media, regulators were able to exercise some control. Even online, the Children's Online Privacy Protection Act has been of some use.

Advertisers, like some religions, aim to capture children's affections young, on the basis that the tastes and habits you acquire in childhood are the hardest for an interloper to disrupt. The food industry has long been notorious for finding ways around the regulations that limit how it targets children with unhealthy foods on broadcast and physical-world media. But the internet offers new options: "smart" toys are one set of examples; Facebook's new Messenger Kids app is another. This arms race variant will escalate as the Internet of Things offers advertisers access to new areas of our lives.

Part of this story is the vastly increased quantities of data that will be available to sell to advertisers for data mining. On the web, "free" has long meant "pay with data". With the Internet of Things, no device will be free, but we will pay with data anyway. The cases we wrote about last week are early examples. As hardware becomes software, replacement life cycles become the manufacturer's choice, not yours. "My" mobile phone is as much mine as "my library book" - and a Tesla is a mobile phone with a chassis and wheels. Think of the advertising opportunities when drivers are superfluous to requirements, beginning with the self-driving car's dashboard and windshield. The voice-operated Echo/Home/Dot/whatever is clearly intended to turn homes into marketplaces.

A more important part is the risk of turning our homes into walled gardens, as Geoffrey A. Fowler writes in the Washington Post of his trial of Amazon Key. During the experiment, Fowler found strangers entering his house less disturbing than his sense of being "locked into an all-Amazon world". The Key experiment is, in Fowler's estimation, the first stab at Amazon's goal of becoming "the operating system for your home". Will Amazon, Google, and Apple homes be interoperable?

Henriques is calling for global regulation to limit the targeting of children for food and other advertising. It makes sense: every country is dealing with the same multinational companies, and most of us can agree on what "abusive advertising" means. But then you have to ask: why do they get a pass on the rest of us?

Illustrations: Windows XP start-up screen

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 17, 2017


On Tuesday evening, virtual reality pioneer and musician Jaron Lanier, in London to promote his latest book, Dawn of the New Everything, suggested the internet took a wrong turn in the 1990s by rejecting the idea of combating spam by imposing a tiny - "homeopathic" - charge to send email. Think where we'd be now, he said. The mindset of paying for things would have been established early, and instead of today's "behavior modification empires" we'd have a system where people were paid for the content they produce.

Lanier went on to invoke the ghost of Ted Nelson who began his earliest work on Project Xanadu in 1960, before ARPAnet, the internet, and the web. The web fosters copying. Xanadu instead gave every resource a permanent and unique address, and linking instead of copying meant nothing ever lost its context.

The problem, as Nelson's 2011 autobiography Possiplex and a 1995 Wired article made plain, is that trying to get the thing to work was a heartbreaking journey filled with cycles of despair and hope, one increasingly orthogonal to where the rest of the world was going. While efforts continue, Xanadu is still difficult to comprehend, no matter how technically visionary and conceptually advanced it was. The web wins on simplicity.

But the web also won because it was free. Tim Berners-Lee is very clear about the importance he attaches to deciding not to patent the web and charge licensing fees. Lanier, whose personal stories about internetworking go back to the 1980s, surely knows this. When the web arrived, it had competition: Gopher, Archie, WAIS. Each had its limitations in terms of user interface and reach. The web won partly because it unified all their functions and was simpler - but also because it was freer than the others.

Suppose those who wanted minuscule payments for email had won? Lanier believes today's landscape would be very different. Most of today's machine learning systems, from IBM Watson's medical diagnostician to the various quick-and-dirty translation services, rely on mining an extensive existing corpus of human-generated material. In Watson's case, it's medical research, case studies, peer review, and editing; in the case of translation services it's the billions of side-by-side human-translated pages available on the web (though later improvements have taken a new approach). Lanier is right that the AIs built by crunching found data are parasites on generations of human-created and curated knowledge. By his logic, establishing payment early as a fundamental part of the internet would have ensured that the humans who created all that data would be paid for their contributions when machine learning systems mined it. Clarity would result: instead of the "cruel" trope that AIs are rendering humans unnecessary, it would be obvious that AI progress relied on continued human input. For that we could all be paid, rather than being made "wards of the state".

Consider a practical application. Microsoft's LinkedIn is in court opposing HiQ, a company that scrapes LinkedIn's data to offer employers services that LinkedIn might like to offer itself. The case, which was decided in HiQ's favor in August but is appeal-bound, pits user privacy (argued by EPIC) against innovation and competition (argued by EFF). Everyone speaks for the 500 million whose work histories are on LinkedIn, but no one speaks for our individual ownership of our own information.

Let's move to Lanier's alternative universe and say the charge had been applied. Spam dropped out of email early on. We developed the habit of paying for information. Publishers and the entertainment industry would have benefited much sooner, and if companies like Facebook and LinkedIn had started, their business models would have been based on payments for posters and charges for readers (he claims to believe that Facebook will change its business model in this direction in the coming years; it might, but if so I bet it keeps the advertising).

In that world, LinkedIn might be our broker or agent negotiating terms with HiQ on our behalf rather than in its own interests. When the web came along, Berners-Lee might have thought pay-to-click logical, and today internet search might involve deciding which paid technology to use. If, that is, people found it economic to put the information up in the first place. The key problem with Lanier's alternative universe: there were no micropayments. A friend suggests that China might be able to run this experiment now: Golden Shield has full control, and everyone uses WeChat and AliPay.

I don't believe technology has a manifest destiny, but I do believe humans love free and convenient, and that overwhelms theory. The globally spreading all-you-can-eat internet rapidly killed the existing paid information services after commercial access was allowed in 1994. I'd guess that the more likely outcome of charging for email would have been the rise of free alternatives to email - instant messaging, for example, which happened in our world to avoid spam. The motivation to merge spam with viruses and crack into people's accounts to send spam would have arisen earlier than it did, so security would have been an earlier disaster. As the fundamental wrong turn, I'd instead pick centralization.

Lanier noted the culminating irony: "The left built this authoritarian network. It needs to be undone."

The internet is still young. It might be possible, if we can agree on a path.

Illustrations: Jaron Lanier in conversation with Luke Robert Mason (Eva Pascoe);


October 27, 2017

The opposite of privilege

A couple of weeks ago, Cybersalon held an event to discuss modern trends in workplace surveillance. In the middle, I found myself reminding the audience, many of whom were too young to remember, that 20 or so years ago mobile phones were known locally as "poserphones": they had been expensive recently enough that they were still associated with rich businessmen who wanted to show off their importance.

The same poseurship today looks like this: "I'm so grand I don't carry a mobile phone." In a sort of rerun of the 1997 anti-internet backlash kicked off by Clifford Stoll's Silicon Snake Oil, we're now seeing numerous articles and postings about how the techies of Silicon Valley are disconnecting themselves and removing technology from the local classrooms. Granted, this has been building for a while: in 2014 the New York Times reported that Steve Jobs didn't let his children use iPhones or iPads.

It's an extraordinary inversion in a very short time. However, the notable point is that the people profiled in these stories are people with the agency to make this decision and not suffer for it. In April, Congressman Jim Sensenbrenner (R-WI) claimed airily that "Nobody has to use the internet", a statement easily disputed. A similar argument can be made about related technology such as phones and tablets: it's perfectly reasonable to say you need downtime or that you want your kids to have a solid classical education with plenty of practice forming and developing long-form thinking. But the option to opt out depends on a lot of circumstances outside of most people's control. You can't, for example, disconnect your phone if your zero-hours contract specifies you will be dumped if you don't answer when they call, nor if you're in high-urgency occupations like law, medicine, or journalism; nor can you do it if you're the primary carer for anyone else. For a homeless person, a mobile phone may be their only hope of finding a job or a place to live.

Battery concerns being what they are, I've long had the habit of turning off wifi and GPS unless I'm actively using them. As Transport for London increasingly seeks to use passenger data to understand passenger flow through the network and within stations, people who do not carry data-generating devices are arguably anti-social because they are refusing to contribute to improving the quality of the service. This argument has been made in the past with reference to NHS data, suggesting that patients who declined to share their data didn't deserve care.

Today's employers, as Cybersalon highlighted and as speakers have previously pointed out at the annual Health Privacy Summit, may learn an unprecedented amount of intimate information about their employees via efforts like wellness programs and the data those capture from devices like Fitbits and smart watches. At Cornell, Karen Levy has written extensively about the because-safety black box monitoring coming to what historically has been the most independent of occupations, truck driving. At Middlesex Phoebe Moore is studying the impact of workplace monitoring on white collar workers. How do you opt out of monitoring if doing so means "opting out" of employment?

The latest in facial recognition can identify people in the backgrounds of photos, making it vastly harder to know which of the sidewalk-blockers around you snapping pictures of each other on their phones may capture and upload you as well, complete with time and location. Your voice may be captured by the waiting speech-driven device in your friend's car or home; ever tried asking someone to turn off Alexa-Siri-OKGoogle while you're there?

For these reasons, publicly highlighting your choice to opt out reads as, "Look how privileged I am", or some much more compact and much more offensive term. This will be even more true soon, when opting out will require vastly more effort than it does now and there will be vastly fewer opportunities to do it. Even today, someone walking around London has no choice about how many CCTV cameras capture them in motion. You can ride anonymously on the tube and buses as long as you are careful to buy, and thereafter always top up, your Oyster smart card with cash.

It's clear "normal" people are beginning to know this. This week, in a supermarket well outside of London, I was mocking a friend for paying for some groceries by tapping a credit card. "Cash," I said. "What's wrong with nice, anonymous cash?" "It took 20 seconds!" my friend said. The aging cashier regarded us benignly. "They can still track you by the mobile phones you're carrying," she said helpfully. Touché.

Illustrations: George Orwell's house at 22 Portobello road; Cybersalon (Phoebe Moore, center).


September 22, 2017


"Fake news is not some unfortunate consequence," the writer and policy consultant Maria Farrell commented at the UK Internet Governance Forum last week. "It is the system working as it should in the attention economy."

The occasion was a panel featuring Simon Milner, Facebook's UK policy director; Carl Miller, from the Demos think tank; James Cook, Business Insider UK's technology editor; the MP and shadow minister for industrial strategy Chi Onwurah (Labour - Newcastle upon Tyne Central); and, as moderator, Nominet chair Mark Wood.

They all agreed to disagree on the definition of "fake news". Cook largely saw it as a journalism problem: fact checkers and sub-editors are vanishing. Milner said Facebook has a four-pronged strategy: collaborate with others to find industry solutions, as in the Facebook Journalism Project; disrupt the economic flow - that is, target clickbait designed to take people *off* Facebook to sites full of ads (irony alert); take down fake accounts (30,000 before the French election); try to build new products that improve information diversity and educate users. Miller wants digital literacy added to the national curriculum: "We have to change the skills we teach people. Journalists used to make those decisions on our behalf, but they don't any more." Onwurah, a chartered electrical engineer who has worked for Ofcom, focused on consequences: she felt the technology giants could do more to combat the problem, and expressed intelligent concern about algorithmic "black boxes" that determine what we see.

Boil this down. Onwurah is talking technology and oversight. Milner also wants technology: solutions should be content-neutral but identify and eliminate bad behavior at the scale of 2 billion users, who don't want to read terms and conditions or be repeatedly asked for ratings. Miller - "It undermines our democracy" - wants governments to take greater responsibility: "it's a race between politics and technology". Cook wants better journalism, but, "It's terrifying, as someone in technology, to think of government seeing inside the Facebook algorithm." Because other governments will want their privilege, too; Apple is censoring its app store in order to continue selling iPhones in China.

It was Farrell's comment, though, that sparked the realization that fake news cannot be solved by thinking of it as a problem in only one of the fields of journalism, international relations, economic inequality, market forces, or technology. It is all those things and more, and we will not make any progress until we take an approach that combines all those disciplines.

Fake news is the democratization of institutional practices that have become structural over many decades. Much of today's fake news uses tactics originally developed by publishers to sell papers. Even journalists often fail to ask the right questions, sometimes because of editorial agendas, sometimes because the threat of lost access to top people inhibits what they ask.

Everyone needs the traditional journalist's mindset of asking, "What's the source?" and "What's their agenda?" before deciding on a story's truth. But there's no future in blaming the people who share these stories (with or without believing them) or calling them stupid. Today we're talking about absurdist junk designed to make people share it; tomorrow's equivalent may be crafted for greater credibility and hence be far more dangerous. Miller's concern for the future of democracy is right. It's not just that these stories are used to poison the information supply and sow division just before an election; the incessant stream of everyday crap causes people to disengage because they trust nothing.

I founded The Skeptic in 1987 to counter what the late, great Simon Hoggart called paranormal beliefs' "background noise, interfering with the truth". Of course it matters that a lie on the internet can nearly cause a shoot-out at a pizza restaurant. But we can't solve it with technology, fact-checking, or government fiat. Today's generation is growing up in a world where everybody cheats and then lies about it: sports stars, for example.

What we're really talking about here is where to draw the line between acceptable fakery ("spin") and unacceptable fakery. Astrology columns get a pass. Apparently so do professional PR people, as in the 1995 book Toxic Sludge Is Good for You: Lies, Damn Lies, and the Public Relations Industry, by John Stauber and Sheldon Rampton (made into a TV documentary in 2002). In mainstream discussions we don't hear that Big Tobacco's decades-long denial about its own research or Exxon Mobil's approach to climate change undermine democracy. If these are acceptable, it seems harder to condemn the Macedonian teen seeking ad revenue.

This is the same imbalance as prosecuting lone, young, often neuro-atypical computer hackers while the really pressing issues are attacks by criminals and organized gangs.

That analogy is the point: fake news and cybersecurity are sibling problems. Both are tennis, not figure skating; that is, at all times there is an adversary actively trying to frustrate you. "Fixing the users" through training is only one piece of either puzzle.

Treating cybersecurity as a purely technical problem failed. Today's approach crosses many fields: computer science, philosophy, psychology, law, international relations, economics. So does the VOX-Pol project to study online extremism. This is what we need for fake news.

Illustrations: "The fin de siecle newspaper proprietor", by Frederick Burr Opper, 1894 (from the Library of Congress via Wikipedia); Chi Onwurah; Maria Farrell.


August 11, 2017

The lost generation

"I have young children at home. I have to be in touch." The woman next to me in the stalls at The Book of Mormon in 2013 was defensive and a little angry when I commented blandly during the intermission that her texting during the last third of the first act (!) was distracting. I'm old enough that when my parents went out they told the babysitter where they were going, and that was it.

"Somehow," I muttered to my companion, "*we* lived to grow up."

But time has moved on, and today's teen panic is about smartphones. Jean M. Twenge, writing in The Atlantic, asks, "Have smartphones destroyed a generation?" and predicts that today's teenagers are all on the verge of a mental health crisis. She has charts - how much they date, sleep, hang out with friends, feel lonely, have sex. All, she says, trending downward since 2007, when the iPhone was released. Cause! Or just correlation?

In a non-expert view, I think that this generation, too, will find their way. Some of these things aren't necessarily bad: less driving, given climate change, for example. It's also kind of hilarious to see someone concerned that teens are less sexually active; how can a dropping teen pregnancy rate be a bad thing?

But, Twenge says, teens are increasingly unhappy, and she does find a correlation with phone, or at least, *screen* use: based on annual Monitoring the Future surveys from the National Institute on Drug Abuse, she writes, "Teens who spend more time than average on screen activities are more likely to be unhappy, and those who spend more time than average on nonscreen activities are more likely to be happy."

The 2016 MTF survey finds that teen drug use is also dropping, even marijuana, despite legalization (if you don't go out, where do you get your drugs?). In the results presentation, NIDA spokespeople suggest that social media, cellphone use, and videogaming might be providing substitute activities, which they thought was worth investigating. I can't find the questions about relative happiness that Twenge refers to. However, it's notable that on most of her charts - driving, dating, and hanging out - the trend line was already heading downwards years before the iPhone. Sex plateaued long before 2007, and turned downward only around 2013, as did sleep. So the only chart that really matches Twenge's claim is loneliness, which hit bottom in 2007 and has climbed ever since, passing 1991 levels in about 2012-2013. What did it look like before 1991? We don't know. How much of that loneliness can be traced to economic issues following the 2007 financial crisis? Ditto.

Nor do we know for sure if there is a cause-and-effect relationship - or, if there is, whether teens are more depressed because they're spending more time using screens or are screen-bound because they're depressed. At the Parenting.digital blog, Sonia Livingstone raises exactly this point regarding new research from ParentZone that finds school principals attributing increasing levels of mental health problems among their students to...the internet. In this case, too, we don't, as Livingstone writes, really know. What we do know, from many, many sources, is that people with mental illness struggle desperately to find help.

In the midst of all this angst, it's cheering to read William Vaughan's blog posting Trusting Our Kids and the World, in which he discusses the theme I began with: kids have far less physical independence than in any previous generation. The pictures of missing kids on milk cartons, the ads on TV when I was 20 asking, "It's 10 pm. Do you know where your child is?", and a load of other media-fanned fears about abduction, child abuse, and other dangers have all served to gradually enclose teens in their bedrooms. The screens may be the only thing that makes that enclosure tolerable. Say you limit screen time, as so many advisories recommend. Then what?

At the Guardian, Emma Brockes ponders the right way to handle a reader's two-year-old's obsessive interest in her phone. Brockes notes her own toddlers' alarmingly addict-like behaviour when she tries to reclaim her phone, and hilariously frets over discovering that one of them spent 12 minutes watching Geico ads back to back. I surmise said child liked the *animated talking lizard*.

Brockes, however, hits the main point with absolute clarity: she worries more about her own screen time than her children's, because, like most people, when she's on her phone she's utterly absent from the world around her. It's patently unfair to load extra burdens on parents who are working two jobs and wondering if they can make rent, or get out of debt. But walking in my neighborhood in any direction you'll find parents engrossed in their phones while shepherding their kids somewhere. Now, I know that a one-year-old isn't always the best of company, but interacting with them is your investment in your mutual future. In her 2015 book, Reclaiming Conversation, Sherry Turkle found many children who wished their parents would stop Googling and talk to them. We should, she wrote, listen and obey. Maybe start there.

Illustrations: Jean M. Twenge; National Child Safety Council milk cartons; the Geico gecko.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 29, 2014

Shared space

What difference does the Internet make? This is the modern policy maker's equivalent of "To be, or not to be?" This question has underlain so many net.wars as politicians and activists have wrangled over whether and how the same laws should apply online as offline. Transposing offline law to the cyberworld is fraught with approximately the same dilemmas as transposing a novel to film. What do you keep? What do you leave out? What whole chapter can be conveyed in a single shot? In some cases it's obvious: consumer protection for purchases looks about the same. But the impact of changing connections and the democratization of worldwide distribution? Frightened people whose formerly safe, familiar world is slipping out of control often fail to make rational decisions.

This week's inaugural VOX-Pol conference kept circling around this question. Funded under the EU's FP-7, the organizing group is meant to be an "academic research network focused on researching the prevalence, contours, functions, and impacts of Violent Online Political Extremism and responses to it". Attendees included researchers from a wide variety of disciplines, from computer science to social science. If any group was lacking, I'd say it was computer security practitioners and researchers, many of whom have on-the-ground experience studying cyberattacks and investigating the criminal underground that this group could helpfully emulate.

Some help could also perhaps be provided by journalists with investigative experience. In considering SOCMINT - social media intelligence - for example, people wondered how far to go in interacting with the extremists being studied. Are fake profiles OK? And can you be sure whether you're studying them...or they're studying you? The most impressive presentation on this sort of topic came from Aaron Zelin, who, among other things, runs a Web-based clearinghouse for jihadi primary source material.

It's not clear that what Zelin does would be legal, or even possible in the UK. The "lone wolf" theory holds that someone alone in his house can be radicalized simply by accessing Web-based material; if you believe that, the obvious response is to block the dangerous material. Which, TJ McIntyre explained, is exactly what the UK does, unknown to most of its population.

McIntyre knows because he spent three years filing freedom of information requests to find out. So now we know: approximately 1,000 full URLs are blocked under this program, based on criteria derived from Sections 57 and 58 of the 2000 Terrorism Act and Sections 1 and 2 of the 2006 Terrorism Act. The system is "voluntary" - or rather, voluntary for ISPs, not voluntary for their subscribers. McIntyre's FOI answers have found no impact assessment or study of liability for wrongful blocking, and no review of compliance with the 1998 Human Rights Act. It also seems to contradict the Council of Europe's clear statement that filtering must be necessary and transparent.

This is, as Michael Jablonski commented on Twitter yesterday, one of very few conferences that begins by explaining the etiquette for showing gruesome images. Probably more frightening, though, were the presentations laying out the spread - and even mainstreaming - of interlinked extremist groups across the world. Many among Hungary's and Italy's extremist networks host their domains in the US, where the First Amendment ensures their material is not illegal.

This is why the First Amendment can be hard to love: defending free speech inevitably means defending speech you despise. Repeating that "The best answer to bad speech is more, better speech" is not always consoling. Trying to change the minds of the already committed is frustrating and thankless. Jihadi Trending (PDF), a report produced by the Quilliam Foundation, which describes itself as "the world's first counter-extremism think tank", reminds us that's not the point. Released a few months ago, the report is a fount of good sense; as Nick Cohen writes in the foreword: "The true goal of debate, however, is not to change the minds of your opponents, but the minds of the watching audience."

Among the report's conclusions:
- The vast majority of radicalized individuals make contact first through offline socialization.
- Negative measures - censorship and filtering - are ineffective and potentially counter-productive.
- There are not enough positive measures - the "better speech" above - to challenge extremist ideologies.
- Better ideas are to improve digital literacy and critical consumption skills and debunk propaganda.

So: what difference does the Internet make? It lets extremists use Twitter to tell each other what they had for breakfast. It lets them use YouTube to post videos of their cats. It lets them connect to others with similar views on Facebook, on Web forums, in chat rooms, virtual worlds, and dating sites, and run tabloid news sites that draw in large audiences. Just like everyone else, in fact. And, like the rest of us, they do not own the infrastructure.

The best answer came late on the second day, when someone commented that in the physical world neo-Nazi groups do not hang out with street gangs; extreme right hate groups don't go to the same conferences as jihadis; and Guantanamo detainees don't share physical space with white supremacists or teach each other tactics. "But they will online."

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 23, 2012

Democracy theater

So Facebook is the latest to discover that it's hard to come up with a governance structure online that functions in any meaningful way. This week, the company announced plans to disband the system of voting on privacy changes that it put in place in 2009. To be honest, I'm surprised it took this long.

Techcrunch explains the official reasons. First, with 1 billion users, it's now too easy to hit the threshold of 7,000 comments that triggers a vote on proposed changes. Second, with 1 billion users, amassing the 30 percent of the user base necessary to make the vote count has become...pretty much impossible. (Look, if you hate Facebook's policy changes, it's easier to simply stop using the system. Voting requires engagement.) The company also complained that the system as designed encourages "quantity over quality" in comments. Really, it would be hard to come up with an online system that didn't, unless it was so hard to use that no one would bother anyway.

The fundamental problem for any kind of online governance is that no one except some lawyers thinks governance is fun. (For an example of tedious meetings producing embarrassing results, see this week's General Synod.) Even online, where no one can tell you're a dog watching the Outdoor Channel while typing screeds of debate, it takes strong motivation to stay engaged. That in turn means that ultimately the people who participate, once the novelty has worn off, are either paid, obsessed, or awash in free time.

The people who are paid - either because they work for the company running the service or because they work for governments or NGOs whose job it is to protect consumers or enforce the law - can and do talk directly to each other. They already know each other, and they don't need fancy online governmental structures to make themselves heard.

The obsessed can be divided into two categories: people with a cause and troublemakers - trolls. Trolls can be incredibly disruptive, but they do eventually get bored and go away, IF you can get everyone else to starve them of the oxygen of attention by just ignoring them.

That leaves two groups: those with time (and patience) and those with a cause. Both tend to fall into the category Mark Twain neatly summed up: "Never argue with a man who buys his ink by the barrelful." Don't get me wrong: I'm not knocking either group. The cause may be good and righteous and deserving of having enormous amounts of time spent on it. The people with time on their hands may be smart, experienced, and expert. Nonetheless, they will tend to drown out opposing views with sheer volume and relentlessness.

All of which is to say that I don't blame Facebook if it found the comments process tedious and time-consuming, and as much of a black hole for its resources as the help desk for a company with impenetrable password policies. Others are less tolerant of the decision. History, however, is on Facebook's side: democratic governance of online communities does not work.

Even without the generic problems of online communities, which have been replicated mutatis mutandis since the first modem uploaded the first bit, Facebook was always going to face problems of scale if it kept growing. As several stories have pointed out, how do you get 300 million people to care enough to vote? It's understandable why the company set a minimum percentage: so that a small but vocal minority could not hijack the process. But scale matters, and that's why every democracy of any size has representative government rather than direct voting, like Greek citizens in the Acropolis. (Pause to imagine the complexities of deciding how to divvy up Facebook into tribes: would the basic unit of membership be nation, family, or circle of friends, or should people be allocated into groups based on when they joined or perhaps their average posting rate?)

The 2009 decision to allow votes came at a time when Facebook was under recurring and frequent pressure over a multitude of changes to its privacy policies, all going one way: toward greater openness. That was the year, in fact, that the system effectively turned itself inside out. EFF has a helpful timeline of the changes from 2005 to 2010. Putting the voting system in place was certainly good PR: it made the company look like it was serious about listening to its users. But, as the Europe vs Facebook site says, the choice was always constrained to old policy or new policy, not new policy, old policy, or an entirely different policy proposed by users.

Even without all that, the underlying issue is this: what company would want democratic governance to succeed? The fact is that, as Roger Clarke observed before Facebook even existed, social networks have only one business model: to monetize their users. The pressure to do that has only increased since Facebook's IPO, even though founder Mark Zuckerberg created a dual-class structure that means his decisions cannot be effectively challenged. A commercial company - especially a *public* commercial company - cannot be run as a democracy. It's as simple as that. No matter how much their engagement makes them feel they own the place, the users are never in charge of the asylum. Not even on the WELL.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series.

November 2, 2012

Survival instincts

The biggest divide in New York this week, in the wake of Hurricane Sandy, has been, as a friend pointed out, between the people who had to worry about getting to work and the people who didn't. Reuters summed this up pretty well. Slightly differently, The Atlantic had it as three New Yorks: one underwater, one dark and dry, and one close to normal. The stories I've read since by people living in "dark and dry" emerging into the light at around 40th street bear out just how profound the difference is between the powerless and the empowered - in the electrical sense.

This is not strictly speaking about rich and poor (although the Reuters piece linked above makes the point that the city is more economically divided than it has been in some time); the Lower Manhattan area known as Tribeca, for example, is home to plenty of wealthy people - and was flooded. Instead, my friend's more profound divide is about whether you do the kind of work that requires physical presence. Freelance writers, highly paid software engineers, financial services personnel, and a load of other people can work remotely. If your main office is a magazine or a large high-technology company like, say, Google, whose New York building is at 15th and 8th, as long as you have failover systems in place so that your network and data centers keep operating, your staff can work from wherever they can find power and Internet access. Even small companies and start-ups can keep going if their systems are built on or can failover to the right technology.

One of my favorite New York retailers, J&R (they sell everything from music to computers from a series of stores in lower Manhattan, not far from last year's Occupy Wall Street site), perfectly demonstrated this digital divide. The Web site noted yesterday (Thursday) that its shops, located in "dark and dry", are all closed, but the Web site is taking orders as normal.

Plumbers, doormen, shop owners, restaurateurs, and fire fighters, on the other hand, have to be physically present - and they are vital in keeping the other group functioning. So in one sense the Internet has made cities much more resilient, and in another it hasn't made a damn bit of difference.

The Internet was still very young when people began worrying about the potential for a "digital divide". Concerns surfaced early about the prospects for digital exclusion of vulnerable groups such as the elderly and the cognitively impaired, as well as those in rural areas poorly served by the telecommunications infrastructure, and the poor. And these are the groups that, in the UK, efforts at digital engagement are intended to help.

Yet the more significant difference may be not who *is* online - after all, why should anyone be forced online who doesn't want to go? - but who can *work* online. Like traveling with only carry-on luggage, it makes for a more flexible life that can be altered to suit conditions. If your physical presence is not required, today you avoided long lines and fights at gas stations, traffic jams approaching the bridges and tunnels, waits for buses, and long trudges from the last open subway station to your actual destination.

This is not the place to argue about climate change. A single storm is one data point in a pattern that is measured in timespans longer than those of individual human lives.

Nonetheless, it's interesting to note that this storm may be the catalyst the US needed to stop dragging its feet. As Business Week indicates, the status quo is bad for business, and the people making this point are the insurance companies, not scientists who can be accused of supporting the consensus in the interests of retaining their grant money (something that's been said to me recently by people who normally view a scientific consensus as worth taking seriously).

There was a brief flurry of argument this week on Dave Farber's list about whether the Internet was designed to survive a bomb attack or not. I thought this had been made clear by contemporary historians long ago: while the immediate impetus was to make it easy for people to share files and information, DARPA's goal was very much also to build resilient networks. And, given that New York City is a telecommunications hub, it's clear we've done pretty well with this idea, especially after the events of September 11, 2001 forced network operators to rethink their plans for coping with emergencies.

It seems clear that the next stage will be to come up with better strategies for making cities more resilient. Ultimately, the cause of climate change doesn't matter: if there are more and more "freak" weather patterns resulting in more and more extreme storms and natural disasters, then it's only common sense to try to plan for them: disaster recovery for municipalities rather than businesses. The world's reinsurance companies - the companies that eventually bear the brunt of the costs - are going to insist on it.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series.

April 13, 2012

The people perimeter

People with jobs are used to a sharp division between their working lives and their private lives. Even in these times, when everyone carries a mobile phone and may be on call at any moment, they still tend to believe that what they say to their friends is no concern of their employer's. (Freelances tend not to have these divisions; to a much larger extent we have always been "in public" most of the time.)

These divisions were always less in small towns, where teachers or clergy had little latitude, and where even lesser folk would be well advised to leave town before doing anything they wouldn't want discussed in detail. Then came social media, which turns everywhere into a small town and where, even if you behave impeccably, details about you and your employer may be exposed without your knowledge.

That's all a roundabout way of leading to yesterday's London Teacamp, where the subject of discussion was developing guidelines for social media use by civil servants.

Civil servants! The supposedly faceless functionaries who, certainly at the senior levels, are probably still primarily understood by most people through the fictional constructs of TV shows like Yes, Minister and The Thick of It. All of the 50 or 60 people from across government who attended yesterday have Twitter IDs; they're on Facebook and Foursquare, and probably a few dozen other things that would horrify Sir Humphrey. And that's as it should be: the people administering the nation's benefits, transport, education, and health absolutely should live like the people they're trying to serve. That's how you get services that work for us rather than against us.

The problem with social media is the same as their benefit: they're public in a new and different way. Even if you never identify your employer, Foursquare or the geotagging on Twitter or Facebook checks you in at a postcode that's indelibly identified with the very large government building where your department is the sole occupant. Or a passerby photographs you in front of it and Facebook helpfully tags your photograph with your real name, which then pops up in outside searches. Or you say something to someone you know who tells someone else who posts it online for yet another person to identify, and finally the whole thing comes back and bites you in the ass. Even if your Tweets are clearly personal, and even if your page says, "These are just my personal opinions and do not reflect those of my employer", the fact of where you can be deduced to work risks turning anything connected to you into something an - let's call it - excitable journalist can make into a scandal. Context is king.

What's new about this is the uncontrollable exposure of this context. Any Old Net Curmudgeon will tell you that the simple fact of people being caught online doing things their employers don't like goes back to the dawn of online services. Even now I'm sure someone dedicated could find appalling behavior in the Usenet archives by someone who is, 25 years on, a highly respected member of society. But Usenet was a minority pastime; Facebook, Twitter et al are mainstream.

Lots has been written by and about employers in this situation: they may suffer reputational damage, legal liability, or a breach that endangers their commercial secrets. Not enough has been written about individuals struggling to cope with sudden, unwanted exposure. Don't we have the right to private lives? someone asked yesterday. What they are experiencing is the same loss of border control that security engineers are trying to cope with. They call it "deperimeterization", because security used to mean securing the perimeter of your network and now security means coping with its loss. Adding wireless, remote access for workers at home, personal devices such as mobile phones, and links to supplier and partner networks has blown holes in it.

There is no clear perimeter any more for networks - or individuals, either. Trying to secure one by dictating behavior, whether by education, leadership by example, or written guidelines, is inevitably doomed. There is, however, a very valid reason to have these things: to create a general understanding between employer and employee. It should be clear to all sides what you can and cannot get fired for.

In 2003, Danny O'Brien nailed a lot of this when he wrote about the loss of what he called the "private-intermediate sphere". In that vanishing country, things were private without being secret. You could have a conversation in a pub with strangers walking by and be confident that it would reach only the audience present at the time and that it would not unexpectedly be replayed or published later (see also Chevy Chase's voicemail to Dan Harmon). Instead, he wrote, the Net is binary: secret or public, no middle ground.

What's at stake here is really not private life, but *social* life. It's the addition of the online component to our social lives that has torn holes in our personal perimeters.

"We'll learn a kind of tolerance for the private conversation that is not aimed at us, and that overreacting to that tone will be a sign of social naivete," O'Brien predicted. Maybe. For now, hard cases make bad law (and not much better guidelines) *First* cases are almost always hard cases.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

February 17, 2012

Foul play

You could have been excused for thinking you'd woken up in a foreign country on Wednesday, when the news broke about a new and deliberately terrifying notice replacing the front page of a previously little-known music site, RnBXclusive.

ZDNet has a nice screenshot of it; it's gone from the RnBXclusive site now, replaced by a more modest advisory.

It will be a while before the whole story is pieced together - and tested in court - but the gist so far seems to be that the takedown of this particular music site was under the fraud laws rather than the copyright laws. As far as I'm aware - and I don't say this often - this is the first time in the history of the Net that the owner of a music site has been arrested on suspicion of conspiracy to defraud (instead of copyright infringement). It seems to me this is a marked escalation of the copyright wars.

Bearing in mind that at this stage these are only allegations, it's still possible to do some thinking about the principles involved.

The site is accused of making available, without the permission of the artists or recording companies, pre-release versions of new music. I have argued for years that file-sharing is not the economic enemy of the music industry and that the proper answer to it is legal, fast, reliable download services. (And there is increasing evidence bearing this out.) But material that has not yet been officially released is a different matter.

The notion that artists and creators should control the first publication of new material is a long-held principle and intuitively correct (unlike much else in copyright law). This was the stated purpose of copyright: to grant artists and creators a period of exclusivity in which to exploit their ideas. Absolutely fundamental to that is time in which to complete those ideas and shape them into their final form. So if the site was in fact distributing unreleased music as claimed, especially if, as is also alleged, the site's copies of that music were acquired by illegally hacking into servers, no one is going to defend either the site or its owner.

That said, I still think artists are missing a good bet here. The kind of rabid fan who can't wait for the official release of new music is exactly the kind of rabid fan who would be interested in subscribing to a feed from the studio while that music is being recorded. They would also, as a friend commented a few years ago, be willing to subscribe to a live feed from the musicians' rehearsal studio. Imagine, for example, being able to listen to great guitarists practice. How do they learn to play with such confidence and authority? What do they find hard? How long does it take to work out and learn something like Dave van Ronk's rendition, on guitar, of Scott Joplin rags with the original piano scoring intact?

I know why this doesn't happen: an artist learning a piece is like a dog with a wound (or maybe a bone): you want to go off in a forest by yourself until it's fixed. (Plus, it drives everyone around you mad.) The whole point of practicing is that it isn't performance. But musicians aren't magicians, and I find it hard to believe that showing the nuts and bolts of how the trick of playing music is worked would ruin the effect. For other types of artists - well, writers with works in progress really don't do much worth watching, but sculptors and painters surely do, as do dance troupes and theatrical companies.

However, none of that excuses the site if the allegations are true: artists and creators control the first release.

But also clearly wrong was the notice SOCA placed on the site, which displayed visitors' IP addresses, warned that downloading music from the site was a crime bearing a maximum penalty of up to ten years in prison, and claimed that SOCA has the capacity to monitor and investigate you, with no mention of due process or court orders. Copyright infringement is a civil offense, not a criminal one; fraud is a criminal offense, but it's hard to see how the claim that downloading music is part of a conspiracy to commit fraud could be made to stick. (A day later, SOCA replaced the notice.) Someone browsing to The Pirate Bay and clicking on a magnet link is not conspiring to steal TV shows any more than someone buying a plane ticket is conspiring to destroy the ozone layer. That millions of people do both things is a contributing factor to the existence of the site and the airline, but if you accuse millions of people the term "organized crime" loses all meaning.

This was a bad, bad blunder on the part of authorities wishing to eliminate file-sharing. Today's unworkable laws against file-sharing are bringing the law into contempt already. Trying to scare people by misrepresenting what the law actually says at the behest of a single industry simply exacerbates the effect. First they're scared, then they're mad, and then they ignore you. Not a winning strategy - for anyone.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

February 10, 2012

Media cop

The behavior of The Times in the 2009 NightJack case, in which the paper outed an anonymous policeman blogging about his job, was always baffling since one of the key freedoms of the press is protecting sources. On occasion, journalists have gone to jail rather than give up a source's name, although it happens rarely enough that when it does, as in the Judith Miller case linked above, Hollywood makes movies about it. The principle at work here, writes NPR reporter David Folkenflik, who covered that case, is that, "You have to protect all of your sources if you want any of them to speak to you again."

Briefly, the background. In 2009, the first winner of the prestigious Orwell Prize for political blogging was an unidentified policeman. Blogging under the soubriquet of "NightJack", the blogger declined all interviews ("I am not a media cop," he wrote), sent a friend to deliver his acceptance speech, and had his prize money sent directly to charity. Shortly afterwards, he took The Times to court to prevent it from publishing his real-life identity. Controversially, Justice David Eady ruled for The Times on the basis that NightJack had no expectation of privacy - and freedom of expression was important. Ironic, since the upshot was to stifle NightJack's speech: his real-life alter ego, Richard Horton, was speedily reprimanded by his supervisor and the blog was deleted.

This is the case that has been reinvestigated this week by the Leveson inquiry into phone hacking in the media. Justice Eady's decision seems to have rested on two prongs: first, that the Times had identified Horton from public sources, and second, that publication was in the public interest because Horton's blog posts disclosed confidential details about his police work. It seems clear from Times editor James Harding's testimony (PDF) that the first of these prongs was bent. The second seems to have been also: David Allen Green, who has followed this case closely, is arguing over at New Statesman (see the comments) that The Times's court testimony is the only source of the allegations that Horton's blog posts gave enough information that the real people in the cases he talked about could be identified. (In fact, I'd expect the cases are much more identifiable *after* his Times identification than before it.)

So Justice Eady's decision was not animated by research into the difficulty of real online anonymity. Instead, he was badly misled by incomplete, false evidence. Small wonder that Horton is suing.

One of the tools journalists use to get sources to disclose information they don't want tracked back to them is the concept of off-the-record background. When you are being briefed "on background", the rule is that you can't use what you're told unless you can find other sources to tell you the same thing on the record for publication. This is entirely logical because once you know what you're looking for you have a better chance of finding it. You now know where to start looking and what questions to ask.

But there should be every difference in an editor's mind between information willingly supplied under a promise not to publish and information obtained illegally. We can argue about whether NightJack's belief that he could remain anonymous was well-founded and whether he, like many people, did a poor job at securing his email account, but few would think he should have been outed as the result of a crime.

Once the Times reporter, Patrick Foster, knew Horton's name, he couldn't un-know it - and, as noted, it's a lot easier to find evidence backing up things you already know. What should have happened is that Foster's managers should have barred him from pursuing or talking about the story. The paper should then either have dropped it or, if the editors really thought it sufficiently important, assigned a different, uncontaminated reporter to start over with no prior knowledge and try to find the name from legal sources. Sounds too much like hard work? Yes. That this did not happen says a lot about the newsroom's culture: a focus on cheap, easy, quick, attention-getting stories acquired by whatever means. "I now see it was wrong" suggests that Harding and his editorial colleagues had lost all perspective.

Horton was, of course, not a source giving confidential information to one or more Times reporters. But it's so easy to imagine the Times - or any other newspaper - deciding to run a column written by "D.C. Plod" to give an intimate insight into how the police work. A newspaper running such a column would boast about it, especially if it won the Orwell Prize. And likely the only reason a rival paper would expose the columnist's real identity would be if the columnist was a fraud.

Imagine Watergate if it had been investigated by this newsroom instead of that of the 1972 Washington Post. Instead of the President's malfeasance in seeking re-election, the story would be the identity of Deep Throat. Mark Felt would have gone to jail and Richard Milhous Nixon would have gone down in history as an honest man.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

February 3, 2012

Beyond the soup kitchen

"The whole idea of what a homeless service is, is a soup kitchen," one of the representatives for The Connection at St Martin-in-the-Fields said yesterday. But does it have to be?

It was in the middle of "Teacamp", a monthly series of meetings that sport the same mix of geeks, government, and do-gooders as the annual UK Govcamp we covered a couple of weeks back. Meetings like this seem to be going on all the time all over the place, trying to figure out ways to use technology to help people. Hardly anyone has any budget, yet that seems not to matter: the optimism is contagious. This week's Teacamp also featured Westminster in Touch, an effort to support local residents and charities; the organization runs a biannual IT Support Forum to brainstorm (the next is March 28).

I have to admit: when I first read about Martha Lane Fox's Digital Inclusion initiative my worst rebellious instincts were triggered: why should anyone be bullied into going online if they don't want to? Maybe at least some of the 9 million people in Britain who have never used the Internet would like to be left in peace to read books and listen to - rather than use - the wireless.

But the "digital divide" predicted even in the earliest days of the Net is real: those 9 million are those in the most vulnerable sectors of society. According to research published on the RaceOnline site, the percentage of people who have never used the Net correlates closely with income. This isn't really much of a surprise, although you would expect to see a slight tick upwards again at the very top economic levels, where not so long ago people were too grand, too successful, and too set in their ways to feel the need to go online. But they have proxies: their assistants can answer their email and do their Web shopping.

When Internet access was tied to computers, the homeless in particular were at an extreme disadvantage. You can't keep a desktop computer if you have nowhere - or only a very tiny, insecure space - to put it or power it, and you can't afford broadband or a landline. A laptop presents only slightly fewer problems. Even assuming you can find free wifi to use somewhere, how do you keep the laptop from being stolen or damaged? Where and how do you keep it charged? And so The Connection, like libraries and other places, runs a day center with a computing area and resources to help, including computer training.

But even that, they said, hasn't been reaching the most excluded, the under-25s that The Connection sees. When you think about it, it's logical, but I had to be reminded to think about it. Having missed out on - or been failed by - school education, this group doesn't see the Net as the opportunity the rest of us imagine it to be for them.

"They have no idea of creating anything to help their involvement."

So rather than being "digital natives", their position might be comparable to people who have grown up without language, or perhaps autistic children whose intelligence and ability to learn have been disrupted by their brain wiring and development so much that the gap between them and their normally wired peers keeps increasing. Today's elderly who lack the motivation, the cognitive functioning, or the physical ability to go online will be catered to, even if only by proxy, until they die out. But imagine being 20 today and having no digital life beyond the completely passive experience of watching a few clips on YouTube or glancing at a Facebook page and thinking they have nothing to do with you. You will go through your entire life at a progressively greater disadvantage. Just as we assume that today's 80-year-olds grew up with movies, radio, and postal mail, when *you* are 80 (if the planet hasn't run out of energy and water and been forced to turn off all the computers by then), in devising systems to help you society will assume you grew up with television, email, and ecommerce. Whatever is put in place to help you navigate that complex future will be completely outside your grasp.

So The Connection is helping them to do some simple things: upload interviews about their lives, annotate YouTube clips, create comic strips - anything to break this passive lack of interest. Beyond that, there's a big opportunity in smart phones, which don't need charging so often and are easier to protect - and can take advantage of free wifi just as a laptop can. The Connection is working on things like an SMS service that goes out twice a day and provides weather reports, maps of food runs, and information about free things to do. Should you be technically skilled and willing, they're looking for geeky types to help them put these ideas together and automate them. There are still issues around getting people phones, of course - and around the street value of a phone - but once you have a phone where you can be contacted by friends, family, and agencies, it's a whole different life. As it is again if you can be convinced that the Net belongs to you, too, not just all those other people.

January 21, 2012

Camping out

"Why hasn't the marvelous happened yet?" The speaker - at one of today's "unconference" sessions at this year's UK Govcamp - was complaining that with 13,000-odd data sets up on his organization's site there ought to be, you know, results.

At first glance, GovCamp seems peculiarly British: an incongruous mish-mash of government folks, coders, and activists, all brought together by the idea that technology makes it possible to remake government to serve us better. But the Web tells me that events like this are happening in various locations around Europe. James Hendler, who likes to collect government data sets from around the world (700,000 and counting now!), tells me that events like this are happening all over the US, too - except that there, an event of this size - a couple of hundred people - is a New York City affair rather than a national one.

That's both good and bad: a local area in the US can find many more people to throw at more discrete problems - but on the other hand the federal level is almost impossible to connect with. And, as Hendler points out, the state charters mean that there are conversations the US federal government simply cannot have with its smaller, local counterparts. In the UK, if central government wants a local authority to do something, it can just issue an order.

This year's GovCamp is a two-day affair. Today was an "unConference": dozens of sessions organized by participants to talk about...stuff. Tomorrow will be hands-on, doing things in the limited time available. By the end of the day, the Twitter feed was filling up with eagerness to get on with things.

A veteran camper - I'm not sure how to count how many there have been - tells me that everyone leaves the event full of energy, convinced that they can change the world on Monday. By later next week, they'll have come down from this exhilarated high to find they're working with the same people and the same attitudes. Wonders do not happen overnight.

Along those lines, Mike Bracken, the guy who launched the Guardian's open data platform and is now at the Cabinet Office, acknowledged this when he thanked the crowd for the ten years of persistence and pain that created his job. The user, his colleague Mark O'Neill said recently, is at the center of everything they're working on. Are we, yet, past proving the concept?

"What should we do first?" someone I couldn't identify (never knowing who's speaking is a pitfall of unConferences) asked in the same session as the marvel-seeker. One offered answer was one any open-source programmer would recognize: ask yourself, in your daily life, what do you want to fix? The problem you want to solve - or the story you want to tell - determines the priorities and what gets published. That's if you're inside government; if you're outside, based on last summer's experience following the Osmosoft teams during Young Rewired State, often the limiting factor is what data is available and in what form.

With luck and perseverance, this should be a temporary situation. As time goes on, and open data gets built into everything, publishing it should become a natural part of everything government does. But getting there means eliminating a whole tranche of traditional culture and overcoming a lot of fear. If I open this data and others can review my decisions will I get fired? If I open this data and something goes wrong will it be my fault?

In a session on creative councils, I heard the suggestion that, in the interests of getting rid of gatekeepers who obstruct change, organizational structures should be transformed into networks with alternate routes to getting things done until the hierarchy is no longer needed. It sounds like a malcontent's dream for getting the desired technological change past a recalcitrant manager, but also like the kind of solution that solves one problem by breaking many other things. In such a set-up, who is accountable to taxpayers? Isn't some form of hierarchy inevitable, given that someone has to do the hiring and firing?

It was in a session on engagement that it became apparent that, as much as this event seems to be focused on technological fixes, the real goal is far broader. The discussion veered into consultations and how to build persistent networks of people engaged with particular topics.

"Work on a good democratic experience," advised the session's leader. Make the process more transparent, make people feel part of the process even if they don't get what they want, create the connection that makes for a truly representative democracy. In her view, what goes wrong with the consultation process now - where, for example, advocates of copyright reform find themselves writing the same ignored advice over and over again in response to the same questions - is that it's trying to compensate for the poor connections to their representatives that most people have. Building those persistent networks and relationships is only a partial answer.

"You can't activate the networks and not at the same time change how you make decisions," she said. "Without that parallel change you'll wind up disappointing people."

Marvels tomorrow, we hope.

January 6, 2012

Only the paranoid

Yesterday's news that the Ramnit worm has harvested the login credentials of 45,000 British and French Facebook users seems to me a watershed moment for Facebook. If I were an investor, I'd wish I had already cashed out. Indications are, however, that founding CEO Mark Zuckerberg is in it for the long haul, in which case he's going to have to find a solution to a particularly intractable problem: how to protect a very large mass of users from identity fraud when his entire business is based on getting them to disclose as much information about themselves as possible.

I have long complained about Facebook's repeatedly changing privacy controls. This week, while working on a piece on identity fraud for Infosecurity, I've concluded that the fundamental problem with Facebook's privacy controls is not that they're complicated, confusing, and time-consuming to configure. The problem with Facebook's privacy controls is that they exist.

In May 2010, Zuckerberg enraged a lot of people, including me, by opining that privacy is no longer a social norm. As Judith Rauhofer has observed, the world's social norms don't change just because some rich geeks in California say so. But the 800 million people on Facebook would arguably be much safer if the service didn't promise privacy - like Twitter. Because then people wouldn't post all those intimate details about themselves: their kids' pictures, their drunken sex exploits, their incitements to protest, their porn star names, their birth dates... Or if they did, they'd know they were public.

Facebook's core privacy problem is a new twist on the problem Microsoft has: legacy users. Apple was willing to make earlier generations of its software non-functional in the shift to OS X. Microsoft's attention to supporting legacy users allows me to continue to run, on Windows 7, software that was last updated in 1997. Similarly, Facebook is trying to accommodate a wide variety of privacy expectations, from those of people who joined back when membership was limited to a few relatively constrained categories to those of people joining today, when the system is open to all.

Facebook can't reinvent itself wholesale: it is wholly and completely wrong to betray users who post information about themselves into what they are told is a semi-private space by making that space irredeemably public. The storm every time Facebook makes a privacy-related change makes that clear. What the company has done exceptionally well is to foster the illusion of a private space despite the fact that, as the Australian privacy advocate Roger Clarke observed in 2003, collecting and abusing user data is social networks' only business model.

Ramnit takes this game to a whole new level. Malware these days isn't aimed at doing cute, little things like making hard drive failure noises or sending all the letters on your screen tumbling into a heap at the bottom. No, it's aimed at draining your bank account and hijacking your identity for other types of financial exploitation.

To do this, it needs to find a way inside the circle of trust. On a computer network, that means looking for an unpatched hole in software to leverage. On the individual level, it means the malware equivalent of viral marketing: get one innocent bystander to mistakenly tell all their friends. We've watched this kind of attack move through a string of vectors as human activity moves to get away from spam: from email to instant messaging to, now, social networks. The bigger Facebook gets, the bigger a target it becomes. The more information people post on Facebook - and the more their friends and friends of friends friend promiscuously - the greater the risk to each individual.

The whole situation is exacerbated by endemic poor security practices. Asking people to provide the same few bits of information for back-up questions in case they need a password reset. Imposing password rules that practically guarantee people will use and reuse the same few choices on all their sites. Putting all the eggs in services that are free at point of use and that you pay for in unobtainable customer service (not to mention behavioral targeting and marketing) when something goes wrong. If everything is locked to one email account on a server you do not control, if your security questions could be answered by a quick glance at your Facebook Timeline and a Google search, and if you bank online and reuse the same passwords, you have a potential catastrophe in waiting.

I realize not everyone can run their own mail server. But you can use multiple, distinct email addresses and passwords, you can create unique answers on the reset forms, and you can limit your exposure by presuming that everything you post *is* public, whether the service admits it or not. Your goal should be to ensure that when - it's no longer safe to say "if" - some part of your online life is hacked, the damage can be contained to that one, hopefully small, piece. Relying on the privacy consciousness of friends means you can't eliminate the risk, but you can limit the consequences.
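That advice is easy to automate. Here is a minimal sketch in Python (the function name and site labels are my own invented examples, not any particular service's API; in practice you would store the results in a password manager):

```python
import secrets

def reset_answer(nbytes: int = 12) -> str:
    """Generate a random, per-site answer to a 'security question'.

    Treat the answer like a password: unique to each site, stored in a
    password manager, and never derived from facts a Facebook Timeline
    or a Google search could reveal.
    """
    return secrets.token_urlsafe(nbytes)

# One distinct answer per site: a breach at one site leaks nothing
# useful about the others.
answers = {site: reset_answer() for site in ("bank", "email", "social")}
```

The point of `secrets` rather than `random` is that the output is cryptographically unpredictable, so the "answer" cannot be guessed from anything public about you.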

Facebook is facing an entirely different risk: that people, alarmed at the thought of being mugged, will flee elsewhere. It's happened before.

December 23, 2011

Duck amuck

Back in about 1998, a couple of guys looking for funding for their start-up were asked this: how could anyone compete with Yahoo! or AltaVista?

"Ten years ago, we thought we'd love Google forever," a friend said recently. Yes, we did, and now we don't.

It's a year and a bit since I began divorcing Google. Ducking the habit is harder than those "they have no lock-in" financial analysts thought when Google went public: as if habit and adaptation were small things. It's easy to switch CTRL-K in Firefox to DuckDuckGo, significantly harder to unlearn ten years of Google's "voice".

When I tell this to Gabriel Weinberg, the guy behind DDG - his recent round of funding lets him add a few people to experiment with different user interfaces and redo DDG's mobile application - he seems to understand. He started DDG, he told The Rise to the Top last year, because of the increasing amount of spam in Google's results. Frustration made him think: for many queries, wouldn't searching just a few good sources, such as Wikipedia, produce better results? Since his first weekend mashing that up, DuckDuckGo has evolved to include over 50 sources.

"When you type in a query there's generally a vertical search engine or data source out there that would best serve your query," he says, "and the hard problem is matching them up based on the limited words you type in." When DDG can make a good guess at identifying such a source - such as, say, the National Institutes of Health - it puts that result at the top. This is a significant hint: now, in DDG searches, I put the site name first, where on Google I put it last. Immediate improvement.

This approach gives Weinberg a new problem, a higher-order version of the Web's broken links: as companies reorganize, change, or go out of business, the APIs he relies on vanish.

Identifying the right source is harder than it sounds, because the long tail of queries requires DDG to make assumptions about what's wanted.

"The first 80 percent is easy to capture," Weinberg says. "But the long tail is pretty long."

As Ken Auletta tells it in Googled, the venture capitalist Ram Shriram advised Sergey Brin and Larry Page to sell their technology to Yahoo! or maybe Infoseek. But those companies were not interested: the thinking then was portals and keeping site visitors stuck as long as possible on the pages advertisers were paying for, while Brin and Page wanted to speed visitors away to their desired results. It was only when Shriram heard that, Auletta writes, that he realized that baby Google was disruptive technology. So I ask Weinberg: can he make a similar case for DDG?

"It's disruptive to take people more directly to the source that matters," he says. "We want to get rid of the traditional user interface for specific tasks, such as exploring topics. When you're just researching and wanting to find out about a topic there are some different approaches - kind of like clicking around Wikipedia."

Following one thing to another, without going back to a search engine...sounds like my first view of the Web in 1991. But it also sounds like some friends' notion of after-dinner entertainment, where they start with one word in the dictionary and let it lead them serendipitously from word to word and book to book. Can that strategy lead to new knowledge?

"In the last five to ten years," says Weinberg, "people have made these silos of really good information that didn't exist when the Web first started, so now there's an opportunity to take people through that information." If it's accessible, that is. "Getting access is a challenge," he admits.

There is also the frontier of unstructured data: Google searches the semi-structured Web by imposing a structure on it - its indexes. By contrast, Mike Lynch's Autonomy, which Hewlett-Packard just bought for around $11 billion, uses Bayesian logic to search unstructured data, which is what most companies have.

"We do both," says Weinberg. "We like to use structured data when possible, but a lot of stuff we process is unstructured."

Google is, of course, a moving target. For me, its algorithms and interface are moving in two distinct directions, both frustrating. The first is Wal-Mart: stuff most people want. The second is the personalized filter bubble. I neither want nor trust either. I am more like the scientists Linguamatics serves: its analytic software scans hundreds of journals to find hidden links suggesting new avenues of research.

Anyone entering a category that's as thoroughly dominated by a single company as search is now is constantly asked: how can you possibly compete with the dominant player? Weinberg must be sick of being asked about competing with Google. And he'd be right, because it's the wrong question. The right question is: how can he build a sustainable business? He's had some sponsorship while his user numbers are relatively low (currently 7 million searches a month) and, eventually, he's talked about context-based advertising - yet he's also promising little spam and privacy - no tracking. Now, that really would be disruptive.

So here's my bet. I bet that DuckDuckGo outlasts Groupon as a going concern. Merry Christmas.

July 29, 2011

Name check

How do you clean a database? The traditional way - which I still experience from time to time from journalist directories - is that some poor schnook sits in an office and calls everyone on the list, checking each detail. It's an immensely tedious job, I'm sure, but it's a living.

The new, much cheaper method is to motivate the people in the database to do it themselves. A government can pass a law and pay benefits. Amazon expects the desire to receive the goods people have paid for to be sufficient. For a social network it's a little harder, yet Facebook has managed to get 750 million users to upload varying amounts of information. Google hopes people will do the same with Google+.

The emotional connections people make on social networks obscure their basic nature as databases. When you think of them in that light, and you remember that Google's chief source of income is advertising, suddenly Google's culturally dysfunctional decision to require real names on Google+ makes some sense. For an advertising company, a fuller, cleaner database is more valuable and functional. Google's engineers most likely do not think in terms of improving the company's ability to serve tightly targeted ads - but I'd bet the company's accountants and strategists do. The justification - that online anonymity fosters bad behavior - is likely a relatively minor consideration.

Yet it's the one getting the attention, despite the fact that many people seem confused about the difference between pseudonymity, anonymity, and throwaway identity. In the reputation-based economy the Net thrives on, this difference matters.

The best-known form of pseudonymity is the stage name, essentially a form of branding for actors, musicians, writers, and artists, who may have any of a number of motives for keeping their professional lives separate from their personal lives: privacy for themselves, their work mates, or their families, or greater marketability. More subtly, if you have a part-time artistic career and a full-time day job you may not want the two to mix: will people take you seriously as an academic psychologist if they know you're also a folksinger? All of those reasons for choosing a pseudonym apply on the Net, where everything is a somewhat public performance. Given the harassment some female bloggers report, is it any wonder they might feel safer using a pseudonym?

The important characteristic of pseudonyms, which they share with "real names", is persistence. When you first encounter someone like GrrlScientist, you have no idea whether to trust her knowledge and expertise. But after more than ten years of blogging, that name is a known quantity. As GrrlScientist writes about Google's shutting down her account, it is her "real-enough" name by any reasonable standard. What's missing is the link to a portion of her identity - the name on her tax return, or the one her mother calls her. So what?

Anonymity has long been contentious on the Net; the EU has often considered whether and how to ban it. At the moment, the driving justification seems to be accountability, in the hope that we can stop people from behaving like malicious morons, the phenomenon I like to call the Benidorm syndrome.

There is no question that people write horrible things in blog and news site comments pages, conduct flame wars, and engage in cyber bullying and harassment. But that behaviour is not limited to venues where they communicate solely with strangers; every mailing list, even among workmates, has flame wars. Studies have shown that the cyber versions of bullying and harassment, like their offline counterparts, are most often perpetrated by people you know.

The more important downside of anonymity is that it enables people to hide, not their identity but their interests. Behind the shield, a company can trash its competitors and those whose work has been criticized can make their defense look more robust by pretending to be disinterested third parties.

Against that is the upside. Anonymity protects whistleblowers acting in the public interest, and protesters defying an authoritarian regime.

We have little data to balance these competing interests. One bit we do have comes from an experiment with anonymity conducted years ago on the WELL, which otherwise has insisted on verifying every subscriber throughout its history. The lesson they learned, its conferencing manager, Gail Williams, told me once, was that many people wanted anonymity for themselves - but opposed it for others. I suspect this principle has very wide applicability, and it's why the US might, say, oppose anonymity for Bradley Manning but welcome it for Egyptian protesters.

Google is already modifying the terms of what is after all still a trial service. But the underlying concern will not go away. Google has long had a way to link Gmail addresses to behavioral data collected from those using its search engine, docs, and other services. It has always had some ability to perform traffic analysis on Gmail users' communications; now it can see explicit links between those pools of data and, increasingly, tie them to offline identities. This is potentially far more powerful than anything Facebook can currently offer. And unlike government databases, it's nice and clean, and cheap to maintain.

May 20, 2011

The world we thought we lived in

If one thing is more annoying than another, it's the fantasy technology on display in so many TV shows. "Enhance that for me!" barks an investigator. And, obediently, his subordinate geek/squint/nerd pushes a button or few, a line washes over the blurry image on screen, and now he can read the maker's mark on a pill in the hand of the target subject that was captured by a distant CCTV camera. The show 24 ended for me 15 minutes into season one, episode one, when Kiefer Sutherland's Jack Bauer, trying to find his missing daughter, thrust a piece of paper at an underling and shouted, "Get me all the Internet passwords associated with that telephone number!" Um...

But time has moved on, and screenwriters are more likely to have spent their formative years online and playing computer games, and so we have arrived at The Good Wife, which gloriously wrapped up its second season on Tuesday night (in the US; in the UK the season is still winding to a close on Channel 4). The show is a lot of things: a character study of an archetypal humiliated politician's wife (Alicia Florrick, played by Julianna Margulies) who rebuilds her life after her husband's betrayal and corruption scandal; a legal drama full of moral murk and quirky judges (Carob chip?); a political drama; and, not least, a romantic comedy. The show is full of interesting, layered men and great, great women - some of them mature, powerful, sexy, brilliant women. It is also the smartest show on television when it comes to life in the time of rapid technological change.

When it was good, in its first season, Gossip Girl cleverly combined high school mean girls with the citizen reportage of TMZ to produce a world in which everyone spied on everyone else by sending tips, photos, and rumors to a Web site, which picked the most damaging moment to publish them and blast them to everyone's mobile phones.

The Good Wife goes further to exploit the fact that most of us, especially those old enough to remember life before CCTV, go about our lives forgetting that we leave a trail everywhere. Some of these trails are, of course, old staples of investigative dramas: phone records, voice messages, ballistics, and the results of a good, old-fashioned break-in-and-search. But some are myth-busting.

One case (S2e15, "Silver Bullet") hinges on the difference between the compressed, digitized video copy and the original analog video footage: dropped frames change everything. A much earlier case (S1e06, "Conjugal") hinges on eyewitness testimony; despite a slightly too-pat resolution (I suspect now, with more confidence, it might have been handled differently), the show does a textbook job of demonstrating the flaws in human memory and their application to police line-ups. In a third case (S1e17, "Heart"), a man faces the loss of his medical insurance because of a single photograph posted to Facebook showing him smoking a cigarette. And the disgraced husband's (Peter Florrick, played by Chris Noth) attempt to clear his own name comes down to a fancy bit of investigative work capped by camera footage from an ATM in the Cayman Islands that the litigator is barely technically able to display in court. As entertaining demonstrations and dramatizations of the stuff net.wars talks about every week and the way technology can be both good and bad - Alicia finds romance in a phone tap! - these could hardly be better. The stuffed lion speaker phone (S2e19, "Wrongful Termination") is just a very satisfying cherry topping of technically clever hilarity.

But there's yet another layer, surrounding the season two campaign mounted to get Florrick elected back into office as State's Attorney: the ways that technology undermines as well as assists today's candidates.

"Do you know what a tracker is?" Peter's campaign manager (Eli Gold, played by Alan Cumming) asks Alicia (S2e01, "Taking Control"). Answer: in this time of cellphones and YouTube, unpaid political operatives follow opposing candidates' family and friends to provoke and then publish anything that might hurt or embarrass the opponent. So now: Peter's daughter (Makenzie Vega) is captured praising his opponent and ham-fistedly trying to defend her father's transgressions ("One prostitute!"). His professor brother-in-law's (Dallas Roberts) in-class joke that the candidate hates gays is live-streamed over the Internet. Peter's son (Graham Phillips) and a manipulative girlfriend (Dreama Walker), unknown to Eli, create embarrassing, fake Facebook pages in the name of the opponent's son. Peter's biggest fan decides to (he thinks) help by posting lame YouTube videos apparently designed to alienate the very voters Eli's polls tell him to attract. (He's going to post one a week; isn't Eli lucky?) Polling is old hat, as are rumors leaked to newspaper reporters; but today's news cycle is 20 minutes and can we have a quote from the candidate? No wonder Eli spends so much time choking and throwing stuff.

All of this fits together because the underlying theme of all parts of the show is control: control of the campaign, the message, the case, the technology, the image, your life. At the beginning of season one, Alicia has lost all control over the life she had; by the end of season two, she's in charge of her new one. Was a camera watching in that elevator? I guess we'll find out next year.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

April 15, 2011

The open zone

This week my four-year-old computer had a hissy fit and demanded, more or less simultaneously, a new graphics card, a new motherboard, and a new power supply. It was the power supply that was the culprit: when it blew it damaged the other two pieces. I blame an incident about six months ago when the power went out twice for a few seconds each time, a few minutes apart. The computer's always been a bit fussy since.

I took it to the tech guys around the corner to confirm the diagnosis, and we discussed which replacements to order and where to order them from. I am not a particularly technical person, and yet even I can repair this machine by plugging in replacement parts and updating some software. (It's fine now, thank you.)

Here's the thing: at no time did anyone say, "It's four years old. Just get a new one." Instead, the tech guys said, "It's a good computer with a good processor. Sure, replace those parts." A watershed moment: the first time a four-year-old computer is not dismissed as obsolete.

As if by magic, confirmation turned up yesterday, when the Guardian's Charles Arthur asked whether the PC market has permanently passed its peak. Arthur goes on to quote Jay Chou, a senior research analyst at IDC, suggesting that we are now in the age of "good-enough computing" and that computer manufacturers will now need to find ways to create a "compelling user experience". Apple is the clear leader in that arena, although it's likely that if I'd had a Mac instead of a PC it would have been neither so easy nor so quick and inexpensive to fix my machine and get back to work on it. Macs are wonders of industrial design, but as I noted in 2007 when I built this machine, building PCs is now a paint-by-numbers affair: subsystem pieces that plug together in only one way. What a PC lacks in elegance compared to a Mac is more than made up for by being able to repair it myself.

But Chou is likely right that this is not the way the world is going.

In his 1998 book The Invisible Computer, usability pioneer Donald Norman projected a future of information appliances, arguing that computers would become invisible because they would be everywhere. (He did not, however, predict the ubiquitous 20-second delay that would accompany this development. You know, it used to be you could turn something on and it would work right away because it didn't have to load software into its memory?) For his model, Norman took electric motors: in the early days you bought one electric motor and used it to power all sorts of variegated attachments; later (now) you found yourself owning dozens of electric motors, all hidden inside appliances.

The trade-off is pretty much the same: the single electric motor with attachments was much more repairable by a knowledgeable end user than today's sealed black-box appliances are. Similarly, I can rebuild my PC, but I can only really replace the hard drive on my laptop and the battery on my smart phone. iPhone users can't even do that. Norman, whose interest is usability, doesn't - or didn't, since he's written other books since - see this as necessarily a bad deal for consumers, who just want their technology to work intuitively so they can use it to get stuff done.

Jonathan Zittrain, though, has generally taken the opposite view, arguing in his book The Future of the Internet - and How to Stop It and in talks such as the one he gave at last year's Web science meeting that the general-purpose computer, which he dates to 1977, is dying. With it, to some extent, is going the open Internet; it was at that point that, to illustrate what he meant by curated content, he did a nice little morph from the ultra-controlled main menu of CompuServe circa 1992 to today's iPhone home screen.

"How curated do we want things to be?" he asked.

It's the key question. Zittrain's view, backed up by Tim Wu in The Master Switch, is that security and copyright may be the levers used to close down general-purpose computers and the Internet, leaving us with a corporately owned Internet that runs on black boxes to which individual consumers have little or no access. This is, ultimately, what the "Open" in Open Rights Group seems to me to be about: ensuring that the most democratic medium ever invented remains a democratic medium.

Clearly, there are limits. The earliest computer kits were open - but only to the relatively small group of people with - or willing to acquire - considerable technical skill. My computer would not be more open to me if I had to get out a soldering iron to fix my old motherboard and code my own operating system. Similarly, the skill required to deal with security threats like spam and malware attacks raises the technical bar of dealing with computers to the point where they might as well be the black boxes Zittrain fears. But somewhere between the soldering iron and the point-and-click of a TV remote control there has to be a sweet spot where the digital world is open to the most people. That's what I hope we can find.


March 25, 2011

Return to the red page district

This week's agreement to create a .xxx generic top-level domain (generic in the sense of not being identified with a particular country) seems like a quaint throwback. Ten or 15 years ago it might have mattered. Now, for all the stories rehashing the old controversies, it seems to be largely irrelevant to anyone except those who think they can make some money out of it. How can it be a vector for censorship if there is no prohibition on registering pornography sites elsewhere? How can it "validate" the porn industry any more than printers and film producers did? Honestly, if it didn't have sex in the title, who would care?

I think it was about 1995 when a geekish friend said, probably at the Computers, Freedom, and Privacy conference, "I think I have the solution. Create a top-level domain just for porn."

It sounded like a good idea at the time. Many of the best ideas are simple - with a kind of simplicity mathematicians like to praise with the term "elegant". Unfortunately, many of the worst ideas are also simple - with a kind of simplicity we all like to diss with the term "simplistic". Which this is depends to some extent on when you're making the judgement.

In 1995, the sense was that creating a separate pornography domain would provide an effective alternative to broad-brush filtering. It was the era of Time magazine's Cyberporn cover story, which Netheads thoroughly debunked, and the run-up to the passage of the Communications Decency Act in 1996. The idea that children would innocently stumble upon pornography was entrenched and not wholly wrong. At that time, as PC Magazine points out while outlining the adult entertainment industry's objections to the new domain, a lot of Web surfing was done by guesswork, which is how the domain became famous.

A year or two later, I heard that one of the problems was that no one wanted to police domain registrations. Sure. Who could afford the legal liability? Besides, limiting who could register what in which domain was not going well: .com, which was intended to be for international commercial organizations, had become the home for all sorts of things that didn't fit under that description, while the .us country code domain had fallen into disuse. Even today, with organizations controlling every top-level domain, the rules keep having to adapt to user behavior. Basically, the fewer the people interested in registering under your domain, the more likely it is that your rules will continue to work.

No one has ever managed to settle - again - the question of what the domain name system is for, a debate that's as old as the system itself: its inventor, Paul Mockapetris, still carries the scars of the battles over whether to create .com. (If I remember correctly, he was against it, but finally gave in on the basis of: "What harm can it do?") Is the domain name system a directory, a set of mnemonics, a set of brands/labels, a zoning mechanism, or a free-for-all? ICANN began its life, in part, to manage the answers to this particular controversy; many long-time watchers don't understand why it's taken so long to expand the list of generic top-level domains. Fifteen years ago, finding a consensus and expanding the list would have made a difference to the development of the Net. Now it simply does not matter.

I've written before now that the domain name system has faded somewhat in importance as newer technologies - instant messaging, social networks, iPhone/iPad apps - bypass it altogether. And that is true. When the DNS was young, it was a perfect fit for the Internet applications of the day for which it was devised: Usenet, Web, email, FTP, and so on. But the domain name system enables email and the Web, which are typically the gateways through which people make first contact with those services (you download the client via the Web, email your friend for his ID, use email to verify your account).
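That first-contact role is easy to see in practice. As a minimal sketch (using only Python's standard library; the use of "localhost" is just an illustrative example, not anything specific to the services named above), every Web fetch or email delivery begins the same way, by asking the resolver to turn a name into an address:

```python
import socket

def resolve(hostname: str) -> list[str]:
    """Return the addresses a name maps to, in the order the resolver gives them.

    Every Web page fetch and email delivery starts with a lookup like this;
    only once the name has become a number can a connection be opened.
    """
    # getaddrinfo consults the DNS (or local sources such as /etc/hosts).
    infos = socket.getaddrinfo(hostname, None)
    addresses: list[str] = []
    for family, socktype, proto, canonname, sockaddr in infos:
        addr = sockaddr[0]  # the IP address is the first element of sockaddr
        if addr not in addresses:  # deduplicate while preserving order
            addresses.append(addr)
    return addresses

# "localhost" is resolved locally, so this works even without a network.
print(resolve("localhost"))
```

The point of the sketch is only that the name, not the number, is what people remember and exchange, which is why whoever controls the mapping controls the gateway.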

The rise of search engines - first Altavista, then primarily Google - did away with much of consumers' need for a directory. Also a factor was branding: businesses wanted memorable domain names they could advertise to their customers. By now, though, most people probably don't bother to remember more than a tiny handful of domain names - Google, Facebook, perhaps one or two more. Anything else they either put into a search engine or get from either a bookmark or, more likely, their browser history.

Then came sites like Facebook, which take an approach akin to CompuServe in the old days or mobile networks now: they want to be your gateway to everything online (Facebook is going to stream movies now, in competition with Netflix!) If they succeed, would it matter if you had - once - to teach your browser a long, user-unfriendly numeric address?

It is in this sense that the domain name system competes with Google and Facebook as the gateway to the Net. Of all the potential gateways, it is the only one that is intended as a public resource rather than a commercial company. That has to matter, and we should take seriously the threat that all the Net's entrances could become owned by giant commercial interests. But .xxx missed its moment to make history.


March 4, 2011

Tax returns

In 1994, when Jeff Bezos was looking for a place to put the online bookseller he intended to grow into the giant, multi-faceted online presence it is today, he began with a set of criteria that included, high up on the list, avoiding liability for sales tax as much as possible. That meant choosing a small state, so that the vast majority of the new site's customers would be elsewhere.

Bezos could make this choice because of the 1992 Supreme Court decision in Quill Corp v. North Dakota, blocking states from compelling distance sellers to collect sales tax from customers unless the seller had a substantial physical operation (a "nexus") in the customer's state. Why, the reasoning went, should a company be required to pay taxes in a state where it receives no benefit in the form of public services? The decision helped fuel the growth of first mail-order sales and then ecommerce.

And so throughout the growth of electronic commerce Americans have gone along taking advantage of the relief from sales tax afforded by online sales. This is true despite the fact that many states have laws requiring their residents to declare and pay the sales tax on purchases over a certain amount. Until the current online tax disputes blew up, few knew about these laws - I only learned of them from a reader email some years ago - and as far as I'm aware they aren't enforced. Doing so would require comprehensive surveillance of ecommerce sites.

But this is the thing when something is new: those setting up businesses can take advantage of loopholes created for very different markets and conditions. A similar situation applies in the UK with respect to DVD and CD sales. Fulfilled by subsidiaries or partners based in the Channel Islands, the DVD and CD sales of major retailers such as Amazon, Tesco, and others take advantage of tax relief rules intended to speed shipments of agricultural products. Basically, any package valued under £18 is exempt from VAT. For consumers, this represents substantial savings; for local shops, it represents a tough challenge.

Even before that, in the early 1990s, CompuServe and AOL, as US-based Internet service providers, were able to avoid charging VAT in the UK based on a rule making services taxable based on their point of origin. That gave those two companies a significant - 17.5 percent - advantage over native ISPs like Demon and Pipex. There were many objections to this situation, and eventually the loophole was closed and both CompuServe and AOL began charging VAT.

You can't really blame companies for taking advantage of the structures that are there. No one wants to pay more tax - or pay for more administration - than is required by law, and anyone running those companies would make the same decisions. But as the recession continues to bite and state, federal, and central governments are all scrambling to replace lost revenues from a tax base that's been eroded, the calls to level the playing field by closing off these tax-advantage workarounds are getting louder.

This type of argument is as old as mail order. But in the beginning there was a general view - implemented also in the US as a moratorium on taxing Internet services that was renewed as recently as 2007 - that exempting the Internet from as many taxes as possible would help the new medium take root and flourish. There was definitely some truth to the idea that this type of encouragement helped; an early FCC proposal to surcharge users for transmitting data was dropped after 10,000 users sent letters of complaint. Nonetheless, the FCC had to continue issuing denials for years as the dropped proposal continued to make the rounds as the "modem tax" hoax spam.

The arguments for requiring out-of-state sellers to collect and remit sales taxes (or VAT) are fairly obvious. Local retailers, especially small independents, are operating at a price disadvantage (even though customers must pay shipping and delivery charges when they buy online). Governments are losing one of their options for raising revenues to pay for public services. In addition, people buy online for many more reasons than saving money. Online shopping is convenient and offers greater choice. It is also true, though infrequently remembered, that the demographics of online shopping skew toward the wealthier members of our society - that is, the people who can best afford to pay the tax.

The arguments against largely boil down to the fact that collecting taxes in many jurisdictions is administratively burdensome. There are some 8,000 different tax rates across the US's 50 states, and although there are many fewer VAT rates across Europe, once your business in a country has reached a certain threshold, the rules and regulations governing each one can be byzantine and inconsistent. Creating a single, simple, and consistent tax rule to apply across the board to distance selling would answer these objections.

No one likes paying taxes (least of all us). But the fact that Amazon would apparently rather jettison the associates program that helped advertise and build its business than allow a state to claim those associates constitute a nexus exposing it to sales tax liability says volumes about how far we've come. And, therefore, how little the Net's biggest businesses now need the help.


February 18, 2011

What is hyperbole?

This seems to have been a week for over-excitement. IBM gets an onslaught of wonderful publicity because it built a very large computer that won at the archetypal American TV game show, Jeopardy. And Eben Moglen proposes the Freedom box, a more-or-less pocket ("wall wart") computer you can plug in and that will come up, configure itself, and be your Web server/blog host/social network/whatever and will put you and your data beyond the reach of, well, everyone. "You get no spying for free!" he said in his talk outlining the idea for the New York Internet Society.

Now I don't mean to suggest that these are not both exciting ideas and that making them work is/would be an impressive and fine achievement. But seriously? Is "Jeopardy champion" what you thought artificial intelligence would look like? Is a small "wall wart" box what you thought freedom would look like?

To begin with Watson and its artificial buzzer thumb. The reactions display everything that makes us human. The New York Times seems to think AI is solved, although its editors focus on our ability to anthropomorphize an electronic screen with a smooth, synthesized voice and a swirling logo. (Like HAL, R2D2, and Eliza Doolittle, its status is defined by the reactions of the surrounding humans.)

The Atlantic and Forbes come across as defensive. The LA Times asks: how scared should we be? The San Francisco Chronicle congratulates IBM for suddenly becoming a cool place for the kids to work.

If, that is, they're not busy hacking up Freedom boxes. You could, if you wanted, see the past twenty years of net.wars as a recurring struggle between centralization and distribution. The Long Tail finds value in selling obscure products to meet the eccentric needs of previously ignored niche markets; eBay's value is in aggregating all those buyers and sellers so they can find each other. The Web's usefulness depends on the diversity of its sources and content; search engines aggregate it and us so we can be matched to the stuff we actually want. Web boards distributed us according to niche topics; social networks aggregated us. And so on. As Moglen correctly says, we pay for those aggregators - and for the convenience of closed, mobile gadgets - by allowing them to spy on us.

An early, largely forgotten net.skirmish came around 1991 over the asymmetric broadband design that today is everywhere: a paved highway going to people's homes and a dirt track coming back out. The objection that this design assumed that consumers would not also be creators and producers was largely overcome by the advent of Web hosting farms. But imagine instead that symmetric connections were the norm and everyone hosted their sites and email on their own machines with complete control over who saw what.

This is Moglen's proposal: to recreate the Internet as a decentralized peer-to-peer system. And I thought immediately how much it sounded like...Usenet.

For those who missed the 1990s: Usenet was invented and implemented in 1979 by three students - Tom Truscott, Jim Ellis, and Steve Bellovin - and its whole point was that it was a low-cost, decentralized way of distributing news. Once the Internet was established, it became the medium of transmission, but in the beginning computers phoned each other and transferred news files. In the early 1990s, it was the biggest game in town: it was where Linus Torvalds and Tim Berners-Lee announced their inventions of Linux and the World Wide Web.

It always seemed to me that if "they" - whoever they were going to be - seized control of the Internet we could always start over by rebuilding Usenet as a town square. And this is to some extent what Moglen is proposing: to rebuild the Net as a decentralized network of equal peers. Not really Usenet; instead a decentralized Web like the one we gave up when we all (or almost all) put our Web sites on hosting farms whose owners could be DMCA'd into taking our sites down or subpoena'd into turning over their logs. Freedom boxes are Moglen's response to "free spying with everything".

I don't think there's much doubt that the box he has in mind can be built. The Pogoplug, which offers a personal cloud and a sort of hardware social network, is most of the way there already. And Moglen's argument has merit: that if you control your Web server and the nexus of your social network law enforcement can't just make a secret phone call, they'll need a search warrant to search your home if they want to inspect your data. (On the other hand, seizing your data is as simple as impounding or smashing your wall wart.)

I can see Freedom boxes being a good solution for some situations, but like many things before it they won't scale well to the mass market because they will (like Usenet) attract abuse. In cleaning out old papers this week, I found a 1994 copy of Esther Dyson's Release 1.0 in which she demands a return to the "paradise" of the "accountable Net"; 'twill be ever thus. The problem Watson is up against is similar: it will function well, even engagingly, within the domain it was designed for. Getting it to scale will be a whole 'nother, much more complex problem.


January 28, 2011


"You don't need this old math work," said my eighth grade geography teacher, paging through my loose-leaf notebook while I watched resentfully. It was 1967, the math work was no more than a couple of months old, and she was ahead of her time. She was an early prototype of that strange, new species littering the media these days: the declutterer.

People like her - they say "professional organizer", I say bully - seem to be everywhere. Their sudden visibility is probably due, at least in part, to the success of the US TV series Hoarders, in which mentally disordered people are forced to confront their pathological addiction to keeping and/or acquiring so much stuff that their houses are impassable, often hazardous. Of course, one person's pathological hoarder is another's more-or-less normal slob, packrat, serious collector, or disorganized procrastinator. Still, Newsweek's study of kids who are stuck with the clean-up after their hoarder parents die is decidedly sad.

But much of what I'm reading seems aimed at perfectly normal people who are being targeted with all the zealotry of an early riser insisting that late sleepers and insomniacs are lazy, immoral slugs who need to be reformed.

Some samples. LifeHacker profiles a book to help you estimate how much your clutter is costing you. The latest middle-class fear is that schools' obsession with art work will turn children into hoarders. The New York Times profiles a professional declutterer who has so little sympathy for attachment to stuff that she tosses out her children's party favors after 24 hours. At least she admits she's neurotic, and is just happy she's made it profitable to the tune of $150 an hour (well, Manhattan prices).

But take this comment from LifeHacker:

For example, look in your bedroom and consider the cost of unworn clothes and shoes, unread books, unworn jewelry, or unused makeup.

And this, from the Newsweek piece:

While he's thrown out, recycled, and donated years' worth of clothing, costume jewelry, and obvious trash, he's also kept a lot--including an envelope of clothing tags from items [his mother] bought him in 1972, hundreds of vinyl records, and an outdated tape recorder with corroded batteries leaking out the back.

OK, with her on the corroded batteries. (What does she mean, outdated? If it still functions for its intended purpose it's just old.) Little less sure about the clothing tags, which might evoke memories. But unread books? Unless you're talking 436 copies of The DaVinci Code, unread books aren't clutter. Unread books are mental food. They are promises of unknown worlds on a rainy day when the electricity goes bang. They are cultural heritage. Ditto vinyl records. Not all books and LPs are equally valuable, of course, but they should be presumed innocent until proven to be copies of Jeffrey Archer novels. Books are not shoeboxes marked "Pieces of string - too small to save".

Leaving aside my natural defensiveness at the suggestion that thousands of books, CDs, DVDs, and vinyl LPs are "clutter", it strikes me that one reason for this trend is that there is a generational shift taking place. Anyone born before about 1970 grew up knowing that the things they liked might become unavailable at any time. TV shows were broadcast once, books and records went out of print, and the sweater that sold out while you were saving up for it didn't reappear later on eBay. If you had any intellectual or artistic aspirations, building your own library was practically a necessity.

My generation also grew up making and fixing things: we have tools. (A couple of years ago I asked a pair of 20-somethings for a soldering iron; they stared as if I'd asked for a manual typewriter.) Plus, in the process of rebelling against our parents' largely cautious and thrifty lifestyles, Baby Boomers were the first to really exploit consumer credit. Put it together: endemic belief that the availability of any particular item was only temporary, unprecedented array of goods to choose from, extraordinary access to funding. The result: stuff.

To today's economically stressed-out younger generation, raised on reruns and computer storage, the physical manifestations of intellectual property must seem peculiarly unnecessary. Why bother when you can just go online and click a button? One of my 50-something writer friends loves this new world; he gives away or sells books as soon as he's read them, and buys them back used from Amazon or Alibris if he needs to consult them again. Except for the "buying it used" part, this is a business model the copyright industries ought to love, because you can keep selling the same thing over and over again to the same people. Essentially, it's rental, which means it may eventually be an even better business than changing the media format every decade or two so that people have to buy new copies. When 3D printers really get going, I imagine there will be people arguing that you really don't need to keep furniture around - just print it when you need it. Then the truly modern home environment will be just a bare floor and walls. If you want to live like that, fine, but on behalf of my home libraries, I say: ick.


January 14, 2011

Face time

The history of the Net has featured many absurd moments, but this week was some sort of peak of the art. In the same week I read that a) based on a $450 million round of investment from Goldman Sachs, Facebook is now valued at $50 billion, higher than Boeing's market capitalization, and b) Facebook's founder, Mark Zuckerberg, is so tired of the stress of running the service that he plans to shut it down on March 15. As I seem to recall a CS Lewis character remarking irritably, "Why don't they teach logic in these schools?" If you have a company worth $50 billion and you don't much like running it any more, you sell the damn thing and retire. It's not like Zuckerberg even needs to wait to be Time's Man of the Year.

While it's safe to say that Facebook isn't going anywhere soon, it's less clear what its long-term future might be, and the users who panicked at the thought of the service's disappearance would do well to plan ahead. Because: if there's one thing we know about the history of the Net's social media, it's that the party keeps moving. Facebook's half-a-billion-strong user base is, to be sure, bigger than anything else assembled in the history of the Net. But I think the future as seen by Douglas Rushkoff, writing for CNN last week, is more likely: Facebook, he argued, based on its arguably inflated valuation, is at the beginning of its end, as MySpace was when Rupert Murdoch bought it in 2005 for $580 million. (Though this says as much about Murdoch's Net track record as it does about MySpace: Murdoch bought the text-based Delphi at its peak moment in late 1993.)

Back in 1999, at the height of the dot-com boom, the New Yorker published an article (abstract; full text requires subscription) comparing the then-spiking stock price of AOL with that of the Radio Corporation of America back in the 1920s, when radio was the hot, new democratic medium. RCA was selling radios that gave people unprecedented access to news and entertainment (including stock quotes); AOL was selling online accounts that gave people unprecedented access to news, entertainment, and their friends. The comparison, as the article noted, wasn't perfect, but the comparison chart the article was written around was, as the author put it, "jolly". It still looks jolly now, recreated some months later for this analysis of the comparison.

There is more to every company than just its stock price, and there is more to AOL than its subscriber numbers. But the interesting chart to study - if I had the ability to create such a chart - would be the successive waves of rising, peaking, and falling numbers of subscribers of the various forms of social media. In more or less chronological order: bulletin boards, Usenet, Prodigy, Genie, Delphi, CompuServe, AOL...and now MySpace, which this week announced extensive job cuts.

At its peak, AOL had 30 million subscribers; at the end of September 2010 it had 4.1 million in the US. As subscriber revenues continue to shrink, the company is changing its emphasis to producing content that will draw in readers from all over the Web - that is, it's increasingly dependent on advertising, like many companies. But the broader point is that at its peak a lot of people couldn't conceive that it would shrink to this extent, because of the basic principle of human congregation: people go where their friends are. When the friends gradually start to migrate to better interfaces, more convenient services, or simply sites their more annoying acquaintances haven't discovered yet, others follow. That doesn't necessarily mean death for the service they're leaving: AOL, like CIX, The WELL, and LiveJournal before it, may well find a stable size at which it remains sufficiently profitable to stay alive, perhaps even comfortably so. But it does mean it stops being the growth story of the day.

As several financial commentators have pointed out, the Goldman investment is good for Goldman no matter what happens to Facebook, and may not be ring-fenced enough to keep Facebook private. My guess is that even if Facebook has reached its peak it will be a long, slow ride down the mountain and between then and now at least the early investors will make a lot of money.

But long-term? Facebook is barely five years old. According to figures leaked by one of the private investors, its price-earnings ratio is 141. The good news is that if you're rich enough to buy shares in it you can probably afford to lose the money.

As far as I'm aware, little research has been done studying the Net's migration patterns. From my own experience, I can say that my friends lists on today's social media include many people I've known on other services (and not necessarily in real life) as the old groups reform in a new setting. Facebook may believe that because the profiles on its service are so complex, including everything from status updates and comments to photographs and games, users will stay locked in. Maybe. But my guess is that the next online party location will look very different. If email is for old people, it won't be long before Facebook is, too.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

December 10, 2010

Chertoffed

A new word came my way while I was reviewing the many complaints about the Transportation Security Administration and its new scanner toys and pat-down procedures: "Chertoffed". It's how "security theater" (Bruce Schneier's term) has transformed the US since 2001.

The description isn't entirely fair to Chertoff, who was only the *second* head of the Bush II-created Department of Homeland Security and has now been replaced: he served from 2005-2009. But since he's the guy who began the scanner push and also numbers scanner manufacturers among the clients of his consultancy company, The Chertoff Group - it's not really unfair either.

What do you do after defining the travel experience of a generation? A little over a month ago, Chertoff showed up at London's RSA Data Security conference to talk about what he thought needed to happen in order to secure cyberspace. We need, he said, a doctrine to lay out the rules of the road for dealing with cyber attacks and espionage - the sort of thing that only governments can negotiate. The analogy he chose was to the doctrine that governed nuclear armament, which he said (at the press Q&A) "gave us a very stable, secure environment over the next several decades."

In cyberspace, he argued, such a thing would be valuable because it makes clear to a prospective attacker what the consequences will be. "The greatest stress on security is when you have uncertainty - the attacker doesn't know what the consequences will be and misjudges the risk." The kinds of things he wants a doctrine to include are therefore things like defining what is a proportionate response: if your country is on the receiving end of an attack from another country that's taking out the electrical power to hospitals and air traffic control systems with lives at risk, do you have the right to launch a response to take out the platform they're operating from? Is there a right of self-defence of networks?

"I generally take the view that there ought to be a strong obligation on countries, subject to limitations of practicality and legal restrictions, to police the platforms in their own domains," he said.

Now, there are all sorts of reasons many techies are against government involvement - or interference - in the Internet. First and foremost is time: the World Summit on the Information Society and its successor, the Internet Governance Forum, have taken years to accomplish no one's quite sure what, while the Internet's technology has gone on racing ahead creating new challenges. But second is a general distrust, especially among activists and civil libertarians. Chertoff even admitted that.

"There's a capability issue," he said, "and a question about whether governments put in that position will move from protecting us from worms and viruses to protecting us from dangerous ideas."

This was, of course, somewhat before everyone suddenly had an opinion about Wikileaks. But what has occurred since makes that distrust entirely reasonable: give powerful people a way to control the Net and they will attempt to use it. And the Net, as in John Gilmore's famous aphorism, "perceives censorship as damage and routes around it". Or, more correctly, the people do.

What is incredibly depressing about all this is watching the situation escalate into the kind of behavior that governments have quite reasonably wanted to outlaw and that will give ammunition to those who oppose allowing the Net to remain an open medium in which anyone can publish. The more Wikileaks defenders organize efforts like this week's distributed denial-of-service attacks, the more Wikileaks and its aftermath will become the justification for passing all kinds of restrictive laws that groups like the Electronic Frontier Foundation and the Open Rights Group have been fighting against all along.

Wikileaks itself is staying neutral on the subject, according to the statement on its (Swiss) Web site: Wikileaks spokesman Kristinn Hrafnsson said: "We neither condemn nor applaud these attacks. We believe they are a reflection of public opinion on the actions of the targets."

Well, that's true up to a point. It would be more correct to say that public opinion is highly polarized, and that the attacks are a reflection of the opinion of a relatively small section of the public: people who are at the angriest end of the spectrum and have enough technical expertise to download and install software to make their machines part of a botnet - and not enough sense to realize that this is a risky, even dangerous, thing to do. Boycotting Amazon during its busiest time of year to express your disapproval of its having booted Wikileaks off its servers would be an entirely reasonable protest. Vandalism is not. (In fact the announced attack on Amazon's servers seems not to have succeeded, though others have.)

I have written about the Net and what I like to call the border wars between cyberspace and real life for nearly 20 years. Partly because it's fascinating, partly because when something is new you have a real chance to influence its development, and partly because I love the Net and want it to fulfill its promise as a democratic medium. I do not want to have to look back in another 20 years and say it's been "Chertoffed". Governments are already mad about the utterly defensible publication of the cables; do we have to give them the bullets to shoot us with, too?


December 3, 2010

Open diplomacy

Probably most people have by now lived through the embarrassment of having a communication they intended to be private made public. The email your fingers oopsishly sent to the entire office instead of your inamorata; the drunken Usenet postings scooped into Google's archive; the direct Tweet that wound up in the public timeline; the close friend your cellphone pocket-dialed while you were trashing them.

Most of these embarrassments are relatively short-lived. The personal relationships that weren't already too badly damaged recover, if slowly. Most of the people who get the misdirected email are kind enough to delete it and never mention it again. Even the stock market learns to forgive those drunken Usenet postings; you may be a CEO now but you were only a frat boy back then.

But the art of government-level diplomacy is creating understanding, tolerance, and some degree of cooperation among people who fundamentally distrust each other and whose countries may have substantial, centuries-old reasons why that is utterly rational. (Sometimes these internecine feuds are carried to extremes: would you buy from a store that filed Greek and Turkish DVDs in the same bin?) It's hardly surprising if diplomats' private conversations resemble those of Hollywood agents, telling each person what they want to hear about the others and maneuvering them carefully to get the desired result. And a large part of that desired result is avoiding mass destruction through warfare.

For that reason, it's hard to simply judge Wikileaks' behavior by the standard of our often-expressed goal of open data, transparency, accountability, and net.freedoms. Is there a line? And where do you draw it?

In the past, it was well-established news organizations who had to make this kind of decision - the New York Times and the Washington Post regarding the Pentagon Papers, for example. Those organizations, rooted in a known city in a single country, knew that mistakes would see them in court; they had reputations, businesses, and personal liberty to lose. Wikileaks, as Jay Rosen has observed, is the world's first stateless news organization, answerable to no single country's culture, laws, or norms: it contracts with those who have information to submit it, encrypts submissions to disguise the source even from itself, and publishes - and, being stateless, it cannot be subpoenaed. Rosen traces its rise to the failure of the watchdog press under George Bush, and the press's anxiety about it to denial of its own death.

Wikileaks wasn't *exactly* predicted by Internet pioneers, but it does have its antecedents and precursors. Before collaborative efforts - wikis - became commonplace on the Web there was already the notion of bypassing the nation-state to create stores of data that could not be subjected to subpoenas and other government demands. There was the Sealand data bunker. There was physicist Timothy May's Crypto Anarchist Manifesto, which posited that, "Crypto anarchy will allow national secrets to be traded freely and will allow illicit and stolen materials to be traded."

Note, however, that a key element of these ideas was anonymity. Julian Assange has told Guardian readers that in fact he originally envisioned Wikileaks as an anonymous service, but eventually concluded that someone must be responsible to the public.

Curiously, the strand of Internet history that is the closest to the current Wikileaks situation is the 1993-1997 wrangle between the Net and Scientology, which I wrote about for Wired in 1995. This particular net.war did a lot to establish the legal practices still in force with respect to user-generated content: notice and takedown, in particular. Like Wikileaks today, those posting the most closely guarded secrets of Scientology found their servers under attack and their material being taken down and, in response, replicated internationally on mirror sites to keep it available. Eventually, sophisticated systems were developed for locating the secret documents wherever they were hosted on a given day as they bounced from server to server (and they had to do all that without the help of Twitter). Today, much of the gist is on Wikipedia. At the time, however, calling it a "flame war with real bullets" wasn't far wrong: some of Scientology's fiercest online critics had their servers and/or homes raided. When Amazon removed Wikileaks from its servers because of "copyright", it operated according to practices defined in response to those Scientology actions.

The arguments over Wikileaks push at many other boundaries that have been hotly disputed over the last 20 years. Are they journalists, hackers, criminals, or heroes? Is Wikileaks important because, as NYU professor Jay Rosen points out, journalism has surrendered its watchdog role? Or because it is posing, as Techdirt says, the kind of challenge to governments that the music and film industries have already been facing? On a technical level, Wikileaks is showing us the extent to which the Internet can still resist centralised control.

A couple of years ago, Stefan Magdalinski noted the "horse-trading in a fairly raw form" his group of civic hackers discovered when they set out to open up the United Nations proceedings - another example of how people behave when they think no one is watching. Ultimately governments will learn to function in a world in which they cannot trust that anything is secret, just as they had to learn to cope with CNN (PDF).


November 12, 2010

Just between ourselves

It is, I'm sure, pure coincidence that a New York revival of Vaclav Havel's wonderfully funny and sad 1965 play The Memorandum was launched while the judge was considering the Paul Chambers "Twitter joke trial" case. "Bureaucracy gone mad," they're billing the play, and they're right, but what that slogan omits is that the bureaucracy in question has gone mad because most of its members don't care and the one who does has been shut out of understanding what's going on. A new language, Ptydepe, has been secretly invented and introduced as a power grab by an underling claiming it will improve the efficiency of intra-office communications. The hero only discovers the shift when he receives a memorandum written in the new language and can't get it translated due to carefully designed circular rules. When these are abruptly changed the translated memorandum restores him to his original position.

It is one of the salient characteristics of Ptydepe that it has a different word for every nuance of the characters' natural language - Czech in the original, but of course English in the translation I read. Ptydepe didn't work for the organization in the play because it was too complicated for anyone to learn, but perhaps something like it that removes all doubt about nuance and context would assist older judges in making sense of modern social interactions over services such as Twitter. Clearly any understanding of how people talk and make casual jokes was completely lacking yesterday when Judge Jacqueline Davies upheld the conviction of Paul Chambers in a Doncaster court.

Chambers' crime, if you blinked and missed those 140 characters, was to post a frustrated message about snowbound Doncaster airport: "Crap! Robin Hood airport is closed. You've got a week and a bit to get your shit together otherwise I'm blowing the airport sky high!" Everyone along the chain of accountability up to the Crown Prosecution Service - the airport duty manager, the airport's security personnel, the Doncaster police - seems to have understood he was venting harmlessly. And yet prosecution proceeded and led, in May, to a conviction that was widely criticized both for its lack of understanding of new media and for its failure to take Chambers' lack of malicious intent into account.

By now, everyone has been thoroughly schooled in the notion that it is unwise to make jokes about bombs, plane crashes, knives, terrorists, or security theater - when you're in an airport hoping to get on a plane. No one thinks any such wartime restraint need apply in a pub or its modern equivalent, the Twitter/Facebook/online forum circle of friends. I particularly like Heresy Corner's complaint that the judgement makes it illegal to be English.

Anyone familiar with online writing style immediately and correctly reads Chambers' Tweet for what it was: a perhaps ill-conceived expression of frustration among friends that happens to also be readable (and searchable) by the rest of the world. By all accounts, the judge seems to have read it as if it were a deliberately written personal telegram sent to the head of airport security. The kind of expert explanation on offer in this open letter apparently failed to reach her.

The whole thing is a perfect example of the growing danger of our data-mining era: that casual remarks are indelibly stored and can be taken out of context to give an utterly false picture. One of the consequences of the Internet's fundamental characteristic of allowing the like-minded and like-behaved to find each other is that tiny subcultures form all over the place, each with its own set of social norms and community standards. Of course, niche subcultures have always existed - probably every local pub had its own set of tropes that were well-known to and well-understood by the regulars. But here's the thing they weren't: permanently visible to outsiders. A regular who, for example, chose to routinely indicate his departure for the Gents with the statement, "I'm going out to piss on the church next door" could be well-known in context never to do any such thing. But if all outsiders saw was a ten-second clip of that statement and the others' relaxed reaction that had been posted to YouTube, they might legitimately assume that pub was a shocking hotbed of anti-religious slobs. Context is everything.

The good news is that the people on the ground whose job it was to protect the airport read the message, understood it correctly, and did not overreact. The bad news is that when the CPS and courts did not follow their lead it opened up a number of possibilities for the future, all bad. One, as so many have said, is that anyone who now posts anything online while drunk, angry, stupid, or sloppy-fingered is at risk of prosecution - with the consequence of wasting huge a