Main

January 20, 2023

New music

The news this week that "AI" "wrote" a song "in the style of" Nick Cave (who was scathing about the results) seemed to me about on a par with the news in the 1970s that the self-proclaimed medium Rosemary Brown was able to take dictation of "new works" by long-dead famous composers. In that: neither approach seems likely to break new artistic ground.

In Brown's case, musicologists, psychologists, and skeptics generally converged on the belief that she was channeling only her own subconscious. AI doesn't *have* a subconscious...but it does have historical inputs, just as Brown did. You can say "AI" wrote a set of "song lyrics" if you want, but that "AI" is humans all the way down: people devised the algorithms and wrote the computer code, created the historical archive of songs on which the "AI" was trained, and crafted the prompt that guided the "AI"'s text generation. But "the machine did it by itself" is a better headline.

Meanwhile...

Forty-two years after the first one, I have been recording a new CD (more details later). In the traditional folk world, which is all I know, getting good recordings is typically more about being practiced enough to play accurately while getting the emotional performance you want. It's also generally about very small budgets. And therefore, not coincidentally, a whole lot less about sound effects and multiple overdubs.

These particular 42 years are a long time in recording technology. In 1980, if you wanted to fix a mistake in the best performance you had by editing it in from a different take where the error didn't appear, you had to do it with actual reels of tape, an edit block, a razor blade, splicing tape...and it was generally quicker to rerecord unless the musician had died in the interim. Here in digital 2023, the studio engineer notes the time codes, slices off a bit of sound file, and drops it in. Result! Also: even for traditional folk music, post-production editing has a much bigger role.
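As a toy sketch of what that digital edit amounts to (the sample rate and time codes here are invented for illustration): in a digital workstation, audio is just an array of samples, so a punch-in from another take is an array copy between two time codes.

```python
import numpy as np

# Toy sketch of a digital "splice": replace a flubbed passage in the
# best take with the same passage from an alternate take, by time code.
rate = 44100                                   # CD-quality sample rate
rng = np.random.default_rng(0)
best_take = rng.standard_normal(rate * 3)      # three seconds of "audio"
alt_take = rng.standard_normal(rate * 3)       # the alternate take

start, end = int(1.2 * rate), int(1.8 * rate)  # the flub: 1.2s to 1.8s
fixed = best_take.copy()
fixed[start:end] = alt_take[start:end]         # drop the clean bit in

# Outside the edit window, the best take is untouched.
print(np.array_equal(fixed[:start], best_take[:start]))  # → True
```

The razor blade and splicing tape did the same thing physically; the difference is that this version takes seconds and is infinitely undoable.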

Autotune, which has turned many a wavering tone into perfect pitch, was invented in 1997. The first time I heard about it - it alters the pitch of a note without altering the playback speed! - it sounded indistinguishable from magic. How was this possible? It sounded like artificial intelligence - but wasn't.

The big, new thing now, however, *is* "AI" (or what currently passes for it), and it's got nothing to do with outputting phrases. Instead, it's stem splitting - that is, the ability to take a music file that includes multiple instruments and/or voices, and separate out each one so each can be edited separately.

Traditionally, the way you do this sort of thing is to record each instrument and vocal separately, either laying them down one at a time or enclosing each musician/singer in their own soundproof booth, from which they can play together by listening to each other over headphones. For musicians who are used to singing and playing at the same time in live performance, it can be difficult to record separate tracks. But in recording them together, vocal and instrumental tracks tend to bleed into each other - especially when the instrument is something like an autoharp, whose soundboard is very close to the singer's mouth. Bleed means you can't fix a small vocal or instrumental error without messing up the other track.

With stem splitting, now you can. You run your music file through one of the many services that have sprung up, and suddenly you have two separated tracks to work with. It's being described to me as a "game changer" for recording. Again: sounds indistinguishable from magic.

This explanation makes it sound less glamorous. Vocals and instruments whose frequencies don't overlap can be split out using masking techniques. Where there is overlap, splitting relies on a model that has been trained on human-split tracks and that improves with further training. Still a black box, but now one that sounds like so many other applications of machine learning. Nonetheless, heard in action it's startling: I tried LALAL.AI on a couple of tracks, and the separation seemed perfect.
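The masking half of that explanation can be sketched in a few lines (with two invented pure tones standing in for a voice and an instrument; real stem splitters need the trained model precisely because real voices and instruments do overlap):

```python
import numpy as np

# Toy frequency masking: two "stems" occupy non-overlapping bands,
# so zeroing FFT bins on either side of a cutoff pulls them apart.
rate = 8000                           # samples per second
t = np.arange(rate) / rate            # one second of audio
low = np.sin(2 * np.pi * 220 * t)     # "instrument": 220 Hz tone
high = np.sin(2 * np.pi * 2000 * t)   # "voice": 2000 Hz tone
mix = low + high                      # the recorded, bled-together track

spectrum = np.fft.rfft(mix)
freqs = np.fft.rfftfreq(len(mix), d=1 / rate)
cutoff = 1000  # Hz: below is "instrument", above is "voice"

low_est = np.fft.irfft(np.where(freqs < cutoff, spectrum, 0), n=len(mix))
high_est = np.fft.irfft(np.where(freqs >= cutoff, spectrum, 0), n=len(mix))

# With no spectral overlap, the recovered stems match almost exactly.
print(np.max(np.abs(low_est - low)) < 1e-8,
      np.max(np.abs(high_est - high)) < 1e-8)  # → True True
```

The hard, interesting part - and the part that needed machine learning - is everything this sketch dodges: bins where both sources contribute energy at once.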

There are some obvious early applications of this. As the explanation linked above notes, stem splitting enables much finer sampling and remixing. A singer whose voice is failing - or who is unavailable - could nonetheless issue new recordings by laying their old vocal over a new instrumental track. And vice-versa: when, in 2002, Paul Justman wanted to recreate the Funk Brothers' hit-making session work for Standing in the Shadows of Motown, he had to rerecord from scratch to add new singers. Doing that had the benefit of highlighting those musicians' ability and getting them royalties - but it also meant finding replacements for the ones who had died in the intervening decades.

I'm far more impressed by the potential of this AI development than of any chatbot that can put words in a row so they look like lyrics. This is a real thing with real results that will open up a world of new musical possibilities. By contrast, "AI"-written song lyrics rely on humans' ability to conceive meaning where none exists. It's humans all the way up.


Illustrations: Nick Cave in 2013 (by Amelia Troubridge, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

December 2, 2022

Hearing loss

Some technologies fail because they aren't worth the trouble (3D movies). Some fail because the necessary infrastructure and underlying technologies aren't good enough yet (AI in the 1980s, pen computing in the 1990s). Some fail because the world goes another, simpler, more readily available way (Open Systems Interconnection). Some fail because they are beset with fraud (the fate that appears to be unfolding with respect to cryptocurrencies). And some fail even though they work as advertised and people want and use them, because they make no money to sustain their development for their inventors and manufacturers.

The latter appears to be the situation with smart speakers, which in 2015 were going to take over the world, and today, in 2022, are installed in 75% of US homes. Despite this apparent success, they are losing money even for market leaders Amazon (third) and Google (second), as Business Insider reported this week. Amazon's Worldwide Digital division, which includes Prime Video as well as Echo smart speakers and Alexa voice technology, lost $3 billion in the first quarter of this year alone, primarily due to Alexa and other devices. The division will now be the biggest target for the layoffs the company announced last week.

The gist: they thought smart speakers would be like razors or inkjet printers, where you sell the hardware at or below cost and reap a steady income stream from selling razor blades or ink cartridges. Amazon thought people would buy their smart speakers, see something they liked, and order the speaker to put through the purchase. Instead, judging from the small sample I have observed personally, people use their smart speakers as timers, radios, and enhanced remote controls, and occasionally to get a quick answer from Wikipedia. And that's it. The friends I watched order their smart speaker to turn on the basement lights and manage their shopping list have, as far as I could tell on a recent visit, developed no new uses for their voice assistant in three years of being locked up at home with it.

The system has developed a new feature, though. It now routinely puts the shopping list items on the wrong shopping list. They don't know why.

In raising this topic at The Overspill, Charles Arthur referred back to a 2016 Wired article summarizing venture capitalist Mary Meeker's assessment in her annual Internet Trends report that voice was going to take over the world and the iPhone had peaked. In slides 115-133, Meeker outlined her argument: improving accuracy would be a game-changer.

Even without looking at recent figures, it's clear voice hasn't taken over. People do use speech when their hands are occupied, especially when driving or when the alternative is to type painfully into their smartphone - but keyboards still populate everyone's desks, and the only people I know who use speech for data entry are people for whom typing is exceptionally difficult.

One unforeseen deterrent may be that privacy emerged as a larger issue than early prognosticators expected. Repeated stories have raised awareness that the price of being able to use a voice assistant at will is that microphones in your home listen to everything you say, waiting for their cue to send your speech to a distant server to parse. Rising consciousness of the power of the big technology companies has made more of us aware that smart speakers are designed more to fulfill their manufacturers' desires to intermediate and monetize our lives than to help us.

The notion that consumers would want to use Amazon's Echo for shopping appears seriously deluded with hindsight. Even the most dedicated voice users I know want to see what they're buying. Years ago, I thought that as TV and the Internet converged we'd see a form of interactive product placement in which it would be possible to click to buy a copy of the shirt a football player was wearing during a game or the bed you liked in a sitcom. Obviously, this hasn't happened; instead a lot of TV has moved to streaming services without ads, and interactive broadcast TV is not a thing. But in *that* integrated world voice-activated shopping would work quite well, as in "Buy me that bed at the lowest price you can find", or "Send my brother the closest copy you can find of Novak Djokovic's dark red sweatshirt, size large, as soon as possible, all cotton if possible."

But that is not our world, and in our world we have to make those links and look up the details for ourselves. So voice does not work for shopping beyond adding items to lists. And if that doesn't work, what other options are there? As Ron Amadeo writes at Ars Technica, the queries for which Alexa is frequently used can't be monetized, and customers showed little interest in using Alexa to interact with other companies such as Uber or Domino's Pizza. And even Google, which is also cutting investment in its voice assistant, can't risk alienating consumers by using its smart speaker to play ads. Only Apple appears unaffected.

"If you build it, they will come," has been the driving motto of a lot of technological development over the last 30 years. In this case, they built it, they came, and almost everyone lost money. At what point do they turn the servers off?


Illustrations: Amazon Echo Dot.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter and/or Mastodon.

November 18, 2022

Being the product

The past week of Twitter has been marked by a general sense of waiting for the crash, heightened because no one knows when the bad thing will happen or what form it will take. On Twitter itself, I see everyone mourning the incoming loss and setting up camp elsewhere; in the professional media, journalists are frantically trying to report on what's going on at HQ, where there is now no communications team and precious few engineers.

As noted here last week, it is definitely not so simple as Twitter's loss is Mastodon's/Discord's/SomeOtherSite's gain.

The general sense of anxiety feels like a localized version of the years of the Trump presidency - that is, people logging in constantly to check, "What's he done now?" Only the "he" is of course new owner Elon Musk, and the "what" is stuff like this: a team has been fired, someone crucial has quit, there's been a new order to employees ("check this box by 5pm or you're fired!"), there's been yet another change to the system of blue ticks that may or may not verify a person's identity, or two-factor authentication via SMS appears to have been disabled shortly after the announced shutdown of "20% of microservices". This kind of thing makes everyone jumpy. Every tiny glitch could be the first sign that Twitter is crumbling around the edges before cascading into failure. Will the process look like HAL losing its marbles in the movie 2001: A Space Odyssey? Or will it just go black like the end of The Sopranos?

I have never felt so conscious of my data: 15 years of tweets and direct messages all held hostage inside a system with a renegade owner no one trusts. Deleting it feels like killing my past; leaving it in place teems with risks.

The risk level has been abruptly raised by the departure of various security and privacy personnel from Twitter's staff, which led Michael Veale to warn that the platform should be regarded as dangerously vulnerable and insecure. Veale went on to provide instructions for using the law (that is, the General Data Protection Regulation) rather than just Twitter's tools, to delete your data.

Some of my more cautious friends have been regularly deleting their data all along - at the end of every couple of weeks, or every six months, mostly to ensure they can't suddenly become a pariah for something they posted casually five years ago. (It turns out this is a function that Mastodon will automate through user settings.) But, as Veale asks, how do you know Twitter is really deleting the data? Hence his suggestion of applying the law: it gives your request teeth. But is there anyone left at Twitter to respond to legal requests?

The general sense of uncertainty is heightened by things like the reports I saw of strange behavior in response to requests to download account archives: instead of just asking for two-factor authentication before proceeding, the site sent these users to the help center and a form demanding government ID. There seem to be a number of these little weirdnesses, and they're raising users' overall distrust of the system and the sense that we're all just waiting for the thing to break and our data to become an asset in a fire sale - or for a major hack in which all our data gets auctioned on the dark web.

"If you're not paying for the product, you're the product," goes the saying (attribution uncertain). Right now, it feels like we're waiting to find out our product status.

Meanwhile, Apple has spent years now promoting its products by claiming they provide better privacy than the alternatives. It is currently helping destroy the revenue base of Meta (owner of Instagram, Facebook, and WhatsApp) by allowing users to opt to block third-party trackers on its devices. At The Drum, Chris Sutcliffe cites estimates that 62% of Apple users have done so; at Forbes, Daniel Newman reported in February that Meta projected that the move would cost the company $10 billion in lost ad sales this year. The financial results it's announced since have been accordingly grim.

Part of the point of this is that Apple's promise appeared to be that the money its customers pay for hardware and services also buys them privacy. This week, Tom Germain reported at Gizmodo that Apple's own apps continue to harvest data about users' every move even when those users have - they thought - turned data collection off.

"Even if you're paying for the product, you're the product," Cory Doctorow wrote on discovering this. Double-dipping is familiar in other contexts. But here Apple has broken the pay-with-data bargain that made the web. It may live to regret this; collecting data to which it has exclusive access while shutting down competitors has attracted the attention of German antitrust regulators.

If that's where the commercial world is going, the appeal of something like Mastodon, where we are *not* the product, and where accounts can be moved to other interoperable servers at any time, is obvious. But, as I've written before about professional media, the money to pay for services and servers has to come from *somewhere*. If we're not going to pay with data, then...how?


Illustrations: Twitter flies upside down.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 11, 2022

Moving day

"On the Internet, your home always leaves you," someone observed on Twitter some months back.

Probably everyone who's been online for any length of time has had this experience. That site you visit every day, the one that's full of memories and familiar people, suddenly is no more. Usually, the problem is a new owner, who buys it and closes it down (Television Without Pity, Geocities), or alters it beyond recognition (CompuServe). Or its paradigm falls out of fashion and users leach away until the juice is gone, the fate of many of the early text-based systems.

As the world and all have been reporting - because so many journalists make their online homes there - Twitter is in trouble. A new owner with poor impulse control and a new idea every day - Twitter will be a financial service! (like WeChat?) Twitter will be the world's leading source of accurate information! (like Wikipedia?) Twitter can do multimedia! (like TikTok?) - is driving out what staff he hasn't fired.

The result, Chris Stokel-Walker predicts, will be escalating degradation of the infrastructure - and possibly, Mike Masnick writes, violations of the company's 2011 20-year consent decree with the US Federal Trade Commission, which could ultimately cost the company billions, in addition to the $13 billion in debt Musk added to the company's existing debt load in order to purchase it.

All of that - and the unfolding sequelae Maria Farrell details - will no doubt be a widely used case study at business schools someday.

For me, Twitter has been a fantastic resource. In the 15 years since I created my account, Twitter is where I've followed breaking news, connected with friends, found expert communities. Tight clusters are, Peter Coy finds at the New York Times, why Twitter has been unexpectedly resilient despite its lack of profitability.

But my use of Twitter has nothing in common with its use by those with millions of followers. At that level, it's a broadcast medium. My own experience of chatting with friends or responding randomly to strangers' queries is largely closed to them. Like traveling on the subway, they *can* do it, but not the way the rest of us can. For someone in that position, Twitter is a large audience that fortuitously includes journalists, politicians, and entertainers. The writer Stephen King had the right reaction to the suggestion that verified accounts should pay $20 a month (since reduced to $8) for the privilege: screw *that*. Though even average Twitter users will resist paying to be sold to the advertisers who ultimately fund the service.

Unusually, a number of alternative platforms are ready and waiting for disaffected Twitter users to experiment with. Chief among them is Mastodon, which looks enough like Twitter to suggest an easy learning curve. There are, however, profound differences, most of them good. Mastodon is a protocol, not a site; like the web, email, or Usenet, anyone can set up a server ("instance") using open source software and connect to other instances. You can form a community on a local instance - or you can use your account as merely a convenient address from which to access postings by users at dozens of other instances. One consequence of this is that hashtags are very much more important in helping people find each other and the postings they're interested in.

Over the last week, I've seen a lot of people trying to be considerate of the natives and their culture, most particularly that they are much more sensitive about content warnings. The reality remains, though, that Mastodon's user base has doubled in a week, and that level of influx will inevitably bring change - if they stay and post, and particularly if many of them adopt a bit of software that allows automated cross-posting between the two services.

All of this has happened without a commercial interest: no one owns Mastodon, it has no ads, and no one is recruiting Twitter users. But that right there may be the biggest problem: the huge influx of new users doesn't bring revenue or staff to help manage it. This will be a big, unplanned test of the system's resilience.

Many are now predicting Twitter's total demise, not least because new owner Elon Musk himself has told employees that the company may become bankrupt due to its burn rate (some of which is his own fault, as previously noted). Barring the system going offline, though, habit is a strong motivator, and it's more likely that many people will treat the new accounts they've set up as "in case of need".

But some will move, because unlike other such situations, whole communities can move together to Mastodon, aided by its ability to ingest lists. I'm seeing people compile lists of accounts in various academic fields, of journalists, of scientists. There are even tools that scan the bios of your Twitter contacts for Mastodon addresses and compile them into a personal list, which, again, can be easily imported.
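The core of such a bio-scanning tool is simple enough to sketch (the bios and the matching pattern here are invented for illustration; the real tools are more thorough about the many ways people write their addresses):

```python
import re

# Hypothetical Twitter bios; a Mastodon address looks like @user@instance.tld.
bios = [
    "Journalist. Knitting. @wendyg@mastodon.social",
    "Tennis fan, no forwarding address",
    "Find me at @someone@mstdn.io from now on",
]

# Match @user@host patterns: word characters, dots, hyphens.
pattern = re.compile(r"@[\w.-]+@[\w.-]+\.\w+")
handles = [h for bio in bios for h in pattern.findall(bio)]
print(handles)  # → ['@wendyg@mastodon.social', '@someone@mstdn.io']
```

The resulting list is exactly the kind of thing Mastodon's import feature can ingest, which is what makes moving as a community feasible.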

If Mastodon works for Twitter's hundreds of millions, there is a big upside: communities don't have to depend for their existence on the grace and favor of a commercial owner. Ultimately, the reason Musk now owns Twitter is he offered shareholders a lucrative exit. They didn't have to care about *us*. And they didn't.

Illustrations: Twitter versus lettuce (via Sheon Han on Twitter).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter or Mastodon.

August 26, 2022

Good-enough

A couple of months on from Amazon's synthesized personal voices, it was intriguing to read this week, in the Financial Times ($) (thanks to Charles Arthur's The Overspill), that several AI startups are threatening voice actors' employment prospects. Actors Equity is campaigning to extend legal protection to the material computers synthesize from actors' voices and likenesses that, as Equity puts it, "reproduces performances without generating a 'recording' or a 'copy'." The union's survey found that 65% of performance artists and 93% of audio artists thought AI voices pose a threat to their livelihood.

Voices gives a breakdown of their assignments. Fortuitously, most jobs seek "real person" acting - exactly where voice synthesizers fail. For many situations, though - railway announcements, customer service, marketing campaigns - "real person" is overkill. Plus, AI voices, the FT notes, "can be made to say anything at the push of a button". No moral qualms need apply.

We have seen this movie before. This is a more personalized version of appropriating our data in order to develop automated systems - think Google's language translation, developed from billions of human-translated web pages, or the cooption of images posted on Flickr to build facial recognition systems later used to identify deportees. More immediately pertinent are the stories of Susan Bennett, the actress whose voice became Siri in 2011, and Jen Taylor, the voice of Microsoft's Cortana. Bennett reportedly had no idea that the phrases and sentences she'd spent so many hours recording were in use until a friend emailed. Shouldn't she have the right to object - or to royalties?

Freelance writers have been here: the 1990s saw an industry-wide shift from first-rights contracts under which we controlled our work and licensed one-time use to all-rights contracts that awarded ownership in perpetuity to a shrinking number of conglomerating publishers. Photographers have been here, watching as the ecosystem of small, dedicated agencies that cared about them got merged into Corbis and Getty while their work opportunities shrank under the confluence of digital cameras, smartphones, and social media. Translators, especially, have been here: while the most complex jobs require humans, for many uses machine translation is good enough. It's actors' "good-enough" ground that is threatened.

Like so many technologies, personalized voice synthesis started with noble intentions - to help people who'd lost their own voices to injury or illness. The new crop of companies the FT identifies are profit-focused; as so often, it's not the technology itself, but the rapidly decreasing cost that's making trouble.

First historical anecdote: Steve Williams, animation director for the 1991 film Terminator 2, warned the London Film Festival that it would soon be impossible to distinguish virtual reality from physical reality. Dead presidents would appear live on the news and Cary Grant would make new movies. Obvious result: just as musicians compete against the entire back catalogue of recorded music, might actors now be up against long-dead stars when auditioning for a role?

Second historical anecdote: in 1993, Silicon Graphics, then leading the field of computer graphics, in collaboration with sensor specialist SimGraphics, presented VActor, a system that captured measurements of body movements from live actors and turned them into computer simulations. Creating a few minutes of the liquid metal man (Robert Patrick) in Terminator 2, although a similar process, took 50 animators a year. VActor was faster and much cheaper at producing a reusable library of "good-enough" expressions and body movements. At the time, the company envisioned the system's use for presentations at exhibitions and trade shows and even talk shows. Prior art: Max Headroom, 1987-1988. In 2022, SimGraphics is still offering "real-time interactive characters" - these days, for the metaverse. Its website says VActor, now "AI-VActor", is successfully animating Mario.

Third historical anecdote: in 1997, Fred Astaire, despite being dead at the time, appeared in ads performing some of his most memorable dance moves with a Dirt Devil vacuum cleaner. The ad used CGI to replace two of his dance partners - a mop, a hat rack. If old Cary Grant did have career prospects, they were now lost: the public *hated* the ad. Among the objectors was Astaire's daughter, who returned one of the company's vacuum cleaners with a letter that said, in part, "Yes, he did dance with a mop but he wasn't selling that mop and it was his own idea." The public at large agreed: Astaire's extraordinary artistry deserved better than an afterlife as a shill.

Today, voice actors really could find themselves competing for work against synthesized versions of themselves. Equity's approach seems to be to push to extend copyright so that performers will get royalties for future reuse. Actors might, however, be better served by the personality rights granted in some jurisdictions (not the UK). This is the right that helped Cheers actors George Wendt and John Ratzenberger win when they sued a company that created robots that looked like them, and the one Bette Midler used when the singer in an ad fooled people into thinking she herself was singing.

The bottom line: a tough profession looks like getting even tougher. As Michael (Dustin Hoffman) says in Tootsie (written by Murray Schisgal and Larry Gelbart), "I don't believe in Hell. I believe in unemployment, but I don't believe in Hell."


Illustrations: The Big Bang Theory's Rajesh (Kunal Nayyar) tries to date Siri (Becky O'Donahue).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 12, 2022

Nebraska story

This week saw the arrest of a Nebraska teenager and her mother, who are charged with multiple felonies for terminating the 17-year-old's pregnancy at 28 weeks and burying (and, apparently, trying to burn) the fetus. Allegedly, this was a home-based medication abortion...and the reason the authorities found out is that following a tip-off the police got a search warrant for the pair's Facebook accounts. There, the investigators found messages suggesting the mother had bought the pills and instructed her daughter how to use them.

Cue kneejerk reactions. "Abortion" is a hot button. Facebook privacy is a hot button. Result: in reporting these gruesome events most media have chosen to blame this horror story on Facebook for turning over the data.

As much as I love a good reason to bash Facebook, this isn't the right take.

Meta - Facebook's parent - has responded to the stories with a "correction" that says the company turned over the women's data in response to valid legal warrants issued by the Nebraska court *before* the Supreme Court ruling. The company adds, "The warrants did not mention abortion at all."

What the PR folks have elided is that both the Supreme Court's Dobbs decision, which overturned Roe v. Wade, and the wording of the warrants are entirely irrelevant. It doesn't *matter* that this case was about an abortion. Meta/Facebook will *always* turn over user data in compliance with a valid legal warrant issued by a court, especially in the US, its home country. So will every other major technology company.

You may dispute the justice of Nebraska's 2019 Pain-Capable Unborn Child Act, under which abortion is illegal after 20 weeks from fertilization (22 weeks in normal medical parlance). But that's not Meta's concern. What Meta cares about is legal compliance and the technical validity of the warrant. Meta is a business, not a social justice organization, and while many want Mark Zuckerberg to use his personal judgment and clout to refuse to do business with oppressive regimes (by which they usually mean China, or Myanmar), do you really want him and his company to obey only laws they agree with?

There will be many much worse cases to come, because states will enact and enforce the vastly more restrictive abortion laws that Dobbs enables, and there will be many valid legal warrants that force them to hand data to police bent on prosecuting people in excruciating pregnancy-related situations - and in many more countries. Even in the UK, where (except for Northern Ireland) abortion has been mostly non-contentious for decades, lurking behind the 1967 law which legalized abortion until 24 weeks is an 1861 statute under which abortion is criminal. That law, as Shanti Das recently wrote at the Guardian, has been used to prosecute dozens of women and a few men in the last decade. (See also Skeptical Inquirer.)

So if you're going to be mad at Facebook, be mad that the platform hadn't turned on end-to-end encryption for its messaging. That, as security engineer Alec Muffett has been pointing out on Twitter, would have protected the messages against access by both the system itself and by law enforcement. At the Guardian, Johana Bhuiyan reports the company is now testing turning on end-to-end encryption by default. Doubtless, soon to be followed by law enforcement and governments demanding special access.

Others advocate switching to other encrypted messaging platforms that, like Signal, provide a setting that allows you to ensure that messages automatically vaporize themselves after a specified number of days. Such systems retain no data that can be turned over.

It's good advice, up to a point. For one thing, it ignores most people's preference for using the familiar services their friends use. Adopting a second service just for, say, medical contacts adds complications; getting everyone you know to switch is almost impossible.

Second, it's also important to remember the power of metadata - data about data, which includes everything from email headers to search histories. "We kill people based on metadata," former NSA head Michael Hayden said in 2014 in a debate on the constitutionality of NSA surveillance. (But not, he hastened to add, metadata collected from *Americans*.)

Logs of who has connected to whom and how frequently are often more revealing than the content of the messages sent back and forth. For example: the message content may be essentially meaningless to an outsider ("I can make it on Monday at two") until the system logs tell you that the sender is a woman of childbearing age and the recipient is an abortion clinic. This is why so many governments have favored retaining Internet connection data. Governments cite the usual use cases - organized crime, drug dealers, child abusers, and terrorists - when pushing for data retention, and they are helped by the fact that most people instinctively quail at the thought of others reading the *content* of their messages but overlook metadata's significance. That failure to grasp the importance of metadata has helped enable mass Internet surveillance.
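As a toy illustration of the point above - that bare connection records can tell a story no single message body would - here is a minimal sketch. All names and records are invented for illustration; real call-detail or Internet connection logs carry far more fields (timestamps, locations, durations).

```python
from collections import Counter

# Hypothetical connection records: (caller, callee) pairs only - no content.
records = [
    ("alice", "clinic"), ("alice", "clinic"), ("alice", "pharmacy"),
    ("bob", "pizzeria"),
]

# Simply counting contacts per pair already sketches a pattern:
# repeated contact with a clinic, without reading a single message.
contact_counts = Counter(records)
print(contact_counts[("alice", "clinic")])
```

The privacy concern is that this kind of aggregation is cheap, automatable, and scales to entire populations.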

The net result of all this is to make surveillance capitalism-driven technology services dangerous for the 65.5 million women of childbearing age in the US (2020). That's a fair chunk of their most profitable users, a direct economic casualty of Dobbs.


Illustrations: Facebook.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 5, 2022

Painting by numbers

heron-thames-nw.JPGMy camera can see better than I can. I don't mean that it can take better pictures than I can because of its automated settings, although this is also true. I mean it can capture things I can't *see*. The heron above, captured on a grey day along the Thames towpath, was pretty much invisible to me. I was walking with a friend. Friend pointed and said, "Look. A heron." I pointed the camera more or less where she indicated, pushed zoom to maximum, hit the button, and when I got home there it was.

If the picture were a world-famous original, there might be a squabble about who owned the copyright. I pointed the camera and pushed the button, so in our world the copyright belongs to me. But my friend could stake a reasonable claim: without her, I wouldn't have known where or when to point the camera. The camera company (Sony) could argue, quite reasonably, that the camera and its embedded software, which took years to design and build, did all the work, while my entire contribution took but a second.

I imagine, however, that at the beginning of photography artists who made their living painting landscapes and portraits might have seen reason to be pretty scathing about the notion that photography deserved copyright at all. Instead of working for months to capture the right light and nuances...you just push a button? Where's the creative contribution in that?

This thought was inspired by a recent conversation on Twitter between two copyright experts - Lilian Edwards and Andres Guadamuz - who have been thinking for years about the allocation of intellectual property rights when an AI system creates or helps to create a new work. The proximate cause was Guadamuz's stunning experiments generating images using Midjourney.

If you try out Midjourney's image-maker via the bot on its Discord server, you quickly find that each detail you add to your prompt adds detail and complexity to the resulting image; an expert at "prompt-craft" can come extraordinarily close to painting with the generation system. Writing prompts to control these generation systems and shape their output is becoming an art in itself, and an expertise that will become highly valuable. Guadamuz calls it "AI whispering".

Guadamuz touches on this in a June 2022 blog posting, in which he asks about the societal impact of being able to produce sophisticated essays, artworks, melodies, or software code based on a few prompts. The best human creators will still be the crucial element - I don't care how good you are at writing prompts, unless you're the human known as Vince Gilligan you+generator are not going to produce Breaking Bad or Better Call Saul. However, generation systems *might*, as Guadamuz says, produce material that's good enough for many contexts, given that it's free (ish).

More recently, Guadamuz considers the subject he and Edwards were mulling on Twitter: the ownership of copyright in generated images. Guadamuz had been reading the generators' terms and conditions. OpenAI, owner of DALL-E, specifies that users assign the copyright in all "Generations" its system produces, which it then places in the public domain while granting users a permanent license to do whatever they want with the Generations their prompts inspire. Midjourney takes the opposite approach: the user owns the generated image, and licenses it back to Midjourney.

What Guadamuz found notable was the trend toward assuming that generated images are subject to copyright, even though lawyers have argued that they can't be and fall into the public domain. Earlier this year, the US Copyright Office rejected a request to register a copyright in a work created by an AI. The UK is an outlier, awarding copyright in computer-generated works to the "person by whom the arrangements necessary for the creation of the work are undertaken". This is ambiguous: is that person the user who wrote the prompt or the programmers who trained the model and wrote the code?

Much of the discussion revolved around how that copyright might be divided up. Should it be shared between the user and the company that owns the generating tool? We don't assign copyright in the words we write to our pens or word processors; but as Edwards suggested, the generator tool is more like an artist for hire than a pen. Of course, if you hire a human artist to create an image for you, contract terms specify who owns the copyright. If it's a work made for hire, the artist retains no further interest.

So whatever copyright lawyers say, the companies who produce and own these systems are setting the norms as part of choosing their business model. The business of selling today's most sophisticated cameras derives from an industry that grew up selling physical objects. In a more recent age, they might have grown up selling software add-on tools on physical media. Today, they may sell subscriptions and tiers of functionality. Nonetheless, if a company's leaders come to believe there is potential for a low-cost revenue stream of royalties for reusing generated images, they will go for it. Corbis and Getty have already pioneered automated copyright enforcement.

For now, these terms and conditions aren't about developing legal theory; the companies just don't want to get sued. These are cover-your-ass exercises, like privacy policies.


Illustrations: Grey heron hanging out by the Thames in spring 2021.


July 15, 2022

Online harms

boris-johnson-on-his-bike-European-Cycling-Federation-370.jpgAn unexpected bonus of the gradual-then-sudden disappearance of Boris Johnson's government, followed by his own resignation, is that the Online Safety bill is being delayed until after Parliament's September return with a new prime minister and, presumably, cabinet.

This is a bill almost no one likes: child safety campaigners think it doesn't go far enough; digital and human rights campaigners - Big Brother Watch, Article 19, Electronic Frontier Foundation, Open Rights Group, Liberty, a coalition of 16 organizations (PDF) - oppose it because it threatens freedom of expression and privacy while failing to tackle genuine harms such as the platforms' business model; and technical and legal folks object because it's largely unworkable.

The DCMS Parliamentary committee sees it as wrongly conceived. The UK Independent Reviewer of Terrorism Legislation, Jonathan Hall QC, says it's muddled and confused. Index on Censorship calls it fundamentally broken, and The Economist says it should be scrapped. The minister whose job it has been to defend it, Nadine Dorries (C-Mid Bedfordshire), remains in place at the Department for Culture, Media, and Sport, but her insistence that resigning-in-disgrace Johnson was brought down by a coup probably won't do her any favors in the incoming everything-that-goes-wrong-was-Johnson's-fault era.

In Wednesday's Parliamentary debate on the bill, the most interesting speaker was Kirsty Blackman (SNP-Aberdeen North), whose Internet usage began 30 years ago, when she was younger than her children are now. Among her passionate pleas that her children should be protected from some of the high-risk encounters she experienced was this: "Every person, nearly, that I have encountered talking about this bill who's had any say over it, who continues to have any say, doesn't understand how children actually use the Internet." She called this the bill's biggest failing. "They don't understand the massive benefits of the Internet to children."

This point has long been stressed by academic researchers Sonia Livingstone and Andy Phippen, both of whom actually do talk to children. "If the only horse in town is the Online Safety bill, nothing's going to change," Phippen said at last week's Gikii, noting that Dorries' recent cringeworthy TikTok "rap" promoting the bill focused on platform liability. "The liability can't be only on one stakeholder." His suggestion: a multi-pronged harm reduction approach to online safety.

UK politicians have publicly wished to make "Britain the safest place in the world to be online" all the way back to Tony Blair's 1997-2007 government. It's a meaningless phrase. Online safety - however you define "safety" - is like public health; you need it everywhere to have it anywhere.

Along those lines, "Where were the regulators?" Paul Krugman asked in the New York Times this week, as the cryptocurrency crash continues to unfold. The cryptocurrency market, which is now down to $1 trillion from its peak of $3 trillion, is recapitulating all the reasons why we regulate the financial sector. Given the ongoing collapses, it may yet fully vaporize. Krugman's take: "It evolved into a sort of postmodern pyramid scheme". The crash, he suggests, may provide the last, best opportunity to regulate it.

The wild rise of "crypto" - and the now-defunct Theranos - was partly fueled by high-trust individuals who boosted the apparent trustworthiness of dubious claims. The same, we learned this week, was true of Uber from 2014 to 2017. Based on the Uber files, 124,000 documents provided by whistleblower Mark MacGann, a lobbyist for Uber from 2014 to 2016, the Guardian exposes the falsity of Uber's claims that its gig economy jobs were good for drivers.

The most startling story - which transport industry expert Hubert Horan had already published in 2019 - is the news that the company paid academic economists six-figure sums to produce reports it could use to lobby governments to change the laws it disliked. Other things we knew about - for example, Greyball, the company's technology for denying rides to regulators and police so they couldn't document Uber's regulatory violations, and Uber staff's abuse of customer data - are now shown to have been more widely used than we knew. Further appalling behavior, such as that of former CEO Travis Kalanick, who was ousted in 2017, has been thoroughly documented in the 2019 book, Super Pumped, by Mike Isaac, and the 2022 TV series based on it, Super Pumped.

But those scandals - and Thursday's revelation that 559 passengers are suing the company for failing to protect them from rape and assault by drivers - aren't why Horan described Uber as a regulatory failure in 2019. For years, he has been indefatigably charting Uber's eternal unprofitability. In his latest, he notes that Uber has lost over $20 billion since 2015 while cutting driver compensation by 40%. The company's share price today is less than half its 2019 IPO price of $45 - and a third of its 2021 peak of $60. The "misleading investors" kind of regulatory failure.

So, returning to the Online Safety bill, if you undermine existing rights and increase the large platforms' power by devising requirements that small sites can't meet *and* do nothing to rein in the platforms' underlying business model...the regulatory failure is built in. This pause is a chance to rethink.

Illustrations: Boris Johnson on his bike (European Cyclists Federation via Wikimedia).


April 22, 2022

The new cable

Grace and Frankie.png"It's become the new cable," Andrew Lawrence writes about Netflix at the Guardian. He is expressing the theory that it's now Netflix's turn to suffer the fate of TV cable packages, which consumers have been cutting in favor of streaming, because, in his view, its content library has become stale, flat, and unprofitable. Ouch.

Lawrence was responding to Netflix's Tuesday evening announcement that the first quarter brought a loss of 200,000 subscribers and a projected second quarter loss of 2 million. About 700,000 of those were sacrificed when the company quit Russia as part of economic sanctions (some Russian subscribers are suing over this).

Overnight, Netflix's shares dropped by 35%; having peaked at $700.99 on November 17, 2021, Thursday they closed around $218. One hedge fund sold off its 7% stake. Puncturing expectations of an ever-expanding future shrank Netflix's shares to something closer to their real value.

In the US especially, Netflix's trials may signal the beginning of a new industry phase. In Britain, Netflix's biggest competitors remain the free-to-air BBC (for which we all must pay), ITV, and Channel 4, all of which commission world-class programming, plus, especially among younger people, YouTube. In the US, veteran screenwriter Ken Levine commented last week, broadcast networks are moving flagship content to their streaming arms. Eventually, he predicted, broadcast networks will "become the equivalent of the old neighborhood cineplex showing first run films a month after they've run everywhere else."

Another maybe-signal: on Thursday, Warner Bros Discovery (following a just-completed merger) announced it will close its month-old streaming platform CNN+ on April 30. At Axios, Sara Fischer reports that as of Tuesday the service had 150,000 subscribers, and that new owner WBD prefers to build HBO Max as a unified service.

I see this as a signal because the underlying question is: how many streaming services can people afford? Most of the cable cord-cutting Lawrence alluded to is for cost/value reasons.

Last week, Mark Sweney reported at the Guardian that due to the cost-of-living crisis the number of UK households that pay for at least one streaming service fell by 215,000 in the first quarter. Many still see Netflix as a "must-have"; first chopped are newer arrivals - Disney+ in particular. Amazon subscribers are also more likely to stay, perhaps because of Prime delivery. We'd guess also that the removal of pandemic restrictions coupled with warmer weather means people are going out more, which eats into both available time and entertainment budgets, and resuming commuters are rediscovering being time-stressed and cash-strapped.

Netflix has plans for recovery: it intends to create lower-priced subscription tiers part-subsidized by advertising and to crack down on the 100 million households it believes are sharing passwords instead of buying their own subs. The latter sounds like the next phase of the file-sharing wars, from which companies' reputations never emerged well. In any event, it's unlikely Netflix will ever again see the adoption rates of the last ten years. It can put prices up for its ad-free tiers; it can (and almost certainly will at some point) pay artists less. In 2019, Amazon's and Netflix's outlays on talent led monopoly specialist Matt Stoller to call both companies predatory.

In order to build its own library of original content (the stuff Lawrence complained about), Netflix loaded up with as much as $16 billion in debt (at peak), apparently successfully. In January 2021 it announced an end to further borrowing because its subscriber revenues were now enough to support both operating costs and content investment. However, the company remains vulnerable to interest rate rises, given it still owes $14.5 billion.

At the Guardian, Alex Hern notes that Netflix, unlike competitors Amazon, Apple, and Disney, offers no news or sports, which people *will* pay to consume in real time, but adds that it has a gaming service for subscribers. Based on the complaints I see from subscribers, Netflix could also make its customers happier by improving its interface, particularly to aid content discovery.

The moment of peak streaming was always going to come. It's sooner because of the pandemic; it's later because the traditional broadcasters and media companies took so long to catch up with the technology companies who were the first movers.

For now, content is king, and all these companies hope their exclusive catalogues are sufficiently unique selling points to build their subscriber base. Anyone who was drawn to Netflix by Friends or The Office must now go elsewhere. Making new hits is *hard*. As Jeff Bezos recently learned, you can't make a new Game of Thrones by following a checklist.

Longer-term, the problem they all have is that no one cares about them. But we do care if every new series requires an extensive search and a new subscription. Even given apps like JustWatch, which find the best-priced option, piracy's single interface is far easier.

At a guess, there are three main future possibilities: the streaming services can consolidate, partner into something like cable packages, or open up content licensing and compete on pricing, features, absence of ads, interface design, and technical quality. Whatever its competitors do, Netflix's wild growth phase is over.


Illustrations: Lily Tomlin and Jane Fonda in the Netflix series Grace and Frankie.

net.wars does not accept guest posts, and does not accept payment, even in kind, to include links or "share resources".

March 25, 2022

Dangerous corner

War_damages_in_Mariupol,_12_March_2022_(01).jpgIf there is one thing the Western world has near-universally agreed in the last month, it's that in the Russian invasion of Ukraine, the Ukrainians are the injured party. The good guys.

If there's one thing that privacy advocates and much of the public agree on, it's that Clearview AI, which has amassed a database of (it claims) 10 billion facial images by scraping publicly accessible social media without the subjects' consent and sells access to it to myriad law enforcement organizations, is one of the world's creepiest companies. This assessment is exacerbated by the fact that the company and its CEO refuse to see anything wrong about their unconsented repurposing of other people's photos; it's out there for the scraping, innit?

Last week, Reuters reported that Clearview AI was offering Ukraine free access to its technology. Clearview's suggested uses: vetting people at checkpoints; debunking misinformation on social media; reuniting separated family members; and identifying the dead. Clearview's CEO, Hoan Ton-That, told Reuters that the company has 2 billion images of Russians scraped from Russian Facebook clone VKontakte.

This week, it's widely reported that Ukraine is accepting the offer. At Forbes, Tom Brewster reports that Ukraine is using the technology to identify the dead.

Clearview AI has been controversial ever since January 2020, when Kashmir Hill reported its existence in the New York Times, calling it "the secretive company that might end privacy as we know it". Social media sites LinkedIn, Twitter, and YouTube all promptly sent cease-and-desist notices. A month later, Kim Lyons reported at The Verge that its 2,200 customers included the FBI, Interpol, the US Department of Justice, Immigration and Customs Enforcement, a UAE sovereign wealth fund, the Royal Canadian Mounted Police, and college campus police departments.

In May 2021, Privacy International filed complaints in five countries. In response, Canada, Australia, the UK, France, and Italy have all found Clearview to be in breach of data protection laws and ordered it to delete all the photos of people that it has collected in their territories. Sweden, Belgium, and Canada have declared law enforcement use of Clearview's technology to be illegal.

Ukraine is its first known use in a war zone. In a scathing blog posting, Privacy International says, "...the use of Clearview's database by authorities is a considerable expansion of the realm of surveillance, with very real potential for abuse."

Brewster cites critics, who lay out familiar privacy issues. Misidentification in a war zone could lead to death if a live soldier's nationality is wrongly assessed (especially common when the person is non-white) and unnecessary heartbreak for dead soldiers' families. Facial recognition can't distinguish civilians and combatants. In addition, the use of facial recognition by the "good guys" in a war zone might legitimize the technology. This last seems to me unlikely; we all recognize the difference between what's acceptable in peacetime and in an extreme context. The issue here is the *company*, not the technology, as PI accurately pinpoints: "...it seems no human tragedy is off-limits to surveillance companies looking to sanitize their image."

Jack McDonald, a senior lecturer in war studies at King's College London who researches the relationship between ethics, law, technology, and war, sees the situation differently.

Some of the fears Brewster cites, for example, are far-fetched. "They're probably not going to be executing people at checkpoints." If facial recognition finds a match in those situations, they'll more likely make an arrest and do a search. "If that helps them to do this, there's a very good case for it, because Russia does appear to be flooding the country with saboteurs." Cases of misidentification will be important, he agrees, but consider the scale of harm in the conflict itself.

McDonald notes, however, that the use of biometrics to identify refugees is an entirely different matter and poses huge problems. "They're two different contexts, even though they're happening in the same space."

That leaves the use Ukraine appears to be most interested in: identifying dead bodies. This, McDonald explains, represents a profound change from the established norms, under which identification of the dead is embedded in social and institutional structures and has typically been closely guarded. Facial recognition offers the possibility of doing identification at scale, even though its standard of certainty is much lower. Either way, the people making the identification typically have to rely on photographs taken elsewhere in other contexts, along with dental records and, if all else fails, public postings.

The reality of social media is already changing the norms. In this first month of the war, Twitter users posting pictures of captured Russian soldiers are typically reminded that it is technically against the Geneva Convention to do so. The extensive documentation - video clips, images, first-person reports - that is being posted from the conflict zones on services like TikTok and Twitter is a second front in its own right. In the information war, using facial recognition to identify the dead is strategic.

This is particularly true because of censorship in Russia, where independent media have almost entirely shut down and citizens have only very limited access to foreign news. Dead bodies are among the only incontrovertible sources of information that can break through the official denials. The risk that inaccurate identification could fuel Russian propaganda remains, however.

Clearview remains an awful idea. But if I thought it would help save my country from being destroyed, would I care?


Illustrations: War damage in Mariupol, Ukraine (Ministry of Internal Affairs of Ukraine, via Wikimedia).


February 4, 2022

Consent spam

openRTB.pngThis week the system of adtech that constantly shoves banners in our face demanding consent to use tracking cookies was ruled illegal by the Belgian Data Protection Authority, acting as lead authority for 28 EU data protection authorities. The Internet Advertising Bureau, whose Transparency and Consent Framework formed the basis of the complaint that led to the decision, now has two months to redesign its system to bring it into compliance with the General Data Protection Regulation.

The ruling marks a new level of enforcement that could begin to see the law's potential fulfilled.

Ever since May 2018, when GDPR came into force, people have been complaining that so far all we've really gotten from it is bigger! worse! more annoying! cookie banners, while the invasiveness of the online advertising industry has done nothing but increase. In a May 2021 report, for example, Access Now examined the workings of GDPR and concluded that so far the law's potential had yet to be fulfilled and daily violations were going unpunished - and unchanged.

There have been fines, some of them eye-watering, such as Amazon's 2021 fine of $877 million for its failure to get proper consent for cookies. But even Austrian activist lawyer Max Schrems' repeated European court victories have so far failed to force structural change, despite requiring the US and EU to rethink the basis of allowing data transfers.

To "celebrate" last week's data protection day, Schrems documented the situation: since the first data protection laws were passed, enforcement has been rare. Schrems' NGO, noyb, has plenty of its own experience to draw on. Of the 51 individual cases noyb has filed in Europe since its founding in 2018, only 15% have been decided within a year, none of them pan-European. Four cases filed with the Irish DPA in May 2018, the day after GDPR came into force, have yet to be given a final decision.

Privacy International, which filed seven complaints against adtech companies in 2018, also has an enforcement timeline. Only one, against Experian, resulted in an investigation, and even in that case no action has been taken since Experian's appeal in 2021. A recent study of diet sites showed that they shared the sensitive information they collect with unspecified third parties, PI senior technologist Eliot Bendinelli told last week's Privacy Camp. PI's complaint has yet to result in enforcement action, though it has led some companies to change their practices.

Bendinelli was speaking on a panel trying to learn from GDPR's enforcement issues in order to ensure better protection of fundamental rights from the EU's upcoming Digital Services Act. Among the complaints with respect to GDPR: the lack of deadlines to spur action and inconsistencies among the different national authorities.

The complaint at the heart of this week's judgment began in 2018, when Open Rights Group director Jim Killock, UCL researcher Michael Veale, and Irish Council on Civil Liberties senior fellow Johnny Ryan took the UK Information Commissioner's Office to court over the ICO's lack of action regarding real-time bidding, which the ICO itself had found illegal under the UK's Data Protection Act (2018), the UK's post-Brexit GDPR clone. In real-time bidding, your visit to a participating web page launches an instant mini-auction to find the advertiser willing to pay the most to fill the ad space you're about to see. Your value is determined by crunching all the data the site and its external sources have or can get about you.
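The mini-auction described above can be sketched in a few lines. This is a deliberately simplified toy, not the OpenRTB protocol itself: all bidder names, valuations, and profile fields are invented, and real exchanges run such auctions across ad networks in around a hundred milliseconds, not in-process.

```python
def run_auction(user_profile, bidders):
    """Each bidder values the impression using whatever data it holds
    about the user; the highest bid wins the ad slot."""
    bids = [(b["name"], b["value"](user_profile)) for b in bidders]
    return max(bids, key=lambda bid: bid[1])

# Hypothetical advertisers whose bids depend on tracked user data.
bidders = [
    {"name": "ShoeAdsCo",
     "value": lambda u: 2.50 if any("shoes" in s for s in u["recent_searches"]) else 0.10},
    {"name": "TravelAdsCo",
     "value": lambda u: 1.80 if u["visited_travel_sites"] else 0.05},
]

# The richer the profile the site and its partners hold, the higher the bids.
profile = {"recent_searches": ["running shoes"], "visited_travel_sites": False}
winner, bid = run_auction(profile, bidders)
```

The point the sketch makes is the one in the text: your "value" in this market is a direct function of the behavioral data collected about you.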

If all this sounds like it oughtta be illegal under GDPR, well, yes. Enter the IAB's TCF, which extracts your permission via those cookie consent banners. With many of these, dark-pattern design makes "consent" instant and rejection painfully slow. The Big Tech sites, of course, handle all this by using logins; you agree to the terms and conditions when you create your account and then you helpfully forget how much they learn about you every time you use the site.

In December 2021, the UK's Upper Tribunal refused to require the ICO to reopen the complaint, though it did award Killock and Veale concessions they hope will make the ICO more accountable in future.

And so back to this week's judgment that the IAB's TCF, which is used on 80% of the European Internet, is illegal. The Irish DPA is also investigating Google's similar system, as well as Quantcast's consent management system. On Twitter, Ryan explained the gist: cookie-consent pop-ups don't give publishers adequate user consent, and everyone must delete all the data they've collected.

Ryan and the Open Rights Group also point out that the judgment spikes the UK government's claim that revamping data protection law is necessary to get rid of cookie banners (at the expense of some of the human rights enshrined in the law). Ryan points to DuckDuckGo as an example of the non-invasive alternative: contextual advertising. He also observed that all that "consent spam" makes GDPR into merely "compliance theater".

Meanwhile, other moves are also making their mark. Also this week, Facebook (Meta)'s latest earnings showed that Apple's new privacy controls, which let users opt out of tracking, will cost it $10 billion this year. Apparently 75% of Apple users opt out.

Moral: given the tools and a supportive legal environment, people will choose privacy.

Illustrations: Diagram of OpenRTB, from the Belgian decision.


January 21, 2022

Power plays

We are still catching up on updates and trends.

Two days before the self-imposed deadline, someone blinked in the game of financial chicken between Amazon UK and Visa. We don't know which one it was, but on January 17 Amazon said it wouldn't stop accepting Visa credit cards after all. Negotiations are reportedly ongoing.

Ostensibly, the dispute was about the size of Visa's transaction fees. At Quartz, Ananya Bhattacharya quotes Banked.com's Ben Goodall's alternative explanation: the dispute allowed Amazon to suck up a load of new data that will help it build "the super checkout for the future". For Visa, she concludes, resolving the dispute has relatively little value beyond PR: Amazon accounts for only 1% of its UK credit card volume. For the rest of us, it remains disturbing that our interests matter so little. If you want proof of market dominance, look no further.

In June 2021, the Federal Trade Commission tried to bring an antitrust suit against Facebook, and failed when the court ruled that in its complaint the FTC had failed to prove its most basic assumption: that Facebook had a dominant market position. Facebook was awarded the dismissal it requested. This week, however, the same judge ruled that the FTC's amended complaint, which was filed in August, will be allowed to go ahead, though he suggests in his opinion that the FTC will struggle to substantiate some of its claims. Essentially, the FTC accuses Facebook of a "buy or bury" policy when faced with a new and innovative competitor and says it needed to make up for its own inability to adapt to the mobile world.

We will know if Facebook (or its newly-renamed holding company owner, Meta) is worried if it starts claiming that damaging the company is bad for America. This approach began as satire, Robert Heller explained in his 1994 book The Fate of IBM. Heller cites a 1990 PC Magazine column by William E. Zachmann, who used it as the last step in an escalating list of how the "IBMpire" would respond to antitrust allegations.

This week, Google came close to a real-life copy in a blog posting opposing an amendment to the antitrust bill currently going through the US Congress. The goal behind the bill is to make it easier for smaller companies to compete by prohibiting the major platforms from advantaging their own products and services. Google argues, however, that if the bill goes through Americans might get worse service from Google's products, American technology companies could be placed at a competitive disadvantage, and America's national security could be threatened. Instead of suggesting ways to improve the bill, however, Google concludes with the advice that Congress should delay the whole thing.

To be fair, Google isn't the only one that dislikes the bill. Apple argues its provisions might make it harder for users to opt out of unwanted monitoring. Free Press Action argues that it will make it harder to combat online misinformation and hate speech by banning the platforms from "discriminating" against "similarly situated businesses" (the bill's language), competitor or not. EFF, on the other hand, thinks copyright is a bigger competition issue. All better points than Google's.

A secondary concern is the fact that these US actions are likely to leave the technology companies untouched in the rest of the world. In Africa, Nesrine Malik writes at the Guardian, Facebook is indispensable and the only Internet most people know because its zero-rating allows its free use outside of (expensive) data plans. Most African Internet users are mobile-only, and most data users are on pay-as-you-go plans. So while Westerners deleting their accounts is a real threat to the company's future - not least because, as Frances Haugen testified, they produce the most revenue - the company owns the market in Africa. There, it is literally the only game in town for both businesses and individuals. Twenty-five years ago, we thought the Internet would be a vehicle for exporting the First Amendment. Instead...

Much of the discussion about online misinformation focuses on content moderation. In a new report the Royal Society asks how to create a better information environment. Despite the harm it does, the report comes down against simply removing scientific misinformation. Like Charles Arthur in his 2021 book Social Warming, the report's authors argue for slowing the spread by various methods - adding friction to social media sharing, reconfiguring algorithms, in a few cases de-platforming superspreaders. I like the scientists' conclusion that simple removal doesn't work; in science you must show your work, and deletion fuels conspiracy theories. During this pandemic, Twitter has been spectacular at making it possible to watch scientists grapple with uncertainty in real time.

The report also disputes some of our longstanding ideas about how online interaction works. A literature review finds that the filter bubbles and echo chambers Eli Pariser posited in 2011 are less important than we generally think. Instead most people have "relatively diverse media diets" and the minority who "inhabit politically partisan online news echo chambers" is about 6% to 8% of users.

Keeping it that way, however, depends on having choices, which leads back to these antitrust cases. The bigger and more powerful the platforms are, the less we - as both individuals and societies - matter to them.


Illustrations: The Thames at an unusually quiet moment, in January 2022.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 12, 2021

Third wave

It seems like only yesterday that we were hearing that Web 2.0 was the new operating system of the Internet. Pause to look up. It was 2008, in the short window between the founding of today's social media giants (2004-2006) and their smartphone-accelerated explosion (2010).

This week a random tweet led me to discover Web3. As Aaron Mak explains at Slate, "Web3" is an idea for running a next-generation Internet on public blockchains in the interests of decentralization (which net.wars has long advocated). To date, the aspect getting the most attention is decentralized finance (DeFi, or, per Mary Branscombe, deforestation finance), a plan for bypassing banks and governments by conducting financial transactions on the blockchain.

At freeCodeCamp, Nader Dabit goes into more of the technical underpinnings. At Fabric Ventures (Medium), Max Mersch and Richard Muirhead explain its importance. Web3 will bring a "borderless and frictionless" native payment layer (upending mediator businesses like Paypal and Square), bring the "token economy" to support new businesses (upending venture capitalists), and tie individual identity to wallets (bypassing authentication services like OAuth, email plus password, and technology giant logins), thereby enabling multiple identities, among other things. Also interesting is the Cloudflare blog, where Thibault Meunier states that as a peer-to-peer system Web3 will use cryptographic identifiers and allow users to selectively share their personal data at their discretion. Some of this - chiefly the robustness of avoiding central points of failure - is a return to the Internet's original design goals.

Standards-setter W3C is working on at least one aspect - cryptographically verifiable Decentralized Identifiers - and it's running into opposition from Google, Apple, and Mozilla, whose browsers control 87% of the market.

Let's review a little history.

The 20th century Internet was sorta, kinda decentralized, but not as much as people like to think. The technical and practical difficulties of running your own server at home fueled the growth of portals and web farms to do the heavy lifting. Web design moved from hand-coded plain text to hosted platforms - see, for example, LiveJournal and Blogspot (now owned by Google). You can argue about how exactly it was that a lot of blogs died off circa 2010, but I'd blame Twitter: writers found it easier to craft a sentence or two and skip writing the hundreds of words that make a blog post. Tim O'Reilly and Clay Shirky described the new era as interactive, and moving control "up the stack" from web browsers and servers to the services they enabled. Data, O'Reilly predicted, was the key enabler, and the "long tail" of niche sites and markets would be the winner. He was right about data, and largely wrong about the long tail. He was also right about this: "Network effects from user contributions are the key to market dominance in the Web 2.0 era." Nearly 15 years later, today's web feels like a landscape of walled cities encroaching on all the public pathways leading between them.

Point Network (Medium) has a slightly different version of this history; they call Web 1.0 the "read-only web"; Web 2.0 the "server/cloud-based social Web", and Web3 the "decentralized web".

The pattern here is that every phase began with a "Cambrian" explosion of small sites and businesses and ended with a consolidated and centralized ecosystem of large businesses that have eaten or killed everyone else. The largest may now be so big that they can overwhelm further development to ensure their future dominance; at least, that's one way of looking at Mark Zuckerberg's metaverse plan.

So the most logical outcome from Web3 is not the pendulum swing back to decentralization that we may hope, but a new iteration of the existing pattern, which is at least partly the result of network effects. The developing plans will have lots of enemies, not least governments, who are alert to anything that enables mass tax evasion. But the bigger issue is the difficulty of becoming a creator. TikTok is kicking ass, according to Chris Stokel-Walker, because it makes it extremely easy for users to edit and enhance their videos.

I spy five hard problems. One: simplicity and ease of use. If it's too hard, inconvenient, or expensive for people to participate as equals, they will turn to centralized mediators. Two: interoperability and interconnection. Right now, anyone wishing to escape the centralization of social media can set up a Discord or Mastodon server, yet these remain decidedly minority pastimes because you can't message from them to your friends on services like Facebook, WhatsApp, Snapchat, or TikTok. A decentralized web in which it's hard to reach your friends is dead on arrival. Three: financial incentives. It doesn't matter if it's venture capitalists or hundreds of thousands of investors each putting up $10, they want returns. As a rule of thumb, decentralized ecosystems benefit all of society; centralized ones benefit oligarchs - so investment flows to centralized systems. Four: sustainability. Five: how do we escape the power law of network effects?

Gloomy prognostications aside, I hope Web3 changes everything, because in terms of its design goals, Web 2.0 has been a bust.


Illustrations: Tag cloud from 2007 of Web 2.0 themes (Markus Angermeier and Luca Cremonini, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 15, 2021

The future is hybrid

Every longstanding annual event-turned-virtual these days has a certain tension.

"Next year, we'll be able to see each other in person!" says the host, beaming with hope. Excited nods in the Zoom windows and exclamation points in the chat.

Unnoticed, about a third of the attendees wince. They're the folks in Alaska, New Zealand, or Israel, who in normal times would struggle to attend this event in Miami, Washington DC, or London because of costs or logistics.

"We'll be able to hug!" the hosts say longingly.

Those of us who are otherwhere hear, "It was nice having you visit. Hope the rest of your life goes well."

When those hosts are reminded of this geographical disability, they immediately say how much they'd hate to lose the new international connections all these virtual events have fostered and the networks they have built. Of course they do. And they mean it.

"We're thinking about how to do a hybrid event," they say, still hopefully.

At one recent event, however, it was clear that hybrid won't be possible without considerable alterations to the event as it's historically been conducted - at a rural retreat, with wifi available only in the facility's main building. With concurrent sessions in probably six different rooms and only one with the basic capability to support remote participants, it's clear that there's a problem. No one wants to abandon the place they've used every year for decades. So: what then? Hybrid in just that one room? Push the facility whose selling point is its woodsy distance from modern life to upgrade its broadband connections? Bring a load of routers and repeaters and rig up a system for the weekend? Create clusters of attendees in different locations and do node-to-node Zoom calls? Send each remote participant a hugging pillow and a note saying, "Wish you were here"?

I am convinced that the future is hybrid events, if only because businesses sound so reluctant to resume paying for so much international travel, but the how is going to take a lot of thought, collaboration, and customization.

***

Recent events suggest that the technology companies' own employees are a bigger threat to business-as-usual than impending regulation and legislation. Facebook's had two major whistleblowers - Sophie Zhang and Frances Haugen - in the last year, and basically everyone wants to fix the site's governance. But Facebook is not alone...

At Uber, a California court ruled in August that drivers are employees; a black British driver has filed a legal action complaining that Uber's driver identification face-matching algorithm is racist; and Kenyan drivers are suing over contract changes they say have cut their takehome pay to unsustainably low levels.

Meanwhile, at Google and Amazon, workers are demanding the companies pull out of contracts with the Israeli military. At Amazon India, a whistleblower has handed Reuters documents showing the company has exploited internal data to copy marketplace sellers' products and rig its search engine to display its own versions first. *And* Amazon's warehouse workers continue to consider unionizing - and some cities back them.

Unfortunately, the legislation being proposed in the US, UK, New Zealand, and Canada is *also* a bigger threat to the rest of the Internet than to the big technology companies. For example, in reading the US legislation Mike Masnick finds intractable First Amendment problems. Last week I liked the idea of focusing on the content social media companies' algorithms amplify, but Masnick persuasively argues it's not so simple, citing Daphne Keller, who has thought more critically about the First Amendment problems that will arise in implementing that idea.

***

The governor of Missouri, Mike Parson, has accused Josh Renaud, a journalist with the St Louis Post-Dispatch, of hacking into a government website to view several teachers' social security numbers. From the governor's description, it sounds like Renaud hit Ctrl-U or F12, looked at the HTML code, saw startlingly personal data, and decided correctly that the security flaw was newsworthy. (He also responsibly didn't publish his article until he had notified the website administrators and they had fixed the issue.)

Parson disagrees about the legitimacy of all this, and has called for a criminal investigation into this incident of "hacking" (see also scraping). The ability to view the code that makes up a web page and tells the browser how to display it is a crucial building block of the web; when it was young and there were no instruction manuals, that was how you learned to make your own page by copying. A few years ago, the Guardian even posted technical job ads in its pages' HTML code, where the right applicants would see them. No password, purloined or otherwise, is required. The code is just sitting there in plain sight on a publicly accessible server. If it weren't, your web page would not display.
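To make the technical point concrete: a page's "source" is simply the body of the HTTP response every visitor's browser already received; Ctrl-U displays it instead of rendering it. A minimal sketch, using made-up page content standing in for the flaw described above (no real site, data, or markup is reproduced here):

```python
import re

# Hypothetical HTML standing in for a page that embeds sensitive data
# in its markup. These are exactly the bytes the server sends to every
# visitor; "view source" merely shows them un-rendered.
page_source = """
<html>
  <body>
    <p>Teacher directory</p>
    <!-- left in the page by the server-side template -->
    <span class="ssn" style="display:none">123-00-4567</span>
  </body>
</html>
"""

# Anyone who receives the page can read its markup directly;
# no password or privileged access is involved.
hidden = re.findall(r'<span class="ssn"[^>]*>([^<]+)</span>', page_source)
print(hidden)
```

The point: hiding data with CSS (`display:none`) only affects how the page is drawn; the data itself was published to everyone the moment the server sent the response.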

Twenty-five years ago, I believed that by now governments would be filled with 30-somethings who grew up with computers and the 2000-era exploding Internet and could restrain this sort of overreaction. I am very unhappy to be wrong about this. And it's only going to get worse: today's teens are growing up with tablets, phones, and closed apps, not the open web that was designed to encourage every person to roll their own.


Illustrations: Exhibit from Ben Grosser's "Software for Less", reimagining Facebook alerts; at the Arebyte Gallery until the end of October.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 8, 2021

The inside view

So many lessons, so little time.

We have learned a lot about Facebook in the last ten days, at least some of it new. Much of it is from a single source, the documents exfiltrated and published by Frances Haugen.

We knew - because Haugen is not the first to say so - that the company is driven by profits and a tendency to view its systemic problems as PR issues. We knew less about the math. One of the more novel points in Haugen's Senate testimony on Tuesday was her explanation of why Facebook will always be poorly moderated outside the US: safety does not scale. Safety costs the same for each new country Facebook adds - but each new country is also a progressively smaller market than the last. Consequence: the cost-benefit analysis fails. Currently, Haugen said, Facebook only covers 50 of the world's approximately 500 languages, and even in some of those cases the country does not have local experts to help understand the culture. What hope for the rest?

Additional data: at the New York Times, Kate Klonick checks Facebook's SEC filings to find that average revenue per North American user per *quarter* was $53.56 in the last quarter of 2020, compared to $16.87 for Europe, $4.05 for Asia, and $2.77 for the rest of the world. Therefore, Klonick said at In Lieu of Fun, most of its content moderation money is spent in the US, which has less than 10% of the service's users. All those revenue numbers dropped slightly in Q1 2021.

We knew that in some countries Facebook is the only Internet people can afford to access. We *thought* that it only represented a single point of failure in those countries. Now we know that when Facebook's routing goes down - its DNS and BGP routing were knocked out by a "maintenance error" - the damage can spread to other parts of the Internet. The whole point of the Internet's design was to keep communications going even if a bomb took out part of the network. This is bad.

As a corollary, the panic over losing connections to friends and customers even in countries where social pressure, not data plans, ties people to Facebook is a sign of monopoly. Haugen, like Kevin Roose in the New York Times, sees signs of desperation in the documents she leaked. This company knows its most profitable audiences are aging; Facebook is now for "old people". The tweens are over at Snapchat, TikTok, and even Telegram, which added 70 million signups in the six hours Facebook was out.

We already knew Facebook's business model was toxic, a problem it shares with numerous other data-driven companies not currently in the spotlight. A key difference: Zuckerberg's unassailable control of his company's voting shares. The eight SEC complaints Haugen has filed are the first potential dent in that.

Like Matt Stoller, I appreciate a lot of Haugen's ideas for remediation: pushing people to open links before sharing, and modifying Section 230 to make platforms responsible for their algorithmic amplification, an idea also suggested by fellow data scientist Roddy Lindsay and British technology journalist Charles Arthur in his new book, Social Warming. For Stoller, these are just tweaks to how Facebook works. Haugen says she wants to "save" Facebook, not harm it. Neither her changes nor Zuckerberg's call for government regulation touch its concentrated power. Stoller wants "radical decentralization". Arthur wants to cap social network size.

One fundamental mistake may be to think of Facebook as *a* monopoly rather than several at once. As an economic monopoly, businesses all over the world depend on Facebook and subsidiaries to reach their customers, and advertisers have nowhere else to go. Despite last year's pledged advertising boycott over hate speech on Facebook, since Haugen's revelations began, advertisers have been notably silent. As a social monopoly, Facebook's outage was disastrous in regions where both humanitarians and vulnerable people rely on it for lifesaving connections; in richer countries, the inertia of established connections leaves Facebook in control of large swaths of our social and community networks. This week taught us that its size also threatens infrastructure. Each of these calls for a different approach.

Stoller has several suggestions for crashing Facebook's monopoly power, one of which is to ban surveillance advertising. But he rejects regulation and downplays the crucial element of interoperability; create a standard so that messaging can flow between platforms, and you've dismantled customer lock-in. The result would be much more like the decentralized Internet of the 1990s.

Greater transparency would help; just two months ago Facebook shut down independent research into content interactions and its political advertising - and tried to blame the Federal Trade Commission.

This is *not* a lesson. Whatever we have learned Mark Zuckerberg has not. At CNN, Donie O'Sullivan fact-checks Zuckerberg's response.

A day after Haugen's testimony, Zuckerberg wrote (on Facebook, requiring a login): "I think most of us just don't recognize the false picture of the company that is being painted." Cue Robert Burns: "O wad some Pow'r the giftie gie us | To see oursels as ithers see us!" But really, how blinkered do you have to be to not recognize that if your motto is Move fast and break things people are going to blame you for the broken stuff everywhere?


Illustrations: Slide showing revenue by Facebook user geography from its Q1 2021 SEC filing.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 23, 2021

Fast, free, and frictionless

"I want solutions," Sinan Aral challenged at yesterday's Social Media Summit, "not a restatement of the problems". Don't we all? How many person-millennia have we spent laying out the issues of misinformation, disinformation, harassment, polarization, platform power, monopoly, algorithms, accountability, and transparency? Most of these have been debated for decades. The big additions of the last decade are the privatization of public speech via monopolistic social media platforms, the vastly increased scale, and the transmigration from purely virtual into physical-world crises like the January 6 Capitol Hill invasion and people refusing vaccinations in the middle of a pandemic.

Aral, who leads the MIT Initiative on the Digital Economy and is author of the new book The Hype Machine, chose his panelists well enough that some actually did offer some actionable ideas.

The issues, as Aral said, are all interlinked (see also 20 years of net.wars). Maria Ressa connected the spread of misinformation to system design that enables distribution and amplification at scale. These systems are entirely opaque to us even while we are open books to them, as Guardian journalist Carole Cadwalladr noted, adding that while US press outrage is the only pressure that moves Facebook to respond, it no longer even acknowledges questions from anyone at her newspaper. Cadwalladr also highlighted the Securities and Exchange Commission's complaint, which says clearly that Facebook misled journalists and investors. This dismissive attitude also shows in the leaked email in which Facebook plans to "normalize" the leak of 533 million users' data.

This level of arrogance is the result of concentrated power, and countering it will require antitrust action. That in turn leads back to questions of design and free speech: what can we constrain while respecting the First Amendment? Where is the demarcation line between free speech and speech that, like crying "Fire!" in a crowded theater, can reasonably be regulated? "In technology, design precedes everything," Roger McNamee said; real change for platforms at global or national scale means putting policy first. His Exhibit A of the level of cultural change that's needed was February's fad, Clubhouse: "It's a brand-new product that replicates the worst of everything."

In his book, Aral opposes breaking up social media companies as was done in cases such as Standard Oil and AT&T. Zephyr Teachout agreed, seeing breakup, whether horizontal (Facebook divests WhatsApp and Instagram, for example) or vertical (Google forced to sell Maps), as just one tool.

The question, as Joshua Gans said, is, what is the desired outcome? As Federal Trade Commission nominee Lina Khan wrote in 2017, assessing competition by the effect on consumer pricing is not applicable to today's "pay-with-data-but-not-cash" services. Gans favors interoperability, saying it's crucial to restoring consumers' lost choice. Lock-in is your inability to get others to follow when you want to leave a service, a problem interoperability solves. Yes, platforms say interoperability is too difficult and expensive - but so did the railways and telephone companies, once. Break-ups were a better option, Albert Wenger added, when infrastructures varied; today's universal computers and data mean copying is always an option.

Unwinding Facebook's acquisition of WhatsApp and Instagram sounds simple, but do we want three data hogs instead of one, like cutting off one of the Lernaean Hydra's heads? One idea that emerged repeatedly is slowing down "fast, free, and frictionless"; Yael Eisenstat wondered why we allow experimental technology to launch at global scale while demanding that policy be perfected before it can be tried.

MEP Marietje Schaake (Democrats 66-NL) explained the EU's proposed Digital Markets Act, which aims to improve fairness by preempting the too-long process of punishing bad behavior by setting rules and responsibilities. Current proposals would bar platforms from combining user data from multiple sources without permission; self-preferencing; and spying (say, Amazon exploiting marketplace sellers' data), and requires data portability and interoperability for ancillary services such as third-party payments.

The difficulty with data portability, as Ian Brown said recently, is that even services that let you download your data offer no way to use data you upload. I can't add the downloaded data from my current electric utility account to the one I switch to, or send my Twitter feed to my Facebook account. Teachout finds that interoperability isn't enough because "You still have acquire, copy, kill" and lock-in via existing contracts. Wenger argued that the real goal is not interoperability but programmability, citing open banking as a working example. That is also the open web, where a third party can write an ad blocker for my browser, but Facebook, Google, and Apple built walled gardens. As Jared Sine told this week's antitrust hearing, "They have taken the Internet and moved it into the app stores."

Real change will require all four of the levers Aral discusses in his book - money, code, norms, and laws; Lawrence Lessig's 1999 book, Code and Other Laws of Cyberspace, called them market, software architecture, norms, and laws - pulling together. The national commission on democracy and technology Aral is calling for will have to be very broadly constituted in terms of disciplines and national representation. As Safiya Noble said, diversifying the engineers in development teams is important, but not enough: we need "people who know society and the implications of technologies" at the design stage.


Illustrations: Sinan Aral, hosting the summit.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 26, 2021

Curating the curators

One of the longest-running conflicts on the Internet surrounds whether and what restrictions should be applied to the content people post. These days, those rules are known as "platform governance", and this week saw the first conference by that name. In the background, three of the big four CEOs returned to Congress for more questioning; the EU is planning the Digital Services Act; the US looks serious about antitrust action; debate about revising Section 230 of the Communications Decency Act continues even though few understand what it does; and the UK continues to push "online harms".

The most interesting thing about the Platform Governance conference is how narrow it makes those debates look. The second-most interesting thing: it was not a law conference!

For one thing, which platforms? Twitter may be the most-studied, partly because journalists and academics use it themselves and data is more available; YouTube, Facebook, and subsidiaries WhatsApp and Instagram are the most complained-about. The discussion here included not only those three but less "platformy" things like Reddit, Tumblr, Amazon's livestreaming subsidiary Twitch, games, Roblox, India's ShareChat, labor platforms UpWork and Fiverr, edX, and even VPN apps. It's unlikely that the problems of Facebook, YouTube, and Twitter that governments obsess over are limited to them; they're just the most visible and, especially, the most *here*. Granting differences in local culture, business model, purpose, and platform design, human behavior doesn't vary that much.

For example, Jenny Domino reminded - again - that the behaviors now sparking debates in the West are not new or unique to this part of the world. What most agree *almost* happened in the US on January 6 *actually* happened in Myanmar with far less scrutiny despite a 2018 UN fact-finding mission that highlighted Facebook's role in spreading hate. We've heard this sort of story before, regarding Cambridge Analytica. In Myanmar and, as Sandeep Mertia said, India, the Internet of the 1990s never existed. Facebook is the only "Internet". Mertia's "next billion users" won't use email or the web; they'll go straight to WhatsApp or a local or newer equivalent, and stay there.

Mehitabel Glenhaber, whose focus was Twitch, used it to illustrate another way our usual discussions are too limited: "Moderation can escape all up and down the stack," she said. Near the bottom of the "stack" of layers of service, after the January 6 Capitol invasion Amazon denied hosting services to the right-wing chat app Parler; higher up the stack, Apple and Google removed Parler's app from their app stores. On Twitch, Glenhaber found a conflict between one of the site's moderation decisions and the handling of that decision by two browser extensions that replace text with graphics - one of which honored the site's ruling and one of which overturned it. I had never thought of ad blockers as content moderators before, but of course they are, and few of us examine them in detail.

Separately, in a recent lecture on the impact of low-cost technical infrastructure, Cambridge security engineer Ross Anderson also brought up the importance of the power to exclude. Most often, he said, social exclusion matters more than technical; taking out a scammer's email address and disrupting all their social network is more effective than taking down their more easily-replaced website. If we look at misinformation as a form of cybersecurity challenge - as we should - that's an important principle.

One recurring frustration is our general lack of access to the insider view of what's actually happening. Alice Marwick is finding from interviews that members of Trust and Safety teams at various companies have a better and broader view of online abuse than even those who experience it. Their data suggests that rather than being gender-specific, harassment affects all groups of people; in niche groups the forms disagreements take can be obscure to outsiders. Most important, each platform's affordances are different; you cannot generalize from a peer-to-peer site like Facebook or Twitter to Twitch or YouTube, where the site's relationships are less equal and more creator-fan.

A final limitation in how we think about platforms and abuse is that the options are so limited: a user is banned or not, content stays up or is taken down. We never think, Sarita Schoenebeck said, about other mechanisms or alternatives to criminal justice such as reparative or restorative justice. "Who has been harmed?" she asked. "What do they need? Whose obligation is it to meet that need?" And, she added later, who is in power in platform governance, and what harms have they overlooked and how?

In considering that sort of issue, Bharath Ganesh found three separate logics in his tour through platform racism and the governance of extremism: platform, social media, and free speech. Mark Zuckerberg offers a prime example of the last, the Silicon Valley libertarian insistence that the marketplace of ideas will solve any problems and that sees the First Amendment freedom of expression as an absolute right, not one that must be balanced against others - such as "freedom from fear". I followed the end of the conference by watching the end of yesterday's Congressional hearings, and couldn't help thinking about that as Zuckerberg embarked on yet another pile of self-serving "Congressman..." rather than the simple "yes or no" he was asked to deliver.


Illustrations: Mark Zuckerberg, testifying in Congress on March 25, 2021.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 15, 2021

One thousand

net.wars-the-book.gifIn many ways, this 1,000th net.wars column is much like the first (the count is somewhat artificial, since net.wars began as a 1998 book, itself presaged by four years of news analysis pieces for the Daily Telegraph, and another book in 2001...and a lot of my other writing also fits under "computers, freedom, and privacy"; *however*). That November 2001 column was sparked by former Home Office minister Jack Straw's smug assertion that after 9/11 those of us who had defended access to strong cryptography must be feeling "naive". Here, just over a week after the Capitol invasion, three long-running issues are pertinent: censorship; security and the intelligence failures that enabled the attack; and human rights when demands for increased surveillance capabilities surface, as they surely will.

Censorship first. The US First Amendment only applies to US governments (a point that apparently requires repeating). Under US law, private companies can impose their own terms of service. Most people expected Twitter would suspend Donald Trump's account approximately one second after he ceased being a world leader. Trump's incitement of the invasion moved that up, and led Facebook, including its subsidiaries Instagram and WhatsApp, Snapchat, and, a week after the others, YouTube to follow suit. Less noticeably, a Salesforce-owned email marketing company ceased distributing emails from the Republican National Committee.

None of these social media sites is a "public square", especially outside the US, where they've often ignored local concerns. They are effectively shopping malls, and ejecting Trump is the same as throwing out any other troll. Trump's special status kept him active when many others were unjustly banned, but ultimately the most we can demand from these services is clearly stated rules, fairly and impartially enforced. This is a tough proposition, especially when you are dependent on social media-driven engagement.

Last week's insurrection was planned on numerous openly accessible sites, many of which are still live. After Twitter suspended 70,000 accounts linked to QAnon, numerous Republicans complaining they had lost followers seemed to be heading to Parler, a relatively new and rising alt-right Twitterish site backed by Rebekah Mercer, among others. Moving elsewhere is an obvious outcome of these bans, but in this crisis short-term disruption may be helpful. The cost will be longer-term adoption of channels that are harder to monitor.

By January 9 Apple was removing Parler from the App Store, to be followed quickly by Google (albeit less comprehensively, since Android allows side-loading). Amazon then kicked Parler off its host, Amazon Web Services. It is unknown when, if ever, the site will return.

Parler promptly sued Amazon claiming an antitrust violation. AWS retaliated with a crisp brief that detailed examples of the kinds of comments the site felt it was under no obligation to host and noted previous warnings.

Whether or not you think Parler should be squashed - stipulating that the imminent inauguration requires an emergency response - three large Silicon Valley platforms have combined to destroy a social media company. This is, as Jillian C. York, Corynne McSherry, and Danny O'Brien write at EFF, a more serious issue. The "free speech stack", they write, requires the cooperation of numerous layers of service providers and other companies. Twitter's decision to ban one - or 70,000 - accounts has limited impact; companies lower down the stack can ban whole populations. If you were disturbed in 2010, when, shortly after the diplomatic cables release, PayPal effectively defunded WikiLeaks after Amazon booted it off its servers, then you should be disturbed now. These decisions are made at obscure layers of the Internet where we have little influence. As the Internet continues to centralize, we do not want just these few oligarchs making these globally significant decisions.

Security. Previous attacks - 9/11 in particular - led to profound damage to the sense of ownership with which people regard their cities. In the UK, the early 1990s saw the ease of walking into an office building vanish, replaced by demands for identification and appointments. The same happened in New York and some other US cities after 9/11. Meanwhile, CCTV monitoring proliferated. Within a year of 9/11, the US passed the PATRIOT Act, and the UK had put in place a series of expansions to surveillance powers.

Currently, residents report that Washington, DC is filled with troops and fences. Clearly, it can't stay that way permanently. But DC is highly unlikely to return to the openness of just ten days ago. There will be profound and permanent changes, starting with decreased access to government buildings. This will be Trump's most visible legacy.

Which leads to human rights. Among the videos of insurrectionists shocked to discover that the laws do apply to them were several in which prospective airline passengers discovered they'd been placed preemptively on the controversial no-fly list. Many others who congregated at the Capitol were on a (separate) terrorism watch list. If the post-9/11 period is any guide, the fact that the security agencies failed to connect any of the dots available to them into actionable intelligence will be elided in favor of insisting that they need more surveillance powers. Just remember: eventually, those powers will be used to surveil all the wrong people.


Illustrations: net.wars, the book at the beginning.


October 2, 2020

Searching for context

skyler-gundason-social-dilemma.pngIt's meant, I think, to be a horror movie. Unfortunately, Jeff Orlowski's The Social Dilemma comes across as too impressed with itself to scare as thoroughly as it would like.

The plot, such as it is: a group of Silicon Valley techies who have worked on Google, Facebook, Instagram, Palm (!), and so on present mea culpas. "I was co-inventor...of the Like button," Tristan Harris says by way of introduction. It seems such a small thing to include. I'm sure it wasn't that easy, but Slashdot was upvoting messages when Mark Zuckerberg was 14. The techies' thoughts are interspersed with those of outside critics. Intermittently, the film inserts illustrative scenarios using actors, a technique better handled in The Big Short. In these, Vincent Kartheiser plays a multiplicity of evil algorithmic masterminds doing their best to exploit their target, a fictional teenage boy (Skyler Gisondo) who has accepted the challenge of giving up his phone for a week with the predictable results of an addiction film. As he becomes paler and sweatier, you expect him to crash out in a grotty public toilet, like Julia Ormond's character in Traffik. Instead, he face-plants when the police arrest him at Charlottesville.

The first half of the movie is predominantly a compilation of favorite social media nightmares: teens are increasingly suffering from depression and other mental health issues; phone addiction is a serious problem; we are losing human connection; and so on. As so often, causality is unclear. The fact that these Silicon Valley types consciously sought to build addictive personal tracking and data crunching systems and change the world does not automatically tie every social problem to their products.

I say this because so much of this has a long history the movie needs for context. The too-much-screen-time of my childhood was TV, though my (older) parents worried far more about the intelligence-drainage perpetrated by comic books. Girls who now seek cosmetic surgery in order to look more like filter-enhanced Instagram images were preceded by girls who starved themselves to look like air-brushed, perfect models in teen magazines. Today's depressed girls could have been those profiled in Mary Pipher's 1994 Reviving Ophelia, and she, too, had forerunners. Claims about Internet addiction go back more than 20 years, and until very recently were focused on gaming. Finally, though data does show that teens are going out less, less interested in learning to drive, and are having less sex and using fewer drugs, is social media the cause or the compensation for a coincidental overall loss of physical freedom? Even pre-covid they were growing up into a precarious job market and a badly damaged planet; depression might just be the sane response.

In the second half the film moves on to consider social media divisions as an assault on democracy. Here, it's on firmer ground, but really only because the much better film The Great Hack has already exposed how Facebook (in particular) was used to spark violence and sway elections even before 2016. And then it wraps up: people are trapped, the companies have no incentive to change, and (says Jaron Lanier) the planet will die. As solutions, the film's many spokespeople suggest familiar ideas: regulation, taxation, withdrawal. Shoshana Zuboff is the most radical: outlaw them. (Please don't take Twitter! I learn so much from Twitter!)

"We are allowing technologists to frame this as a problem that they are equipped to solve," says data scientist Cathy O'Neil. "That's a lie." She goes on to say that AI can't distinguish truth. Even if it could, truth is not part of the owners' business model.

Fair enough, but remove Facebook and YouTube, and you still have Fox News, OANN, and the Daily Mail inciting anger and division with expertise honed over a century of journalistic training - and amoral world leaders. This week, a study from Cornell University found that Donald Trump is implicated in 38% of the coronavirus misinformation circulating in online and traditional media. Knock out a few social media sites...and that still won't change because his pulpit is too powerful.

Most of the film's speakers eventually close by recommending we delete our social media accounts. It seems a weak response, in part because the movie does a poor job of disentangling the dangers of algorithmic manipulation from the myriad different reasons why people use phones and social media: they listen to music, watch TV, connect with their friends, play games, take pictures, and navigate unfamiliar locations. It's absurd to ask them to give that up without suggesting alternatives for fulfilling those functions.

A better answer may be that offered this week by the 25-odd experts who have formed an independent Facebook oversight board (the actual oversight board Facebook announced months ago is still being set up and won't begin meeting until after the US presidential election). The expertise assembled is truly impressive, and I hope that, like the Independent SAGE group of scientists who have been pressuring the UK government into doing a better job on coronavirus, they will have a mind-focusing effect on our Facebook overlords, perhaps later to be copied for other sites. The problem - an aspect also omitted from The Social Dilemma - is that under the company's shareholder structure Zuckerberg is under no requirement to listen.


Illustrations: Skyler Gisondo as Ben, in The Social Dilemma.


September 11, 2020

Autofail

sfo-fires-hasbrouck.jpegA new complaint surfaced on Twitter this week. Anthony Ryan may have captured it best: "In San Francisco everyone is trying unsuccessfully to capture the hellish pall that we're waking up to this morning but our phone cameras desperately want everything to be normal." california-fires-sffdpio.jpegIn other words: as in these pictures, the wildfires have turned the Bay Area sky dark orange ("like dusk on Mars," says one friend), but people attempting to capture it on their phone cameras are finding that the automated white balance correction algorithms recalibrate the color to wash out the orange in favor of grey daylight.
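The simplest version of what those cameras are doing is the "gray-world" assumption: if the algorithm assumes every scene averages out to neutral gray, then a sky that genuinely *is* orange gets scaled back toward gray. A minimal sketch (the function and pixel values are illustrative, not any phone vendor's actual pipeline, which is far more sophisticated):

```python
def gray_world_balance(pixels):
    """Naive gray-world white balance: scale each RGB channel so its
    mean matches the overall mean. A scene with a genuine color cast
    gets 'corrected' toward neutral."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    target = sum(means) / 3
    gains = [target / m if m else 1.0 for m in means]
    return [tuple(min(255, round(p[c] * gains[c])) for c in range(3))
            for p in pixels]

# A uniformly orange "sky" (hypothetical RGB values)
orange_sky = [(200, 100, 30)] * 4
balanced = gray_world_balance(orange_sky)
print(balanced[0])  # the orange is flattened to a neutral gray
```

Run on the all-orange frame above, every pixel comes out a uniform gray - exactly the frustration the photographers were describing: the correction is working as designed, and that is the problem.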

At least that's something the computer is actually doing, even if it's counter-productive. Also this week, the Guardian ran an editorial that it boasted had been "entirely" written by OpenAI's language generator, GPT-3. Here's what they mean by "written" and "entirely": the AI was given a word length, a theme, and the introduction, from which it produced eight unique essays, which the Guardian editors chopped up and pieced together into a single essay, which they then edited in the usual way, cutting lines and rearranging paragraphs as they saw fit. Trust me, human writers don't get to submit eight versions of anything; we'd be fired when the first one failed. But even if we did, editing, as any professional writer will tell you, is the most important part of writing anything. As I commented on Twitter, the whole thing sounds like a celebrity airily claiming she's written her new book herself, with "just some help with the organizing". I'd advise that celebrity (name withheld) to have a fire extinguisher ready for when her ghostwriter reads that and thinks of all the weeks they spent desperately rearranging giant piles of rambling tape transcripts into a (hopefully) compelling story.

The Twitter discussion of this little foray into "AI" briefly touched on copyright. It seems to me hard to argue that the AI is the author given the editors' recombination of its eight separately-generated pieces (which likely took longer than if one of them had simply written the piece). Perhaps you could say - if you're willing to overlook the humans who created, coded, and trained the AI - that the AI is the author of the eight pieces that became raw material for the essay. As things are, however, it seems clear that the Guardian is the copyright owner, just as it would be if the piece had been wholly staff-written (by humans).

Meanwhile, the fallout from Max Schrems' latest win continues to develop. The Irish Data Protection Authority has already issued a preliminary order to suspend data transfers to the US; Facebook is appealing. The Swiss data protection authority has issued a notice that the Swiss-US Privacy Shield is also void. During a September 3 hearing before the European Parliament Committee on Civil Liberties, Justice, and Home Affairs, MEP Sophie in't Veld said that by bringing the issue to the courts Schrems is doing the job data protection authorities should be doing themselves. All agreed that a workable - but this time "Schrems-proof" - solution must be found to the fundamental problem, which Gwendolyn Delbos-Corfield summed up as "how to make trade with a country that has decided to put mass surveillance as a rule in part of its business world". In't Veld appeared to sum up the entire group's feelings when she said, "There must be no Schrems III."

Of course we all knew that the UK was going to get caught in the middle between being able to trade with the EU, which requires a compatible data protection regime (either the continuation of the EU's GDPR or a regime that is ruled equal), and the US, which wants data to be free-flowing and which has been trying to use trade agreements to undermine the spread of data protection laws around the world (latest newcomer: Brazil). What I hadn't quite focused on (although it's been known for a while) is that, just like the US surveillance system, the UK's own surveillance regime could disqualify it from the adequacy ruling it needs to allow data to go on flowing. When the UK was an EU member state, this didn't arise as an issue because EU data protection law permits member states to claim exceptions for national security. Now that the UK is out, that exception no longer applies. It was a perk of being in the club.

Finally, the US Senate, not content with blocking literally hundreds of bills passed by the House of Representatives over the last few years, has followed up July's antitrust hearings with the GAFA CEOs with a bill that's apparently intended to answer Republican complaints that conservative voices are being silenced on social media. This is, as Eric Goldman points out in disgust, one of several dozen bits of legislation intended to modify various pieces of S230 or scrap it altogether. On Twitter, Tarleton Gillespie analyzes the silliness of this latest entrant into the fray. While modifying S230 is probably not the way to go about it, right now curbing online misinformation seems like a necessary move - especially since Facebook CEO Mark Zuckerberg has stated outright that Facebook won't remove anti-vaccine posts. Even in a pandemic.


Illustrations: The San Francisco sky on Wednesday ("full sun, no clouds, only smoke"), by Edward Hasbrouck; accurate color comparison from the San Francisco Fire Department.



May 29, 2020

Tweeted

sbisson-parrot-49487515926_0c97364f80_o.jpgAnyone who's ever run an online forum has at some point grappled with a prolific poster who deliberately spreads division, takes over every thread of conversation, and aims for outraged attention. When your forum is a few hundred people, one alcohol-soaked obsessive bent on suggesting that anyone arguing with him should have their shoes filled with cement before being dropped into the nearest river is enormously disruptive, but the decision you make about whether to ban, admonish, or delete their postings matters only to you and your forum members. When you are a public company, your forum is several hundred million people, and the poster is a world leader...oy.

Some US Democrats have been calling Donald Trump's outrage this week over having two tweets labeled with a fact-check an attempt to distract us all from the terrible death toll of the pandemic under his watch. While this may be true, it's also true that the tweets Trump is so fiercely defending form part of a sustained effort to spread misinformation that effectively acts as voter suppression for the upcoming November election. In the 12 hours since I wrote this column, Trump has signed an Executive Order to "prevent online censorship", and Twitter has hidden, for "glorifying violence", Trump tweets suggesting shooting protesters in Minneapolis. It's clear this situation will escalate over the coming week. Twitter has a difficult balance to maintain: it's important not to hide the US president's thoughts from the public, but it's equally important to hold the US president to the same standards that apply to everyone else. Of course he feels unfairly picked on.

Rewind to Tuesday. Twitter applied its recently-updated rules regarding election integrity by marking two of Donald Trump's tweets. The tweets claimed that conducting the November presidential election via postal ballots would inevitably mean electoral fraud. Trump, who moved his legal residence to Florida last year, voted by mail in the last election. So did I. Twitter added a small, blue line to the bottom of each tweet: "! Get the facts about mail-in ballots". The link leads to numerous articles debunking Trump's claim. At OneZero, Will Oremus explains Twitter's decision making process. By Wednesday, Trump was threatening to "shut them down" and sign an Executive Order on Thursday.

Thursday morning, a leaked draft of the proposed executive order had been found, and Daphne Keller had color coded it to show which bits matter. In a fact-check of what power Trump actually has for Vox, Shirin Ghaffary quotes a tweet from Laurence Tribe, who calls Trump's threat "legally illiterate". Unlike Facebook, Twitter doesn't accept political ads that Trump can threaten to withdraw, and unlike Facebook and Google, Twitter is too small for an antitrust action. Plus, Trump is addicted to it. At the Washington Post, Tribe adds that Trump himself *is* violating the First Amendment by continuing to block people who criticize his views, a direct violation of a 2019 court order.

What Trump *can* do - and what he appears to intend to do - is push the FTC and Congress to tinker with Section 230 of the Communications Decency Act (1996), which protects online platforms from liability for third-party postings spreading lies and defamation. S230 is widely credited with having helped create the giant Internet businesses we have today; without liability protection, it's generally believed that everything from web comment boards to big social media platforms will become non-viable.

On Twitter, US Senator Ron Wyden (D-OR), one of S230's authors, explains what the law does and does not do. At the New York Times, Peter Baker and Daisuke Wakabayashi argue, I think correctly, that the person a Trump move to weaken S230 will hurt most is...Trump himself. Last month, the Washington Post put the count of Trump's "false or misleading claims" while in office at 18,000 - and the rate has grown over time. Probably most of them have been published on Twitter.

As the lawyer Carrie A. Goldberg points out on Twitter, there are two very different sets of issues surrounding S230. The victims she represents cannot sue the platforms where they met serial rapists who preyed on them or continue to tolerate the revenge porn their exes have posted. Compare that very real damage to the victimhood conservatives are claiming: that the social media platforms are biased against them and disproportionately censor their posts. Goldberg wants access to justice for the victims she represents, who are genuinely harmed, and warns against altering S230 for purposes such as "to protect the right to spread misinformation and conspiracy theory".

However, while Goldberg's focus on her own clients is understandable, Trump's desire to tweet unimpeded about mail-in ballots or shooting protesters is not trivial. We are going to need to separate the issue of how and whether S230 should be updated from Trump's personal behavior and his clearly escalating war with the social medium that helped raise him from joke to viable presidential candidate. The S230 question and how it's handled in Congress is important. Calling out Trump when he flouts clearly stated rules is important. Trump's attempt to wield his power for a personal grudge is important. Trump versus Twitter, which unfortunately is much easier to write about, is a sideshow.


Illustrations: Drunk parrot in a Putney garden (by Simon Bisson; used by permission).


May 22, 2020

The pod exclusion

Vintage_Gloritone_Model_27_Cathedral-Tombstone_Style_Vacuum_Tube_Radio,_AM_Band,_TRF,_Circa_1930_(14663394535).jpgThis week it became plain that another bit of the Internet is moving toward the kind of commercialization and control the Internet was supposed to make difficult in the first place: podcasts. The announcement that one of the two most popular podcasts, the Joe Rogan Experience, will move both new episodes and its 11-year back catalogue to Spotify exclusively in a $100 million multiyear deal is clearly a step change. Spotify has also been buying up podcast networks, and at the Verge, Ashley Carman suggests the podcast world will bifurcate into twin ecosystems, Spotify versus Everyone Else.

Like a few hundred million other people, I am an occasional Rogan listener, my interest piqued by a web forum mention of his interview with Jeff Novitzky, the investigator in the BALCO doping scandal. Other worth-the-time interviews from his prolific output include Lawrence Lessig, epidemiologist Michael Osterholm (particularly valuable because of its early March timing), Andrew Yang, and Bernie Sanders. Parts of Twitter despise him; Rogan certainly likes to book people (usually, but not always, men - for example Roseanne Barr) who are being pilloried in the news and jointly chew over their situation. Even his highest-profile interviewees rarely find, anywhere else, the two to three hours Rogan spends letting them talk quietly about their thinking. He draws them out by not challenging them much, and his predilection for conspiracy theories and interest in unproven ideas about nutrition make it advisable to be selective and look for countervailing critiques.

It's about 20 years since I first read about Dave Winer's early experiments in "audio blogging", renamed "podcast" after the 2001 release of the iPod eclipsed all previously existing MP3 players. The earliest podcasts tended to be the typical early-stage is-this-thing-on? that leads the unimaginative to dismiss the potential. But people with skills honed in radio were obviously going to do better, and within a few years (to take one niche example) the skeptical world was seeing weekly podcasts like Skepchick (beginning 2005) and The Pod Delusion (2009-2014). By 2014, podcast networks were forming, and an estimated 20% of Americans were listening to podcasts at least once a month.

That era's podcasts, although high-quality, were - and in some cases still are - produced by people seeking to educate or promote a cause, and were not generally money-making enterprises in their own right. The change seems to have begun around 2010, as the accelerating rise of smartphones made podcasts as accessible as radio for mobile listening. I didn't notice until late 2016, when the veteran screenwriter and former radio announcer and DJ Ken Levine announced on his daily 11-year-old blog that he was starting up Hollywood & Levine and I discovered the ongoing influx of professional comedians, actors, and journalists into podcasting. Notably, they all carried ads for the same companies - at the minimum, SquareSpace and Blue Apron. Like old-time radio, these minimal-production ads were read by the host, sometimes making the whole affair feel uncomfortably fake. Per the Wall Street Journal, US advertising revenue from podcasting was $678.7 million last year, up 42% over 2018.

No wonder advertisers like podcasts: users can block ads on a website or read blog postings via RSS, but no matter how you listen to a podcast the ads remain in place, and if you, like most people, listen to podcasts (like radio) when your hands are occupied, you can't easily skip past them. For professional communicators, podcasts therefore provide direct access to revenues that blogging had begun to offer before it was subsumed by social media and targeted advertising.

The Rogan deal seems a watershed moment that will take all this to a new level. The key element really isn't the money, as impressive as it sounds at first glance; it's the exclusive licensing. Rogan built his massive audience by publishing his podcast in both video and audio formats widely on multiple platforms, primarily his own websites and YouTube; go to any streaming site and you're likely to find it listed. Now, his audience is big enough that Spotify apparently thinks that paying for exclusivity will net the company new subscribers. If you prefer downloads to streaming, however, you'll need a premium subscription. Rogan himself apparently thinks he will lose no control over his show; he distrusts YouTube's censorship.

At his blog on corporate competition, Matt Stoller proclaims that the Rogan deal means the death of independent podcasting. While I agree that podcasts circa 2017-2020 are in a state similar to the web in the 2000s, I don't agree this means the death of all independent podcasting - but it will be much harder for their creators to find audiences and revenues as Spotify becomes the primary gatekeeper. This is what happened with blogs between 2008 and 2015 as social media took over.

Both Carman's and Stoller's predictions are grim: that podcasts will go the way of today's web and become a vector for data collection and targeted advertising. Carman, however, imagines some survival for a privacy-protecting, open ecosystem of podcasts. I want to believe this. But, like blogging now, that ecosystem will likely have to find a new business model.


Illustrations: 1930s vacuum tube radio (via Joe Haupte).


April 2, 2020

Uncontrolled digital unlending

800px-Books_HD_(8314929977).jpg
The Internet has made many aspects of intellectual property contentious at the best of times. In this global public health emergency, it seems inarguable that some of them should be set aside. Who can seriously object to copying ventilator parts so they can be used to save lives in this crisis? Similarly, if there were ever a moment for scientific journals to open up access to all paywalled research on coronaviruses to aid scientists all over the world, this is it.

But what about book authors, the vast majority of whom make only modest sums from their writing? This week, National Public Radio set off a Twitter storm when it highlighted the Internet Archive's "National Emergency Library". On Twitter, authors demanded to know why NPR was promoting a "pirate site". One wrote, "They stole [my book]." Another called it "Flagrant and wilful stealing." Some didn't mind: "Thrilled there's 15 of my books". Longtime open access campaigner Cory Doctorow endorsed it.

The background: the Internet Archive's Open Library originally launched in 2006 with a plan to give every page of every book its own URL. Early last year, public conflict over the project built enough for net.wars to notice, when dozens of authors', creators', and publishers' organizations accused the site of mass copyright violation and demanded it cease distributing copyrighted works without permission.

The Internet Archive finds self-justification in a novel argument: that because the state of California has accepted it as a library it can buy and scan books and "lend" the digital copies without requiring explicit permission. On this basis, the Archive offers anyone two weeks to read any of the 1.4 million copyrighted books in its collection either online as images or downloaded as copy-protected Adobe Digital Editions. Meanwhile, the book is unavailable to others, who wait on a list, as in a physical library. The Archive's white paper by lawyers David Hansen and Kyle K. Courtney argues that this "controlled digital lending" is legal.

Enter the coronavirus. On the basis that the emergency has removed access to library books from both school kids and adults for teaching, research, scholarship, and "intellectual stimulation", the Archive is dropping the controls - "suspending waitlists" - and is presenting those 1.4 million books as the globally accessible National Emergency Library. "An opportunistic attack", the Association of American Publishers calls it.

The anger directed at the Archive has led it to revise its FAQ (Google Doc) and publish a blog posting. In both it explains that you can still only "borrow" a book for 14 days, but no waitlists means others can borrow it at the same time, and you can renew immediately if you want more time. The change will last until June 30, 2020 or the end of the US national emergency, whichever is later. It claims support "from across the library and educational communities". According to the FAQ, the collection includes very few current textbooks; it consists primarily of ordinary books published between 1922 and the early 2000s.

The Archive still justifies all this as "fair use" by saying it's what libraries do: buy (or accept as donations) and lend books. Outside the US, however, library lending pays authors a small but real royalty on those loans, payments the Archive ignores. For the National Writers Union, Edward Hasbrouck objects strenuously: besides not paying authors or publishers, the Archive takes no account of whether the works are still in print or available elsewhere in authorized digital editions. Authors who have updated digital editions specifically for the current crisis have no way to annotate the holdings to redirect people. Authors *can* opt out - but opt-out is the opposite of how copyright law works. "Do librarians and archivists really want to kick authors while our incomes are down?" he asks, pointing to the NWU's 2019 explanation of why CDL is a harmful divergence from traditional library lending. Instead, he suggests that public funds should be spent to purchase or license the books for public use.

Other objectors make similar points: many authors make very little in the first place; authors with new books, the result of years of work, are seeing promotional tours and paid speaking engagements collapse. Others' books are being delayed or canceled. Everyone else involved in the project is being paid - just not the people who created the works in the first place.

At the New Yorker, writer Jill Lepore again cites Courtney, who argues that in exigent circumstances libraries have "superpowers" that allow them to grant exceptional access "for research, scholarship, and study". This certainly seems a reason for libraries of scientific journal articles, like JSTOR, to open up their archives. But is the Archive's collection comparable?

Overall, it seems to me there are two separate issues. The first is the service itself - the unique legal claim, the service's poor image quality and typo-ridden uncorrected ebooks, and the refusal to engage with creators and publishers. The second - that it's an emergency stop-gap - is more defensible; no one expected the abrupt closure of libraries and schools. A digital service is ideally placed to fill the resulting gaps, and ensuring universal access to books should be part of our post-crisis efforts to rebuild with better resilience. For the first, however, the Internet Archive should engage with authors and publishers. The result could be a better service for all sides.


Illustrations: Books (Abhi Sharma, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

February 6, 2020

Mission creep

Haystack-Cora.png"We can't find the needles unless we collect the whole haystack," a character explains in the new play The Haystack, written by Al Blyth and in production at the Hampstead Theatre through March 7. The character is Hannah (Sarah Woodward), and she is director of a surveillance effort being coded and built by Neil (Oliver Johnstone) and Zef (Enyi Okoronkwo), familiarly geeky types whose preferred day-off activities are the cinema and the pub, rather than catching up on sleep and showers, as Hannah pointedly suggests. Zef has a girlfriend (and a "spank bank" of downloaded images) and is excited to work in "counter-terrorism". Neil is less certain, less socially comfortable, and, we eventually learn, more technically brilliant; he must come to grips with all three characteristics in his quest to save Cora (Rona Morison). Cue Fleabag: "This is a love story."

The play is framed by an encrypted chat between Neil and Denise, Cora's editor at the Guardian (Lucy Black). We know immediately from the technological checklist they run down in making contact that there has been a catastrophe, which we soon realize surrounds Cora. Even though we're unsure what it is, it's clear Neil is carrying a load of guilt, which the play explains in flashbacks.

As the action begins, Neil and Zef are waiting to start work as a task force seconded to Hannah's department to identify the source of a series of Ministry of Defence leaks that have led to press stories. She is unimpressed with their youth, attire, and casual attitude - they type madly while she issues instructions they've already read - but changes abruptly when they find the primary leaker in seconds. Two stories remain; because both bear Cora's byline she becomes their new target. Both like the look of her, but Neil is particularly smitten, and when a crisis overtakes her, he breaks every rule in the agency's book by grabbing a train to London, where, calling himself "Tom Flowers", he befriends her in a bar.

Neil's surveillance-informed "god mode" choices of Cora's favorite music, drinks, and food when he meets her recall the movie Groundhog Day, in which Phil (Bill Murray) slowly builds up, day by day, the perfect approach to the woman he hopes to seduce. In another cultural echo, the tense beginning is sufficiently reminiscent of the opening of Laura Poitras's film about Edward Snowden, Citizenfour, that I assumed Neil was calling from Moscow.

The haystack is required, Hannah explains at the beginning of Act Two, because the terrorist threat has changed from organized groups to home-grown "lone wolves", and threats can come from anywhere. Her department must know *everything* if it is to keep the nation safe. The lone-wolf theory is the one surveillance justification Blyth's characters don't chew over in the course of the play; for an evidence-based view, consult the VOX-Pol project. In a favorite moment, Neil and Hannah demonstrate the frustrating disconnect between technical reality and government targets. Neil correctly explains that terrorists are so rare that, given the UK's 66 million population, no matter how much you "improve" the system's detection rate it will still be swamped by false positives. Hannah, however, discovers he has nonetheless delivered: the false positive rate is 30% less! Her bosses are thrilled! Neil reacts like Alicia Florrick in The Good Wife after one of her morally uncomfortable wins.
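Neil's point is the classic base-rate problem, and the arithmetic is easy to sketch. The numbers below are illustrative assumptions, not figures from the play (only the 66 million population appears there): even a screen that catches 99% of real targets and wrongly flags just 0.1% of everyone else produces tens of thousands of false alarms for every genuine hit.

```python
# Base-rate arithmetic: why a rare-target screen drowns in false positives.
# All rates and the terrorist count are assumed for illustration.

population = 66_000_000      # UK population, per the column
terrorists = 100             # assumed: genuine targets in the population
detection_rate = 0.99        # assumed: flags 99% of real targets
false_positive_rate = 0.001  # assumed: wrongly flags 0.1% of innocents

true_alarms = terrorists * detection_rate
false_alarms = (population - terrorists) * false_positive_rate

# Chance that any one flagged person is actually a terrorist
precision = true_alarms / (true_alarms + false_alarms)

print(f"{false_alarms:,.0f} innocents flagged")
print(f"precision = {precision:.2%}")
```

With these assumed numbers, roughly 66,000 innocent people are flagged and fewer than two flags in a thousand point at a real target; even cutting the false-positive rate by Hannah's celebrated 30% leaves the system overwhelmingly wrong.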

Related: it is one of the great pleasures of The Haystack that its three female characters (out of a total of five) are smart, tough, self-reliant, ambitious, and good at their jobs.

The Haystack is impressively directed by Roxana Silbert. It isn't easy to make typing look interesting, but this play manages it, partly by the well-designed use of projections to show both the internal and external worlds they're seeing, and partly by carefully-staged quick cuts. In one section, cinema-style cross-cutting creates a montage that fast-forwards the action through six months of two key relationships.

Technically, The Haystack is impressive; Zef and Neil speak fluent Python, algorithms, and Bash scripts, and laugh realistically over a journalist's use of Hotmail and Word with no encryption ("I swear my dad has better infosec"), while the projections of their screens are plausible pieces of code, video games, media snippets, and maps. The production designers and Blyth, who has a degree in econometrics and a background as a research economist, have done well. There were just a few tiny nitpicks: Neil can't trace Cora's shut-down devices "without the passwords" (huh?); and although Neil and Zef also use Tor, at one point they use Firefox (maybe) and Google (doubtful). My companion leaned in: "They wouldn't use that." More startling, for me, the actors who play Neil and Zef pronounce "cache" as "cachet"; but this is the plaint of a sound-sensitive person. And that's it, for the play's 1:50 length (trust me; it flies by).

The result is an extraordinary mix of a well-plotted comic thriller that shows the personal and professional costs of both being watched and being the watcher. What's really remarkable is how many of the touchstone digital rights and policy issues Blyth manages to pack in. If you can, go see it, partly because it's a fine introduction to the debates around surveillance, but mostly because it's great entertainment.


Illustrations: Rona Morison, as Cora, in The Haystack.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

December 13, 2019

Becoming a science writer

JamesRandi-Florida2016.jpg
As an Association of British Science Writers board member, I occasionally speak to science PhD students and postdocs about science writing. Since the most recent of these excursions was just this week, I thought I'd summarize some of what I've said.

To trained scientists aiming to make the switch: you are starting from a more knowledgeable place than the arts graduates who mostly populate this field. You already know how to investigate and add to a complex field of study, have a body of knowledge from which to reliably evaluate new claims, and know the significant contributors to your field and adjacent ones. What you need to learn are basic journalism skills such as interviewing, identifying stories, pitching them to venues where they might fit, remaining on the right side of libel law, and journalistic ethics and culture. Your new deadlines will seem really short!

Figuring out what kind of help you need is where an organization like the ABSW (and its counterparts in other countries) can help, first by offering opportunities for networking with other science writers, and second by providing training and resources. ABSW maintains, for example, a page that includes some basics and links.

Besides that, if you put "So You Want to Be a Science Writer" into your favorite search engine, you will find many guides from reputable sources such as other science writers' associations and university programs. I particularly like Ivan Oransky's talk for the National Association of Science Writers, because he begins with "my first failures".

Every career path is idiosyncratic enough that no one can copy its specifics. I began my writing career by founding The Skeptic magazine in 1987. Through the skeptics, I met all sorts of people, including one who got me my first writing-related job as a temporary subeditor on a computer magazine. Within weeks, I knew the editors of all the other magazines on its floor, and began writing features for them. In 1991, when I got online and sent my first email, I decided to specialize in the Internet because it was obviously the future of communication. A friend advises that if you find a fast-moving field, there will always be people willing to pay you to explain it to them.

So: I self-published, networked early and often - I joined the ABSW as soon as I was qualified - and luckily landed on a green field at the beginning of a complex and far-reaching social, cultural, political, and technological revolution. Today's early-career science writers will have to work harder to build their own networks than in the early 1990s, when we all met regularly at press conferences and shows - but they have far greater reach than we had.

I have never had a job, so I can't tell people how to get one. I can, however, observe that if you focus solely on traditional media you will be aiming at a shrinking number of slots. Think more broadly about what science communication is, who does it, and in what context. The kind of journalism that used to be the sole province of newspapers and news magazines now has a home in NGOs, who also hire people who can do solid research, crunch data, and think creatively about new areas for investigation. You should also broaden your idea of "media" and "science communication". Few can be Robin Ince or Richard Wiseman, who combine comedy, magic, and science into sell-out shows, but everyone can work in non-traditional contexts in which to communicate science.

At the moment, commercial money is going into podcasts; people are building big followings for niche interests on YouTube and through self-publishing ebooks; and constant tweeters are important communicators, as botanist James Wong proves every day. Edward Hasbrouck, at the National Writers Union, has published solid advice on writing for digital formats: look to build revenue streams. The Internet offers many opportunities, but, as Hasbrouck writes, many are invisible to traditional publishing; as he also writes, traditional employment is just one of writers' many business models.

The big difficulty for trained academics is rethinking how you approach telling a story. Forget the academic structure of: 1) here is what I am going to say; 2) this is what I'm saying; 3) this is my summary of what I just said. Instead, when writing for the general public, put your most important findings first and tell your specific audience why they matter to *them*. Then show why they can have confidence in your claim by explaining your methods and how your findings fit into the rest of the relevant body of scientific knowledge. (Do not use net.wars as your model!)

Over time, you will probably want to branch out into other fields. Do not fear this; you know how to learn a complex field, and if you can learn one you can learn another.

It's inevitable that you will make some mistakes. When it happens, do your best to correct them, learn from how you made them, and avoid making the same one again.

Finally, a couple of other resources. My favorite book on writing is William Goldman's Adventures in the Screen Trade. He has solid advice for story structure no matter what you're writing. A handout I wrote for a blogging workshop for scientists (PDF) has some (I hope, useful) writing tips. Good luck!


Illustrations: Magician James Randi communicates science, Florida 2016.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 25, 2019

When we were

zittrain-cim-iphone.jpg
"These people changed the world," said Jeff Wilkins, looking out across a Columbus, Ohio ballroom filled with more than 400 people. "And they know it, and are proud of it."

At one time, all this was his.

Wilkins was talking about...CompuServe, which he co-founded in 1969. How does it happen, he asked, that more than 400 people show up to celebrate a company that hasn't really existed for the last 23 years? I can't say, but a group of people happier to see each other (and random outsiders) again would be hard to find. "This is the only reunion I go to," one woman said.

It's easy to forget - or to never have known - CompuServe's former importance. Circa 1993, where everyone's business cards and slides now display a Twitter handle, they showed a numbered CompuServe ID. My inclusion of mine (70007,5537) at the end of a Guardian article led a reader to complain that I should instead promote the small ISPs it would kill when broadband arrived. In 1994, Aerosmith released a single on CompuServe, the first time a major label tried online distribution. It probably took five hours to download.

In Wilkins' story, he was studying electrical engineering at the University of Arizona when his father-in-law asked for help with data processing for his new insurance company. Wilkins and fellow grad students Sandy Trevor, John Goltz, Larry Shelley, and Doug Chinnock soon relocated to Columbus. It was, Wilkins said, Shelley who suggested starting a time-sharing company - "or should I say cloud computing?" Wilkins quipped, to applause and cheers.

Yes, he should. Everything new is old again.

In time-sharing, the fledgling company competed with GE and IBM. The information service started in 1979, as a way to occupy the computers during the empty evenings when the businesses had gone home. For the next 20 years, CompuServers invented everything for themselves: "GO" navigation commands, commercial email (first customer: HJ Heinz), live chat ("CB Simulator"), news wires, online games and virtual worlds (partnering with Fujitsu on a graphical MUD), shopping... The now-ubiquitous GIF was the brainchild of Steve Wilhite (it's pronounced "JIF"). The legend of CompuServe inventions is kept alive by Sandy Trevor and Dave Eastburn, whose Nuvocom "software archeology" business holds archives that have backed expert defense against numerous patent claims on technologies that CompuServe provably pioneered.

A panel reminisced about the CIS shopping mall. "We had an online stockbroker before anyone else thought about it," one said. Another remembered a call asking for a 30-minute meeting from the then-CEO of the nationwide flowers delivery service FTD. "I was too busy." (The CEO was Meg Whitman.) For CompuServe's 25th anniversary, the mall's travel agency collaborated on a three-day cruise with, as invited guests, the film critic Roger Ebert, who disseminated his movie reviews through the service and hosted the "Ask Roger Ebert" section in the Movies Forum, and his wife, Chaz. "That may have been the peak."

Mall stores paid an annual fee; curation ensured there weren't too many of any one category of store. Banners advertising products were such a novelty at the time - and often the liveliest, most visually attractive thing on the page - that as many as 25% of viewers clicked on them. Today, Amazon takes a percentage of transactions instead. "If we could have had a universal shopping cart, like Amazon," lamented one, "what might have been?"

Well, what? Could CompuServe now be under threat of a government-mandated breakup to separate its social media business, search, cloud provider, and shopping? Both CompuServe and AOL, whose speed to embrace graphical interfaces and aggressive marketing led it to first outstrip and then buy and dismantle CompuServe in the 1990s, would have had to cannibalize their existing businesses. Used to profits from access fees, both resisted the Internet's monthly subscription model.

One veteran openly admitted how profoundly he underestimated the threat of the Internet after surveying the rickety infrastructure designed by/for academics and students. "I didn't think that the Internet could survive in the reality of a business..." Instead, the information services saw their competition as each other. A contemporary view of the challenges is visible in this 1995 interview with Barry Berkov, the vice-president in charge of CIS.

However, CompuServe's closed approach left no opening for individuals' self-expression. The rising Internet stars that followed, Geocities and MySpace, were all about that, as are today's social media.

So many shifts have changed social media since then: from topic-centered to person-centered forums, from proprietary to open to centralized, from dial-up modems to pervasive connections, the massive ramp-up of scale and, mobile-fueled, speed, along with the reconfiguration of business models and technical infrastructure. Some things have degraded: past postings on Twitter and Facebook are much harder to find, and unwanted noise is everywhere. CompuServe would have had to navigate each of those shifts without error. As we know now, they didn't make it.

And yet, for 20-odd years, a company of early 20-somethings 2,500 miles from Silicon Valley invented a prototype of today's world, at first unaware of the near-simultaneous first ARPAnet connection, the beginnings of the network they couldn't imagine would ever be trustworthy enough for businesses and governments to rely on. They may yet be proven right about that.

cis50-banner.jpg

Illustrations: Jonathan Zittrain's mockup of the CompuServe welcome screen (left, with thanks) next to today's iPhone showing how little things have changed; the reunion banner.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 11, 2019

The China syndrome

800px-The_Great_wall_-_by_Hao_Wei.jpgAbout five years ago, a friend commented that despite the early belief - promulgated by, among others, then-US president Bill Clinton and vice-president Al Gore - that the Internet would spread democracy around the world, so far the opposite seemed to be the case. I suggested perhaps it's like the rising sea level, where local results don't give the full picture.

Much longer ago, I remember wondering how Americans would react when large parts of the Internet were in Chinese. My friend shrugged. Why should they care? They don't have to read them.

This week's news shows that we may both have been wrong. The reality, as the veteran technology journalist Charles Arthur suggested in the Wednesday and Thursday editions of his weekday news digest, The Overspill, is that the Hong Kong protests are exposing and enabling the collision between China's censorship controls and Western standards for free speech, aided by companies anxious to access the Chinese market. We may have thought we were exporting the First Amendment, but it doesn't apply to non-government entities.

It's only relatively recently that it's become generally acknowledged that governments can harness the Internet themselves. In 2008, the New York Times thought there was a significant domestic backlash against China's censors; by 2018, the Times was admitting China's success, first in walling off its own edited version of the Internet, and second in building rival giant technology companies and speeding past the US in areas such as AI, smartphone payments, and media creation.

So, this week. On Saturday, Demos researcher Carl Miller documented an ongoing edit war at Wikipedia: 1,600 "tendentious" edits across 22 articles on topics such as Taiwan, Tiananmen Square, and the Dalai Lama to "systematically correct what [officials and academics from within China] argue are serious anti-Chinese biases endemic across Wikipedia".

On Sunday, the general manager of the Houston Rockets, an American professional basketball team, withdrew a tweet supporting the Hong Kong protesters after it caused an outcry in China. Who knew China was the largest international market for the National Basketball Association? On Tuesday, China responded that it wouldn't show NBA pre-season games, and Chinese fans may boycott the games scheduled for Shanghai. The NBA commissioner eventually released a statement saying the organization would not regulate what players or managers say. The Americanness of basketball: restored.

Also on Tuesday, Activision Blizzard suspended Chung Ng Wai, a professional player of the company's digital card game, Hearthstone, after he expressed support for the Hong Kong protesters in an official post-win interview; the company also fired the two interviewers. Chung's suspension is set to last for a year, and includes forfeiting his thousands of dollars of 2019 prize money. A group of the company's employees walked out in protest, and the gamer backlash against the company was such that the moderators briefly took the Blizzard subreddit private in order to control the flood of angry posts (it was reopened within a day). By Wednesday, EU-based Hearthstone gamers were beginning to consider mounting a denial-of-service attack against Blizzard by sending so many subject access requests under the General Data Protection Regulation that complying with the legal requirement to fulfill them would swamp the company's resources.

On Wednesday, numerous media outlets reported that in its latest iOS update Apple has removed the Taiwan flag emoji from the keyboard for users who have set their location to Hong Kong or Macau - you can still use the emoji, but the procedure for doing so is more elaborate. (We will save the rant about the uselessness of these unreadable blobs for another time.)

More seriously, also on Wednesday, the New York Times reported that Apple has withdrawn the HKmap.live app that Hong Kong protesters were using to track police, after China's state media accused the company of protecting the protesters.

Local versus global is a long-standing variety of net.war, dating back to the 1991 Amateur Action bulletin board case. At Stratechery, Ben Thompson discusses the China-US cultural clash, with particular reference to TikTok, the first Chinese company to reach a global market; a couple of weeks ago, the Guardian revealed the site's censorship policies.

Thompson argues that, "Attempts by China to leverage market access into self-censorship by U.S. companies should also be treated as trade violations that are subject to retaliation." Maybe. But American companies can't win at this game.

In her recent book, The Big Nine, Amy Webb discusses China's AI advantage as it pours resources and, above all, data into becoming the world leader via Baidu, Alibaba, and Tencent, which have grown to rival Google, Amazon, and Facebook, without ever needing to leave home. Beyond that, China has been spreading its influence by funding telecommunications infrastructure. The Belt and Road initiative has projects in 152 countries. In this, China is taking advantage of the present US administration's inward turn and worldwide loss of trust.

After reviewing the NBA's ultimate decision, Thompson writes, "I am increasingly convinced this is the point every company dealing with China will reach: what matters more, money or values?" The answer will always be money; whose values count will depend on which market they can least afford to alienate. This week is just a coincidental concatenation of early skirmishes; just wait for the Internet of Things.

Illustrations: The Great Wall of China (by Hao Wei, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 13, 2019

Purposeful dystopianism

Truman-Show-exist.pngA university comparative literature class on utopian fiction taught me this: all utopias are dystopias underneath. I was reminded of this at this week's Gikii, when someone noted the converse, that all dystopias contain within themselves the flaw that leads to their destruction. Of course, I also immediately thought of the bare patch on Smaug's chest in The Hobbit because at Gikii your law and technology come entangled with pop culture. (Write-ups of past years: 2018; 2016; 2014; 2013; 2008.)

Granted, as was pointed out to me, fictional utopias would have no dramatic conflict without dystopian underpinnings, just as dystopias would have none without their misfits plotting to overcome. But the context for this subdiscussion was the talk by Andres Guadamuz, which he began by locating "peak Cyber-utopianism" at 2006 to 2010, when Time magazine celebrated the power the Internet had brought each of us, Wikileaks was doing journalism, bitcoin was new, and social media appeared to have created the Arab Spring. "It looked like we could do anything." (Ah, youth.)

Since then, serially, every item on his list has disappointed. One startling statistic Guadamuz cited: streaming now creates more carbon emissions than airplanes. Streaming online video generates as much carbon dioxide per year as Belgium; bitcoin uses as much energy as Austria. By 2030, the Internet is projected to account for 20% of all energy consumption. Cue another memory, from 1995, when MIT Media Lab founder Nicholas Negroponte was feted for predicting in Being Digital that wired and wireless would switch places: broadcasting would move to the Internet's series of tubes, and historically wired connections such as the telephone network would become mobile and wireless. Meanwhile, all physical forms of information would become bits. No one then queried the sense of doing this. This week, the lab Negroponte was running then is in trouble, too. This has deep repercussions beyond any one institution.

Twenty-five years ago, in Tainted Truth, journalist Cynthia Crossen documented the extent to which funders get the research results they want. Successive generations of research have backed this up. What the Media Lab story tells us is that they also get the research they want - not just, as in the cases of Big Oil and Big Tobacco, the *specific* conclusions they want promoted but the research ecosystem. We have often told the story of how the Internet's origins as a cooperative have been coopted into a highly centralized system with central points of failure, a process Guadamuz this week called "cybercolonialism". Yet in focusing on the drivers of the commercial world we have paid insufficient attention to those driving the academic underpinnings that have defined today's technological world.

To be fair, fretting over centralization was the most mundane topic this week: presentations skittered through cultural appropriation via intellectual property law (Michael Dunford, on Disney's use of Māui), a case study of moderation in a Facebook group that crosses RuPaul and Twin Peaks fandom (Carolina Are), and a taxonomy of lying and deception intended to help decode deepfakes of all types (Andrea Matwyshyn and Miranda Mowbray).

It is especially hard for a non-lawyer to do justice to the discussions of how and whether data protection rights persist after death, led by Edina Harbinja, Lilian Edwards, Michael Veale, and Jef Ausloos. You can't libel the dead, they explained, because under common law, personal actions die with the person: your obligation not to lie about someone dies when they do. This conflicts with information rights that persist as your digital ghost: privacy versus property, a reinvention of "body" and "soul". The Internet is *so many* dystopias.

Centralization captured so much of my attention because it is ongoing and threatening. One example is the impending rollout of DNS-over-HTTPS. We need better security for the Internet's infrastructure, but DoH further concentrates centralized control. In his presentation, Derek MacAuley noted that individuals who need the kind of protection DoH is claimed to provide would do better to just use Tor. It, too, is not perfect, but it's here and it works. This adds one more to the many historical examples in which improving the working technology we already had would have spared us the level of control now exercised by the largest technology companies.

Centralization completely undermines the Internet's original purpose: to withstand outages, even from a bomb. Mozilla and Google surely know this. The third DoH partner, Cloudflare, the content delivery network in the middle, certainly does: when it goes down, as it did for 15 minutes in July, millions of websites become unreachable. The only sensible response is to increase resilience with multiple pathways. Instead, we have Facebook proposing to further entrench its central role in many people's lives with its nascent Libra cryptocurrency. "Well, *I*'m not going to use it" isn't an adequate response when in some countries Facebook effectively *is* the Internet.

So where are the flaws in our present Internet dystopias? We've suggested before that advertising saturation may be one; the fakery that runs all the way through the advertising stack is probably another. Government takeovers and pervasive surveillance provide motivation to rebuild alternative pathways. The built-in lack of security is, as ever, a growing threat. But the biggest flaw built into the centralized Internet may be this: boredom.


Illustrations: The Truman Show.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 2, 2019

Unfortunately recurring phenomena

JI-sunrise--2-20190107_071706.jpgIt's summer, and the current comprehensively bad news is all stuff we can do nothing about. So we're sweating the smaller stuff.

It's hard to know how seriously to take it, but US Senator Josh Hawley (R-MO) has introduced the Social Media Addiction Reduction Technology (SMART) Act, intended as a disruptor to the addictive aspects of social media design. *Deceptive* design - which figured in last week's widely criticized $5 billion FTC settlement with Facebook - is definitely wrong, and the dark patterns site has long provided a helpful guide to those practices. But the bill is too feature-specific (ban infinite scroll and autoplay) and fails to recognize that one size of addiction disruption cannot possibly fit all. Spending more than 30 minutes at a stretch reading Twitter may be a dangerous pastime for some but a business necessity for journalists, PR people - and Congressional aides.

A better approach might be to require sites to replay the first video someone chooses at regular intervals until they get sick of it and turn off the feed. This is about how I feel about the latest regular reiteration of the demand for back doors in encrypted messaging. The fact that every new home secretary - in this case, Priti Patel - calls for this suggests there's an ancient infestation in their office walls that needs to be found and doused with mathematics. Don't Patel and the rest of the Five Eyes realize the security services already have bulk device hacking?

Ever since Microsoft announced it was acquiring the software repository Github, it should have been obvious the community would soon be forced to change. And here it is: Microsoft is blocking developers in countries subject to US trade sanctions. The formerly seamless site supporting global collaboration and open source software is being fractured at the expense of individual PhD students, open source developers, and others who trusted it, and everyone who relies on the software they produce.

It's probably wrong to solely blame Microsoft; save some for the present US administration. Still, throughout Internet history the communities bought by corporate owners wind up destroyed: CompuServe, Geocities, Television without Pity, and endless others. More recently, Verizon, which bought Yahoo and AOL for its Oath subsidiary (now Verizon Media), de-porned Tumblr. People! Whenever the online community you call home gets sold to a large company it is time *right then* to begin building your own replacement. Large companies do not care about the community you built, and this is never gonna change.

Also never gonna change: software is forever, as I wrote in 2014, when Microsoft turned off life support for Windows XP. The future is living with old software installations that can't, or won't, be replaced. The truth of this resurfaced recently, when a survey by Spiceworks (PDF) found that a third of all businesses' networks include at least one computer running XP and 79% of all businesses are still running Windows 7, which dies in January. In the 1990s the installed base updated regularly because hardware was upgraded so rapidly. Now, a computer's lifespan exceeds the length of a software generation, and the accretion of applications and customization makes updating hazardous. If Microsoft refuses to support its old software, at least open it to third parties. Now *that* would be a law we could use.

The last few years have seen repeated news about the many ways that machine learning and AI discriminate against those with non-white skin, typically because of the biased datasets they rely on. The latest such story is startling: Wearables are less reliable in detecting the heart rate of people with darker skin. This is a "huh?" until you read that the devices use colored light and optical sensors to measure the volume of your blood in the vessels at your wrist. Hospital-grade monitors use infrared. Cheaper devices use green light, which melanin tends to absorb. I know it's not easy for people to keep up with everything, but the research on this dates to 1985. Can we stop doing the default white thing now?

Meanwhile, at the Barbican exhibit AI: More than Human...In a video, a small, medium-brown poodle turns his head toward the camera with a - you should excuse the anthropomorphism - distinct expression of "What the hell is this?" Then he turns back to the immediate provocation and tries again. This time, the Sony Aibo he's trying to interact with wags its tail, and the dog jumps back. The dog clearly knows the Aibo is not a real dog: it has no dog smell, and although it attempts a play bow and moves its head in vaguely canine fashion, it makes no attempt to smell his butt. The researcher begins gently stroking the Aibo's back. The dog jumps in the way. Even without a thought bubble you can see the injustice forming, "Hey! Real dog here! Pet *me*!"

In these two short minutes the dog perfectly models the human reaction to AI development: 1) what is that?; 2) will it play with me?; 3) this thing doesn't behave right; 4) it's taking my job!

Later, I see the Aibo slumped, apparently catatonic. Soon, a staffer strides through the crowd clutching a woke replacement.

If the dog could talk, it would be saying "#Fail".


Illustrations: Sunrise from the 30th floor.


July 26, 2019

Hypothetical risks

Great Hack - data connections.png"The problem isn't privacy," the cryptography pioneer Whitfield Diffie said recently. "It's corporate malfeasance."

This is obviously right. Viewed that way, when data profiteers claim that "privacy is no longer a social norm", as Facebook CEO Mark Zuckerberg did in 2010, the correct response is not to argue about privacy settings or plead with users to think again, but to find out if they've broken the law.

Diffie was not, but could have been, talking specifically about Facebook, which has blown up the news this week. The first case grabbed most of the headlines: the US Federal Trade Commission fined the company $5 billion. As critics complained, the fine was insignificant to a company whose Q2 2019 revenues were $16.9 billion and whose quarterly profits are approximately equal to the fine. Medium-term, such fines have done little to dent Facebook's share prices. Longer-term, as the cases continue to mount up...we'll see. Also this week, the US Department of Justice launched an antitrust investigation into Apple, Amazon, Alphabet (Google), and Facebook.

The FTC fine and ongoing restrictions have been a long time coming; EPIC executive director Marc Rotenberg has been arguing ever since the Cambridge Analytica scandal broke that Facebook had violated the terms of its 2011 settlement with the FTC.

If you needed background, this was also the week when Netflix released the documentary, The Great Hack, in which directors Karim Amer and Jehane Noujaim investigate the role Cambridge Analytica and Facebook played in the 2016 EU referendum and US presidential election votes. The documentary focuses primarily on three people: David Carroll, who mounted a legal action against Facebook to obtain his data; Brittany Kaiser, a director of Cambridge Analytica who testified against the company; and Carole Cadwalladr, who broke the story. In his review at the Guardian, Peter Bradwell notes that Carroll's experience shows it's harder to get your "voter profile" out of Facebook than from the Stasi, as per Timothy Garton Ash. (Also worth viewing: the 2006 movie The Lives of Others.)

Cadwalladr asks in her own piece about The Great Hack and in her 2019 TED talk, whether we can ever have free and fair elections again. It's a difficult question to answer because although it's clear from all these reports that the winning side of both the US and UK 2016 votes used Facebook and Cambridge Analytica's services, unless we can rerun these elections in a stack of alternative universes we can never pinpoint how much difference those services made. In a clip taken from the 2018 hearings on fake news, Damian Collins (Conservative, Folkestone and Hythe), the chair of the Digital, Culture, Media, and Sport Committee, asks Chris Wylie, a whistleblower who worked for Cambridge Analytica, that same question (The Great Hack, 00:25:51). Wylie's response: "When you're caught doping in the Olympics, there's not a debate about how much illegal drug you took or, well, he probably would have come in first, or, well, he only took half the amount, or - doesn't matter. If you're caught cheating, you lose your medal. Right? Because if we allow cheating in our democratic process, what about next time? What about the time after that? Right? You shouldn't win by cheating."

Later in the film (1:08:00), Kaiser, testifying to DCMS, sums up the problem this way: "The sole worth of Google and Facebook is the fact that they own and possess and hold and use the personal data from people all around the world." In this statement, she unknowingly confirms the prediction made by the veteran Australian privacy advocate Roger Clarke, who commented in a 2009 interview about his 2004 paper, Very Black "Little Black Books", warning about social networks and privacy: "The only logical business model is the value of consumers' data."

What he got wrong, he says now, was that he failed to appreciate the importance of micro-pricing, highlighted in 1999 by the economist Hal Varian. In his 2017 paper on the digital surveillance economy, Clarke explains the connection: large data profiles enable marketers to gauge the precise point at which buyers begin to resist and pitch their pricing just below it. With goods and services, this approach allows sellers to extract greater overall revenue from the market than pre-set pricing would; with politics, you're talking about a shift from public sector transparency to private sector black-box manipulation. Or, as someone puts it in The Great Hack, a "full-service propaganda machine". Load, aim at "persuadables", and set running.

Less noticed than either of these is the Securities and Exchange Commission settlement with Facebook, also announced this week. While the fine is relatively modest - a mere $100 million - the SEC has nailed the company's conflicting statements. On Twitter, Jason Kint has helpfully highlighted the SEC's statements laying out the case that Facebook knew in 2016 that it had sold Cambridge Analytica some of the data underlying the 30 million personality profiles CA had compiled - and then "misled" both the US Congress and its own investors. Besides the fine, the SEC has permanently enjoined Facebook from further violations of the laws it broke in continuing to refer to actual risks as "hypothetical". The mills of trust have been grinding exceeding slow; they may yet grind exceeding small.


Illustrations: Data connections in The Great Hack.


July 12, 2019

Public access

WestWing-Bartlet-campaign-phone.pngIn the fantasy TV show The West Wing, when fictional US president Jed Bartlet wants to make campaign phone calls, he departs the Oval Office for the "residence", a few feet away, to avoid confusing his official and political roles. In reality, even before the show began in 1999, the Internet was altering the boundaries between public and private; the show's end in 2006 coincided with the founding of Twitter, which is arguably completing the job.

The delineation of public and private is at the heart of a case filed in 2017 by seven Twitter users backed by the Knight First Amendment Institute against US president Donald Trump. Their contention: Trump violated the First Amendment by blocking them for responding to his tweets with criticism. That Trump is easily offended is not news. But, their lawyers argued, because Trump uses his Twitter account in his official capacity as well as for personal and campaign purposes, barring their access to his feed means effectively barring his critics from participating in policy. I liked their case. More important, lawyers liked their case; the plaintiffs cited many instances where Trump or members of his administration had characterized his tweets as official policy.

In May 2018, Trump lost in the Southern District of New York. This week, the US Court of Appeals for the Second Circuit unanimously upheld the lower court. Trump is perfectly free to block people from a personal account where he posts his golf scores as a private individual, but not from an account he uses for public policy announcements, however improvised and off-the-cuff they may be.

At The Volokh Conspiracy, Stuart Benjamin finds an unexplored tension between the government's ability to designate a space as a public forum and the fact that a privately-owned company sets the forum's rules. Here, as Lawrence Lessig showed in 1999, system design is everything. The government's lawyers contended that Twitter's lack of tools for account-holders leaves Trump with the sole option of blocking them. Benjamin's answer is: Trump didn't have to choose Twitter for his forum. True, but what other site would so reward his particular combination of impulsiveness and desperate need for self-promotion? A moderated blog, as Benjamin suggests, would surely have all the life sucked out of it by being ghost-written.

Trump's habit of posting comments that would get almost anyone else suspended or banned has been frequently documented - see for example Cory Scarola at Inverse in November 2016. In 2017, Jack Moore at GQ begged Twitter to delete his account to keep us all safer after a series of tweets in which Trump appeared to threaten North Korea with nuclear war. The site's policy team defended its decision not to delete the tweets on the grounds of "public interest". At the New York Times, Kara Swisher (heralding the piece on Twitter with the neat twist on Sartre, Hell is other tweeters) believes that the ruling will make a full-on Trump ban less likely.

Others have wondered whether the case gives Americans that Twitter has banned for racism and hate speech the right to demand readmission by claiming that they are being denied their First Amendment rights. Trump was already known to be trying to prove that social media sites are systemically biased towards banning far-right voices; those are the people he invited to the White House this week for a summit on social media.

It seems to me, however, that the judges in this case have correctly understood the difference between being banned from a public forum because of your own behavior and being banned because the government doesn't like your kind. The first can and does happen in every public space anywhere; as a privately-owned space, Twitter is free to make such decisions. But when the government decides to ban its critics, that is censorship, and the First Amendment is very clear about it. It's logical enough, therefore, to feel that the court was right.

Female politicians, however, probably already see the downside. Recently, Amnesty International highlighted the quantity and ferocity of abuse they get. No surprise that within a day the case was being cited by a Twitter user suing Alexandria Ocasio-Cortez for blocking him. How this case resolves will be important; we can't make soaking up abuse the price of political office, while the social media platforms are notoriously unresponsive to such complaints.

No one needs an account to read any Twitter user's unprotected tweets. Being banned costs the right to interact, not the right to read. But because many tweets turn into long threads of public discussion it makes sense that the judges viewed the plaintiffs' loss as significant. One consequence, though, is that the judgment conceptually changes Trump's account from a stream through an indivisible pool into a subcommunity with special rules. Simultaneously, the company says it will obscure - though not delete - tweets from verified accounts belonging to politicians and government officials with more than 100,000 followers that violate its terms and conditions. I like this compromise: yes, we need to know if leaders are lighting matches, but it shouldn't be too easy to pour gasoline on them - and we should be able to talk (non-abusively) back.


Illustrations:The West Wing's Jed Bartlet making phone calls from the residence.


July 5, 2019

Legal friction

ny-public-library-lions.JPGWe normally think of the Internet Archive, founded in 1996 by Brewster Kahle, as doing good things. With a mission of "universal access to all knowledge", it archives the web (including many of my otherwise lost articles), archives TV news footage and live concerts, and provides access to all sorts of information that would otherwise be lost.

Equally, authors usually love libraries. Most grew up burrowed into the stacks, and for many, libraries are an important channel to a wider public. A key element of the Archive's position in what follows rests on the 2007 California decision officially recognizing it as a library.

Early this year, myriad authors and publishers organizations - including the UK's Society of Authors and the US's Authors Guild - issued a joint statement attacking the Archive's Open Library project. In this "controlled digital lending" program, borrowers - anyone, via an Archive account - get two weeks to read ebooks, either online in the Archive's book reader or offline in a copy-protected format in Adobe Digital Editions.

What offends rights holders is that unlike Project Gutenberg, which offers downloadable copies of works in the public domain, Open Library includes still-copyrighted modern works (including net.wars-the-book). The Archive believes this is legal "fair use".

You may, like me, wonder if the Archive is right. The few precedents are mixed. In 2000, "My MP3.com" let users stream CDs after proving ownership of a physical copy by inserting it in their CD drive. In the resulting lawsuit the court ruled MP3.com's database of digitized CDs an infringement, partly because it was a commercial, ad-supported service. Years later, Amazon does practically the same thing.

In 2004, Google Books began scanning libraries' book and magazine collections into a giant database that allows searchers to view scraps of interior text. In 2015, publishers lost their lawsuit. Google is a commercial company - but Google Books carries no ads (though it presumably does collect user data), and directs users to source copies from libraries or booksellers.

A third precedent, cited by the Authors Guild, is Capitol Records v. ReDigi. In that case, rulings have so far held that ReDigi's resale process, which transfers music purchased on iTunes from old to new owners, means making new, and therefore infringing, copies. Since the same is true of everything from cochlear implants to reading a web page, this reasoning seems wrong.

Cambridge University Press v. Patton, filed in 2008 and still ongoing, has three publishers suing Georgia State University over its e-reserves system, which loans out course readings on CDL-type terms. In 2012, the district court ruled that most of this is fair use; appeal courts have so far mostly upheld that view.

The Georgia case is cited by David R. Hansen and Kyle K. Courtney in their white paper defending CDL. They argue that CDL, as "format-shifting", is fair use because it replicates existing library lending. In their view, authors don't lose income because the libraries already bought copies, and it's all covered by fair use, no permission needed. One section of their paper focuses on helping libraries assess and minimize their legal risk. They concede their analysis is US-only.

From a geek standpoint, deliberately introducing friction into ebook lending in order to replicate the time it takes the book to find its way back into the stacks (for example) is silly, like requiring a guy with a flag on a horse to escort every motor car. And it doesn't really resolve the authors' main complaints: lack of permission and no payment. Of equal concern ought to be user complaints about zillions of OCR errors. The Authors Guild's complaint that saved ebooks "can be made readable by stripping DRM protection" is true, but it's just as true of publishers' own DRM - so, wash.

To this non-lawyer, the white paper appears to make a reasonable case - for the US, where libraries enjoy wider fair use protection and there is no public lending right, which elsewhere pays royalties on borrowing that collection societies distribute proportionately to authors.

Outside the US, the Archive is probably screwed if anyone gets around to bringing a case. In the UK, for example, the "fair dealing" exceptions allowed in the Copyright, Designs, and Patents Act (1988) are narrowly limited to "private study", and unless CDL is limited to students and researchers, its claim to legality appears much weaker.

The Authors Guild also argues that scanning in physical copies allows libraries to evade paying for library ebook licenses. The Guild's preference, extended collective licensing, has collection societies negotiating on behalf of authors. So that's at least two possible solutions to compensation: ECL, PLR.

Differentiating the Archive from commercial companies seems to me fair, even though the ask-forgiveness-not-permission attitude so pervasive in Silicon Valley is annoying. No author wants to be an indistinguishable bunch of bits in an undifferentiated giant pool of knowledge, but we all consume far more knowledge than we create. How little authors earn in general is sad, but not a legal argument: no one lied to us or forced us into the profession at gunpoint. Ebook lending is a tiny part of the challenges facing anyone in the profession now, and my best guess is that whatever the courts decide now eventually this dispute will just seem quaint.

Illustrations: New York Public Library (via pfhlai at Wikimedia).


May 3, 2019

Reopening the source

SphericalCow2.gif
"There is a disruption coming." Words of doom?

Several months back we discussed Michael Salmony's fear that the Internet is about to destroy science. Salmony reminded me that his comments came in a talk on the virtues of the open economy, and then noted the following dangers:

- Current quality-assurance methods (peer-review, quality editing, fact checking etc) are being undermined. Thus potentially leading to an avalanche of attention-seeking open garbage drowning out the quality research;
- The excellent high-minded ideals (breaking the hold of the big controllers, making all knowledge freely accessible etc) of OA are now being subverted by models that actually ask authors (or their funders) to spend thousands of dollars per article to get it "openly accessible". Thus again privileging the rich and well connected.

The University of Bath associate professor Joanna Bryson rather agreed with Salmony, also citing the importance of peer review. So I stipulate: yes, peer review is crucial for doing good science.

In a posting deploring the death of the monograph, Bryson notes that, like other forms of publishing, many academic publishers are small and struggle for sustainability. She also points to a Dutch presentation arguing that open access costs more.

Since she, as an academic researcher, has skin in this game, we have to give weight to her thoughts. However, many researchers dissent, arguing that academic publishers like Elsevier and Springer profit from an unfair and unsustainable business model. Either way, an existential crisis is rolling toward academic publishers like a giant spherical concrete cow.

So to yesterday's session on the ten-year future of research, hosted by European Health Forum Gastein and sponsored by Elsevier. The quote of doom we began with was voiced there.

The focal point was a report (PDF), the result of a study by Elsevier and Ipsos MORI. Their efforts eventually generated three scenarios: 1) "brave open world", in which open access publishing, collaboration, and extensive data sharing rule; 2) "tech titans", in which technology companies dominate research; 3) "Eastern ascendance", in which China leads. The most likely is a mix of the three. This is where several of us agreed that the mix is already our present. We surmised, cattily, that this was more an event looking for a solution to Elsevier's future. That remains cloudy.

The rest does not. For the last year I've been listening to discussions about how academic work can find greater and more meaningful impact. While journal publication remains essential for promotions and tenure within academia, funders increasingly demand that research produce new government policies, change public conversations, and provide fundamentally more effective practice.

Similarly, is there any doubt that China is leading innovation in areas like AI? The country is rising fast. As for "tech titans", while there's no doubt that these companies lead in some fields, it's not clear that they are following the lead of the great 1960s and 1970s corporate labs like Bell Labs, Xerox PARC, and IBM Watson, which invested in fundamental research with no connection to products. While Google, Facebook, and Microsoft researchers do impressive work, Google is the only one publicly showing off research that seems unrelated to its core business.

So how long is ten years? A long time in technology, sure: in 2009, Twitter, Android, and "there's an app for that" were new(ish), the iPad was a year from release, smartphones got GPS, netbooks were rising, and 3D was poised to change the world of cinema. "The academic world is very conservative," someone at my table said. "Not much can change in ten years."

Despite Sci-Hub, the push to open access is not just another Internet plot to make everything free. Much of it is coming from academics, funders, librarians, and administrators. In the last year, the University of California dropped Elsevier rather than modify its open access policy or pay extra for the privilege of keeping it. Research consortia in Sweden, Germany, and Hungary have had similar disputes; a group of Norwegian institutions recently agreed to pay €9 million a year to cover access to Elsevier's journals and the publishing costs of its expected 2,000 articles.

What is slow to change is incentives within academia. Rising scholars are judged much as they were 50 years ago: how much have they published, and where? The conflict means that younger researchers whose work has immediate consequences find themselves forced to choose between prioritizing career management - via journal publication - or more immediately effective efforts such as training workshops and newspaper coverage to alert practitioners in the field of new problems and solutions. Choosing the latter may help tens of thousands of people - at a cost of a "You haven't published" stall to their careers. Equally difficult, today's structure of departments and journals is poorly suited for the increasing range of multi-, inter-, and trans-disciplinary research. Where such projects can find publication remains a conundrum.

All of that is without considering other misplaced or perverse incentives in the present system: novel ideas struggle to emerge; replication largely does not happen or fails; and journal impact factors are overvalued. The Internet has opened up beneficial change: Ben Goldacre's COMPare project identifies dubious practices such as outcome switching and misreported findings; there is a push to publish data sets; and preprint servers give much wider access to new work. It may not be all good; but it certainly isn't all bad.


Illustrations: A spherical cow jumping over the moon (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 8, 2019

Pivot

parliament-whereszuck.jpgWould you buy a used social media platform from this man?

"As I think about the future of the internet, I believe a privacy-focused communications platform will become even more important than today's open platform," Mark Zuckerberg wrote this week at the Facebook blog, also summarized at the Guardian.

Zuckerberg goes on to compare Facebook and Instagram to "the digital equivalent of a town square".

So many errors, so little time. Neither Facebook nor Instagram is open. "Open information," Rufus Pollock explained last year in The Open Revolution, "...can be universally and freely used, built upon, and shared." Whereas, "In a Closed world information is exclusively 'owned' and controlled, its attendant wealth and power more and more concentrated".

The alphabet is open. I do not need a license from the Oxford English Dictionary to form words. The web is open (because Tim Berners-Lee made it so). One of the first social media, Usenet, is open. Particularly in the early 1990s, Usenet really was the Internet's town square.

*Facebook* is *closed*.

Sure, anyone can post - but only in the ways that Facebook permits. Running apps requires Facebook's authorization, and if Facebook makes changes, SOL. Had Zuckerberg said - as some have paraphrased him - "town hall", he'd still be wrong, but less so: even small town halls have metal detectors and guards to control what happens inside. However, they're publicly owned. Under the structure Zuckerberg devised when the company went public, even the shareholders have little control over Facebook's business decisions.

So, now: this week Zuckerberg announced a seeming change of direction for the service. Slate, the Guardian, and the Washington Post all find skepticism among privacy advocates that Facebook can change in any fundamental way, and they wonder about the impact on Facebook's business model of the shift to focusing on secure private messaging instead of the more public newsfeed. Facebook's former chief security officer Alex Stamos calls the announcement a "judo move" that both removes the privacy complaints (Facebook now can't read what you say to your friends) and allows the site to say that complaints about circulating fake news and terrorist content are outside its control (Facebook now can't read what you say to your friends *and* doesn't keep the data).

But here's the thing. Facebook is still proposing to unify the WhatsApp, Instagram, and Facebook user databases. Zuckerberg's stated intention is to build a single unified secure messaging system. In fact, as Alex Hern writes at the Guardian, that's the one concrete action Zuckerberg has committed to, and that was announced back in January, to immediate privacy queries from the EU.

The point that can't be stressed enough is that although Facebook is trading away the ability to look at the content of what people post, it will retain oversight of all the traffic data. We have known for decades that metadata is even more revealing than content; I remember the late Caspar Bowden explaining the issues in detail in 1999. Even if Facebook's promise to vape the messages includes keeping no copies for itself (a stretch, given that we found out in 2013 that the company keeps every character you type), it will be able to keep its insights into the connections between people and the conclusions it draws from them. Or, as Hern also writes, Zuckerberg "is offering privacy on Facebook, but not necessarily privacy from Facebook".

Siva Vaidhyanathan, author of Antisocial Media, seems to be the first to get this, and to point out that Facebook's supposed "pivot" is really just a decision to become more dominant, like China's WeChat. WeChat thoroughly dominates Chinese life: it provides messaging, payments, and a de facto identity system. This is where Vaidhyanathan believes Facebook wants to go, and if encrypting messages means it can't compete in China...well, WeChat already owns that market anyway. Let Google get the bad press.

Facebook is making a tradeoff. The merged database will give it the ability to inspect redundancy - are these two people connected on all three services or just one? - and therefore far greater certainty about which contacts really matter and to whom. The social graph that emerges from this exercise will be smaller because duplicates will have been merged, but far more accurate. The "pivot" does, however, look like it might enable Facebook to wriggle out from under some of its numerous problems - uh, "challenges". The calls for regulation and content moderation focus on the newsfeed. "We have no way to see the content people write privately to each other" ends both discussions, quite possibly along with any liability Facebook might have if the EU's copyright reform package passes with Article 11 (the "link tax") intact.

Even calls that the company should be broken up - appropriate enough, since the EU only approved Facebook's acquisition of WhatsApp when the company swore that merging the two databases was technically impossible - may founder against a unified database. Plus, as we know from this week's revelations, the politicians calling for regulation depend on it for re-election, and in private they accommodate it, as Carole Cadwalladr and Duncan Campbell write at the Guardian and Bill Goodwin writes at Computer Weekly.

Overall, then, no real change.


Illustrations: The international Parliamentary committee, with Mark Zuckerberg's empty seat.


February 14, 2019

Copywrong

Anti-copyright.svg.pngJust a couple of weeks ago it looked like the EU's proposed reform of the Copyright Directive, last updated in 2001, was going to run out of time. In the last three days, it's revived, and it's heading straight for us. As Joe McNamee, the outgoing director of European Digital Rights (EDRi), said last year, the EU seems bent on regulating Facebook and Google by creating an Internet in which *only* Facebook and Google can operate.

We'll start with copyright. As previously noted, the EU's proposed reforms include two particularly contentious clauses: Article 11, the "link tax", which would require anyone using more than one or two words to link to a news article elsewhere to get a license, and Article 13, the "upload filter", which requires any site older than three years *or* earning more than €10,000,000 a year in revenue to ensure that no user posts anything that violates copyright; sites that allow user-generated content must make "best efforts" to buy licenses for anything users might post. So even a tiny site - like net.wars, which is 13 years old - that hosted comments would logically be required to license all copyrighted content in the known universe, just in case. In reviewing the situation at TechDirt, Mike Masnick writes, "If this becomes law, I'm not sure Techdirt can continue publishing in the EU." Article 13, he continues, makes hosting comments impossible, and Article 11 makes their own posts untenable. What's left?

Julia Reda-wg-2016-06-24-cropped.jpgTo these known evils, the German Pirate Party MEP Julia Reda finds that the final text adds two more: limitations on text and data mining that allow rights holders to opt out under most circumstances, and - wouldn't you know it? - the removal of provisions that would have granted authors the right to proportionate remuneration (that is, royalties) instead of continuing to allow all-rights buy-out contracts. Many younger writers, particularly in journalism, now have no idea that as recently as 1990 limited contracts were the norm; the ability to resell and exploit their own past work was one reason the writers of the mid-20th century made much better livings than their counterparts do now. Communia, an association of digital rights organizations, writes that at least this final text can't get any *worse*.

Well, I can hear Brexiteers cry, what do you care? We'll be out soon. No, we won't - at least, we won't be out from under the Copyright Directive. For one thing, the final plenary vote is expected in March or April - before the May European Parliament general election. The good side of this is that UK MEPs will have a vote, and can be lobbied to use that vote wisely; from all accounts the present agreed final text settled differences between France and Germany, against which the UK could provide some balance. The bad side is that the UK, which relies heavily on exports of intellectual property, has rarely shown any signs of favoring either Internet users or creators against the demands of rights holders. The ugly side is that presuming this thing is passed before the UK brexits - assuming that happens - it will be the law of the land until or unless the British Parliament can be persuaded to amend it. And the direction of travel in copyright law for the last 50 years has very much been toward "harmonization".

Plus, the UK never seems to be satisfied with the amount of material its various systems are blocking, as the Open Rights Group documented this week. As if the blocks in place weren't enough, Rebecca Hill writes at the Register: under the just-passed Counter-Terrorism and Border Security Act, clicking on a link to information likely to be useful to a person committing or preparing an act of terrorism is an offense. It seems to me that could be almost anything - automotive listings on eBay, chemistry textbooks, a *dictionary*.

What's infuriating about the copyright situation in particular is that no one appears to be asking the question that really matters, which is: what is the problem we're trying to solve? If the problem is how the news media will survive, this week's Cairncross Review, intended to study that exact problem, makes some suggestions. Like them or loathe them, they involve oversight and funding; none involve changing copyright law or closing down the Internet.

Similarly, if the problem is market dominance, try anti-competition law. If the problem is the increasing difficulty of making a living as an author or creator, improve their rights under contract law - the very provisions that Reda notes have been removed. And, finally, if the problem is the future of democracy in a world where two companies are responsible for poisoning politics, then delving into campaign finances, voter rights, and systemic social inequality pays dividends. None of the many problems we have with Facebook and Google are actually issues that tightening copyright law solves - nor is their role in spreading anti-science, such as this, just in from Twitter, anti-vaccination ads targeted at pregnant women.

All of those are problems we really do need to work on. Instead, the only problem copyright reform appears to be trying to solve is, "How can we make rights holders happier?" That may be *a* problem, but it's far less worth solving.


Illustrations: Anti-copyright symbol (via Wikimedia); Julia Reda MEP in 2016.


January 25, 2019

Reversal of fortunes

Seabees_remove_corroded_zinc_anodes_from_an_undersea_cable._(28073762161).jpgIt may seem unfair to keep busting on the explosion of the Internet's origin myths, but documenting what happens to the beliefs surrounding the beginning of a new technology may help foster more rational thinking next time.

Today's two cherished early-Internet beliefs: 1) the Internet was designed to withstand a bomb outage; 2) the Internet is impossible to censor. The first of these is true - the history books are clear on this - but it was taken to mean that the Internet could withstand all damage. That's just not true; it can certainly be badly disrupted on a national or regional basis.

While the Internet was new, a favorite route to overload was introducing a new application - the web, for example. Around 1996, Peter Dawe, the founder of one of Britain's first two ISPs, predicted that video would kill the Internet. For "kill" read "slow down horribly". Bear in mind that this was BB - before broadband - so an 11MB video file took hours to trickle in. Stream? Ha!

In 1995, Bob Metcalfe, the co-inventor of ethernet, predicted that the Internet would start to collapse in 1996. In 1997, he literally ate his column as penance for being wrong.

It was weird: with one part of their brains, people were staking their lives on online businesses, yet with another they believed the Internet was perennially vulnerable. My favorite was Simson Garfinkel, writing "Fifty Ways to Kill the Internet" for Wired in 1997, who nailed the best killswitch: "Buy ten backhoes." Underneath all the rhetoric about virtuality, the Internet remains a physical network of cables. You'd probably need more than ten backhoes today, but it's still a finite number.

People have given up these worries even though parts of the Internet are actually being blacked out - by governments. In the acute form, either access providers (ISPs, mobile networks) are ordered to shut down, or the government orders blocks on widely used social media that people use to distribute news (and false news) and coordinate action, such as Twitter, Facebook, or WhatsApp.

In 2018, governments shutting down "the Internet" became an increasingly frequent fixture of the fortnightly Open Society Foundation Information Program News Digest. The list for 2018 is long, as Access Now says. At New America, Justin Sherman predicts that 2019 will see a rise in Internet blackouts - and I doubt he'll have to eat his pixels. The Democratic Republic of Congo was first, on January 1, soon followed by Zimbabwe.

There's general agreement that Internet shutdowns are bad for both democracy and the economy. In a 2016 study, the Brookings Institution estimated that Internet shutdowns cost countries $2.4 billion in 2015 (PDF), an amount that surely rises as the Internet becomes more deeply embedded in our infrastructure.

But the less-worse thing about the acute form is that it's visible to both internal and external actors. The chronic form, the second of our "things they thought couldn't be done in 1993", is long-term and less visible, and for that reason is the more dangerous of the two. The notion that censoring the Internet is impossible was best expressed by EFF co-founder John Gilmore in 1993: "The Internet perceives censorship as damage and routes around it". This was never a happy anthropomorphization of a computer network; more correctly, *people* on the Internet... Even today, ejected Twitterers head to Gab; disaffected 4chan users create 8chan. But "routing around the damage" only works as long as open protocols permit anyone to build a new service. No one suggests that *Facebook* regards censorship as damage and routes around it; instead, Facebook applies unaccountable censorship we don't see or understand. The shift from hundreds of dial-up ISPs to a handful of broadband providers is part of this problem: centralization.

The country that has most publicly and comprehensively defied Gilmore's aphorism is China; in the New York Times, Raymond Zhong recently traced its strategy. At Technology Review, James Griffiths reports that the country is beginning to export its censorship via malware infestations and DDoS attacks, while Abdi Latif Dahir writes at Quartz that it is also exporting digital surveillance to African countries such as Morocco, Egypt, and Libya inside the infrastructure it's helping them build as part of its digital Silk Road.

The Guardian offers a guide to what Internet use is like in Russia, Cuba, India, and China. Additional insight comes from Chinese professor Bai Tongdong, who complains in the South China Morning Post that Westerners opposing Google's Dragonfly censored search engine project do not understand the "paternalism" they are displaying in "deciding the fate of Chinese Internet users" without considering their opinion.

Mini-shutdowns are endemic in democratic countries: unfair copyright takedowns, the UK's web blocking, and EU law limiting hate speech. "From being the colonizers of cyberspace, Americans are now being colonized by the standards adopted in Brussels and Berlin," Jacob Mchangama complains at Quillette.

In the mid-1990s, Americans could believe they were exporting the First Amendment. Another EFF co-founder, John Perry Barlow, was more right than he'd have liked when, in a January 1992 column for Communications of the ACM, he called the US First Amendment "a local ordinance". That is much less true of the control being built into our infrastructure now.


Illustrations: The old threat model: Seabees remove corroded zinc anodes from an undersea cable (via Wikimedia, from the US Navy site.)


January 17, 2019

Misforgotten

European_Court_of_Justice_(ECJ)_in_Luxembourg_with_flags.jpg"It's amazing. We're all just sitting here having lunch like nothing's happening, but..." This was on Tuesday, as the British Parliament was getting ready to vote down the Brexit deal. This is definitely a form of privilege, but it's hard to say whether it's confidence born of knowing your nation's democracy is 900 years old, or aristocrats-on-the-verge denial as when World War I or the US Civil War was breaking out.

Either way, it's a reminder that for many people historical events proceed in the background while they're trying to get lunch or take the kids to school. This despite the fact that all of us in the UK and the US are currently hostages to a paralyzed government. The only winner in either case is the politics of disgust, and the resulting damage will be felt for decades. Meanwhile, everything else is overshadowed.

One of the more interesting developments of the past digital week is the European advocate general's preliminary opinion that the right to be forgotten, part of data protection law, should not be enforceable outside the EU. In other words, Google, which brought the case, should not have to prevent access to material to those mounting searches from the rest of the world. The European Court of Justice - one of the things British prime minister Theresa May has most wanted the UK to leave behind since her days as Home Secretary - typically follows these preliminary opinions.

The right to be forgotten is one piece of a wider dispute that one could characterize as the Internet versus national jurisdiction. The broader debate includes who gets access to data stored in another country, who gets to crack crypto, and who gets to spy on whose citizens.

This particular story began in France, where the Commission Nationale de l'Informatique et des Libertés (CNIL), the French data protection regulator, fined Google €100,000 for selectively removing a particular person's name from its search results on just its French site. CNIL argued that instead the company should delink it worldwide. You can see their point: otherwise, anyone can bypass the removal by switching to .com or .co.jp. On the other hand, following that logic imposes EU law on other countries, overriding protections such as the US First Amendment. Americans in particular tend to regard the right to be forgotten with the sort of angry horror of Lady Bracknell contemplating a handbag. Google applied to the European Court of Justice to override CNIL and vacate the fine.

A group of eight digital rights NGOs, led by Article 19 and including Derechos Digitales, the Center for Democracy and Technology, the Clinique d'intérêt public et de politique d'Internet du Canada (CIPPIC), the Electronic Frontier Foundation, Human Rights Watch, Open Net Korea, and Pen International, welcomed the ruling. Many others would certainly agree.

The arguments about jurisdiction and censorship were, like so much else, foreseen early. By 1991 or thereabouts, the question of whether the Internet would be open everywhere or devolve to lowest-common-denominator censorship was frequently debated, particularly after the United States v. Thomas case that featured a clash of community standards between Tennessee and California. If you say that every country has the right to impose its standards on the rest of the world, it's unclear what would be left other than a few Disney characters and some cat videos.

France has figured in several of these disputes: in (I think) the first international case of this kind, in 2000, it was a French court that ruled that the sale of Nazi memorabilia on Yahoo!'s site was illegal; after trying to argue that France was trying to rule over something it could not control, Yahoo! banned the sales on its French auction site and then, eventually, worldwide.

Data protection law gave these debates a new and practical twist. The origins of this particular case go back to 2014, when the European Court of Justice ruled in Google Spain v AEPD and Mario Costeja González that search engines must remove links to web pages that turn up in a name search and contain information that is irrelevant, inadequate, or out of date. The ruling arguably sought to balance free expression against the imbalance of power between individuals and the corporations publishing information about them. Finding this kind of difficult balance, the law scholar Judith Rauhofer argued at that year's Computers, Freedom, and Privacy, is what courts *do*. The court required search engines to remove from the results that show up in a *name* search the link to the original material; it did not require the original websites to remove it entirely or require the link's removal from other search results. The ruling removed, if you like, a specific type of power amplification, but not the signal.

How far the search engines have to go is the question the ECJ is now trying to settle. This is one of those cases where no one gets everything they want because the perfect is the enemy of the good. The people who want their past histories delinked from their names don't get a complete solution, and no one country gets to decide what people in other countries can see. Unfortunately, the real winner appears to be geofencing, which everyone hates.


Illustrations:


December 28, 2018

Opening the source

Participants_at_Budapest_meeting,_December_1,_2001.jpegRecently, Michael Salmony, who has appeared here before, seemed horrified to discover open access, the movement for publishing scientific research so it's freely accessible to the public (who usually paid for it) instead of closed to subscribers. In an email, he wrote, "...looks like the Internet is now going to destroy science as well".

This is not my view.

The idea about science that I grew up with was that scientists building on and reviewing each other's work is necessary for good science, a self-correcting process that depends on being able to critique and replicate each other's work. So the question we should ask is: does the business model of traditional publishing support that process? Are there other models that would support that process better? Science spawns businesses, serves businesses, and may even be a business itself, but good-quality science first serves the public interest.

There are three separate issues here. The first is the process of science itself: how best to fund, support, and nurture it. The second is the business model of scientific *publishing*. The third, which relates to both of those, is how to combat abuse. Obviously, they're interlinked.

The second of these is the one that resonates with copyright battles past. Salmony: "OA reminds me warmly of Napster disrupting music publishing, but in the end iTunes (another commercial, quality controlled) model has won."

iTunes and the music industry are not the right models. No one dies of lack of access to Lady Gaga's latest hit. People *have* died through being unable to afford access to published research.

Plus, the push is coming from an entirely different direction. Napster specifically and file-sharing generally were created by young, anti-establishment independents who coded copyright bypasses because they could. The open access movement began with a statement of principles codified by university research types - mavericks, sure, but representing the Public Library of Science, Open Society Institute, BioMed Central, and universities in Montreal, London, and Southampton. My first contact with the concept was circa 1993, when World Health Organization staffer Christopher Zielinski raised the deep injustice of pricing research access out of developing countries' reach.

Sci-Hub is a symptom, not a cause. Another symptom: several months ago, 60 German universities canceled their subscriptions to Elsevier journals to protest the high fees and restricted access. Many scientists are offended by the journals' expectation that they will write papers for free and donate their time for peer review while then being charged to read the published results. One sign of that resentment is that Sci-Hub builds its giant cache via educational institution proxies that bypass the paywalls; at least some of these are donated by frustrated people inside those institutions. Many scientists use it.

As I understand it, publication costs are incorporated into research grants; there seems no reason why open access should impede peer review or indexing. Why shouldn't this become financially sustainable and assure quality control as before?

A more difficult issue is that one reason traditional journals still matter is that academic culture has internalized their importance in determining promotions and tenure. Building credibility takes time, and many universities have been slow to adapt. However, governments and research councils in Germany, the UK, and South Africa are all pushing open access policies via their grant-making conditions.

Plus, the old model is no longer logistically viable in many fields as the pace of change accelerates. Computer scientists were first to ignore it, relying instead on conference proceedings and trading papers and research online.

Back to Salmony: "Just replacing one bad model with another one that only allows authors who can afford to pay thousands of dollars (or is based on theft, like Sci Hub) and that threatens the quality (edited, peer review, indexed etc) sounds less than convincing." In this he's at odds with scientists such as Ben Goldacre, who in 2007 called open access "self-evidently right and good".

This is the first issue. In 1992, Marcel C. LaFollette's Stealing into Print: Fraud, Plagiarism, and Misconduct in Scientific Publishing documented many failures of traditional peer review. In 2010, the Greek researcher John Ioannidis showed how often medical research findings prove to be wrong. At Retraction Watch, science journalist Ivan Oransky documents remarkable endemic sloppiness and outright fraud. Admire the self-correction, but the reality is that journals have little interest in replication, preferring newsworthy new material - though not *too* new.

Ralph Merkle, the "third man" alongside Whit Diffie and Martin Hellman in the invention of public key cryptography, has complained that journals favor safe, incremental steps. Merkle's cryptography idea was dismissed with: "There is nothing like this in the established literature." True. But it was crucial for enabling ecommerce.

Salmony's third point: "[Garbage] is the plague of the open Internet", adding a link to a Defcon 26 talk. Sarah Jeong's Internet of Garbage applies.

Abuse and fakery are indeed rampant, but a lot is due to academic incentives. For several years, my 2014 article for IEEE Security & Privacy explaining the Data Retention and Investigatory Powers Act (2014) attracted invitations to speak at (probably) fake conferences and publish papers in (probably) fake journals. Real researchers tell me this is par for the course. But this is a problem of human predators, not "the open Internet", and certainly not open access.


Illustrations: Participants in drafting the Budapest principles (via Wikimedia).


November 30, 2018

Digital rights management

parliament-whereszuck.jpg"I think we would distinguish between the Internet and Facebook. They're not the same thing." With this, the MP Damian Collins (Conservative, Folkestone and Hythe) closed Tuesday's hearing on fake news, in which representatives of nine countries, combined population 400 million, posed questions to Facebook VP for policy Richard Allan, proxying for non-appearing CEO Mark Zuckerberg.

Collins was correct when you're talking about the countries present: UK, Ireland, France, Belgium, Latvia, Canada, Argentina, Brazil, and Singapore. However, the distinction is without a difference in numerous countries where poverty and no-cost access to Facebook or its WhatsApp subsidiary keeps the population within their boundaries. Foreseeing this probable outcome, India's regulator banned Facebook's Free Basics on network neutrality grounds.

Much less noticed, the nine also signed a set of principles for governing the Internet. Probably the most salient point is the last one, which says technology companies "must demonstrate their accountability to users by making themselves fully answerable to national legislatures and other organs of representative democracy". They could just as well have phrased it, "Hey, Zuckerberg: start showing up."

This was, they said, the first time multiple parliaments have joined together in the House of Commons since 1933, and the first time ever that so many nations assembled - and even that wasn't enough to get Zuckerberg on a plane. Even if Allan was the person best-placed to answer the committee's questions, it looks bad, like you think your company is above governments.

The difficulty that has faced would-be Internet regulators from the beginning is this: how do you get 200-odd disparate cultures to agree? China would openly argue for censorship; many other countries would openly embrace freedom of expression while happening to continue expanding web blocking, filtering, and other restrictions. We've seen the national disparities in cultural sensitivities played out for decades in movie ratings and TV broadcasting rules. So what's striking about this declaration is that nine countries from three continents have found some things they can agree on - and that is that libertarian billionaires running the largest and most influential technology companies should accept the authority of national governments. Hence, the group's first stated principle: "The internet is global and law relating to it must derive from globally agreed principles". It took 22 years, but at last governments are responding to John Perry Barlow's 1996 Declaration of the Independence of Cyberspace: "Not bloody likely."

Even Allan, a member of the House of Lords and a former MP (LibDem, Sheffield Hallam), admitted, when Collins asked how he thought it looked that Zuckerberg had sent a proxy to testify, "Not great!"

The governments' principles, however, are a statement of authority, not a bill of rights for *us*, a tougher proposition that many have tried to meet. In 2010-2012, there was a flurry of attempts. Then-US president Barack Obama published a list of privacy principles; the 2010 Computers, Freedom, and Privacy conference, led by co-chair Jon Pincus, brainstormed a bill of rights mostly aimed at social media; UK deputy Labour leader Tom Watson ran for his seat on a platform of digital rights (now gone from his website); and US Congressman Darrell Issa (R-CA) had a try.

Then a couple of years ago, Cybersalon began an effort to build on all these attempts to draft a bill of rights hoping it would become a bill in Parliament. Labour drew on it for its Digital Democracy Manifesto (PDF) in 2016 - though this hasn't stopped the party from supporting the Investigatory Powers Act.

The latest attempt came a few weeks ago, when Tim Berners-Lee launched a contract for the web, which has been signed by numerous organizations and individuals. There is little to object to: universal access, respect for privacy, free expression, human rights, and civil discourse. Granted, the contract is, like the Bishop of Oxford's ten commandments for artificial intelligence, aspirational more than practically prescriptive. The civil discourse element is reminiscent of Tim O'Reilly's 2007 Code of Conduct, which many, net.wars included, felt was unworkable.

The reality is that it's unlikely that O'Reilly's code of conduct or any of its antecedents and successors will ever work without rigorous human moderatorial intervention. There's a similar problem with the government pledges: is China likely to abandon censorship? Next year half the world will be online - but alongside the Contract a Web Foundation study finds that the rate at which people are getting online has fallen sharply since 2015. Particularly excluded are women and the rural poor, and getting them online will require significant investment in not only broadband but education - in other words, commitments from both companies and governments.

Popular Mechanics calls the proposal 30 years too late; a writer on Medium calls it communist; and Bloomberg, among others, argues that the only entities that can rein in the big technology companies are governments. Yet the need for them to do this appears nowhere in the manifesto. "...The web is long past attempts at self-regulation and voluntary ethics codes," Bloomberg concludes.

Sadly, this is true. The big design error in creating both the Internet and the web was omitting human psychology and business behavior. Changing today's situation requires very big gorillas. As we've seen this week, even nine governments together need more weight.


Illustrations: Zuckerberg's empty chair in the House of Commons.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 23, 2018

Phished

cupidsmessage-missourihistoricalsociety.jpgI regularly get Friend requests on Facebook from things I doubt are real people. They are always male and, at a guess, 40-something, have no Friends in common with me, and don't bother to write a message explaining how I know them. If I take the trouble to click through to their profiles, their Friends lists are empty. This week's request, from "Smith Thomson", is muscled, middle-aged, and slightly brooding. He lists his workplace as a US Army base and his birthplace as Houston. His effort is laughably minimal: zero Friends and the only profile content is the cover photograph plus a second photo with a family in front of a Disney castle, probably Photoshopped. I have a nasty, suspicious mind, and do not accept the request.

One of the most interesting projects under the umbrella of the Research Institute for Science of Cyber Security is Detecting and Preventing Mass-Marketing Fraud, led from the University of Warwick by Monica Whitty, and explained here. We tend to think of romance scams in particular, less so advance-fee fraud, as one-to-one rip-offs. Instead, the reality behind them is highly organized criminals operating at scale.

This is a billion-dollar industry with numerous victims. On Monday, the BBC news show Panorama offered a carefully worked example. The journalists followed the trail of these "catfish" by setting up a fake profile and awaiting contact, which quickly arrived. Following clues and payment instructions led the journalists to the scammer himself, in Lagos, Nigeria. One of the victims in particular displays reactions Whitty has seen in her work, too: even when you explain the fraud, some victims still don't recognize the same pattern when they are victimized again. Panorama's saddest moment is an older man who was clearly being retargeted after having already been fleeced of £100,000, his life savings. The new scammer was using exactly the same methodology, and yet he justified sending his new "girlfriend" £500 on the basis that it was comparatively modest, though at least he sounded disinclined to send more. He explained his thinking this way: "They reckon that drink and drugs are big killers. Yeah, they are, but loneliness is a bigger killer than any of them, and trying to not be lonely is what I do every day."

I doubt Panorama had to look very hard to find victims. They pop up a lot at security events, where everyone seems to know someone who's been had: the relative whose computer they had to clean after they'd been taken in by a tech support scam, the friend they'd had to stop from sending money. Last year, one friend spent several months seeking restitution for her mother, who was at least saved from the worst by an alert bank teller at her local branch. The loss of those backstops - people in local bank branches and other businesses who knew you and could spot when you were doing something odd - is a largely unnoticed piece of why these scams work.

In a 2016 survey, Microsoft found that two-thirds of US consumers had been exposed to a tech support scam in the previous year. In the UK in 2016, a report by the US Better Business Bureau says (PDF), there were more than 34,000 complaints about this type of fraud alone - and it's known that less than 10% of victims complain. Each scam has its preferred demographic. Tech support fraud doesn't typically catch older people, who have life experience and have seen other scams even if not this particular one. The biggest victims of this type of scam are millennials aged 18 to 34 - with no gender difference.

DAPM's meeting mostly focused on dating scams, a particular interest of Whitty's because the emotional damage, on top of the financial damage, is so fierce. From her work, I've learned that the military connection "Smith Thomson" claimed is a common pattern. Apparently some people are more inclined to trust a military background, and claiming that they're located on a military base makes it easy for scammers to dodge questions about exactly what they're doing and where they are and resist pressure to schedule a real-life meeting.

Whitty and her fellow researchers have already discovered that the standard advice we give people doesn't work. "If something looks too good to be true it usually is" is only meaningful at the beginning - and that's not when the "too good to be true" manifests itself. Fraudsters know to establish trust before ratcheting up the emotions and starting to ask - always urgently - for money. By then, requests that would raise alarm flags at the beginning seem like merely the natural next steps in a developed relationship. Being scammed once gets you onto a "suckers list", ripe for retargeting - like Panorama's victim. These, too, are not new; they have been passed around among fraudsters for at least a century.

The point of DAPM's research is to develop interventions. They've had some statistically significant success with instructions teaching people to recognize scams. However, this method requires imparting a lot of information, which means the real conundrum is how you motivate people to participate when most believe they're too smart to get caught. The situation is very like the paranormal claims The Skeptic deals with: no matter how smart you are or how highly educated, you, too, can be fooled. And, unlike in other crimes, DAPM finds, 52% of these victims blame themselves.


Illustrations: Cupid's Message (via Missouri Historical Society).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 16, 2018

Septet

bush-gore-hanging-chad-florida.jpgThis week catches up on some things we've overlooked. Among them, in response to a Twitter comment: two weeks ago, on November 2, net.wars started its 18th unbroken year of Fridays.

Last year, the writer and documentary filmmaker Astra Taylor coined the term "fauxtomation" to describe things that are hyped as AI but that actually rely on the low-paid labor of numerous humans. In The Automation Charade she examines the consequences: undervaluing human labor and making it both invisible and insecure. Along these lines, it was fascinating to read that in Kenya, workers drawn from one of the poorest places in the world are paid to draw outlines around every object in an image in order to help train AI systems for self-driving cars. How many of us look at a self-driving car and see someone tracing every pixel?

***

Last Friday, Index on Censorship launched Demonising the media: Threats to journalists in Europe, which documents journalists' diminishing safety in western democracies. Italy takes the EU prize, with 83 verified physical assaults, followed by Spain with 38 and France with 36. Overall, the report found 437 verified incidents of arrest or detention and 697 verified incidents of intimidation. It's tempting - as in the White House dispute with CNN's Jim Acosta - to hope for solidarity in response, but it's equally likely that years of politicization have left whole sectors of the press as divided as any bullying politician could wish.

***

We utterly missed the UK Supreme Court's June decision in the dispute pitting ISPs against "luxury" brands including Cartier, Mont Blanc, and International Watch Company. The goods manufacturers wanted to force BT, EE, and the three other original defendants, which jointly provide 90% of Britain's consumer Internet access, to block more than 46,000 websites that were marketing and selling counterfeits. In 2014, the High Court ordered the blocks. In 2016, the Court of Appeal upheld that on the basis that without ISPs no one could access those websites. The final appeal was solely about who pays for these blocks. The Court of Appeal had said: ISPs. The Supreme Court decided instead that under English law innocent bystanders shouldn't pay for solving other people's problems, especially when solving them benefits only those others. This seems a good deal for the rest of us, too: being required to pay may constrain blocking demands to reasonable levels. It's particularly welcome after years of expanded blocking for everything from copyright, hate speech, and libel to data retention and interception that neither we nor ISPs much want in the first place.

***

For the first time the Information Commissioner's Office has used the Computer Misuse Act rather than data protection law in a prosecution. Mustafa Kasim, who worked for Nationwide Accident Repair Services, will serve six months in prison for using former colleagues' logins to access thousands of customer records and spam the owners with nuisance calls. While the case reminds us that the CMA still catches only the small fry, we see the ICO's point.

***

In finally catching up with Douglas Rushkoff's Throwing Rocks at the Google Bus, the section on cashless societies and local currencies reminded us that in the 1960s and 1970s, New Yorkers considered it acceptable to tip with subway tokens, even in the best restaurants. Who now would leave a Metro Card? Currencies may be local or national; cashlessness is global. It may be great for those who don't need to think about how much they spend, but it means all transactions are intermediated, with a percentage skimmed off the top for the middlefolk. The costs of cash have been invisible to us, as Dave Birch says, but it is public infrastructure. Cashlessness privatizes that without any debate about the social benefits or costs. How centralized will this new infrastructure become? What happens to sectors that aren't commercially valuable? When do those commissions start to rise? What power will we have to push back? Even on-the-brink Sweden is reportedly rethinking its approach for just these reasons. In a survey, only 25% wanted a fully cashless society.

***

Incredibly, 18 years after chad hung and people disposed in Bush versus Gore, ballots are still being designed in ways that confuse voters, even in Broward County, which should have learned better. The Washington Post tells us that in both New York and Florida ballot designs left people confused (seeing them, we can see why). For UK voters accustomed to a bit of paper with big names and boxes to check with a stubby pencil, it's baffling. Granted, the multiple federal races, state races, local officers, judges, referendums, and propositions in an average US election make ballot design a far more complex problem. There is advice available, from the US Election Assistance Commission, which publishes design best practices, but I'm reliably told it's nonetheless difficult to do well. On Twitter, Dana Chisnell provides a series of links that taken together explain some background. Among them is this one from the Center for Civic Design, which explains why voting in the US is *hard* - and not just because of the ballots.

***

Finally, a word of advice. No matter how cool it sounds, you do not want a solar-powered, radio-controlled watch. Especially not for travel. TMOT.

Illustrations: Chad 2000.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 2, 2018

The Brother proliferation

Thumbnail image for Security_Monitoring_Centre-wikimedia.jpgThere's this about having one or two big threats: they distract attention from the copycat threats forming behind them. Unnoticed by most of us (the notable exception being Jeff Chester and his Center for Digital Democracy), the landscape of data brokers is both consolidating and expanding in new and alarming ways. Facebook and Google remain the biggest data hogs, but lining up behind them are scores of others embracing the business model of surveillance capitalism. For many, it's an attempt to refresh their aging business models; no one wants to become an unexciting solid business.

The most obvious group is the telephone companies - we could call them "legacy creepy". We've previously noted their moves into TV. For today's purposes, Exhibit A is Verizon's 2015 acquisition of AOL, which Fortune magazine attributed to AOL's collection of advertising platforms, particularly in video, as well as its more visible publishing sites (which include the Huffington Post, Engadget, and TechCrunch). Verizon's 2016 acquisition of Yahoo! and its 3 billion user accounts and long history also drew notice, most of it negative. Yahoo!, the reasoning went, was old and dying, plus: data breaches that were eventually found to have affected all 3 billion Yahoo! accounts. Oath, Verizon's name for the division that owns AOL and Yahoo!, also owns MapQuest and Tumblr. For our purposes, though, the notable factor is that with these content sites Verizon gets a huge historical pile of their users' data that it can combine with what it knows about its subscribers in truly disturbing ways. This is a company that only two years ago was fined $1.35 million for secretly tracking its customers.

Exhibit B is AT&T, which was barely finished swallowing Time-Warner (and presumably its customer database along with it) when it announced it would acquire the adtech company AppNexus, a deal Forrester's Joanna O'Connell calls a material alternative to Facebook and Google. Should you feel insufficiently disturbed by that prospect, in 2016 AT&T was caught profiting from handing off data to federal and local drug officials without a warrant. In 2015, the company also came up with the bright idea of charging its subscribers not to spy on them via deep packet inspection. For what it's worth, AT&T is also the longest-serving campaigner against network neutrality.

In 2017, Verizon and AT&T were among the biggest lobbyists seeking to up-end the Federal Communications Commission's privacy protections.

The move into data mining appears likely to be copied by legacy telcos internationally. As evidence, we can offer Exhibit C, Telenor, which in 2016 announced its entry into the data mining business by buying the marketing technology company Tapad.

Category number two - which we can call "you-thought-they-had-a-different-business-model creepy" - is a surprise, at least to me. Here, Exhibit A is Oracle, which is reinventing itself from enterprise software company to cloud and advertising platform supplier. Oracle's list of recent acquisitions is striking: the consumer spending tracker Datalogix, the "predictive intelligence" company DataFox, the cross-channel marketing company Responsys, the data management platform BlueKai, the cross-channel machine learning company Crosswise, and audience tracker AddThis. As a result, Oracle claims it can link consumers' activities across devices, online and offline, something just about everyone finds creepy except, apparently, the people who run the companies that do it. It may surprise you to find Adobe is also in this category.

Category number three - "newtech creepy" - includes data brokers like Acxiom, perhaps the best-known of the companies that have everyone's data but that no one's ever heard of. It, too, has been scooping up competitors and complementary companies, for example LiveRamp, which it acquired from fellow profiling company RapLeaf, and which is intended to help it link online and offline identities. The French company Criteo uses probabilistic matching to send ads following you around the web and into your email inbox. My favorite in this category is Quantcast, whose advertising and targeting activities include "consent management". In other words, they collect your consent or lack thereof to cookies and tracking at one website and then follow you around the web with it. Um...you have to opt into tracking to opt out?

Meanwhile, the older credit bureaus Experian and Equifax - "traditional creepy" - have been buying enhanced capabilities and expanded geographical reach and partnering with telcos. One of Equifax's acquisitions, TALX, gave the company employment and payroll information on 54 million Americans.

The detail amounts to this: big companies with large resources are moving into the business of identifying us across devices, linking our offline purchases to our online histories, and packaging us into audience segments to sell to advertisers. They're all competing for the same zircon ring: our attention and our money. Doesn't that make you feel like a valued member of society?

At the 2000 Computers, Freedom, and Privacy conference, the science fiction writer Neal Stephenson presciently warned that focusing solely on the threat of Big Brother was leaving us open to invasion by dozens of Little Brothers. It was good advice. Now, Very Large Brothers are proliferating all around us. GDPR is supposed to redress this imbalance of power, but it only works when you know who's watching you so you can mount a challenge.


Illustrations: "Security Monitoring Centre" (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 27, 2018

We know where you should live

Thumbnail image for PatCadigan-Worldcon75.jpgIn the memorable panel "We Know Where You Will Live" at the 1996 Computers, Freedom, and Privacy conference, the science fiction writer Pat Cadigan startled everyone, including fellow panelists Vernor Vinge, Tom Maddox, and Bruce Sterling, by suggesting that some time in the future insurance companies would levy premiums for "risk purchases" - beer, junk foods - in supermarkets in real time.

Cadigan may have been proved right sooner than she expected. Last week, John Hancock, a 156-year-old US insurance company, announced it would discontinue underwriting traditional life insurance policies. Instead, in future all its policies will be "interactive"; that is, they will come with the "Vitality" program, under which customers supply data collected by their wearable fitness trackers or smartphones. John Hancock promotes the program, which it says is already used by 8 million customers in 18 countries, as providing discounts and a sort of second reward for "living healthy". In the company's depiction, everyone wins - you get lower premiums and a healthier life, and John Hancock gets your data, enabling it to make more accurate risk assessments and increase its efficiency.

Even then, Cadigan was not the only one with the idea that insurance companies would exploit the Internet and the greater availability of data. A couple of years later, a smart and prescient friend suggested that we might soon be seeing insurance companies offer discounts for mounting a camera on the hood of your car so they could mine the footage to determine blame when accidents occurred. This was long before smartphones and GoPros, but the idea of small, portable cameras logging everything goes back at least to 1945, when Vannevar Bush wrote As We May Think, an essay that imagined something a lot like the web, if you make allowances for storing the whole thing on microfilm.

This "interactive" initiative is clearly a close relative of all these ideas, and is very much the kind of thing University of Maryland professor Frank Pasquale had in mind when writing his book The Black Box Society. John Hancock may argue that customers know what data they're providing, so it's not all that black a box, but the reality is that you only know what you upload. Just like when you download your data from Facebook, you do not know what other data the company matches it with, what else is (wrongly or rightly) in your profile, or how long the company will keep penalizing you for the month you went bonkers and ate four pounds of candy corn. Surely it's only a short step to scanning your shopping cart or your restaurant meal with your smartphone to get back an assessment of how your planned consumption will be reflected in your insurance premium. And from there, to automated warnings, and...look, if I wanted my mother lecturing me in my ear I wouldn't have left home at 17.

There has been some confusion about how much choice John Hancock's customers have about providing their data. The company's announcement is vague about this. However, it does make some specific claims: Vitality policy holders so far have been found to live 13-21 years longer than the rest of the insured population; generate 30% lower hospitalization costs; take nearly twice as many steps as the average American; and "engage with" the program 576 times a year.

John Hancock doesn't mention it, but there are some obvious caveats about these figures. First of all, the program began in 2015. How does the company have data showing its users live so much longer? Doesn't that suggest that these users were living longer *before* they adopted the program? Which leads to the second point: the segment of the population that has wearable fitness trackers and smartphones tends to be more affluent (which tends to favor better health already) and more focused on their health to begin with (ditto). I can see why an insurance company would like me to "engage with" its program twice a day, but I can't see why I would want to. Insurance companies are not my *friends*.

At the 2017 Computers, Privacy, and Data Protection conference, one of the better panels discussed the future for the insurance industry in the big data era. For the insurance industry to make sense, it requires an element of uncertainty: insurance is about pooling risk. For individuals, it's a way of managing the financial cost of catastrophes. Continuously feeding our data into insurance companies so they can more precisely quantify the risk we pose to their bottom line will eventually mean a simple equation: being able to get insurance at a reasonable rate is a pretty good indicator you're unlikely to need it. The result, taken far enough, will be to undermine the whole idea of insurance: if everything is known, there is no risk, so what's the point? Betting on a sure thing is cheating in insurance just as surely as it is in gambling. In the panel, both Katja De Vries and Mireille Hildebrandt noted the sinister side of insurance companies acting as "nudgers" to improve our behavior for their benefit.

So, less "We know where you will live" and more "We know where and how you *should* live."


Illustrations: Pat Cadigan (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 21, 2018

Facts are screwed

vlad-enemies-impaled.gif"Fake news uses the best means of the time," Paul Bernal said at last week's gikii conference, an annual mingling of law, pop culture, and technology. Among his examples of old media turned to propaganda purposes: hand-printed woodcut leaflets, street singers, plays, and pamphlets stuck in cracks in buildings. The big difference today is data mining, profiling, targeting, and the real-time ability to see what works and improve it.

Bernal's most interesting point, however, is that like a magician's plausible diversion the surface fantasy story may stand in front of an earlier fake news story that is never questioned. His primary example was Vlad the Impaler, the historical figure who is thought to have inspired Dracula. Vlad's fame as a vicious and profligate killer derives from those woodcut leaflets. Bernal suggests the reasons: a) Vlad had many enemies who wrote against him, some of it true, most of it false; b) most of the stories were published ten to twenty years after he died; and c) there was a whole complicated thing about the rights to Transylvanian territory.

"Today, people can see through the vampire to the historical figure, but not past that," he said.

His main point was that governments' focus on content to defeat fake news is relatively useless. A more effective approach would have us stop getting our news from Facebook. Easy for me personally, but hard to turn into public policy.

Soon afterwards, Judith Rauhofer outlined a related problem: because Russian bots are aimed at exacerbating existing divisions, almost anyone can fall for one of the fake messages. Spurred on by a message from the Tumblr powers that be advising that she had shared a small number of messages that were traced to now-closed Russian accounts, Rauhofer investigated. In all, she had shared 18 posts - and these had been reblogged 2.7 million times, and are still being recirculated. The focus on paid ads means there is relatively little research on organic and viral sharing of influential political messages. Yet these reach vastly bigger audiences and are far more trusted, especially because people believe they are not being influenced by them.

In the particular case Rauhofer studied, "There are a lot of minority groups under attack in the US, the UK, Germany, and so on. If they all united in their voting behavior and political activity they would have a chance, but if they're fighting each other that's unlikely to happen." Divide and conquer, in other words, works as well as it ever has.

The worst part of the whole thing, she said, is that looking over those 18 posts, she would absolutely share them again and for the same reason: she agreed with them.

Rauhofer's conclusion was that the combination of prioritization - that is, the ordering of what you see according to what the site believes you're interested in - and targeting form "a fail-safe way of creating an environment where we are set against each other."

So in Bernal's example, an obvious fantasy masks an equally untrue - or at least wildly exaggerated - story, while in Rauhofer's the things you actually believe can be turned into weapons of mass division. Both scenarios require much more nuance and, as we've argued here before, many more disciplines to solve than are currently being deployed.

Andrea Matwyshyn provided five mini-fables as a way of illustrating five problems to consider when designing AI - or, as she put it, five stories of "future AI failure". These were:

- "AI inside" a product can mean sophisticated machine learning algorithms or a simple regression analysis; you cannot tell from the outside what is real and what's just hype, and the specifics of design matter. When Google's algorithm tagged black people as "gorillas", the company "fixed" the algorithm by removing "gorilla" from its list of possible labels. The algorithm itself wasn't improved.

- "Pseudo-AI" has humans doing the work of bots. Lots of historical examples for this one, most notably the mechanical Turk; Matwyshyn chose the fake automaton the Digesting Duck.

- Decisions that bring short-term wins may also bring long-term losses in the form of unintended negative consequences that haven't been thought through. Among Matwyshyn's examples were a number of cases where human interaction changed the analysis, such as the failure of Google Flu Trends and Microsoft's Tay bot.

- Minute variations or errors in implementation or deployment can produce very different results than intended. Matwyshyn's prime example was a pair of electronic hamsters she thought could be set up to repeat each other's words to form a recursive loop. Perhaps responding to harmonics less audible to humans, they instead screeched unintelligibly at each other. "I thought it was a controlled experiment," she said, "and it wasn't."

- There will always be system vulnerabilities and unforeseen attacks. Her example was squirrels that eat power lines, but the backhoe is the traditional example.
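The first of these failures is worth making concrete. What "removing the label" rather than fixing the model looks like can be sketched in a few lines - a hypothetical illustration only, not Google's actual code; all names and scores here are invented:

```python
# A minimal sketch of a post-hoc label-suppression "fix": the classifier
# is unchanged; an offending label is simply filtered out of its outputs.
# Hypothetical names and numbers, not any real system's API.

BLOCKED_LABELS = {"gorilla"}  # labels removed from output rather than relearned

def classify(label_scores: dict) -> str:
    """Return the highest-scoring label that isn't on the blocklist."""
    allowed = {label: score for label, score in label_scores.items()
               if label not in BLOCKED_LABELS}
    return max(allowed, key=allowed.get)

# The underlying scores - and the model that produced them - are untouched.
scores = {"gorilla": 0.9, "person": 0.6, "tree": 0.2}
print(classify(scores))  # the blocked label is hidden, not corrected
```

The point the sketch makes is Matwyshyn's: from the outside, the system's behavior changes, but the algorithm itself is no better than before.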

To prevent these situations, Matwyshyn emphasized disclosure about code, verification in the form of third-party audits, substantiation in the form of evidence to back up the claims that are made, anticipation - that is, liability and good corporate governance, and remediation - again a function of good corporate governance.

"Fail well," she concluded. Words for our time.


Illustrations: Woodcut of Vlad, with impaled enemies.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 14, 2018

Hide by default

Last week, defenddigitalme, a group that campaigns for children's data privacy and other digital rights, and Sonia Livingstone's group at the London School of Economics assembled a discussion of the Information Commissioner's Office's consultation on age-appropriate design for information society services, which is open for submissions until September 19. The eventual code will be used by the Information Commissioner when she considers regulatory action, may be used as evidence in court, and is intended to guide website design. It must take into account both the child-related provisions of the General Data Protection Regulation and the United Nations Convention on the Rights of the Child.

There are some baseline principles: data minimization, and comprehensible terms and conditions and privacy policies. The latter is a design question: since most adults either can't understand or can't bear to read terms and conditions and privacy policies, what hope of making them comprehensible to children? The summer's crop of GDPR notices is not a good sign.

There are other practical questions: when is a child not a child any more? Do age bands make sense when the capabilities of one eight-year-old may be very different from those of another? Capacity might be a better approach - but would we want Instagram making these assessments? Also, while we talk most about the data aggregated by commercial companies, government and schools collect much more, including biometrics.

Most important, what is the threat model? What you implement and how is very different if you're trying to protect children's spaces from ingress by abusers than if you're trying to protect children from commercial data aggregation or content deemed harmful. Lacking a threat model, "freedom", "privacy", and "security" are abstract concepts with no practical meaning.

There is no formal threat model, as the Yes, Minister episode The Challenge (series 3, episode 2) would predict: too close to "failure standards". The lack is particularly dangerous here, because "protecting children" means such different things to different people.

The other significant gap is research. We've commented here before on the stratification of social media demographics: you can practically carbon-date someone by the medium they prefer. This poses a particular problem for academics, in that research from just five years ago is barely relevant. What children know about data collection has markedly changed, and the services du jour have different affordances. Against that, new devices have greater spying capabilities, and, the Norwegian Consumer Council finds (PDF), Silicon Valley pays top-class psychologists to deceive us with dark patterns.

Seeking to fill the research gap are Sonia Livingstone and Mariya Stoilova. In their preliminary work, they are finding that children generally care deeply about their privacy and the data they share, but often have little agency and think primarily in interpersonal terms. The Cambridge Analytica scandal has helped inform them about the corporate aggregation that's taking place, but they may, through familiarity, come to trust people such as their favorite YouTubers and constantly available things like Alexa in ways their adults dislike. The focus on Internet safety has left many thinking that's what privacy means. In real-world safety, younger children are typically more at risk than older ones; online, the situation is often reversed because older children are less supervised, explore further, and take more risks.

The breath of passionate fresh air in all this is Beeban Kidron, an independent - that is, appointed - member of the House of Lords who first came to my attention by saying intelligent and measured things during the post-referendum debate on Brexit. She refuses to accept the idea that oh, well, that's the Internet, there's nothing we can do. However, she *also* genuinely seems to want to find solutions that preserve the Internet's benefits and incorporate the often-overlooked child's right to develop and make mistakes. But she wants services to incorporate the idea of childhood: if all users are equal, then children are treated as adults, a "category error". Why should children have to be resilient against systemic abuse and indifference?

Kidron, who is a filmmaker, began by doing her native form of research: in 2013 she made the full-length documentary InRealLife, which studied a number of teens using the Internet. While the film concludes on a positive note, many of the stories depressingly confirm some parents' worst fears. Even so, it's a fine piece of work because it's clear she was able to gain the trust of even the most alienated of the young people she profiles.

Kidron's 5Rights framework proposes five essential rights children should have: remove, know, safety and support, informed and conscious use, digital literacy. To implement these, she proposes that the industry should reverse its current pattern of defaults, which, as is widely known, 95% of users never change (while 98% never read terms and conditions). Companies know this, and keep resetting the defaults in their favor. Why shouldn't it be "hide by default"?

This approach sparked ideas. A light that tells a child they're being tracked or recorded so they can check who's doing it? Collective redress is essential: what 12-year-old can bring their own court case?

The industry will almost certainly resist. Giving children the transparency and tools with which to protect themselves, resetting the defaults to "hide"...aren't these things adults want, too?


Illustrations: Beeban Kidron (via Wikimedia)


September 7, 2018

Watching brief

"Hope the TV goes out at the same time," the local cable company advised me regarding outages when they supplied my Internet service circa 2001. "Because then so many people complain that it gets fixed right away."

Amazon is discovering the need to follow this principle. As the Guardian reported last week, this year's US Open tennis is one of Amazon Prime's first forays into live sports streaming, and tennis fans are unhappy.

"Please leave tennis alone," says one of the more polite user reviews.

It seems like only yesterday that being able to watch grainy, stuttering video in a corner of one's computer screen was like a miracle (and an experience no one would ever want to repeat unless they had to). Now, streaming is so well established that people complain about the quality, the (lack of) features, and even the camera angles. People! Only ten years ago you'd have been *grateful*!

A friend, seeing the Guardian's story, emailed: "Are you seeing this?" Well, yes. Most of it. On my desktop machine the picture looks fine to me, but it's a 24-inch monitor, not a giant HD TV, and as long as I can pick out the ball consistently, who cares whether it's 1080p? However, on two Windows laptops both audio and video stutter badly. That was a clue: my Linux-based desktop has a settings advisory: "HD TV Not Available - Why?" It transpires that because Linux machines lack the copy protection built into HDMI, Amazon doesn't send HD. I'm guessing that the smaller amount of data means smoother reception and a better experience, even if the resolution is lower. That said, even on the Linux machines the stream fails regularly. Reload window, click play.

The camera angle is indeed annoying, but for that you have to blame the USTA and the new Armstrong stadium design. There's only one set of cameras, and the footage is distributed by the host broadcaster to everyone else. Whine to Amazon all you want; but all the company can do is forward the complaints.

One reason tennis fans are so picky is that the tennis tours adopted streaming years ago, as did Eurosport, as a way of reaching widely dispersed fans: tennis is a global minority sport. So they are experienced, and they have expectations. On the ATP (men's) tour's own site, TennisTV, if you're getting a stuttering picture you can throttle the bitrate; the scores and schedule are ready to hand; and you can pause a match and resume it later or step back to the beginning or any point in between. Replays are available very soon after a match ends. On Amazon, there's an icon to click to replay the last ten seconds, but you can't pause and resume, and you can only go back about half an hour. Lest you think that's trivial: US Open night sessions, which generally feature the most popular matches, start at 7pm New York time - and therefore midnight in the UK.

In general, it's clear that Amazon hasn't really thought through the realities of the way fans embrace the US Open. Instead of treating the US Open as an *event*, Amazon treats replays, live matches, and highlights compilations as separate "items". The replays Amazon began posting after a couple of days seem to be particularly well-hidden, in that they're not flagged from either the highlights page or the live page and they're called "match of the day". When I did find them, they refused to play.

I would probably have been more annoyed about all this if UK coverage of the US Open hadn't been so frequently frustrating in the past (I remember "watching" the 1990 men's final by watching the Teletext scores update, and the frustrations of finding live matches when Sky scattered them across four premium channels). Watching the US Open in Britain is like boarding a plane for a long flight in economy: you don't ask if you're going to be uncomfortable. Instead, you assemble a toolkit and then ask which components you're going to need to make it as tolerable as possible within the constraints. So: I know where the Internet hides recordings of recently played matches and free streams. The US Open site has the scores and schedule of who's playing where. All streams bomb out at exactly the wrong moment. Unlike the USTA, however, it only took a day or two for Amazon to respond to viewer complaints by labeling the streams with who was playing. I *have* liked hearing some different commentators for a change. But I do not want to be a Prime subscriber.

Amazon will likely get better at this over the next four years of its five-year, $40 million contract and the course of its £50 million, five-year contract to show the ATP Tour. Nonetheless, sports are almost the only programming viewers are guaranteed to want to watch in real time, and fans, broadcasters, and the sports themselves are unlikely to be well-served in the long term by a company that uses live sports as a loss-leader - like below-cost pricing on milk in a grocery store - to build platform loyalty and subscribers for its delivery service. Sports are a strategy for the company, not its business. Book publishers welcomed Amazon, too, once.

Illustrations: Amazon error message.


August 30, 2018

Ghosted

Three months after the entry into force of Europe's General Data Protection Regulation, Nieman Lab finds that more than 1,000 US newspapers are still blocking EU visitors.

"We are engaged on the issue", says the placard that blocks access to even the front pages of the New York Daily News and the Chicago Tribune, both owned by Tronc, as well as the Los Angeles Times, which was owned by Tronc until very recently. Ironically, Wikipedia tells us that the silly-sounding name "Tronc" was derived from "Tribune Online Content"; you'd think a company whose name includes "online" would grasp the illogic of blocking 500 million literate readers. Nieman Lab also notes that Tronc is for sale, so I guess the company has more urgent problems.

Also apparently unable to cope with remediating its systems, despite years of notice, is Lee Enterprises, which owns numerous newspapers including the Carlisle, PA Sentinel and the Arizona Daily Star; these return "Error 451: Unavailable due to legal reasons", and blame GDPR as the reason "access cannot be granted at this time". Even the giant retail chain Williams-Sonoma has decided GDPR is just too hard, redirecting would-be shoppers to a UK partner site that is almost, but not quite, entirely unlike Williams-Sonoma - and useless if you want to ship a gift to someone in the US.

If you're reading this in the US, and you want to see what we see, try any of those URLs in a free proxy such as Hide Me, setting your location to Amsterdam. Fun!

Less humorously, shortly after GDPR came into force a major publisher issued new freelance contracts that shift the liability for violations onto freelances. That is, if I do something that gets the company sued for GDPR violations, in their world I indemnify them.

And then there are the absurd and continuing shenanigans of ICANN, which is supposed to be a global multi-stakeholder modeling a new type of international governance, but seems so unable to shake its American origins that it can't conceive of laws it can't bend to its will.

Years ago, I recall that the New York Times, which now embraces being global, paywalled non-US readers because we were of no interest to their advertisers. For that reason, it seems likely that Tronc and the others see little profit in a European audience. They're struggling already; it may be hard to justify the expenditure on changing their systems for a group of foreign deadbeats. At the same time, though, their subscribers are annoyed that they can't access their home paper while traveling.

On the good news side, the 144 local daily newspapers and hundreds of other publications belonging to GateHouse Media seem to function perfectly well. The most fun was NPR, which briefly offered two alternatives: accept cookies or view in plain text. As someone commented on Twitter, it was like time-traveling back to 1996.

The intended consequence has been to change a lot of data practices. The Reuters Institute finds that the use of third-party cookies is down 22% on European news sites in the three months GDPR has been in force - and 45% on UK news sites. A couple of days after GDPR came into force, web developer Marcel Freinbichler did a US-vs-EU comparison on USA Today: load time dropped from 45 seconds to three, JavaScript files from 124 to zero, and requests from more than 500 to 34.

But many (and not just US sites) are still not getting the message, or are mangling it. For example, numerous sites now display boxes listing the many types of cookies they use and offering chances to opt in or out. A very few of these are actually well-designed, so you can quickly opt out of whole classes of cookies (advertising, tracking...) and get on with reading whatever you came to the site for. Others are clearly designed to make it as difficult as possible to opt out; these sites want you to visit a half-dozen other sites to set controls. Still others say that if you click the button or continue using the site your consent will be presumed. Another group say here's the policy ("we collect your data"), click to continue, and offer no alternative other than to go away. Not a lawyer - but sites are supposed to obtain explicit consent for collecting data on an opt-in basis, not assume consent on an opt-out basis while making it onerous to object.

The reality is that it is far, far easier to install ad blockers - such as EFF's Privacy Badger - than to navigate these terrible user interfaces. In six months, I expect to see surveys coming from American site owners saying that most people agree to accept advertising tracking, and what they will mean is that people clicked OK, trusting their ad blockers would protect them.

None of this is what GDPR was meant to do. The intended consequence is to protect citizens and redress the balance of power; exposing exploitative advertising practices and companies' dependence on "surveillance capitalism" is a good thing. Unfortunately, many Americans seem to be taking the view that if they just refuse service the law will go away. That approach hasn't worked since Usenet.


Illustrations: Personally collected screenshots.


August 24, 2018

Cinema surveillant

The image is so low-resolution that it could be old animation. The walking near-cartoon figure has dark, shoulder-length hair and a shape that suggests: young woman. She? stares at a dark oblong in one hand while wandering ever-closer to a dark area. A swimming pool? A concrete river edge? She wavers away, and briefly it looks like all will be well. Then another change of direction, and in she falls, with a splash.

This scene opens Dragonfly Eyes, which played this week at London's Institute of Contemporary Arts. All I knew going in was that the movie had been assembled from fragments of imagery gathered from Chinese surveillance cameras. The scene described above wasn't *quite* the beginning - first, the filmmaker, Chinese artist Xu Bing, provides a preamble explaining that he originally got the idea of telling a story through surveillance camera footage in 2013, but it was only in 2015, when the cameras began streaming live to the cloud, that it became a realistic possibility. There was also, if I remember correctly, a series of random images and noise that in retrospect seem like an orchestra tuning up before launching into the main event, but at the time were rather alarming. Alarming as in, "They're not going to do this for an hour and a half, are they?"

They were not. It was when the cacophony briefly paused to watch a bare-midriffed young woman wriggle suggestively on a chair, pushing down on the top of her jeans (I think) that I first thought, "Hey, did these guys get these people's permission?" A few minutes later, watching the phone?-absorbed woman ambling along the poolside seemed less disturbing, as her back was turned to the camera. Until: after she fell, the splashing became fainter and fainter, and after a little while she did not reappear and the water calmed. Did we just watch the recording of a live drowning?

Apparently so. At various times during the rest of the movie we return to a police control room where officers puzzle over that same footage much the way we in the audience were puzzling over Xu's film. Was it suicide? the police ponder while replaying the footage.

Following the plot was sufficiently confusing that I'm grateful that Variety explains it. Ke Fan, an agricultural technician, meets a former Buddhist-in-training, Qing Ting, while they are both working at a dairy farm, and follows her when she moves to a new city. There, she gets fired from her job at a dry cleaner's for failing to be sufficiently servile to an unpleasant, but wealthy and valuable, customer. Angered by the situation, Ke Fan repeatedly rams the unpleasant customer's car; this footage is taken from inside the car being rammed, so he appears to be attacking you directly. Three years later, when he gets out of prison, he finds (or possibly just believes he finds) that Qing Ting has had plastic surgery and under a new name is now a singing webcam celebrity who makes her living by soliciting gifts and compliments from her viewers, who turn nasty when she insults a more popular rival...

The characters and narration are voiced by Chinese actors, but the pictures, as one sees from the long list of camera locations and GPS coordinates included in the credits, are taken from 10,000 hours of real-world found imagery, which Xu and his assistants edited down to 81 minutes. Given this patchwork, it's understandably hard to reliably follow the characters through the storyline; the cues we usually rely on - actors and locations that become familiar - simply aren't clear. Some sequences are tagged with the results of image recognition and numbering; very Person of Interest. About a third of the way through, however, the closer analogue that occurred to me is Woody Allen's 1966 movie What's Up, Tiger Lily?, which Allen constructed by marrying the footage from a Japanese spy film to his own unrelated dialogue. It was funny, in 1966.

While Variety calls the storyline "run-of-the-mill melodramatic", in reality the plot is supererogatory. Much more to the point - and indicated in the director's preamble - is that all this real-life surveillance footage can be edited into any "reality" you want. We sort of knew this from reality TV, but the casts of those shows signed up to perform, even if they didn't quite expect the extent to which they'd be exploited. The people captured in Xu's extracts from China's estimated 200 million surveillance cameras are...just living. The sense of that dissonance never leaves you at any time during the movie.

I can't spoil the movie's ending by telling you whether Ke Fan finds Qing Ting because it matters so little that I don't remember. The important spoiler is this: the filmmaker has managed to obtain permission from 90% of the people who appear in the fragments of footage that make up the film (how he found them would be a fascinating story in itself), and advertises a contact address for the rest to seek him out. In one sense, whew! But then: this is the opt-out, "ask forgiveness, not permission" approach we're so fed up with from Silicon Valley. The fact that Chinese culture is different and the camera streams were accessible via the Internet doesn't make it less disturbing. Yes, that is the point.


Illustrations: Dragonfly Eyes poster.



August 17, 2018

Redefinition

Once upon a nearly-forgotten time, the UK charged for all phone calls via a metered system that added up frighteningly fast when you started dialing up to access the Internet. The upshot was that early Internet services like the now-defunct Demon Internet could charge a modest amount (£10) per month, secure that the consciousness of escalating phone bills would drive subscribers to keep their sessions short. The success of Demon's business model, therefore, depended on the rapaciousness of strangers.

I was reminded of this sort of tradeoff by a discussion in the LA Times (proxied for EU visitors) of cable-cutters. Weary of paying upwards of $100 a month for large bundles of TV channels they never watch, Americans are increasingly dumping them in favor of cheaper streaming subscriptions. As a result, ISPs that depend on TV package revenues are raising their broadband prices to compensate, claiming that the money is needed to pay for infrastructure upgrades. In the absence of network neutrality requirements, those raised prices could well be complemented by throttling competitors' services.

They can do this, of course, because so many areas of the US are lucky if they have two choices of Internet supplier. That minimalist approach to competition means that Americans pay more to access the Internet than many other countries - for slower speeds. It's easy to raise prices when your customers have no choice.

The LA Times holds out hope that technology will save them; that is, the introduction of 5G, which promises better speeds and easier build-out, will enable additional competition from AT&T, Verizon, and Sprint - or, writer David Lazarus adds, Google, Facebook, and Amazon. In the sense of increasing competition, this may be the good news Lazarus thinks it is, even though he highlights AT&T's and Verizon's past broken promises. I'm less sure: physics dictates that despite its greater convenience the fastest wireless will never be as fast as the fastest wireline.

5G has been an unformed mirage on the horizon for years now, but apparently no longer: CNBC says Verizon's 5G service will begin late this year in Houston, Indianapolis, Los Angeles, and Sacramento and give subscribers TV content in the form of an Apple TV and a YouTube subscription. A wireless modem will obviate the need for cabling.

The potential, though, is to entirely reshape competition in both broadband and TV content, a redefinition that began with corporate mergers such as Verizon's acquisition of AOL and Yahoo (now gathered into its subsidiary, "Oath") and AT&T's whole-body swallowing of Time Warner, which includes HBO. Since last year's withdrawal of privacy protections passed during the Obama administration, ISPs have greater latitude to collect and exploit their customers' online data trails. Their expansion into online content makes AT&T and Verizon look more like competitors to the online behemoths. For consumers, greater choice in bandwidth provider is likely to be outweighed by the would-you-like-spam-with-that complete lack of choice about data harvesting. If the competition 5G opens up is provided solely by avid data miners who all impose the same terms and conditions...well, which robber baron would you like to pay?

There's a twist. The key element that's enabled Amazon and, especially, Netflix to succeed in content development is being able to mine the data they collect about their subscribers. Their business models differ - for Amazon, TV content is a loss-leader to sell subscriptions to its premium delivery service; for Netflix, TV production is a bulwark against dependence on third-party content creators and their licensing fees - but both rely on knowing what their customers actually watch. Their ambitions, too, are changing. Amazon has canceled much of its niche programming to chase HBO-style blockbusters, while Netflix is building local content around the world. Meanwhile, AT&T wants HBO to expand worldwide and focus less on its pursuit of prestige; Apple is beginning TV production; and Disney is pulling its content from Netflix to set up its own streaming service.

The idea that many of these companies will be directly competing in all these areas is intriguing, and its impact will be felt outside the US. It hardly matters to someone in London or Siberia how much Internet users in Indianapolis pay for their broadband service or how good it is. But this reconfiguration may well end the last decade's golden age of US TV production, particularly but not solely for drama. All the new streaming services began by mining the back catalogue to build and understand an audience and then using creative freedom to attract talent frustrated by the legacy TV networks' micromanagement of every last detail, a process the veteran screenwriter Ken Levine has compared to being eaten to death by moths.

However, one last factor could provide an impediment to the formation of this landscape: on June 28, California adopted the Consumer Privacy Act, which will come into force in 2020. As Nick Confessore recounts in the New York Times Magazine, this "overnight success" required years of work. Many companies opposed the bill: Amazon, Google, Microsoft, Uber, Comcast, AT&T, Cox, Verizon, and several advertising lobbying groups; Facebook withdrew its initial opposition. EFF calls it "well-intentioned but flawed", and is proposing changes. ISPs and technology companies also want (somewhat different) changes. EPIC's Mark Rotenberg called the bill's passage a "milestone moment". It could well be.


Illustrations: Robber barons overseeing the US Congress (via Wikimedia).


July 20, 2018

Competing dangerously

It is just over a year since the EU fined Google what seemed a huge amount, and here we are again: this week the EU commissioner for competition, Margrethe Vestager, levied an even bigger €4.34 billion fine over "serious illegal behavior". At issue was Google's licensing terms for its Android apps and services, which essentially leveraged its ownership of the operating system to ensure its continued market dominance in search as the world moved to mobile. Google has said it will appeal; it is also appealing the 2017 fine. The present ruling gives the company 90 days to change behaviour or face further fines of up to 5% of daily worldwide turnover.

Google's response is that its rules have enabled it not to charge manufacturers to use Android, have made Android phones easier to use, and are efficient for both developers and consumers. The ruling, writes CEO Sundar Pichai, will "upset the balance of the Android ecosystem".

Google's claim that users are free to install other browsers and search engines and are used to downloading apps is true but specious. It's widely known that 95% of users never change default settings. Defaults *matter*, and Google certainly knows this. When you reach a certain size - Android holds 80% of European and worldwide smart mobile devices, and 95% of the licensable mobile market outside of China - the decisions you make about choice architecture determine the behavior of large populations.

Also, the EU's ruling isn't about a user's specific choice on their individual smartphone. Instead, it's based on three findings: 1) Google's licensing terms made access to the Play Store contingent on pre-installing Google's search app and Chrome; 2) Google paid some large manufacturers and network operators to exclusively pre-install Google's search app; 3) Google prevented manufacturers that pre-install Google apps from selling *any* devices using non-Google-approved ("forked") versions of Android. It puts the starting date at 2011, "when Google became dominant".

There are significant similarities here to the US's 1998 ruling against Microsoft over tying Internet Explorer to Windows. Back then, Microsoft was the Big Evil on the block, and there were serious concerns that it would use Internet Explorer as a vector for turning the web into a proprietary system under its control. For a good account, see Charles H. Ferguson's 1999 book, High St@kes, No Prisoners. Ferguson would know: his web page design start-up, Vermeer, was the subject of an acquisition battle between Microsoft and Netscape. Google, which was founded in 1998, ultimately benefited from this ruling, because it helped keep the way open for "alternative" browsers such as Google's own Chrome.

There are also similarities to the EU's 2004 ruling against Microsoft, which required the company to stop bundling its media player with Windows and to disclose the information manufacturers needed to integrate non-Microsoft networking and streaming software. The EU's fine was the largest-ever at the time: €497 million. At that point, media players seemed like important gateways to content. The significant gateway drug turned out to be Web browsers; either way, Microsoft and streaming have both prospered.

Since 1998, however, in another example of EU/US divergence, the US has largely abandoned enforcing anti-competition law. As Lina M. Khan pointed out last year, it's no longer the case that waiting will produce two guys in a garage with a new technology that up-ends the market and its biggest players. The EU explains carefully in its announcement that Android's case is different from Apple's iOS or Blackberry's because those companies, being vertically integrated and licensing their operating systems to no one, are not part of the same market. In the Android market, however, it says, "...it was Google - and not users, app developers, and the market - that effectively determined which operating systems could prosper."

Too little, too late, some are complaining, and more or less correctly: the time for this action was 2009; even better, says the New York Times, block in advance the mergers that are creating these giants. Antitrust actions against technology companies are almost always a decade late. Others buy Google's argument that consumers will suffer, but Google is a smart company full of smart engineers who are entirely capable of figuring out well-designed yet neutral ways to present choices, just as Microsoft did before it.

There's additional speculation that Google might have to recoup lost revenues by charging licensing fees; that Samsung might be the big winner, since it already has its own full competitive suite of apps; and that the EU should fine Apple, too, on the basis that the company's closed system bars users from making *any* unapproved choices.

Personally, I wish the EU had applied more attention to the ways Google leverages the operating system to enable user tracking to fuel its advertising business. The requirement to tie every phone to a Gmail address is an obvious candidate for regulatory disruption; so is the requirement to use it to access the Play Store. The difficulty of operating a phone without being signed into Google has ratcheted up over time - and it seems wholly unnecessary *unless* the purpose is to make it easier to do user tracking. This issue may yet find focus under GDPR.

Illustrations: Margrethe Vestager.


Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 6, 2018

This is us

After months of anxiety among digital rights campaigners such as the Open Rights Group and the Electronic Frontier Foundation, the European Parliament has voted 318-278 against fast-tracking a particularly damaging set of proposed changes to copyright law.

There will be a further vote on September 10, so as a number of commentators are reminding us on Twitter, it's not over yet.

The details of the European Commission's alarmingly wrong-headed approach have been thoroughly hashed out for the last year by Glyn Moody. The two main bones of contention are euphoniously known as Article 11 and Article 13. Article 11 (the "link tax") would give publishers the right to require licenses (that is, payment) for the text accompanying links shared on social media, and Article 13 (the "upload filter") would require sites hosting user content to block uploads of copyrighted material.

Responding to a Billboard interview with MEP Helga Trüpel, Alec Muffett quite rightly points out the astonishing characterization of the objections to Articles 11 and 13 as "pro-Google". There's a sudden outburst of people making a similar error: Even the Guardian's initial report saw the vote as letting tech giants (specifically, YouTube) off the hook for sharing their revenues. Paul McCartney's last-minute plea hasn't helped this perception. What was an argument about the open internet is now being characterized as a tussle over revenue share between a much-loved billionaire singer/songwriter and a greedy tech giant that exploits artists.

Yet the opposition was never about Google. In fact, probably most of the active opponents to this expansion of copyright and liability would be lobbying *against* Google on subjects like privacy, data protection, tax avoidance, and market power. We just happen to agree with Google on this particular topic because we are aware that forcing all sites to assume liability for the content their users post will damage the internet for everyone *else*. Google - and its YouTube subsidiary - has both the technology and the financing to play the licensing game.

But licensing and royalties are a separate issue from mandating that all sites block unauthorized uploads. The former is about sharing revenues; the latter is about copyright enforcement, and conflating them helps no one. The preventive "copyright filter" that appears essential for compliance with Article 13 would fail the "prior restraint" test of the US First Amendment - not that the EU needs to care about that. As copyright-and-technology consultant Bill Rosenblatt writes, licensing is a mess that this law will do nothing to fix. If artists and their rights holders want a better share of revenues, they could make it a *lot* easier for people to license their work. This is a problem they have to fix themselves, rather than requiring lawmakers to solve it for them by placing the burden on the rest of us. The laws are what they are because, for generations, the rights holders themselves made them.

Article 11, which is or is not a link tax depending on who you listen to, is another matter. Germany (2013) and Spain (2014) have already tried something similar, and in both cases it was widely acknowledged to have been a mistake. So much so that one of the opponents to this new attempt is the Spanish newspaper El País.

My guess is that those who want these laws passed are focusing on Google's role in lobbying against them - for example, Digital Music News reports that Google spent more than $36 million on opposing Article 13 - as preparation for the next round in September. Google and Facebook are increasingly the targets people focus on when they're thinking about internet regulation. Therefore, if you can recast the battle as being one between deserving artists and a couple of greedy American big businesses, they think it will be an easier sell to legislators.

But there are two of them and billions of us, and the opposition to Articles 11 and 13 was never about them. The 2012 SOPA and PIPA protests and the street protests against ACTA were certainly not about protecting Google or any other large technology company. No one goes out on the street or dresses up their website in protest banners in order to advocate for *Google*. They do it because what's been proposed threatens to affect them personally.

There's even a sound economic argument: had these proposed laws been in place in 1998, when Sergey Brin and Larry Page were meeting in dorm rooms, Google would not exist. Nor would thousands of other big businesses. Granted, most of these have not originated in the EU, but that's not a reason to wreck the open internet. Instead, that's a reason to find ways to make the internet hospitable to newcomers with bright ideas.

This debate is about the rest of us and our access to the internet. We - for some definition of "we" - were against these kinds of measures when they first surfaced in the early 1990s, when there were no tech giants to oppose them, and for the same reasons: the internet should be open to all of us.

Let the amendments begin.

Illustrations: Protesters against ACTA in London, 2012 (via Wikimedia)


April 6, 2018

Leverage

Well, what's 37 million or 2 billion scraped accounts more or less among friends? The exploding hairball of the Facebook/Cambridge Analytica scandal keeps getting bigger. And, as Rana Dasgupta writes in the Guardian, we are complaining now because it's happening to us, but we did not notice when these techniques were tried out first in third-world countries. Dasgupta has much to say about how nation-states will have to adapt to these conditions.

Given that we will probably never pin down every detail of how much data and where it went, it's safest to assume that all of us have been compromised in some way. The smug "I've never used Facebook" population should remember that they almost certainly exist in the dataset, by either reference (your sister posts pictures of "my brother's birthday") or inference (like deducing the existence, size, and orbit of an unseen planet based on its gravitational pull on already-known objects).

Downloading our archives tells us far less than people recognize. My own archive had no real surprises (my account dates to 2007, but I post little and adblock the hell out of everything). The shock many people have experienced of seeing years of messages and photographs laid out in front of them, plus the SMS messages and call records that Facebook shouldn't have been retaining in the first place, hides the fact that these archives are a very limited picture of what Facebook knows about us. The archive shows us nothing about information posted about us by others, photos others have posted and tagged, or comments made in response to things we've posted.

The "me-ness" of the way Facebook and other social media present themselves was called out by Christian Fuchs in launching his book Digital Demagogue: Authoritarian Capitalism in the Age of Trump and Twitter. "Twitter is a me-centred medium. 'Social media' is the wrong term, because it's actually anti-social, Me media. It's all about individual profiles, accumulating reputation, followers, likes, and so on."

Saying that, however, plays into Facebook's own public mythology about itself. Facebook's actual and most significant holdings about us are far more extensive, and the company derives its real power from the complex social graphs it has built and the insights that can be gleaned from them. None of that is clear from the long list of friends. Even more significant is how Facebook matches up user profiles to other public records and social media services and with other brokers' datasets - but the archives give us no sense of that either. Facebook's knowledge of you is also greatly enhanced - as is its ability to lock you in as a user - if you, like many people, have opted to use Facebook credentials to log into third-party sites. Undoing that is about as easy and as much fun as undoing all your direct debit payments in order to move your bank account.

Facebook and the other tech companies are only the beginning. There are a few people out there trying to suggest Google is better, but Zeynep Tufekci discovered it had gone on retaining her YouTube history even though she had withdrawn permission to do so. As Tufekci then writes, if a person with a technical background whose job it is to study such things could fail to protect her data, how could others hope to do so?

But what about publishers and the others dependent on that same ecosystem? As Doc Searls writes, the investigative outrage on display in many media outlets glosses over the fact that they, too, are compromised. Third party trackers, social media buttons, Google analytics, and so on all deliver up readers to advertisers in increasing detail, feeding the business plans of thousands of companies all aimed at improving precision and targeting.

And why stop with publishers? At least they have the defense of needing to make a living. Government sites, libraries, and other public services do the same thing, without that justification. The Richmond Council website shows no ads - but it still uses Google Analytics, which means sending a steady stream of user data Google's way. Eventbrite, which everyone now uses for event sign-ups, is constantly exhorting me to post my attendance to Facebook. What benefit does Eventbrite get from my complying? It never says.

Meanwhile, every club, member organization, and creative endeavor begs its adherents to "like my page on Facebook" or "follow me on Twitter". While they see that as building audience and engagement, the reality is that they are acting as propagandists for those companies. When you try to argue against doing this, people will say they know, but then shrug helplessly and say they have to go where the audience is. If the audience is on Facebook, and it takes page likes to make Facebook highlight your existence, then what choice is there? Very few people are willing to contemplate the hard work of building community without shortcuts, and many seem to have come to believe that social media engagement as measured in ticks of approval is community, like Mark Zuckerberg tried to say last year.

For all these reasons, it's not enough to "fix Facebook". We must undo its leverage.


Illustrations: Facebook logo.


March 23, 2018

Aspirational intelligence

"All commandments are ideals," he said. He - Steven Croft, the Bishop of Oxford - had just finished reading out to the attendees of Westminster Forum's seminar (PDF) his proposed ten commandments for artificial intelligence. He's been thinking about this on our behalf, and he knows perfectly well that you can't expect, say, malware writers not to adopt AI enhancements. Hence the reply.

The first problem is: what counts as AI? Anders Sandberg has quipped that it's only called AI until it starts working, and then it's called automation. Right now, though, to many people "AI" seems to mean "any technology I don't understand".

Croft's commandment number nine seems particularly ironic: this week saw the first pedestrian killed by a self-driving car. Early guesses are that the likely weakest links were the underemployed human backup driver and the vehicle's faulty LIDAR interpretation of a person walking a bicycle. Whatever the jaywalking laws are in Arizona, most of us instinctively believe that in a cage match between a two-ton automobile and an unprotected pedestrian the car is always the one at fault.

Thinking locally, self-driving cars ought to be the most ethics-dominated use of AI, if only because people don't like being killed by machines. Globally, however, you could argue that AI might be better turned to finding the best ways to phase out cars entirely.

We may have better luck persuading criminal justice systems either to require transparency, fairness, and accountability in machine learning systems that predict recidivism and decide who can be helped - or to drop those systems entirely.

The less-tractable issues with AI are on display in the still-developing Facebook and Cambridge Analytica scandals. You may argue that Facebook is not AI, but the platform certainly uses AI to detect fraud, determine what we see, and decide which parts of our data to use on behalf of advertisers. All on its own, Facebook is a perfect exemplar of all the problems the Australian privacy advocate Roger Clarke foresaw in 2004 after examining the first social networks. In 2012, Clarke wrote, "From its beginnings and onward throughout its life, Facebook and its founder have demonstrated privacy-insensitivity and downright privacy-hostility." The same could be said of other actors throughout the tech industry.

Yonatan Zunger is undoubtedly right when he argues in the Boston Globe that computer science has an ethics crisis. However, just fixing computer scientists isn't enough if we don't fix the business and regulatory environment built on "ask forgiveness, not permission". Matt Stoller writes in the Atlantic about the decline since the 1970s of American political interest in supporting small, independent players and limiting monopoly power. The tech giants have widely exported this approach; now, the only other government big enough to counter it is the EU.

The meetings I've attended of academic researchers considering ethics issues with respect to big data have demonstrated all the careful thoughtfulness you could wish for. The November 2017 meeting of the Research Institute in Science of Cyber Security provided numerous worked examples in talks from Kat Hadjimatheou at the University of Warwick, C Marc Taylor from the UK Research Integrity Office, and Paul Iganski of the Centre for Research and Evidence on Security Threats (CREST). Their explanations of the decisions they've had to make about the practical applications and cases that have come their way are particularly valuable.

On the industry side, the problem is not just that Facebook has piles of data on all of us but that the feedback loop from us to the company is indirect. Since the Cambridge Analytica scandal broke, some commenters have indicated that being able to do without Facebook is a luxury many can't afford and that in some countries Facebook *is* the internet. That in itself is a global problem.

Croft's is one of at least a dozen efforts to come up with an ethics code for AI. The Open Data Institute has its Data Ethics Canvas framework to help people working with open data identify ethical issues. The IEEE has published some proposed standards (PDF) that focus on various aspects of inclusion - language, cultures, non-Western principles. Before all that, in 2011, Danah Boyd and Kate Crawford penned Six Provocations for Big Data, which included a discussion of the need for transparency, accountability, and consent. The World Economic Forum published its top ten ethical issues in AI in 2016. Also in 2016, a Stanford University group published a report trying to fend off regulation by saying it was impossible.

If the industry proves to be right and regulation really is impossible, it won't be because of the technology itself but because of the ecosystem that nourishes amoral owners. "Ethics of AI", as badly as we need it, will be meaningless if the necessary large piles of data to train it are all owned by just a few very large organizations and well-financed criminals; it's equivalent to talking about "ethics of agriculture" when all the seeds and land are owned by a child's handful of global players. The pre-emptive antitrust movement of 2018 would find a way to separate ownership of data from ownership of the AI, algorithms, and machine learning systems that work on them.


Illustrations: HAL.


March 9, 2018

Signaling intelligence

Last month, the British Home Office announced that it had a tool that can automatically detect 94% of Daesh propaganda with 99.995% accuracy. Sophos summarizes the press release to say that only 50 out of 1 million videos would require human review.

"It works by spotting subtle patterns in the extremist videos that distinguish them from normal content..." Marc Warner, CEO of London-based ASI Data Science, the company that developed the classifier, told Buzzfeed.

Yesterday, ASI, which numbers Skype co-founder Jaan Tallinn among its investors, presented its latest demo day in front of a packed house. Most of the lightning presentations focused on various projects its Fellows have led using its tools in collaboration with outside organizations such as Rolls Royce and the Financial Conduct Authority. Warner gave a short presentation of the Home Office extremism project that included little more detail than the press reports a month ago, to which my first reaction was: it sounds impossible.

That reaction is partly due to the many problems with AI, machine learning, and big data that have surfaced over the last couple of years. Either there are hidden biases, or the media reports are badly flawed, or the system appears to be telling us only things we already know.

Plus, it's so easy - and so much fun! - to mock the flawed technology. This week, for example, neural network trainer Janelle Shane showed off the results of some of her pranks. After confusing image classifiers with sheep that don't exist, goats in trees (birds! or giraffes!) and sheep painted orange (flowers!), she concludes, "...even top-notch algorithms are relying on probability and luck." Even more than humans, it appears that automated classifiers decide what they see based on what they expect to see and apply probability. If a human is holding it, it's probably a cat or dog; if it's in a tree it's not going to be a goat. And so on. The experience leads Shane to surmise that surrealism might be the way to sneak something past a neural net.

ASI's classifier probably takes some of this same approach (we were shown no details). As Sophos suggests, a lot of the signals ASI's algorithm is likely to use have nothing to do with the computer "seeing" or "interpreting" the images. Instead, it likely looks for known elements such as logos and facial images matched against known terrorism photos or videos. In addition, it can assess the cluster of friends surrounding the account that posted the video and look for profile information showing that the source has posted such material in the past. And some signals will be based on analyzing the language used in the video. From what ASI was saying, the claim the company is making is fairly specific: the algorithm is supposed to detect (specifically) Daesh videos, with a false positive rate of 0.005% and a true positive rate of 94%.

These numbers - assuming they're not artifacts of computerish misunderstanding about what it's looking for - of course represent tradeoffs, as Patrick Ball explained to us last year. Do we want the algorithm to block all possible Daesh videos? Or are we willing to allow some through in the interests of honoring the value of freedom of expression and not blocking masses of perfectly legal and innocent material? That policy decision is not ASI's job.
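It's worth spelling out how those two rates interact with base rates. The sketch below plugs ASI's claimed figures into a toy calculation; the prevalence figure (1 Daesh video per 10,000 uploads) is my own illustrative assumption, not anything ASI or the Home Office has published:

```python
# Illustrative only: the base rate of Daesh videos among uploads is an
# assumption for the sake of the arithmetic, not a published figure.
def review_load(total_videos, base_rate, tpr, fpr):
    """Return (true positives, false positives, precision) for a classifier
    with the given true/false positive rates on a corpus of total_videos."""
    positives = total_videos * base_rate      # videos that really are Daesh
    negatives = total_videos - positives      # everything else
    tp = tpr * positives                      # correctly flagged
    fp = fpr * negatives                      # innocent videos flagged
    precision = tp / (tp + fp)                # share of flags that are right
    return tp, fp, precision

# ASI's claimed rates: 94% detection, 0.005% false positives.
# Assume 1 in 10,000 of a million uploads is actually Daesh material.
tp, fp, precision = review_load(1_000_000, 0.0001, 0.94, 0.00005)
print(f"caught: {tp:.0f}, wrongly flagged: {fp:.0f}, precision: {precision:.0%}")
# caught: 94, wrongly flagged: 50, precision: 65%
```

With that assumed prevalence, the 0.005% false positive rate reproduces the press release's "50 out of 1 million" review load - and even then, roughly a third of flagged videos would be innocent, which is why the human-review and policy questions don't go away.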

What was more confusing in the original reports is that the training dataset was said to have been "over 1,000 videos". That seems an incredibly small sample for testing a classifier that's going to be turned loose on a dataset of millions. At the demonstration, Warner's one new piece of information is that because that training set was indeed small, the project developed "synthetic data" to enlarge the training set to sufficient size. As gaming-the-system as that sounds, creating synthetic data to augment training data is a known technique. Without knowing more about the techniques ASI used to create its synthetic data it's hard to assess that work.
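For context, one widely used family of augmentation techniques synthesizes new minority-class examples by interpolating between real ones (the idea behind SMOTE). Whether ASI did anything like this is unknown; the sketch below is purely illustrative, and `synthesize` is a made-up helper:

```python
import random

def synthesize(samples, n_new):
    """Create n_new synthetic feature vectors, each lying on the line
    segment between a randomly chosen pair of real samples."""
    new = []
    for _ in range(n_new):
        a, b = random.sample(samples, 2)   # pick two distinct real vectors
        t = random.random()                # interpolation weight in [0, 1]
        new.append([x + t * (y - x) for x, y in zip(a, b)])
    return new

# Enlarge a tiny (hypothetical) feature set tenfold.
real = [[0.1, 0.9], [0.2, 0.8], [0.15, 0.95]]
augmented = real + synthesize(real, 30)
```

The catch - and presumably what an independent review would probe - is that synthetic points generated this way can only recombine information already present in the original 1,000-odd videos; they enlarge the training set without enlarging its diversity.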

We would feel a lot more certain of all of these claims if the classifier had been through an independent peer review. The sensitivity of the material involved makes this tricky; and if there has been an outside review we haven't been told about it.

But beyond that, the project to remove this material rests on certain assumptions. As speakers noted at the first conference run by VOX-Pol, an academic research network studying violent online political extremism, the "lone wolf" theory posits that individuals can be radicalized at home by viewing material on the internet. The assumption that this is true underpins the UK's censorship efforts. Yet this theory is contested: humans are highly social animals. Radicalization seems unlikely to take place in a vacuum. What - if any - is the pathway from viewing Daesh videos to becoming a terrorist attacker?

All these questions are beyond ASI's purview to answer. They'd probably be the first to say: they're only a hill of technology beans being asked to solve a mountain of social problems.

Illustrations: Slides from the demonstration (Sam Smith).


March 2, 2018

In sync

Until Wednesday, I was not familiar with the use of "sync" to stand for a music synchronization license - that is, a license to use a piece of music in a visual setting such as a movie, video game, or commercial. The negotiations involved can be Byzantine and very, very slow, in part because the music's metadata is so often wrong or missing. In one such case, described at Music 4.5's seminar on developing new deals and business models for sync (Flash), it took ten years to get the wrong answer from a label to the apparently simple question: who owns the rights to this track on this compilation album?

The surprise: this portion of the music business is just as frustrated as activists with the state of online copyright enforcement. They don't love the Digital Millennium Copyright Act (1998) any more than we do. We worry about unfair takedowns of non-infringing material and bans on circumvention tools; they hate that the Act's Safe Harbor grants YouTube and Facebook protection from liability as long as they remove content when told it's infringing. Google's automated infringement detection software, ContentID, I heard Wednesday, enables the "value gap", which the music industry has been fretting about for several years now because the sites have no motivation to create licensing systems. There is some logic there.

However, where activists want to loosen copyright, enable fair use, and restore the public domain, they want to dump Safe Harbor, whether by developing a technological bypass, by changing the law, or by getting FaceTube to devise a fairer, more transparent revenue split. "Instagram," said one, "has never paid the music industry but is infringing copyright every day."

To most of us, "online music" means subscription-based streaming services like Spotify or download services like Amazon and iTunes. For many younger people, though, especially Americans, YouTube is their jukebox. Pex estimates that 84% of YouTube videos contain at least ten seconds of music. Google says ContentID matches 99.5% of those, and then they are either removed or monetized. But, Pex argues, 65% of those videos remain unclaimed and therefore provide no revenue. Worse, as streaming grows, downloads are crashing. There's a detectable attitude that if they can fix licensing on YouTube they will have cracked it for all sites hosting "creator-generated content".

It's a fair complaint that ContentID was built to protect YouTube from liability, not to enable revenues to flow to rights holders. We can also all agree that the present system means millions of small-time creators are locked out of using most commercial music. The dancing baby case took eight years to decide that the background existence of a Prince song in a 29-second home video of a toddler dancing was fair use. But sync, too, was designed for businesses negotiating with businesses. Most creators might indeed be willing to pay to legally use commercial music if licensing were quick, simple, and cheap.

There is also a question of whether today's ad revenues are sustainable; a graphic I can't find showed that the payout per view is shrinking. Bloomberg finds that, increasingly, winning YouTubers take all, with little left for the very long tail.

The twist in the tale is this. MP3 players unbundled albums into songs as separate marketable items. Many artists were frustrated by the loss of control inherent in enabling mix tapes at scale. Wednesday's discussion heralded the next step: unbundling the music itself, breaking it apart into individual beats, phrases and bars, each licensable.

One speaker suggested scenarios. The "content" you want to enjoy is 42 minutes long but your commute is only 38 minutes. You might trim some "unnecessary dialogue" and rearrange the rest so now it fits! My reaction: try saying "unnecessary dialogue" to Aaron Sorkin and let's see how that goes.

I have other doubts. I bet "rearranging" will take longer than watching the four minutes. Speeding up the player slightly achieves the same result, and you can do that *now* for free. More useful was the suggestion that hearing-impaired people could benefit from being able to tweak the mix to fade the background noise and music in a pub scene to make the actors easier to understand. But there, too, we actually already have closed captions. It's clear, however, that while the scenarios may be wrong, the unbundling probably isn't.

In this world, we won't be talking about music, but "music objects". Many will be very low-value...but the value of the total catalogue might rise. The BBC has an experiment up already: The Mermaid's Tears, an "object-based radio drama" in which you can choose to follow any one of the three characters to experience the story.

Smash these things together, and you see a very odd world coming at us. It's hard to see how fair use survives a system that aims to license "music objects" rather than "music". In 1990, Pamela Samuelson warned about copyright maximalism. That agenda does not appear to have gone away.


Illustrations: King David dancing before the Ark of the Covenant, 'Maciejowski Bible', Paris ca. 1240 (via Discarding Images).


December 8, 2017

Pastures of plenty

It was while I was listening to Isabella Henriques talk about children and consumerism at this week's Children's Global Media Summit that it occurred to me that where most people see life happening, advertisers see empty space.

Henriques, like Kathryn Montgomery earlier this year, is concerned about abusive advertising practices aimed at children. So much UK rhetoric around children and the internet focuses on pornography and extremism - see, for example, this week's Digital Childhood report calling for a digital environment that is "fit for childhood" - that it's refreshing to hear someone talk about other harms. Such as: teaching kids "consumerism". Under 12, Henriques said, children do not understand the persuasiveness and complexity of advertising. Under six, they don't identify ads (like the toddler who watched 12 minutes of Geico commercials). And even things that are *effectively* ads aren't necessarily easily identifiable as such, even by adults: unboxing videos, product placement, YouTube kids playing with branded toys, and in-app "opportunities" to buy stuff. Henriques' research finds that children influence family purchases by up to 80%. That's not a baby you're expecting; it's a sales promoter.

When we talk about the advertising arms race, we usually mean the expanding presence and intrusiveness of ads in places where we're already used to seeing them. That escalation has been astonishing.

To take one example: a half-hour sitcom episode on US network television in 1965 - specifically, the deservedly famous Coast to Coast Big Mouth episode of The Dick Van Dyke Show - was 25:30 minutes long. A 2017 episode of the top-rated US comedy, The Big Bang Theory, barely ekes out 18. That's nearly a third less content, with the share of the half hour spent watching ads jumping from 15% to 40% - or simply seven and a half extra minutes. No wonder people realized automatic ad marking and fast-forwarding would sell.
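The arithmetic is easy to verify. A quick back-of-the-envelope sketch, using only the half-hour slot length and the two running times cited above (everything else here is derived, not sourced):

```python
# Back-of-the-envelope check of the sitcom ad-time figures cited above.
SLOT = 30.0             # half-hour broadcast slot, in minutes

content_1965 = 25.5     # "Coast to Coast Big Mouth" ran 25:30
content_2017 = 18.0     # a 2017 Big Bang Theory episode barely reaches 18:00

ads_1965 = SLOT - content_1965   # 4.5 minutes of ads
ads_2017 = SLOT - content_2017   # 12.0 minutes of ads

extra_ad_minutes = ads_2017 - ads_1965                        # 7.5 minutes
content_drop = (content_1965 - content_2017) / content_1965   # about 29%
ad_share_1965 = ads_1965 / SLOT                               # 15% of the slot
ad_share_2017 = ads_2017 / SLOT                               # 40% of the slot

print(f"extra ad minutes per episode: {extra_ad_minutes}")
print(f"content lost: {content_drop:.0%}")
print(f"ad share of the half hour: {ad_share_1965:.0%} -> {ad_share_2017:.0%}")
```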

The internet kicked this into high gear. The lack of regulation and the uncertainty about business models led to legitimate experimentation. But it also led to today's complaints, both about maximally intrusive and attention-demanding ads and the data mining advertisers and their agencies use to target us, and also to increasingly powerful ad blockers - and ad blocker blockers.

The second, more subtle version of the arms race is the one where advertisers see every open space where people congregate as theirs to target. This was summed up for me once at a lunchtime seminar run by the UK's Internet Advertising Bureau in 2003, when a speaker gave an enthusiastic tutorial on marketing via viral email: "It gets us into the office. We've never been able to go there before." You could immediately see what office inboxes looked like to them: vast green fields just waiting to be cultivated. You know, the space we thought of as "work". And we were going to be grateful.

Childhood, as listening to Henriques, Montgomery, and the Campaign for a Commercial-Free Childhood makes plain, is one of those green fields advertisers have long fought to cultivate. On broadcast media, regulators were able to exercise some control. Even online, the Children's Online Privacy Protection Act has been of some use.

Advertisers, like some religions, aim to capture children's affections young, on the basis that the tastes and habits you acquire in childhood are the hardest for an interloper to disrupt. The food industry has long been notorious for finding ways around regulations that limit how it can target children with unhealthy foods on broadcast and physical-world media. But the internet offers new options: "Smart" toys are one set of examples; Facebook's new Messenger Kids app is another. This arms race variant will escalate as the Internet of Things offers advertisers access to new areas of our lives.

Part of this story is the vastly increased quantities of data that will be available to sell to advertisers for data mining. On the web, "free" has long meant "pay with data". With the Internet of Things, no device will be free, but we will pay with data anyway. The cases we wrote about last week are early examples. As hardware becomes software, replacement life cycles become the manufacturer's choice, not yours. "My" mobile phone is as much mine as "my library book" - and a Tesla is a mobile phone with a chassis and wheels. Think of the advertising opportunities when drivers are superfluous to requirements, beginning with the self-driving car's dashboard and windshield. The voice-operated Echo/Home/Dot/whatever is clearly intended to turn homes into marketplaces.

A more important part is the risk of turning our homes into walled gardens, as Geoffrey A. Fowler writes in the Washington Post of his trial of Amazon Key. During the experiment, Fowler found strangers entering his house less disturbing than his sense of being "locked into an all-Amazon world". The Key experiment is, in Fowler's estimation, the first stab at Amazon's goal of becoming "the operating system for your home". Will Amazon, Google, and Apple homes be interoperable?

Henriques is calling for global regulation to limit the targeting of children for food and other advertising. It makes sense: every country is dealing with the same multinational companies, and most of us can agree on what "abusive advertising" means. But then you have to ask: why do they get a pass on the rest of us?


Illustrations: Windows XP start-up screen


November 23, 2017

Twister

"We were kids working on the new stuff," said Kevin Werbach. "Now it's 20 years later and it still feels like that."

Werbach was opening last weekend's "radically interdisciplinary" (Geoffrey Garrett) After the Digital Tornado, at which a roomful of internet policy veterans tried to figure out how to fix the internet. As Jaron Lanier showed last week, there's a lot of this where-did-we-all-go-wrong happening.

The Digital Tornado in question was a working paper Werbach wrote in 1997, when he was at the Federal Communications Commission. In it, Werbach sought to pose questions for the future, such as what the role of regulation would be around...well, around now.

Some of the paper is prescient: "The internet is dynamic precisely because it is not dominated by monopolies or governments." Parts are quaint now. Then, the US had 7,000 dial-up ISPs and AOL was the dangerous giant. It seemed reasonable to think that regulation was unnecessary because public internet access had been solved. Now, with minor exceptions, the US's four ISPs have carved up the country among themselves to such an extent that most people have only one ISP to "choose" from.

To that, Gigi Sohn, the co-founder of Public Knowledge, named the early mistake from which she'd learned: "Competition is not a given." Now, 20% of the US population still have no broadband access. Notably, this discussion was taking place days before current FCC chair Ajit Pai announced he would end the network neutrality rules adopted in 2015 under the Obama administration.

Everyone had a pet mistake.

Tim Wu, regarding decisions that made sense for small companies but are damaging now they're huge: "Maybe some of these laws should have sunsetted after ten years."

A computer science professor bemoaned the difficulty of auditing protocols for fairness now that commercial terms and conditions apply.

Another wondered if our mental image of how competition works is wrong. "Why do we think that small companies will take over and stay small?"

Yochai Benkler argued that the old way of reining in market concentration, by watching behavior, no longer works; we understood scale effects but missed network effects.

Right now, market concentration looks like Google-Apple-Microsoft-Amazon-Facebook. Rapid change has meant that the past Big Tech we feared would break the internet has typically been overrun. Yet we can't count on that. In 1997, market concentration meant AOL and, especially, desktop giant Microsoft. Brett Frischmann paused to reminisce that in 1997 AOL's then-CEO Steve Case argued that Americans didn't want broadband. By 2007 the incoming giant was Google. Yet, "Farmville was once an enormous policy concern," Christopher Yoo reminded; so was Second Life. By 2007, Microsoft looked overrun by Google, Apple, and open source; today it remains the third largest tech company. The garage kids can only shove incumbents aside if the landscape lets them in.

"Be Facebook or be eaten by Facebook", said Julia Powles, reflecting today's venture capital reality.

Wu again: "A lot of mergers have been allowed that shouldn't have been." On his list, rather than AOL and Time-Warner, cause of much 1999 panic, was Facebook and Instagram, which the Office of Fair Trading approved because Facebook didn't have cameras and Instagram didn't have advertising. Unrecognized: they were competitors in what Wu has dubbed the attention economy.

Both Bruce Schneier, who considered a future in which everything is a computer, and Werbach, who found early internet-familiar rhetoric hyping the blockchain, saw more oncoming gloom. Werbach noted two vectors: remediable catastrophic failures, and creeping recentralization. His examples of the DAO hack and the Parity wallet bug led him to suggest the concept of governance by design. "This time," Werbach said, adding his own entry onto the what-went-wrong list, "don't ignore the potential contributions of the state."

Karen Levy's "overlooked threat" of AI and automation is a far more intimate and intrusive version of Shoshana Zuboff's "surveillance capitalism"; it is already changing the nature of work in trucking. This resonated with Helen Nissenbaum's "standing reserves": an ecologist sees a forest; a logging company sees lumber-in-waiting. Zero hours contracts are an obvious human example of this, but look how much time we spend waiting for computers to load so we can do something.

Levy reminded that surveillance has a different meaning for vulnerable groups, linking back to Deirdre Mulligan's comparison of algorithmic decision-making in healthcare and the judiciary. The first is operated cautiously with careful review by trained professionals who have closely studied its limits; the second is off-the-shelf software applied willy-nilly by untrained people who change its use and lack understanding of its design or problems. "We need to figure out how to ensure that these systems are adopted in ways that address the fact that...there are policy choices all the way down," Mulligan said. Levy, later: "One reason we accept algorithms [in the judiciary] is that we're not the ones they're doing it to."

Yet despite all this gloom - cognitive dissonance alert - everyone still believes that the internet has been and will be positively transformative. Julia Powles noted, "The tornado is where we are. The dandelion is what we're fighting for - frail, beautiful...but the deck stacked against it." In closing, Lauren Scholz favored a return to basic ethical principles following a century of "fallen gods" including really big companies, the wisdom of crowds, and visionaries.

Sohn, too, remains optimistic. "I'm still very bullish on the internet," she said. "It enables everything important in our lives. That's why I've been fighting for 30 years to get people access to communications networks."


Illustrations: After the Digital Tornado's closing panel (left to right): Kevin Werbach, Karen Levy, Julia Powles, Lauren Scholz; tornado (Justin1569 at Wikipedia)


November 17, 2017

Counterfactuals

On Tuesday evening, virtual reality pioneer and musician Jaron Lanier, in London to promote his latest book, Dawn of the New Everything, suggested the internet took a wrong turn in the 1990s by rejecting the idea of combating spam by imposing a tiny - "homeopathic" - charge to send email. Think where we'd be now, he said. The mindset of paying for things would have been established early, and instead of today's "behavior modification empires" we'd have a system where people were paid for the content they produce.

Lanier went on to invoke the ghost of Ted Nelson, who began his earliest work on Project Xanadu in 1960, before ARPAnet, the internet, and the web. The web fosters copying. Xanadu instead gave every resource a permanent and unique address, and linking instead of copying meant nothing ever lost its context.

The problem, as Nelson's 2011 autobiography Possiplex and a 1995 Wired article made plain, is that trying to get the thing to work was a heartbreaking journey filled with cycles of despair and hope that was increasingly orthogonal to where the rest of the world was going. While efforts continue, it's still difficult to comprehend, no matter how technically visionary and conceptually advanced it was. The web wins on simplicity.

But the web also won because it was free. Tim Berners-Lee is very clear about the importance he attaches to deciding not to patent the web and charge licensing fees. Lanier, whose personal stories about internetworking go back to the 1980s, surely knows this. When the web arrived, it had competition: Gopher, Archie, WAIS. Each had its limitations in terms of user interface and reach. The web won partly because it unified all their functions and was simpler - but also because it was freer than the others.

Suppose those who wanted minuscule payments for email had won? Lanier believes today's landscape would be very different. Most of today's machine learning systems, from IBM Watson's medical diagnostician to the various quick-and-dirty translation services, rely on mining an extensive existing corpus of human-generated material. In Watson's case, it's medical research, case studies, peer review, and editing; in the case of translation services it's billions of side-by-side human-translated pages that are available on the web (though later improvements have taken a new approach). Lanier is right that the AIs built by crunching found data are parasites on generations of human-created and curated knowledge. By his logic, establishing payment early as a fundamental part of the internet would have ensured that the humans who created all that data would be paid for their contributions when machine learning systems mined it. Clarity would result: instead of the "cruel" trope that AIs are rendering humans unnecessary, it would be obvious that AI progress relied on continued human input. For that we could all be paid rather than being made "wards of the state".

Consider a practical application. Microsoft's LinkedIn is in court opposing HiQ, a company that scrapes LinkedIn's data to offer employers services that LinkedIn might like to offer itself. The case, which was decided in HiQ's favor in August but is appeal-bound, pits user privacy (argued by EPIC) against innovation and competition (argued by EFF). Everyone speaks for the 500 million whose work histories are on LinkedIn, but no one speaks for our individual ownership of our own information.

Let's move to Lanier's alternative universe and say the charge had been applied. Spam dropped out of email early on. We developed the habit of paying for information. Publishers and the entertainment industry would have benefited much sooner, and if companies like Facebook and LinkedIn had started, their business models would have been based on payments for posters and charges for readers (he claims to believe that Facebook will change its business model in this direction in the coming years; it might, but if so I bet it keeps the advertising).

In that world, LinkedIn might be our broker or agent negotiating terms with HiQ on our behalf rather than in its own interests. When the web came along, Berners-Lee might have thought pay-to-click logical, and today internet search might involve deciding which paid technology to use. If, that is, people found it economic to put the information up in the first place. The key problem with Lanier's alternative universe: there were no micropayments. A friend suggests that China might be able to run this experiment now: Golden Shield has full control, and everyone uses WeChat and AliPay.

I don't believe technology has a manifest destiny, but I do believe humans love free and convenient, and that overwhelms theory. The globally spreading all-you-can-eat internet rapidly killed the existing paid information services after commercial access was allowed in 1994. I'd guess that the more likely outcome of charging for email would have been the rise of free alternatives to email - instant messaging, for example, which happened in our world to avoid spam. The motivation to merge spam with viruses and crack into people's accounts to send spam would have arisen earlier than it did, so security would have been an earlier disaster. As the fundamental wrong turn, I'd instead pick centralization.

Lanier noted the culminating irony: "The left built this authoritarian network. It needs to be undone."

The internet is still young. It might be possible, if we can agree on a path.


Illustrations: Jaron Lanier in conversation with Luke Robert Mason (Eva Pascoe).



October 13, 2017

Cost basis

There's plenty to fret about in the green paper released this week outlining the government's Internet Safety Strategy (PDF) under the Digital Economy Act (2017). The technical working group is predominantly made up of child protection folks, with just one technical expert and no representatives of civil society or consumer groups. It lacks definitions: what qualifies as "social media"? And issues discussed here before persist, such as age verification and the mechanisms to implement it. Plus there are picky details, like requiring parental consent for the use of information services by children under 13, which apparently fails to recognize how often parents help their kids lie about their ages. However.

The attention-getting item we hadn't noticed before is the proposal of an "industry-wide levy which could in the future be underpinned with legislation" in order to "combat online harms". This levy is not, the paper says, "a new tax on social media" but instead "a way of improving online safety that helps businesses grow in a sustainable way while serving the wider public good".

The manifesto commitment on which this proposal is based compares this levy to those in the gambling and alcohol industries. The Gambling Act 2005 provides for legislation to support such a levy, though to date the industry's contributions, most of which go to GambleAware to help problem gamblers, are still voluntary. Similarly, the alcohol industry funds the Drinkaware Trust.

The problem is that these industries aren't comparable in business model terms. Alcohol producers and retailers make and sell a physical product. The gambling industry's licensed retailers also sell a product, whether it's physical (lottery tickets or slot machine rolls) or virtual (online poker). Either way, people pay up front and the businesses pay their costs out of revenues. When the government raises taxes or adds a levy or new restriction that has to be implemented, the costs are passed on directly to consumers.

No such business model applies in social media. Granted, the profits accruing to Facebook and Google (that is, Alphabet) look enormous to us, especially given the comparatively small amounts of tax they pay to the UK - 5% of UK profits for Facebook and a controversial but unclear percentage for Alphabet. But no public company adds costs without planning how to recoup them, so then the question is: how do companies that offer consumers a pay-with-data service do that, given that they can't raise prices?

The first alternative is to reduce costs. The problem is how. Reducing staff won't help with the kinds of problems we're complaining about, such as fake news and bad behavior, which require humans to solve. Machine learning and AI are not likely to improve enough to provide a substitute in the near term, though no doubt the companies hope they will in the longer term.

The second is to increase revenues, which would mean either raising prices to advertisers or finding new ways to exploit our data. The need to police user behavior doesn't seem like a hot selling point to convince advertisers that it's worth paying more. That leaves the likelihood that applying a levy will create a perverse incentive to gather and crunch yet more user data. That does not represent a win; nor does it represent "taking back control" in any sense.

It's even more unclear who would be paying the levy. The green paper says the intention is to make it "proportionate" and ensure that it "does not stifle growth or innovation, particularly for smaller companies and start-ups". It's not clear, however, that the government understands just how vast and varied "social media" are. The term includes everything from the services people feel they have little choice about using (primarily Facebook, but also Google to some extent) to the web boards on news and niche sites, to the comments pages on personal blogs, to long-forgotten precursors of the web like Usenet and IRC. Designing a levy to take account of all business models and none while not causing collateral damage is complex.

Overall, there's sense in the principle that industries should pay for the wider social damage they cause to others. It's a long-standing approach for polluters, for example, and some have suggested there's a useful comparison to make between privacy and the environment. The Equifax breach will be polluting the privacy waters for years to come as the leaked data feeds into more sophisticated phishing attacks, identity fraud, and other widespread security problems. Treating Equifax the way we treat polluters makes sense.

It's less clear how to apply that principle to sites that vary from self-expression to publisher to broadcaster to giant data miners. Since the dawn of the internet, any time someone's created a space for free expression, someone else has come along and colonized a corner of it where people could vent and be mean and unacceptable; 4chan has many ancestors. In 1994, Wired captured an early example: The War Between alt.tasteless and rec.pets.cats. Those Usenet newsgroups created revenue for no one, while Facebook and Google have enough money to be the envy of major governments.

Nonetheless, that doesn't make them fair targets for every social problem the government would like to dump off onto someone else. What the green paper needs most is a clear threat model, because it's only after you have one that you can determine the right tools for solving it.


Illustrations: Social network diagram.


April 24, 2015

When content wanted to be free

A long-running motif on the TV show Mad Men has been the conflict between the numbers guys - Harry Crane (Rich Sommer) and Jim Cutler (Harry Hamlin) - and the creative folks - Don Draper (Jon Hamm) and Peggy Olson (Elisabeth Moss) - who want to do inventive work that inspires emotional connection. As the discussion on the WELL concluded, the success of Google's all-text contextual ads says the numbers guys have won. For now.

This week, two German publishers lost in court against the creator of the browser plug-in Adblock Plus, which, like you'd think, blocks web ads for an increasing number of users worldwide. The publishers' contention: that Adblock Plus is "illegal" and "anti-competitive". Adblock Plus's project manager, Ben Williams, welcomed the precedent on his blog, hoping it will help his company avoid future expense and resource drain "defending what we feel is an obvious consumer right: giving people the ability to control their own screens by letting them block annoying ads and protect their privacy".

Williams concludes by suggesting that publishers should work with Adblock Plus to develop non-intrusive forms of advertising and "create a more sustainable Internet ecosystem for everyone". Adblock Plus implements this by whitelisting sites (the largest of which pay for the privilege) that run acceptable ads. Cue the arms race: the fork Adblock Edge still removes all ads. As of June 2014, PageFair counted 150 million ad blocker users (PDF), up 69% from 2013.

I have to admit to some inner conflict here, because those who argue that blocking ads is theft have a point. I am indeed accessing content whose existence (and whose writers) is being financed by advertisers without the quid pro quo of my attention. If everyone does this, the whole shebang - including a chunk of how I make my own living - is unsustainable. I should be wracked with guilt. It's just that the ads make me hate the companies that pay for them, and I can't read a web page full of fine print with animations in my face. Similarly, it's hard to enjoy - or even follow - a US television show when it's interrupted by eight minutes of ads per half hour and each one is delivered at a volume easily 1/3 higher than the program I'm there to see. I plead in return that I buy DVDs, magazine subscriptions, and books, and contribute my own share of free content to the web, but that doesn't pay the same content providers. What seems particularly unreasonable to me is double-dipping: ads in situations where we already pay for admission. That would include DVDs; movie theaters; premium TV channels; the Transport for London phone app; sports stadia during live events; and on purchased clothing.

So the question remains: for the large chunk of the web that is financed solely by advertising, do we want professional content or not? If we do, how do we propose to pay people to create it?

It turns out that this question was considered in 2012 by Tim Hwang (last seen at We Robot 2015) and Adi Kamdar in their paper: Peak Advertising (PDF). The paper makes the explicit analogy between the diminishing effectiveness of online advertising and the diminishing returns after peak oil, when the energy required for extraction exceeds the energy retrieved. The authors consider four indications that we might have reached the point of diminishing returns, and go on to speculate about how content on the Internet would have to evolve if it can no longer rely on advertising support as its dominant financial model. I found it a few months ago when I had the same thought: for many quarters now Google's revenues per click have been dropping (its latest results, released yesterday, continue the trend), and overall it seems impossible that there can be enough advertising in the world to pay for all the things people want to support that way.

Hwang and Kamdar highlighted three problems with the status quo in addition to the constant rise in ad blocking: demographics - advertising tends to reach the oldest (read: least desirable) customers; click fraud; and escalating ad density (the kind of saturation that sends Americans to fast-forwarding DVRs rather than watch eight minutes of ads per TV half hour). Hwang and Kamdar predicted that over the next decade falling revenues will encourage consolidation and monopolistic markets for online services because only the largest vendors will have sufficient inventory to remain profitable. In addition, they predicted an increasing interest on the part of advertisers in collecting more and more (and more privacy-invasive) data about users. Finally, they predicted a rise in essentially unblockable content - that is, "sponsored" stories and product placement. As evidence they were on the right track, I offer the UK Internet Advertising Bureau's discussion of "native ads" ("make advertising part [of] the content experience").

"The end of the Internet as we know it," they said on Usenet when the first ad went up. Recalcitrant users: a disruptive technology.



November 23, 2012

Democracy theater

So Facebook is the latest to discover that it's hard to come up with a governance structure online that functions in any meaningful way. This week, the company announced plans to disband the system of voting on privacy changes that it put in place in 2009. To be honest, I'm surprised it took this long.

TechCrunch explains the official reasons. First, with 1 billion users, it's now too easy to hit the threshold of 7,000 comments that triggers a vote on proposed changes. Second, with 1 billion users, amassing the 30 percent of the user base necessary to make the vote count has become...pretty much impossible. (Look, if you hate Facebook's policy changes, it's easier to simply stop using the system. Voting requires engagement.) The company also complained that the system as designed encourages comments' "quantity over quality". Really, it would be hard to come up with an online system that didn't unless it was so hard to use that no one would bother anyway.

The fundamental problem for any kind of online governance is that no one except some lawyers thinks governance is fun. (For an example of tedious meetings producing embarrassing results, see this week's General Synod.) Even online, where no one can tell you're a dog watching the Outdoor Channel while typing screeds of debate, it takes strong motivation to stay engaged. That in turn means that ultimately the people who participate, once the novelty has worn off, are either paid, obsessed, or awash in free time.

The people who are paid - either because they work for the company running the service or because they work for governments or NGOs whose job it is to protect consumers or enforce the law - can and do talk directly to each other. They already know each other, and they don't need fancy online governmental structures to make themselves heard.

The obsessed can be divided into two categories: people with a cause and troublemakers - trolls. Trolls can be incredibly disruptive, but they do eventually get bored and go away, IF you can get everyone else to starve them of the oxygen of attention by just ignoring them.

That leaves two groups: those with time (and patience) and those with a cause. Both tend to fall into the category Mark Twain neatly summed up: "Never argue with a man who buys his ink by the barrelful." Don't get me wrong: I'm not knocking either group. The cause may be good and righteous and deserving of having enormous amounts of time spent on it. The people with time on their hands may be smart, experienced, and expert. Nonetheless, they will tend to drown out opposing views with sheer volume and relentlessness.

All of which is to say that I don't blame Facebook if it found the comments process tedious and time-consuming, and as much of a black hole for its resources as the help desk for a company with impenetrable password policies. Others are less tolerant of the decision. History, however, is on Facebook's side: democratic governance of online communities does not work.

Even without the generic problems of online communities which have been replicated mutatis mutandis since the first modem uploaded the first bit, Facebook was always going to face problems of scale if it kept growing. As several stories have pointed out, how do you get 300 million people to care enough to vote? As a strategy, it's understandable why the company set a minimum percentage: so a small but vocal minority could not hijack the process. But scale matters, and that's why every democracy of any size has representative government rather than direct voting, like Greek citizens in the Acropolis. (Pause to imagine the complexities of deciding how to divvy up Facebook into tribes: would the basic unit of membership be nation, family, or circle of friends, or should people be allocated into groups based on when they joined or perhaps their average posting rate?)

The 2009 decision to allow votes came at a time when Facebook was under recurring and frequent pressure over a multitude of changes to its privacy policies, all going one way: toward greater openness. That was the year, in fact, that the system effectively turned itself inside out. EFF has a helpful timeline of the changes from 2005 to 2010. Putting the voting system in place was certainly good PR: it made the company look like it was serious about listening to its users. But, as the Europe vs Facebook site says, the choice was always a binary one - old policy or new policy - never an alternative policy proposed by users.

Even without all that, the underlying issue is this: what company would want democratic governance to succeed? The fact is that, as Roger Clarke observed before Facebook even existed, social networks have only one business model: to monetize their users. The pressure to do that has only increased since Facebook's IPO, even though founder Mark Zuckerberg created a dual-class structure that means his decisions cannot be effectively challenged. A commercial company - especially a *public* commercial company - cannot be run as a democracy. It's as simple as that. No matter how much their engagement makes them feel they own the place, the users are never in charge of the asylum. Not even on the WELL.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series.

August 3, 2012

Social advertising

It only takes two words to sum up Facebook's sponsored stories, the program under which you click the "Like" button on a brand's page and the system picks up your name and photograph and includes it in ads seen by your friends. The two words: social engineering.

The cooption of that phrase into the common language and the workings of time mean that its origins are beginning to be lost. In fact, it came from 1980s computer hacking, and was, to the best of my knowledge, created by Kevin Mitnick in the days when the New York Times was portraying him as the world's most dangerous hacker. (Compared to today's genuinely criminal hacking enterprises, Mitnick was almost absurdly harmless; but he scared the wrong people at the wrong time.) The thing itself, of course, is basically the confidence game that is probably as old as consciousness: you, the con man, get the mark to trust you so you can then manipulate that trust to your benefit. By the time the mark figures out the game, you yourself expect to be long gone and out of reach. Trust can be abruptly severed, but the results of having granted it in the first place can't be so easily undone.

Where Facebook messed up was in that last bit: it's hard for a company to leave town, and so the litigation was inevitable. Now there's a settlement under consideration that would require the company to pay millions to privacy advocacy organisations.

This hasn't, of course, been a good week for Facebook for other reasons: it released its first post-IPO financial statements last week. And, for the same reasons we gave when the IPO failed to impress us, those earnings were, as predicted, disappointing. At the same time, the company admitted that 83 million of its user accounts are fakes or duplicates (so the service's user base is maybe 912 million instead of 995 million). And a music company complains that it was paying for ads clicked on by bots, a claim Facebook says it can't substantiate. Small wonder the shares have halved in price since the IPO - and I'd say they're still too expensive.

The comment that individuals whose faces and names were used were being used as spokespeople without being paid, however, sparks some interesting thoughts about the democratization of celebrity endorsements and product placement. Ever since I first encountered MIT's work on wearable computing in the mid 1990s, I've wondered when we would start seeing people wearing clothing that's not just branded but displaying video ads. In the early 2000s, I recall attending an Internet Advertising Bureau event, where one of the speakers talked baldly about the desirability of getting messages into the workplace, which until then had been a no-go area. Well, I say no-go; to them I think it seemed more like a green field or an unbroken pasture of fresh snow.

Spammers were way ahead on this one, invading people's email inboxes and instant messaging and then, when filtering got good, spoofing the return addresses of people you know and trust in order to get you to click on the bad stuff. It's hard not to see Facebook's sponsored stories as the corporate version of this.

But what if they did pay, as that blog posting suggested? What if instead of casually telling your friends how great Lethal Police Hogwarts XXII is, you could get paid to do so? You wouldn't get much, true, but if sports stars can be paid millions of dollars to endorse tennis racquets (which are then customized to the point where they bear little resemblance to the mass market product sold to the rest of us) why shouldn't we be paid a few cents? Of course, after a while you wouldn't be able to trust your friends' opinions any more, but is that too high a price?

Recently, I've spent some time corresponding with a couple of people from Premiumlinkadvertising.com, who contacted me with the offer to pay me to insert a link to Musician's Friend into one of the music pages on my Web site. Once I realized that the deal was that the link could not be identified in any way as a paid link - it couldn't be put in a box, or a different font, or include the text "paid for", or anything like that - I bailed. They then offered more money. Last offer was $250 for a year, I think. I do allow ads on my site - a few pages have AdSense, and in the past a couple had paid-for text ads clearly labeled as such - but not masquerading as personal recommendations. I imagine there's some price at which I could be bought, but $250 is several orders of magnitude too low.


Week links:

- Excellent debunking of the "cybercrime costs $1 trillion" urban legend (is that including Facebook's vanishing market cap?)

- The Federated Artists Coalition has an interesting proposal to give artists and creators some rights in the proposed Universal/EMI merger.

- Wouldn't you think people would test their software before unleashing it on an unsuspecting stock market?


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


July 27, 2012

Retcons and reversals

Reversals - in which a twist of plot or dialogue reverses what's gone before it - make for great moments in fiction, both comic and tragic. Retcons, in which the known history of a character or event is rewritten or ignored, are typically a sign of writer panic: they're out of ideas and are desperate enough to betray the characters and enrage the fans.

This week real-life Internet-related news has seen so many of both that if it were a TV series the showrunner would demand that the writers slow the pace. To recap:

Reversal: Paul Chambers' acquittal on appeal in the so-called Twitter joke trial is good news for everyone: common sense has finally prevailed, albeit at great cost to Chambers, whose life was (we hope temporarily) wrecked by the original arrest and guilty verdict. The decision should go a long way toward establishing that context matters; that what is said online and in public may still be intended only for a relatively small audience who give it its correct meaning; and that when the personnel responsible for airport security, the police, and everyone else up the chain understand there was no threat intended the Crown Prosecution Service should pay attention. What we're trying to stop is people blowing up airports, not people expressing frustration on Twitter. The good news is that everyone except the CPS and the original judge could accurately tell the difference.

Retcon: The rewrite of British laws to close streets and control street signs, retailers, individual behavior, and other public displays for the next month, all to make the International Olympic Committee happy is both wrong and ironic. While the athletes are required to appear to be amateurs who participate purely for the love of sport (no matter what failed drug tests indicate), the IOC and its London delegate, LOCOG, are trying to please their corporate masters by behaving like bullies. This should not have been a surprise, given both the list of high-level corporate sponsors and the terms of the 2006 Act the British Parliament passed in their shameful eagerness to *get* the Olympics. No sporting event, no matter how prominent, no matter how much politicians hope it will bring luster to their country and keep them in office, should override national laws, norms, and standards.

In 1997 I predicted for Salon.com the top ten new jobs for 2002. Number one was copyright protection officer, which I imagined as someone who visited schools to ensure that children complied with trademark, copyright, and other intellectual property requirements. Today, according to CNN and the New York Times, 280 "brand police" are scouring London for marketers who are violating the London Olympic Games and Paralympic Games Act 2006 by using words that might conjure up an association with the Olympics in people's minds. Even Michael Payne, the marketing director who formulated the IOC's branding strategy, complains that LOCOG has gone too far. The Olympics of Discontent, indeed.

Reversal: Eleven-year-old Liam Corcoran managed to get through security and onto a plane, all without a ticket, boarding pass, or passport, apparently more or less by accident. The story probably shouldn't be the occasion for too much hand-wringing about security. The fixes are simple and cheap. And it's not as if the boy got through with a 3D printer and enough material to make a functioning gun. (Doubtless to be banned from Olympic events in 2016, alongside wireless hubs.)

Retcon: If you're going to (let's call it) reinterpret history to suit an agenda, you should probably stick with events far enough back that the people are all dead. There is by now plenty of high-quality debunking of Gordon Crovitz's claim in the Wall Street Journal that government involvement in the invention of the Internet is a "myth". Ha. Not only was the development of the Internet largely supported by the US government (and championed by Al Gore), so was that of the rest of the computer industry. That conservatives would argue this wasn't true is baffling; isn't the military supposed to be the one part of government anti-big-government people actually like? Another data point left out of the (largely American) discussion: the US government wasn't the only one involved. Much of the early work on internetworking involved international teamwork. The term "packet" in "packet switching", the fundamental way the Internet transmits data, came from the British efforts; its inventor was the Welsh computer scientist Donald Davies at the UK's National Physical Laboratory. Not that Mitt Romney will want to know this.

For good historical accounts of the building of the Internet, see Katie Hafner and Matthew Lyon's Where Wizards Stay Up Late: The Origins of the Internet (1996) and (especially for a more international view) Janet Abbate's Inventing the Internet. As for the Romney/Obama spat over who built what, I suspect that what President Obama was trying to get across was a point similar to that made by the writer Paulina Borsook in 1996: that without good roads, clean water, good schools, and all the other infrastructure First Worlders take for granted, big, new companies have a hard time emerging.

It's all part of that open, free infrastructure we so often like to talk about that's necessary for the commons to thrive. And for that, you need governments to do the right things.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

July 20, 2012

In the country of the free

About a year and a half ago, I suddenly noticed that The Atlantic was posting a steady stream of interesting articles to Twitter (@theatlantic) and realized it was time to resubscribe. In fact, I would argue that the magazine is doing a lot of what Wired used to do in its digital coverage.

I don't, overall, regret it. But this month's issue is severely marred by this gem, from Elizabeth Wurtzel (the woman who got famous for taking Prozac and writing about it):

Of the Founders' genius ideas, few trump intellectual-property rights. At a time when Barbary pirates still concerned them, the Framers penned an intellectual-property clause--the world's first constitutional protection for copyrights and patents. In so doing, they spawned Hollywood, Silicon Valley, Motown, and so on. Today, we foolishly flirt with undoing that. In a future where all art is free (the future as pined for by Internet pirates and Creative Commons zealots), books, songs, and films would still get made. But with nobody paying for them, they'd be terrible. Only people who do lousy work do it for free.

Wurtzel's piece, entitled "Charge for Your Ideas", is part of a larger section on innovative ideas; other than hers, most of them are at least reasonable suggestions. I hate to make the editors happy by giving additional attention to something that should have been scrapped, but still: there are so many errors in that one short paragraph that need rebuttal.

Very, very few people - the filmmaker Nina Paley being the only one who springs rapidly to mind (do check out her fabulous film Sita Sings the Blues) - actually want to do away with copyright. And even most of those would like to be paid for their work. Paley turned Sita over to her audience to distribute freely because the deals she was being offered by distributors were so terrible and demanded so much lock-in that she thought she could do better. And she has, including fees for TV and theatrical showings and sales of DVDs and other items. More important from her perspective, she's built an audience for the film that it probably never would have found through traditional channels and that will support and appreciate her future work. As so many of us have said, obscurity is a bigger threat to most artists than loss of revenues.

Neither Creative Commons, nor its founder, Larry Lessig, nor the Open Rights Group, nor the Electronic Frontier Foundation, nor anyone else I can think of among digital rights campaigners has ever said that copyright should be abolished. The Pirate Party, probably the most radical among politically active groups pushing for copyright reform, wants to cut it way back, true - but not to abolish it. Even free software diehard Richard Stallman finds copyright useful as a way of blocking people from placing restrictions on free software.

Creative Commons' purpose in life is to make it easy for anyone who creates online content to attach to it a simple, easy-to-understand license that makes clear what rights to the content are reserved and which are available. One of those licenses blocks all uses without permission; others allow modification, redistribution, or commercial use, or require attribution.

Wurtzel fails to grasp that one may wish to reform something without wishing to terminate its existence. It was radical to campaign for copyright reform 20 years ago; today even the British government agrees copyright reform is needed (though we may all disagree about the extent and form that reform should take).

The Framers did not invent copyright. It was that pesky country they left, Britain, that enacted the first copyright law, the Statute of Anne, in 1710. We will, however, allow the "first constitutional" bit to stand. That still does not mean that the copyright status of Mickey Mouse should dictate national law.

As for pirates - the seafaring kind, not the evil downloader with broadband - they are far from obsolete. In fact, piracy is on the increase, and a major concern to both governments and shipping businesses. In May, the New York Times highlighted the growing problem of Somali pirates off the Horn of Africa.

Her final claim, that "Only people who do lousy work do it for free", was the one that got me enraged enough to write this. It's an insult to every volunteer, every generous podcaster, every veteran artist who blogs to teach others, every beginning artist finding their voice, every intern, and every person who has a passion for something and pursues it for love, whether they're an athlete in an unpopular sport or an amateur musician who plays only for his friends because he doesn't want his relationship with music to be damaged by making it his job. It is certainly true that much of what we imagine is "free" is paid for in other ways: bloggers whose blogs are part of the output their employer pays for, free/open source software writers who like the credit and stature their contributions give them, and so on. But imagine the miserable, miserly, misanthropic society we'd be living in if her claim were true. We'd need that Prozac.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


June 29, 2012

Artificial scarcity

A couple of weeks ago, while covering the tennis at Eastbourne for Daily Tennis, I learned that there is an ongoing battle between the International Tennis Writers Association and the sport at large over the practice of posting interview transcripts online.

What happens is this. Tournaments - the top few layers of the men's (ATP) and women's (WTA) tours - pay stenographers from ASAP Sports to attend players' press conferences and produce transcripts, which are distributed to the journalists on-site to help them produce accurate copy. It's a fast service; the PR folks come around the press room with hard copies of the transcript perhaps 10-15 minutes after the press session ends.

Who gives press conferences? At Eastbourne, like most smaller events, the top four seeds all are required to do media on the first day. After that, every day's match winners are required to oblige if the press asks for them; losers have more discretion but the top players generally understand that with their status and success level comes greater responsibility to publicize the game by showing up to answer questions. The stenographer at Eastbourne was a highly trained court reporter who travels the golf and tennis worlds taking down these questions and answers verbatim on a chord keyboard.

It turns out this particular battle over transcripts has been going on for a while; witness this unhappy blogger's comment from June 2011, after discovering that the French Open had bowed to pressure and stopped publishing interviews on its Web site. The same blogger had earlier posted ITWA's response to the complaints.

ITWA's arguments are fairly simple. It's a substantial investment to travel the tour (true; per year full-time you're talking at least $50,000). If interview transcripts are posted on the Web before journalists have had a chance to write their stories, it won't be worth spending that money because anyone can write stories based on them (true). Newspapers are in dire straits as it is (true). The questions journalists ask the players are informed by their experience and professional expertise; surely they should have the opportunity to exploit the responses they generate before everyone else does - all those pesky bloggers, for example, who read the transcripts and compare them to the journalists' reports and spot the elisions and changes of context.

Now, I don't believe for a second that there will be no coverage of tennis if the press stop traveling the tour. What there won't be is *independent* coverage. Except for the very biggest events, the players will be interviewed by the tours' PR people, and everything published about them will be as sanitized as their Wimbledon whites. Plus some local press, asking things like, "Talk about how much you like Eastbourne." The result will be like the TV stations now that provide their live match commentary by dropping a couple of people in a remote studio. No matter how knowledgeable those people are, their lack of intimate contact with the players and local conditions deadens their commentary and turns it into a recital of their pet peeves. (Note to Eurosport: any time a commentator says, "We talk so often about..." that commentator needs to shut up.)

This is the same argument they used to have about TV: if people can see the match on TV they won't bother to travel to it (and sometimes you do still find TV blackouts of local games). That hasn't really turned out to be true - TV has indeed changed this and every other sport, but by creating international stars and bringing in a lot of money in both payment for TV rights and sponsorship.

My response to the person who told me about this issue was that I didn't think basing your business model on artificial scarcity was going to work, the way the world is going. But this is not the only example of such restrictions; a number of US tournaments do not allow fans to carry professional-quality cameras onto the grounds (to protect the interests of professional photographers).

What intrigued me about the argument - which at heart is merely a variant of the copyright wars - is that it pits the interests of fans and bloggers against those of the journalists who cover them. For the tournaments and tours themselves it's an inner conflict: they want both newspaper and magazine coverage *and* fan engagement. "Personal" contact with the players is a key part of that - and it is precisely what has diminished. Veteran tennis journalists will tell you that 20 years ago they got to know the players because they'd all be traveling the circuit together and staying in the same hotels. Today, the barriers are up; the players' lounge is carefully sited well away from the media centre.

Yet this little spat reflects the reality that the difference between writing a fan blog and working for a major media outlet is access. There is only so much time the stars in any profession - TV, sports, technology, business - can give to answering outsiders' questions before it eats into their real work. So this isn't really a story of artificial scarcity, though there's no lack of people who want to write about tennis. It's a story of real scarcity - but scarcity that one day soon is going to be differently distributed.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.



April 28, 2012

Interview with Lawrence Lessig

This interview was originally intended for a different publication; I only discovered recently that it hadn't run. Lessig and I spoke in late January, while the fate of the Research Works Act was still unknown (it's since been killed).

"This will be the grossest money election we've seen since Nixon," says the law professor Lawrence Lessig, looking ahead to the US Presidential election in November. "As John McCain said, this kind of spending level is certain to inspire a kind of scandal. What's needed is scandals."

It's not that Lessig wants electoral disaster; it's that scandals are what he thinks it might take to wake Americans up to the co-option of the country's political system. The key is the vast, escalating sums of money politicians need to stay in the game. In his latest book, Republic, Lost, Lessig charts this: in 1982 aggregate campaign spending for all House and Senate candidates was $343 million; in 2008 it was $1.8 billion. Another big bump upward is expected this year: the McCain quote he references was in response to the 2010 Supreme Court decision in Citizens United legalising Super-PACs. These can raise unlimited campaign funds as long as they have no official contact with the candidates. But as Lessig details in Republic, Lost, money-hungry politicians don't need things spelled out.

Anyone campaigning against the seemingly endless stream of anti-open Internet, pro-copyright-tightening policies and legislation in the US, EU, and UK - think the recent protests against the US's Stop Online Piracy Act (SOPA) and PROTECT IP Act (PIPA) and the controversy over the Digital Economy Act and the just-signed Anti-Counterfeiting Trade Agreement (ACTA) treaty - has experienced the blinkered conviction among many politicians that there is only one point of view on these issues. Years of trying to teach them otherwise helped convince Lessig that it was vital to get at the root cause, at least in the US: the constant, relentless need to raise escalating sums of money to fund their election campaigns.

"The anti-open access bill is such a great example of the money story," he says, referring to the Research Works Act (H.R. 3699), which would bar government agencies from mandating that the results of publicly funded research be made accessible to the public. The target is the National Institutes of Health, which adopted such a policy in 2008; the backers are journal publishers.

"It was introduced by a Democrat from New York and a Republican from California and the single most important thing explaining what they're doing is the money. Forty percent of the contributions that Elsevier and its senior executives have made have gone to this one Democrat." There is also, he adds, "a lot to be done to document the way money is blocking community broadband projects".

Lessig, a constitutional scholar, came to public attention in 1998, when he briefly served as a special master in Microsoft's antitrust case. In 1999, he wrote the frequently cited book Code and Other Laws of Cyberspace, following up by founding Creative Commons to provide a simple way to licence work on the Internet. In 2002, he argued Eldred v. Ashcroft against copyright term extension in front of the Supreme Court, a loss that still haunts him. Several books later - The Future of Ideas, Free Culture, and Remix - in 2008, at the Emerging Technology conference, he changed course into his present direction, "coding against corruption". The discovery that he was writing a book about corruption led Harvard to invite him to run the Edmond J. Safra Foundation Center for Ethics, where he fosters RootStrikers, a network of activists.

Of the Harvard centre, he says, "It's a bigger project than just being focused on Congress. It's a pretty general frame for thinking about corruption and trying to think in many different contexts." Given the amount of energy and research, "I hope we will be able to demonstrate something useful for people trying to remedy it." And yet, as he admits, although corruption - and similar copyright policies - can be found everywhere, his book and research are resolutely limited to the US: "I don't know enough about different political environments."

Lessig sees his own role as a purveyor of ideas rather than an activist.

"A division of labour is sensible," he says. "Others are better at organising and creating a movement." For similar reasons, despite a brief flirtation with the notion in early 2008, he rules out running for office.

"It's very hard to be a reformer with idealistic ideas about how the system should change while trying to be part of the system," he says. "You have to raise money to be part of the system and engage in the behaviour you're trying to attack."

Getting others - distinguished non-politicians - to run on a platform of campaign finance reform is one of four strategies he proposes for reclaiming the republic for the people.

"I've had a bunch of people contact me about becoming super-candidates, but I don't have the infrastructure to support them. We're talking about how to build that infrastructure." Lessig is about to publish a short book mapping out strategy; later this year he will update it, incorporating contributions made on a related wiki.

The failure of Obama, a colleague at the University of Illinois at Chicago in the mid-1990s, to fulfil his campaign promises in this area is a significant disappointment.

"I thought he had a chance to correct it and the fact that he seemed not to pay attention to it at all made me despair," he says.

Discussion is also growing around the most radical of the four proposals, a constitutional convention under Article V to force through an amendment; to make it happen 34 state legislatures would have to apply.

"The hard problem is how you motivate a political movement that could actually be strong enough to respond to this corruption," he says. "I'm doing everything I can to try to do that. We'll see if I can succeed. That's the objective."


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, an archive of all the earlier columns in this series, and one of other interviews.


February 17, 2012

Foul play

You could have been excused for thinking you'd woken up in a foreign country on Wednesday, when the news broke about a new and deliberately terrifying notice replacing the front page of a previously little-known music site, RnBXclusive.

ZDNet has a nice screenshot of it; it's gone from the RnBXclusive site now, replaced by a more modest advisory.

It will be a while before the whole story is pieced together - and tested in court - but the gist so far seems to be that the takedown of this particular music site was under the fraud laws rather than the copyright laws. As far as I'm aware - and I don't say this often - this is the first time in the history of the Net that the owner of a music site has been arrested on suspicion of conspiracy to defraud (instead of copyright infringement). It seems to me this is a marked escalation of the copyright wars.

Bearing in mind that at this stage these are only allegations, it's still possible to do some thinking about the principles involved.

The site is accused of making available, without the permission of the artists or recording companies, pre-release versions of new music. I have argued for years that file-sharing is not the economic enemy of the music industry and that the proper answer to it is legal, fast, reliable download services. (And there is increasing evidence bearing this out.) But material that has not yet been officially released is a different matter.

The notion that artists and creators should control the first publication of new material is a long-held principle and intuitively correct (unlike much else in copyright law). This was the stated purpose of copyright: to grant artists and creators a period of exclusivity in which to exploit their ideas. Absolutely fundamental to that is time in which to complete those ideas and shape them into their final form. So if the site was in fact distributing unreleased music as claimed, especially if, as is also alleged, the site's copies of that music were acquired by illegally hacking into servers, no one is going to defend either the site or its owner.

That said, I still think artists are missing a good bet here. The kind of rabid fan who can't wait for the official release of new music is exactly the kind of rabid fan who would be interested in subscribing to a feed from the studio while that music is being recorded. They would also, as a friend commented a few years ago, be willing to subscribe to a live feed from the musicians' rehearsal studio. Imagine, for example, being able to listen to great guitarists practice. How do they learn to play with such confidence and authority? What do they find hard? How long does it take to work out and learn something like Dave Van Ronk's rendition, on guitar, of Scott Joplin rags with the original piano scoring intact?

I know why this doesn't happen: an artist learning a piece is like a dog with a wound (or maybe a bone): you want to go off in a forest by yourself until it's fixed. (Plus, it drives everyone around you mad.) The whole point of practicing is that it isn't performance. But musicians aren't magicians, and I find it hard to believe that showing the nuts and bolts of how the trick of playing music is worked would ruin the effect. For other types of artists - well, writers with works in progress really don't do much worth watching, but sculptors and painters surely do, as do dance troupes and theatrical companies.

However, none of that excuses the site if the allegations are true: artists and creators control the first release.

But also clearly wrong was the notice SOCA placed on the site, which displayed visitors' IP addresses, warned that downloading music from the site was a crime bearing a maximum penalty of up to ten years in prison, and claimed that SOCA has the capacity to monitor and investigate you, with no mention of due process or court orders. Copyright infringement is a civil offense, not a criminal one; fraud is a criminal offense, but it's hard to see how the claim that downloading music is part of a conspiracy to commit fraud could be made to stick. (A day later, SOCA replaced the notice.) Someone browsing to The Pirate Bay and clicking on a magnet link is not conspiring to steal TV shows any more than someone buying a plane ticket is conspiring to destroy the ozone layer. That millions of people do both things is a contributing factor to the existence of the site and the airline, but if you accuse millions of people the term "organized crime" loses all meaning.

This was a bad, bad blunder on the part of authorities wishing to eliminate file-sharing. Today's unworkable laws against file-sharing are bringing the law into contempt already. Trying to scare people by misrepresenting what the law actually says at the behest of a single industry simply exacerbates the effect. First they're scared, then they're mad, and then they ignore you. Not a winning strategy - for anyone.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


February 10, 2012

Media cop

The behavior of The Times in the 2009 NightJack case, in which the paper outed an anonymous policeman blogging about his job, was always baffling, since one of the key freedoms of the press is protecting sources. On occasion, journalists have gone to jail rather than give up a source's name, although it happens rarely enough that when it does, as in the Judith Miller case linked above, Hollywood makes movies about it. The principle at work here, writes NPR reporter David Folkenflik, who covered that case, is that, "You have to protect all of your sources if you want any of them to speak to you again."

Briefly, the background. In 2009, the first winner of the prestigious Orwell Prize for political blogging was an unidentified policeman. Blogging under the sobriquet of "NightJack", the blogger declined all interviews ("I am not a media cop," he wrote), sent a friend to deliver his acceptance speech, and had his prize money sent directly to charity. Shortly afterwards, he took The Times to court to prevent it from publishing his real-life identity. Controversially, Justice David Eady ruled for The Times on the basis that NightJack had no expectation of privacy - and freedom of expression was important. Ironic, since the upshot was to stifle NightJack's speech: his real-life alter ego, Richard Horton, was speedily reprimanded by his supervisor and the blog was deleted.

This is the case that has been reinvestigated this week by the Leveson inquiry into phone hacking in the media. Justice Eady's decision seems to have rested on two prongs: first, that the Times had identified Horton from public sources, and second, that publication was in the public interest because Horton's blog posts disclosed confidential details about his police work. It seems clear from Times editor James Harding's testimony (PDF) that the first of these prongs was bent. The second seems to have been also: David Allen Green, who has followed this case closely, is arguing over at New Statesman (see the comments) that The Times's court testimony is the only source of the allegations that Horton's blog posts gave enough information that the real people in the cases he talked about could be identified. (In fact, I'd expect the cases are much more identifiable *after* his Times identification than before it.)

So Justice Eady's decision was not animated by research into the difficulty of real online anonymity. Instead, he was badly misled by incomplete, false evidence. Small wonder that Horton is suing.

One of the tools journalists use to get sources to disclose information they don't want tracked back to them is the concept of off-the-record background. When you are being briefed "on background", the rule is that you can't use what you're told unless you can find other sources to tell you the same thing on the record for publication. This is entirely logical because once you know what you're looking for you have a better chance of finding it. You now know where to start looking and what questions to ask.

But there should be every difference in an editor's mind between information willingly supplied under a promise not to publish and information obtained illegally. We can argue about whether NightJack's belief that he could remain anonymous was well-founded and whether he, like many people, did a poor job at securing his email account, but few would think he should have been outed as the result of a crime.

Once the Times reporter, Patrick Foster, knew Horton's name, he couldn't un-know it - and, as noted, it's a lot easier to find evidence backing up things you already know. What should have happened is that Foster's managers should have barred him from pursuing or talking about the story. The paper should then either have dropped it or, if the editors really thought it sufficiently important, assigned a different, uncontaminated reporter to start over with no prior knowledge and try to find the name from legal sources. Sounds too much like hard work? Yes. That this did not happen says a lot about the newsroom's culture: a focus on cheap, easy, quick, attention-getting stories acquired by whatever means. "I now see it was wrong" suggests that Harding and his editorial colleagues had lost all perspective.

Horton was, of course, not a source giving confidential information to one or more Times reporters. But it's so easy to imagine the Times - or any other newspaper - deciding to run a column written by "D.C. Plod" to give an intimate insight into how the police work. A newspaper running such a column would boast about it, especially if it won the Orwell Prize. And likely the only reason a rival paper would expose the columnist's real identity would be if the columnist were a fraud.

Imagine Watergate if it had been investigated by this newsroom instead of that of the 1972 Washington Post. Instead of the President's malfeasance in seeking re-election, the story would be the identity of Deep Throat. Mark Felt would have gone to jail and Richard Milhous Nixon would have gone down in history as an honest man.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


January 27, 2012

Principle failure

The right to access, correct, and delete personal information held about you and the right to bar data collected for one purpose from being reused for another are basic principles of the data protection laws that have been the norm in Europe since the EU adopted the Privacy Directive in 1995. This is the Privacy Directive that is currently being updated; the European Commission's proposals seem, inevitably, to please no one. Businesses are already complaining compliance will be unworkable or too expensive (hey, fines of up to 2 percent of global income!). I'm not sure consumers should be all that happy either; I'd rather have the right to be anonymous than the right to be forgotten (which I believe will prove technically unworkable), and I'd rather the jurisdiction for legal disputes with a company be set to my country rather than theirs. Much debate lies ahead.

In the meantime, the importance of the data protection laws has been enhanced by Google's announcement this week that it will revise and consolidate the more than 60 privacy policies covering its various services "to create one beautifully simple and intuitive experience across Google". It will, the press release continues, be "Tailored for you". Not the privacy policy, of course, which is a one-size-fits-all piece of corporate lawyer ass-covering, but the services you use, which, after the fragmented data Google holds about you has been pooled into one giant liquid metal Terminator, will be transformed into so-much-more personal helpfulness. Which would sound better if 2011 hadn't seen loud warnings about the danger that personalization will disappear stuff we really need to know: see Eli Pariser's filter bubble and Jeff Chester's worries about the future of democracy.

Google is right that streamlining and consolidating its myriad privacy policies is a user-friendly thing to do. Yes, let's have a single policy we can read once and understand. We hate reading even one privacy policy, let alone 60 of them.

But the furore isn't about that, it's about the single pool of data. People do not use Google Docs in order to improve their search results; they don't put up Google+ pages and join circles in order to improve the targeting of ads on YouTube. This is everything privacy advocates worried about when Gmail was launched.

Australian privacy campaigner Roger Clarke's discussion document sets out the principles that the decision violates: no consultation; retroactive application; no opt-out.

Are we evil yet?

In his 2011 book, In the Plex, Steven Levy traces the beginnings of a shift in Google's views on how and when it implements advertising to the company's controversial purchase of the DoubleClick advertising network, which relied on cookies and tracking to create targeted ads based on Net users' browsing history. This $3.1 billion purchase was huge enough to set off anti-trust alarms. Rightly so. Levy writes, "...sometime after the process began, people at the company realized that they were going to wind up with the Internet-tracking equivalent of the Hope Diamond: an omniscient cookie that no other company could match." Between DoubleClick's dominance in display advertising on large, commercial Web sites and Google AdSense's presence on millions of smaller sites, the company could track pretty much all Web users. "No law prevented it from combining all that information into one file," Levy writes, adding that Google imposed limits, in that it didn't use blog postings, email, or search behavior in building those cookies.

Levy notes that Google spends a lot of time thinking about privacy, but quotes founder Larry Page as saying that the particular issues the public chooses to get upset about seem randomly chosen, the reaction determined most often by the first published headline about a particular product. This could well be true - or it may also be a sign that Page and Brin, like Facebook's Mark Zuckerberg and some other Silicon Valley technology company leaders, are simply out of step with the public. Maybe the reactions only seem random because Page and Brin can't identify the underlying principles.

In blending its services, the issue isn't solely privacy, but also the long-simmering complaint that Google is increasingly favoring its own services in its search results - which would be a clear anti-trust violation. There, the traditional principle is that dominance in one market (search engines) should not be leveraged to achieve dominance in another (social networking, video watching, cloud services, email).

SearchEngineLand has a great analysis of why Google's Search Plus is such a departure for the company and what it could have done had it chosen to be consistent with its historical approach to search results. Building on the "Don't Be Evil" tool built by Twitter, Facebook, and MySpace, among others, SEL demonstrates the gaps that result from Google's choices here, and also how the company could have vastly improved its service to its search customers.

What really strikes me in all this is that the answer to both the EU issues and the Google problem may be the same: the personal data store that William Heath has been proposing for three years. Data portability and interoperability, check; user control, check. But that is as far from the Web 2.0 business model as file-sharing is from that of the entertainment industry.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


December 23, 2011

Duck amuck

Back in about 1998, a couple of guys looking for funding for their start-up were asked this: How could anyone compete with Yahoo! or Altavista?

"Ten years ago, we thought we'd love Google forever," a friend said recently. Yes, we did, and now we don't.

It's a year and a bit since I began divorcing Google. Ducking the habit is harder than those "They have no lock-in" financial analysts thought when Google went public: as if habit and adaptation were small things. Easy to switch CTRL-K in Firefox to DuckDuckGo, significantly hard to unlearn ten years of Google's "voice".

When I tell this to Gabriel Weinberg, the guy behind DDG - his recent round of funding lets him add a few people to experiment with different user interfaces and redo DDG's mobile application - he seems to understand. He started DDG, he told The Rise to the Top last year, because of the increasing amount of spam in Google's results. Frustration made him think: for many queries wouldn't searching just del.icio.us and Wikipedia produce better results? Since his first weekend mashing that up, DuckDuckGo has evolved to include over 50 sources.

"When you type in a query there's generally a vertical search engine or data source out there that would best serve your query," he says, "and the hard problem is matching them up based on the limited words you type in." When DDG can make a good guess at identifying such a source - such as, say, the National Institutes of Health - it puts that result at the top. This is a significant hint: now, in DDG searches, I put the site name first, where on Google I put it last. Immediate improvement.
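Weinberg's description of matching a query to the best vertical source can be sketched, very loosely, as keyword-overlap routing. This is a toy illustration only - the source names and trigger words below are invented for the example, and DDG's real matching is far more sophisticated:

```python
# Toy illustration of routing a query to a likely "vertical" source.
# The sources and trigger words are invented; a real system would use
# far richer signals than raw keyword overlap.

SOURCE_TRIGGERS = {
    "nih.gov": {"symptom", "disease", "dosage", "clinical"},
    "wikipedia.org": {"history", "biography", "definition"},
    "stackoverflow.com": {"error", "exception", "compile"},
}

def best_source(query: str):
    """Return the source whose trigger words best overlap the query, if any."""
    words = set(query.lower().split())
    scored = [(len(words & triggers), source)
              for source, triggers in SOURCE_TRIGGERS.items()]
    score, source = max(scored)
    return source if score > 0 else None

print(best_source("aspirin dosage for adults"))  # nih.gov
print(best_source("quantum tunneling"))          # None
```

Even in this toy, most queries match nothing - a hint of why the long tail of queries is the hard part.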

This approach gives Weinberg a new problem, a higher-order version of the Web's broken links: as companies reorganize, change, or go out of business, the APIs he relies on vanish.

Identifying the right source is harder than it sounds, because the long tail of queries require DDG to make assumptions about what's wanted.

"The first 80 percent is easy to capture," Weinberg says. "But the long tail is pretty long."

As Ken Auletta tells it in Googled, the venture capitalist Ram Shriram advised Sergey Brin and Larry Page to sell their technology to Yahoo! or maybe Infoseek. But those companies were not interested: the thinking then was portals and keeping site visitors stuck as long as possible on the pages advertisers were paying for, while Brin and Page wanted to speed visitors away to their desired results. It was only when Shriram heard that, Auletta writes, that he realized that baby Google was disruptive technology. So I ask Weinberg: can he make a similar case for DDG?

"It's disruptive to take people more directly to the source that matters," he says. "We want to get rid of the traditional user interface for specific tasks, such as exploring topics. When you're just researching and wanting to find out about a topic there are some different approaches - kind of like clicking around Wikipedia."

Following one thing to another, without going back to a search engine...sounds like my first view of the Web in 1991. But it also sounds like some friends' notion of after-dinner entertainment, where they start with one word in the dictionary and let it lead them serendipitously from word to word and book to book. Can that strategy lead to new knowledge?

"In the last five to ten years," says Weinberg, "people have made these silos of really good information that didn't exist when the Web first started, so now there's an opportunity to take people through that information." If it's accessible, that is. "Getting access is a challenge," he admits.

There is also the frontier of unstructured data: Google searches the semi-structured Web by imposing a structure on it - its indexes. By contrast, Mike Lynch's Autonomy, which just sold to Hewlett-Packard for £10 billion, uses Bayesian logic to search unstructured data, which is what most companies have.

"We do both," says Weinberg. "We like to use structured data when possible, but a lot of stuff we process is unstructured."
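To illustrate the distinction: the "structure" a conventional engine imposes on unstructured pages is essentially an inverted index, mapping each word to the documents that contain it. A minimal sketch, with documents invented for the example:

```python
# Toy inverted index: the kind of structure a search engine imposes on
# unstructured text so it can be queried. The documents are invented.

from collections import defaultdict

docs = {
    "page1": "copyright law and the open internet",
    "page2": "internet search engines index the web",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.lower().split():
        index[word].add(doc_id)

def lookup(word: str):
    """Return the sorted list of documents containing the word."""
    return sorted(index[word.lower()])

print(lookup("internet"))  # ['page1', 'page2']
print(lookup("search"))    # ['page2']
```

A Bayesian approach of the kind Autonomy championed works differently, inferring what a document is probably about from word statistics rather than exact-match lookups.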

Google is, of course, a moving target. For me, its algorithms and interface are moving in two distinct directions, both frustrating. The first is Wal-Mart: stuff most people want. The second is the personalized filter bubble. I neither want nor trust either. I am more like the scientists Linguamatics serves: its analytic software scans hundreds of journals to find hidden links suggesting new avenues of research.

Anyone entering a category as thoroughly dominated by a single company as search is now is constantly asked: how can you possibly compete with the incumbent? Weinberg must be sick of being asked about competing with Google. And he'd be right, because it's the wrong question. The right question is: how can he build a sustainable business? He's had some sponsorship while his user numbers are relatively low (currently 7 million searches a month), and, eventually, he's talked about context-based advertising - yet he's also promising little spam and privacy - no tracking. Now, that really would be disruptive.

So here's my bet. I bet that DuckDuckGo outlasts Groupon as a going concern. Merry Christmas.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


November 18, 2011

The write stuff

The tenth anniversary of the first net.wars column slid by quietly on November 2. This column wasn't born of 9/11 - net.wars-the-book was published in 1998 - but it did grow out of anger over the way the grief and shock over 9/11 was being hijacked to justify policies that were unacceptable in calmer times. Ever since, the column has covered the various border wars between cyberspace and real life, with occasional digressions. This week's column is a digression. I feel I've earned it.

A few weeks ago I had this conversation with a friend:

wg: My friend's son is a writer on The Daily Show.
Friend, puzzled: Jon Stewart needs writers? I thought he did his own jokes.

For the record, Stewart has 12 to 14 staff writers. For a simple reason: comedy is hard, and even the vaudeville-honed joke machine that was Morey Amsterdam would struggle to devise two hours of original material every week.

Which is how we arrive at the enduring mystery of the sitcom. When the form works - though people may disagree about exactly when that is - it is, says the veteran sitcom writer and showrunner Ken Levine, TV's most profitable money machine. Sitcom writing requires not only a substantial joke machine but the ability to create an underlying storyline scaffold of recognizably human reality. And you must do all that under pressure, besieged by conflicting notes from the commissioning network and studio, and conforming to constraints as complex and specific as those of a sonnet: budgets, timing, and your actors' abilities. It takes a village. Or, since today most US sitcoms are written by a roomful of writers working together, a "gang-banging" village.

It is this experience that Levine decided, five years ago, to emulate. The ability to thrive in that environment is an essential skill, but beginning writers work alone until they are thrown in at the deep end on their first job. He calls his packed weekend event The Sitcom Room, and, having spent last weekend taking part in the fifth of the series, I can say the description is accurate. After a few hours of introduction about the inner workings of writers' rooms, scripts, and comedy in general, four teams of five people watch a group of actors perform a Levine-written scene with some obvious and some not-so-obvious things wrong with it. Each team then goes off to fix the scene in its designated room, which comes appropriately equipped with junk food, sodas, and a whiteboard. You have 12 hours (more if you're willing to make your own copies). Go.

After five seminars and 20 teams, Levine says every rewritten script has been different, a reminder that sitcom writing is a treasure hunt where the object of the search is unknown. Levine kindly describes each result as "magical"; attendees were more critical of other groups' efforts. (I liked ours best, although the ending still needed some work.)

I felt lucky: my group were all professionals used to meeting deadlines and working to specification, and all displayed a remarkable lack of ego in pitching and listening to ideas. We packed up around 1am, feeling that any changes we made after that point were unlikely to be improvements. On the other hand, if the point was to experience a writers' room, we failed utterly: both Levine and Sunday panelist Jane Espenson (see her new Web series, Husbands) talked about the brutally competitive environment of many of the real-life versions. Others were less blessed by chemistry: one team wrangled until 3am before agreeing on a strategy, then spent the rest of the night writing their script and getting their copies made. Glassy-eyed on Sunday, they disagreed when asked individually what went wrong; publicly, their appointed "showrunner" blamed himself for not leading effectively. I imagine them indelibly bonded by their shared suffering.

What happens at this event is catalysis. "You will learn a lot about yourselves," Levine said on that first morning. How do you respond when your best ideas are not good enough to be accepted? How do you take to the discipline of delivering jokes and breaking stories on deadline? How do you function under pressure as part of a team creative effort? Less personally, can you watch a performance and see, instead of the actors' skills, the successes and flaws in your script? Can you stay calm when the "studio executive" (played by Levine's business partner, Dan O'Day) produces a laundry list of complaints and winds up with, "Except for a couple of things I wouldn't change anything"? And, not in the syllabus, can you help Dan play practical jokes on Ken? By the end of the weekend, everyone is on a giddy adrenaline high, exacerbated in our case by the gigantic anime convention happening all around us at the same hotel. (Yes. The human-sized fluffy yellow chick getting on the elevator is real. You're not hallucinating from lack of sleep. Check.)

I found Levine's blog earlier this year after he got into cross-fire with the former sitcom star Roseanne Barr over Charlie Sheen's meltdown. His blog reminds me of William Goldman's books on screenwriting: the same combination of entertainment and education. I think of Goldman's advice every day in everything I write. Now, I will think of Levine's, too.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

November 11, 2011

The sentiment of crowds

Context is king.

Say to a human, "I'll meet you at the place near the thing where we went that time," and they'll show up at the right place. That's from the 1987 movie Broadcast News: Aaron (Albert Brooks) says it; cut to Jane (Holly Hunter), awaiting him at a table.

But what if Jane were a computer and what she wanted to know from Aaron's statement was not where to meet but how Aaron felt about it? This is the challenge facing sentiment analysis.

At Wednesday's Sentiment Analysis Symposium, the key question of context came up over and over again as the biggest challenge to the industry of people who claim that they can turn Tweets, blog postings, news stories, and other mass data sources into intelligence.

So context: Jane can parse "the place", "the thing", and "that time" because she has expert knowledge of her past with Aaron. It's an extreme example, but all human writing makes assumptions about the knowledge and understanding of the reader. Humans even use those assumptions to implement privacy in a public setting: Stephen Fry could retweet Aaron's words and still only Jane would find the cafe. If Jane is a large organization seeking to understand what people are saying about it and Aaron is 6 million people posting on Twitter, she can use sentiment analyzer tools to give a numerical answer. And numbers always inspire confidence...

My first encounter with sentiment analysis was this summer during Young Rewired State, when a team wanted to create a mood map of the UK comparing geolocated tweets to indices of multiple deprivation. This third annual symposium shows that this is a rapidly engorging industry, part PR, part image consultancy, and part artificial intelligence research project.

I was drawn to it out of curiosity, but also because it all sounds slightly sinister. What do sentiment analyzers understand when I say an airline lounge at Heathrow Terminal 4 "brings out my inner Sheldon"? What is at stake is not precise meaning - humans argue over the exact meaning of even the greatest communicators - but extracting good-enough meaning from high-volume data streams written by millions of not-monkeys.

What could possibly go wrong? This was one of the day's most interesting questions, posed by the consultant Meta Brown to representatives of the Red Cross, the polling organization Harris Interactive, and Paypal. Failure to consider the data sources and the industry you're in, said the Red Cross's Banafsheh Ghassemi. Her example was the period just after Hurricane Irene, when sentiment analysis of social media would have scored the mood as negative. "It took everyday disaster language as negative," she said. In addition, because the Red Cross's constituency is primarily older, social media are less indicative than emails and call center records. For many organizations, she added, social media tend to skew negative.

Earlier this year, Harris Interactive's Carol Haney, who has had to kill projects when they failed to produce sufficiently accurate results for the client, told a conference, "Sentiment analysis is the snake oil of 2011." Now, she said, "I believe it's still true to some extent. The customer has a commercial need for a dial pointing at a number - but that's not really what's being delivered. Over time you can see trends and significant change in sentiment, and when that happens I feel we're returning value to a customer because it's not something they received before and it's directionally accurate and giving information." But very small changes over short time scales are an unreliable basis for making decisions.

"The difficulty in social media analytics is you need a good idea of the questions you're asking to get good results," says Shlomo Argamon, whose research work seems to raise more questions than answers. Look at companies that claim to measure influence. "What is influence? How do you know you're measuring that or to what it correlates in the real world?" he asks. Even the notion that you can classify texts into positive and negative is a "huge simplifying assumption".

Argamon has been working on technology to discern from written text the gender and age - and perhaps other characteristics - of the author, a joint effort with his former PhD student Ken Bloom. When he says this, I immediately want to test him with obscure texts.

Is this stuff more or less creepy than online behavioral advertising? Han-Sheong Lai explained that Paypal uses sentiment analysis to try to glean the exact level of frustration of the company's biggest clients when they threaten to close their accounts. How serious are they? How much effort should the company put into dissuading them? Meanwhile Verint's job is to analyze those "This call may be recorded" calls. Verint's tools turn speech to text, and create color voiceprint maps showing the emotional high points. Click and hear the anger.

"Technology alone is not the solution," said Philip Resnik, summing up the state of the art. But, "It supports human insight in ways that were not previously possible." His talk made me ask: if humans obfuscate their data - for example, by turning off geolocation - will this industry respond by finding ways to put it all back again so the data will be more useful?

"It will be an arms race," he agrees. "Like spam."

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

October 14, 2011

Think of the children

Give me smut and nothing but! - Tom Lehrer

Sex always sells, which is presumably why this week's British headlines have been dominated by the news that the UK's ISPs are to operate an opt-in system for porn. The imaginary sales conversations alone are worth any amount of flawed reporting:

ISP Customer service: Would you like porn with that?

Customer: Supersize me!

Sadly, the reporting was indeed flawed. Cameron, it turns out, was merely saying that new customers signing up with the four major consumer ISPs would be asked if they want parental filtering. So much less embarrassing. So much less fun.

Even so, it gave reporters such as Violet Blue, at ZDNet UK, a chance to complain about the lack of transparency and accountability of filtering systems.

Still, the fact that so many people could imagine that it's technically possible to turn "Internet porn" on and off as if by flipping a switch is alarming. If it were that easy, someone would have a nice business by now selling strap-on subscriptions the way cable operators do for "adult" TV channels. Instead, filtering is just one of several options for which ISPs, Web sites, and mobile phone operators do not charge.

One of the great myths of our time is that it's easy to stumble accidentally upon porn on the Internet. That, again, is television, where idly changing channels on a set-top box can indeed land you on the kind of smut that pleased Tom Lehrer. On the Internet, even with safe search turned off, it's relatively difficult to find porn accidentally - though very easy to find on purpose. (Especially since the advent of the .xxx top-level domain.)

It is, however, very easy for filtering systems to remove non-porn sites from view, which is why I generally turn off filters like "Safe search" or anything else that will interfere with my unfettered access to the Internet. I need to know that legitimate sources of information aren't being hidden by overactive filters. Plus, if it's easy to stumble over pornography accidentally, then as a journalist writing about the Net and generally opposing censorship I should know that. I am better than average at constraining my searches so that they will retrieve only the information I really want, which is a definite bias in this minuscule sample of one. But I can safely say that the only time I encounter unwanted anything-like-porn is in display ads on some sites that assume their primary audience is young men.

Eli Pariser, whose The Filter Bubble: What the Internet is Hiding From You I reviewed recently for ZDNet UK, does not talk in his book about filtering systems intended to block "inappropriate" material. But surely porn filtering is a broad-brush subcase of exactly what he's talking about: automated systems that personalize the Net based on your known preferences by displaying content they already "think" you like at the expense of content they think you don't want. If the technology companies were as good at this as the filtering people would like us to think, this weekend's Singularity Summit would be celebrating the success of artificial intelligence instead of still looking 20 to 40 years out.

If I had kids now, would I want "parental controls"? No, for a variety of reasons. For one thing, I don't really believe the controls keep them safe. What keeps them safe is knowing they can ask their parents about material and people's behavior that upsets them so they can learn how to deal with it. The real world they will inhabit someday will not obligingly hide everything that might disturb their equanimity.

But more important, our children's survival in the future will depend on being able to find the choices and information that are hidden from view. Just as the children of 25 years ago should have been taught touch typing, today's children should be learning the intricacies of using search to find the unknown. If today's filters have any usefulness at all, it's as a way of testing kids' ability to think ingeniously about how to bypass them.

Because: although it's very hard to filter out only *exactly* the material that matches your individual definition of "inappropriate", it's very easy to block indiscriminately according to an agenda that cares only about what doesn't appear. Pariser worries about the control that can be exercised over us as consumers, citizens, voters, and taxpayers if the Internet is the main source of news and personalization removes the less popular but more important stories of the day from view. I worry that as people read and access only the material they already agree with our societies will grow more and more polarized with little agreement even on basic facts. Northern Ireland, where for a long time children went to Catholic or Protestant-owned schools and were taught that the other group was inevitably going to Hell, is a good example of the consequences of this kind of intellectual segregation. Or, sadly, today's American political debates, where the right and left have so little common basis for reasoning that the nation seems too polarized to solve any of its very real problems.


September 9, 2011

The final countdown

The we-thought-it-was-dead specter of copyright term extension in sound recordings has done a Diabolique maneuver and been voted alive by the European Council. In a few days, the Council of Ministers could make it EU law because, as can happen under the inscrutable government structures of the EU, opposition has melted away.

At stake is the extension of copyright in sound recordings from 50 years to 70, something the Open Rights Group has been fighting since it was born. The push to extend it above 50 years has been with us for at least five years; originally the proposal was to take it to 95 years. An extension from 50 to 70 years is modest by comparison, but given the way these things have been going over the last 50 years, that would buy the recording industry 20 years in which to lobby for the 95 years they originally wanted, and then 25 years to lobby for the line to be moved further. Why now? A great tranche of commercially popular recordings is up for entry into the public domain: Elvis Presley's earliest recordings date to 1956, and The Beatles' first album came out in 1963; their first singles are 50 years old this year. And it's not long after that until the great rock records of the 1970s follow.

My fellow Open Rights Group advisory council member Paul Sanders has posted a concise little analysis of what's wrong here. Basically, it's never jam today for the artists, but jam yesterday, today, and tomorrow for the recording companies. I have commented frequently on the fact that the more record companies are able to make nearly pure profit on their back catalogues whose sunk costs have long ago been paid, the more new, young artists are required to compete for their attention with an ever-expanding back catalogue. I like Sanders' language on this: "redistributive, from younger artists to older and dead ones".

In recent years, we've heard a lot of the mantra "evidence-based policy" from the UK government. So, in the interests of ensuring this evidence-based policy the UK government is so keen on, here is some. The good news is they commissioned it themselves, so it ought to carry a lot of weight with them. Right? Right.

There have been two major British government reports studying the future of copyright and intellectual property law generally in the last five years: the Gowers Review, published in 2006, and the Hargreaves report, commissioned in November 2010 and released in May 2011.

From Hargreaves:

Economic evidence is clear that the likely deadweight loss to the economy exceeds any additional incentivising effect which might result from the extension of copyright term beyond its present levels. This is doubly clear for retrospective extension to copyright term, given the impossibility of incentivising the creation of already existing works, or work from artists already dead.

Despite this, there are frequent proposals to increase term, such as the current proposal to extend protection for sound recordings in Europe from 50 to 70 or even 95 years. The UK Government assessment found it to be economically detrimental. An international study found term extension to have no impact on output.

And further:

Such an extension was opposed by the Gowers Review and by published studies commissioned by the European Commission.

Ah, yes, Gowers and its 54 recommendations, many or most of which have been largely ignored. (Government policy seems to have embraced "strengthening of IP rights, whether through clamping down on piracy" to the exclusion of things like "improving the balance and flexibility of IP rights to allow individuals, businesses, and institutions to use content in ways consistent with the digital age".)

To Gowers:

Recommendation 3: The European Commission should retain the length of protection on sound recordings and performers' rights at 50 years.

And:

Recommendation 4: Policy makers should adopt the principle that the term and scope of protection for IP rights should not be altered retrospectively.

I'd use the word "retroactive", myself, but the point is the same. Copyright is a contract with society: you get the right to exploit your intellectual property for some number of years, and in return after that number of years your work belongs to the society whose culture helped produce it. Trying to change an agreed contract retroactively usually requires you to show that the contract was not concluded in good faith, or that someone is in breach. Neither of those situations applies here, and I don't think these large companies with their in-house lawyers, many of whom participated in drafting prior copyright law, can realistically argue that they didn't understand the provisions. Of course, this recommendation cuts both ways: if we can't put Elvis's earliest recordings back into copyright, thereby robbing the public domain, we also can't shorten the copyright protection that applies to recordings created with the promise of 50 years' worth of protection.

This whole mess is a fine example of policy laundering: shopping the thing around until you either wear out the opposition or find sufficient champions. The EU, with its Hampton Court maze of interrelated institutions, could have been deliberately designed to facilitate this. You can write to your MP, or even your MEP - but the sad fact is that the shiny, new EU government is doing all this in old-style backroom deals.


July 29, 2011

Name check

How do you clean a database? The traditional way - which I still experience from time to time from journalist directories - is that some poor schnook sits in an office and calls everyone on the list, checking each detail. It's an immensely tedious job, I'm sure, but it's a living.

The new, much cheaper method is to motivate the people in the database to do it themselves. A government can pass a law and pay benefits. Amazon expects the desire to receive the goods people have paid for to be sufficient. For a social network it's a little harder, yet Facebook has managed to get 750 million users to upload varying amounts of information. Google hopes people will do the same with Google+.

The emotional connections people make on social networks obscure their basic nature as databases. When you think of them in that light, and you remember that Google's chief source of income is advertising, suddenly Google's culturally dysfunctional decision to require real names on Google+ makes some sense. For an advertising company, a fuller, cleaner database is more valuable and functional. Google's engineers most likely do not think in terms of improving the company's ability to serve tightly targeted ads - but I'd bet the company's accountants and strategists do. The justification - that online anonymity fosters bad behavior - is likely a relatively minor consideration.

Yet it's the one getting the attention, despite the fact that many people seem confused about the difference between pseudonymity, anonymity, and throwaway identity. In the reputation-based economy the Net thrives on, this difference matters.

The best-known form of pseudonymity is the stage name, essentially a form of branding for actors, musicians, writers, and artists, who may have any of a number of motives for keeping their professional lives separate from their personal lives: privacy for themselves, their work mates, or their families, or greater marketability. More subtly, if you have a part-time artistic career and a full-time day job you may not want the two to mix: will people take you seriously as an academic psychologist if they know you're also a folksinger? All of those reasons for choosing a pseudonym apply on the Net, where everything is a somewhat public performance. Given the harassment some female bloggers report, is it any wonder they might feel safer using a pseudonym?

The important characteristic of pseudonyms, which they share with "real names", is persistence. When you first encounter someone like GrrlScientist, you have no idea whether to trust her knowledge and expertise. But after more than ten years of blogging, that name is a known quantity. As GrrlScientist writes about Google's shutting down her account, it is her "real-enough" name by any reasonable standard. What's missing is the link to a portion of her identity - the name on her tax return, or the one her mother calls her. So what?

Anonymity has long been contentious on the Net; the EU has often considered whether and how to ban it. At the moment, the driving justification seems to be accountability, in the hope that we can stop people from behaving like malicious morons, the phenomenon I like to call the Benidorm syndrome.

There is no question that people write horrible things in blog and news site comments pages, conduct flame wars, and engage in cyber bullying and harassment. But that behavior is not limited to venues where they communicate solely with strangers; every mailing list, even among workmates, has flame wars. Studies have shown that the cyber versions of bullying and harassment, like their offline counterparts, are most often perpetrated by people you know.

The more important downside of anonymity is that it enables people to hide, not their identity but their interests. Behind the shield, a company can trash its competitors and those whose work has been criticized can make their defense look more robust by pretending to be disinterested third parties.

Against that is the upside. Anonymity protects whistleblowers acting in the public interest, and protesters defying an authoritarian regime.

We have little data to balance these competing interests. One bit we do have comes from an experiment with anonymity conducted years ago on the WELL, which otherwise has insisted on verifying every subscriber throughout its history. The lesson they learned, its conferencing manager, Gail Williams, told me once, was that many people wanted anonymity for themselves - but opposed it for others. I suspect this principle has very wide applicability, and it's why the US might, say, oppose anonymity for Bradley Manning but welcome it for Egyptian protesters.

Google is already modifying the terms of what is after all still a trial service. But the underlying concern will not go away. Google has long had a way to link Gmail addresses to behavioral data collected from those using its search engine, docs, and other services. It has always had some ability to perform traffic analysis on Gmail users' communications; now it can see explicit links between those pools of data and, increasingly, tie them to offline identities. This is potentially far more powerful than anything Facebook can currently offer. And unlike government databases, it's nice and clean, and cheap to maintain.


July 15, 2011

Dirty digging

The late, great Molly Ivins warns (in Molly Ivins Can't Say That, Can She?) about the risk to journalists of becoming "power groupies" who identify more with the people they cover than with their readers. In the culture being exposed by the escalating phone hacking scandals the opposite happened: politicians and police became "publicity groupies" who feared tabloid wrath to such an extent that they identified with the interests of press barons more than those of the constituents they are sworn to protect. I put the apparent inconsistency between politicians' former acquiescence and their current baying for blood down to Stockholm syndrome: this is what happens when you hold people hostage through fear and intimidation for a few decades. When they can break free, oh, do they want revenge.

The consequences are many and varied, and won't be entirely clear for a decade or two. But surely one casualty must have been the balanced view of copyright frequently argued for in this column. Murdoch's media interests are broad-ranging. What kind of copyright regime do you suppose he'd like?

But the desire for revenge is a really bad way to plan the future, as I said (briefly) on Monday at the Westminster Skeptics.

For one thing, it's clearly wrong to focus on News International as if Rupert Murdoch and his hired help were the only rotten apples. In the 2006 report What price privacy now? the Information Commissioner listed 30 publications caught in the illegal trade in confidential information. News of the World was only fifth; number one, by a considerable way, was the Daily Mail (the Observer was number nine). The ICO wanted jail sentences for those convicted of trading in data illegally, and called on private investigators' professional bodies to revoke or refuse licenses to PIs who breach the rules. Five years later, these are still good proposals.

Changing the culture of the press is another matter.

When I first began visiting Britain in the late 1970s, I found the tabloid press absolutely staggering. I began asking the people I met how the papers could do it.

"That's because *we* have a free press," I was told in multiple locations around the country. "Unlike the US." This was only a few years after The Washington Post backed Bob Woodward and Carl Bernstein's investigation of Watergate, so it was doubly baffling.

Tom Stoppard's 1978 play Night and Day explained a lot. It dropped competing British journalists into an escalating conflict in a fictitious African country. Over the course of the play, Stoppard's characters both attack and defend the tabloid culture.

"Junk journalism is the evidence of a society that has got at least one thing right, that there should be nobody with power to dictate where responsible journalism begins," says the naïve and idealistic new journalist on the block.

"The populace and the popular press. What a grubby symbiosis it is," complains the play's only female character, whose second marriage - "sex, money, and a title, and the parrots didn't harm it, either" - had been tabloid fodder.

The standards of that time now seem almost quaint. In the movie Starsuckers, filmmaker Chris Atkins fed fabricated celebrity stories to a range of tabloids. All were published. That documentary also showed illegal methods of obtaining information in action - in 2009, right around the time the Press Complaints Commission was publishing a report concluding, "there is no evidence that the practice of phone message tapping is ongoing".

Someone on Monday asked why US newspapers are better behaved despite First Amendment protection and less constraint by onerous libel laws. My best guess is fear of lawsuits. Conversely, Time magazine argues that Britain's libel laws have encouraged illegal information gathering: publication requires indisputable evidence. I'm not completely convinced: the libel laws are not new, and economics and new media are forcing change on press culture.

A lot of dangers lurk in the calls for greater press regulation. Phone hacking is illegal. Breaking into other people's computers is illegal. Enforce those laws. Send those responsible to jail. That is likely to be a better deterrent than any regulator could manage.

It is extremely hard to devise press regulations that don't enable cover-ups. For example, on Wednesday's Newsnight, the MP Louise Mensch, head of the DCMS committee conducting the hearings, called for a requirement that politicians disclose all meetings with the press. I get it: expose too-cosy relationships. But whistleblowers depend on confidentiality, and the last thing we want is for politicians to become as difficult to access as tennis stars and have their contact with the press limited to formal press conferences.

Two other lessons can be derived from the last couple of weeks. The first is that you cannot assume that confidential data can be protected simply by access rules. The second is the importance of alternatives to commercial, corporate journalism. Tom Watson has criticized the BBC for not taking the phone hacking allegations seriously. But it's no accident that the trust-owned Guardian was the organization willing to take on the tabloids. There's a lesson there for the US, as the FBI and others prepare to investigate Murdoch and News Corp: keep funding PBS.


June 10, 2011

The creepiness factor

"Facebook is creepy," said the person next to me in the pub on Tuesday night.

The woman across from us nodded in agreement and launched into an account of her latest foray onto the service. She had, she said, uploaded a batch of 15 photographs of herself and a friend. The system immediately tagged all of the photographs of the friend correctly. It then grouped the images of her and demanded to know, "Who is this?"

What was interesting about this particular conversation was that these people were not privacy advocates or techies; they were ordinary people just discovering their discomfort level. The sad thing is that Facebook will likely continue to get away with this sort of thing: it will say it's sorry, modify some privacy settings, and people will gradually get used to the convenience of having the system save them the work of tagging photographs.

In launching its facial recognition system, Facebook has done what many would have thought impossible: it has rolled out technology that just a few weeks ago *Google* thought was too creepy for prime time.

Wired UK has a set of instructions for turning tagging off. But underneath, the system will, I imagine, still recognize you. What records are kept of this underlying data and what mining the company may be able to do on them is, of course, not something we're told about.

Facebook has had to rein in new elements of its service so many times now - the Beacon advertising platform, the many revamps to its privacy settings - that the company's behavior is beginning to seem like a marketing strategy rather than a series of bungling missteps. The company can't be entirely privacy-deaf; it numbers among its staff the open rights advocate and former MP Richard Allan. Is it listening to its own people?

If it's a strategy it's not without antecedents. Google, for example, built its entire business without TV or print ads. Instead, every so often it would launch something so cool that everyone wanted to use it, getting more free coverage than it could ever have afforded to pay for. Is Facebook inverting this strategy by releasing projects it knows will cause widely covered controversy and then reining them back in only as far as the boundary of user complaints? Because these are smart people, and normally smart people learn from their own mistakes. But Zuckerberg, whose comments on online privacy have approached arrogance, is apparently justified, in that no matter what mistakes the company has made, its user base continues to grow. As long as business success is your metric, until masses of people resign in protest, he's golden. Especially when the IPO moment arrives, expected to be before April 2012.

The creepiness factor has so far done nothing to hurt its IPO prospects - which, in the absence of an actual IPO, seem to be rubbing off on the other social media companies going public. Pandora (net loss last quarter: $6.8 million) has even increased the number of shares on offer.

One thing that seems to be getting lost in the rush to buy shares - LinkedIn popped to over $100 on its first day, and has now settled back to $72 and change (for a price/earnings ratio of 1076) - is that buying first-day shares isn't what it used to be. Even during the millennial technology bubble, buying shares at the launch of an IPO was approximately like joining a queue at midnight to buy the new Apple whizmo on the first day, even though you know you'll be able to get it cheaper and debugged in a couple of months. Anyone could have gotten much better prices on Amazon shares for some months after that first-day bonanza, for example (and either way, in the long term, you'd have profited handsomely).

Since then, however, a new game has arrived in town: private exchanges, where people who meet a few basic criteria for being able to afford the risk can trade pre-IPO shares. The upshot is that even more of the best deals have already gone by the time a company goes public.

In no case is this clearer than the Groupon IPO, about which hardly anyone has anything good to say. Investors buying in would be the greater fools; a co-founder's past raises questions, and its business model is not sustainable.

Years ago, Roger Clarke predicted that the then brand-new concept of social networks would inevitably become data abusers simply because they had no other viable business model. As powerful as the temptation to do this has been while these companies have been growing, it seems clear the temptation can only become greater when they have public markets and shareholders to answer to. New technologies are going to exacerbate this: performing accurate facial recognition on user-uploaded photographs wasn't possible when the first pictures were being uploaded. What capabilities will these networks be able to deploy in the future to mine and match our data? And how much will they need to do it to keep their profits coming?




May 20, 2011

The world we thought we lived in

If one thing is more annoying than another, it's the fantasy technology on display in so many TV shows. "Enhance that for me!" barks an investigator. And, obediently, his subordinate geek/squint/nerd pushes a button or few, a line washes over the blurry image on screen, and now he can read the maker's mark on a pill in the hand of the target subject that was captured by a distant CCTV camera. The show 24 ended for me 15 minutes into season one, episode one, when Kiefer Sutherland's Jack Bauer, trying to find his missing daughter, thrust a piece of paper at an underling and shouted, "Get me all the Internet passwords associated with that telephone number!" Um...

But time has moved on, and screenwriters are more likely to have spent their formative years online and playing computer games, and so we have arrived at The Good Wife, which gloriously wrapped up its second season on Tuesday night (in the US; in the UK the season is still winding to a close on Channel 4). The show is a lot of things: a character study of an archetypal humiliated politician's wife (Alicia Florrick, played by Julianna Margulies) who rebuilds her life after her husband's betrayal and corruption scandal; a legal drama full of moral murk and quirky judges (Carob chip?); a political drama; and, not least, a romantic comedy. The show is full of interesting, layered men and great, great women - some of them mature, powerful, sexy, brilliant women. It is also the smartest show on television when it comes to life in the time of rapid technological change.

When it was good, in its first season, Gossip Girl cleverly combined high school mean girls with the citizen reportage of TMZ to produce a world in which everyone spied on everyone else by sending tips, photos, and rumors to a Web site, which picked the most damaging moments to publish and blast to everyone's mobile phones.

The Good Wife goes further to exploit the fact that most of us, especially those old enough to remember life before CCTV, go on about our lives forgetting that everywhere we leave a trail. Some are, of course, old staples of investigative dramas: phone records, voice messages, ballistics, and the results of a good, old-fashioned break-in-and-search. But some are myth-busting.

One case (S2e15, "Silver Bullet") hinges on the difference between the compressed, digitized video copy and the original analog video footage: dropped frames change everything. A much earlier case (S1e06, "Conjugal") hinges on eyewitness testimony; despite a slightly too-pat resolution (I suspect now, with more confidence, it might have been handled differently), the show does a textbook job of demonstrating the flaws in human memory and their application to police line-ups. In a third case (S1e17, "Heart"), a man faces the loss of his medical insurance because of a single photograph posted to Facebook showing him smoking a cigarette. And the disgraced husband's (Peter Florrick, played by Chris Noth) attempt to clear his own name comes down to a fancy bit of investigative work capped by camera footage from an ATM in the Cayman Islands that the litigator is barely technically able to display in court. As entertaining demonstrations and dramatizations of the stuff net.wars talks about every week and the way technology can be both good and bad - Alicia finds romance in a phone tap! - these could hardly be better. The stuffed lion speaker phone (S2e19, "Wrongful Termination") is just a very satisfying cherry topping of technically clever hilarity.

But there's yet another layer, surrounding the season two campaign mounted to get Florrick elected back into office as State's Attorney: the ways that technology undermines as well as assists today's candidates.

"Do you know what a tracker is?" Peter's campaign manager (Eli Gold, played by Alan Cumming) asks Alicia (S2e01, "Taking Control"). Answer: in this time of cellphones and YouTube, unpaid political operatives follow opposing candidates' family and friends to provoke and then publish anything that might hurt or embarrass the opponent. So now: Peter's daughter (Makenzie Vega) is captured praising his opponent and ham-fistedly trying to defend her father's transgressions ("One prostitute!"). His professor brother-in-law's (Dallas Roberts) in-class joke that the candidate hates gays is live-streamed over the Internet. Peter's son (Graham Phillips) and a manipulative girlfriend (Dreama Walker), unknown to Eli, create embarrassing, fake Facebook pages in the name of the opponent's son. Peter's biggest fan decides to (he thinks) help by posting lame YouTube videos apparently designed to alienate the very voters Eli's polls tell him to attract. (He's going to post one a week; isn't Eli lucky?) Polling is old hat, as are rumors leaked to newspaper reporters; but today's news cycle is 20 minutes and can we have a quote from the candidate? No wonder Eli spends so much time choking and throwing stuff.

All of this fits together because the underlying theme of all parts of the show is control: control of the campaign, the message, the case, the technology, the image, your life. At the beginning of season one, Alicia has lost all control over the life she had; by the end of season two, she's in charge of her new one. Was a camera watching in that elevator? I guess we'll find out next year.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

May 13, 2011

Lay down the cookie

British Web developers will be spending the next couple of weeks scrambling to meet the May 26 deadline after which new legislation requires users to consent before a cookie can be placed on their computers. The Information Commissioner's guidelines allow a narrow exception for cookies that are "strictly necessary for a service requested by the user"; the example given is a cookie used to remember an item the user has chosen to buy so it's there when they go to check out. Won't this be fun?

Normally, net.wars comes down on the side of privacy even when it's inconvenient for companies, but in this case we're prepared to make at least a partial exception. It's always been a little difficult to understand the hatred and fear with which some people regard the cookie. Not the chocolate chip cookie, which of course we know is everything that is good, but the bits of code that reside on your computer to give Web pages the equivalent of memory. Cookies allow a server to assemble a page that remembers what you've looked at, where you've been, and which gewgaw you've put into your shopping basket. At least some of this can be done in other ways such as using a registration scheme. But it's arguably a greater invasion of privacy to require users to form a relationship with a Web site they may only use once.
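The "memory" trick is simple enough to sketch. Hypothetically, a server sets a cookie on the first response and reads it back on later requests - the names and values here are illustrative, not any real shop's:

```python
from http.cookies import SimpleCookie

# Server side: build a Set-Cookie header on the first visit...
jar = SimpleCookie()
jar["basket_id"] = "abc123"            # illustrative value
jar["basket_id"]["path"] = "/"
jar["basket_id"]["max-age"] = 3600     # forget the basket after an hour
set_cookie_header = jar["basket_id"].OutputString()
# e.g. 'basket_id=abc123; Max-Age=3600; Path=/'

# ...and on a later request, parse the Cookie header the browser sends back.
# This is the whole of the page's "memory": the checkout page finds the basket.
incoming = SimpleCookie("basket_id=abc123")
basket_id = incoming["basket_id"].value
```

That round trip - set, send back, read - is all a cookie is; everything controversial is in who sets it and what they correlate it with.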

The single-site use of cookies is, or ought to be, largely uncontroversial. The more contentious usage is third-party cookies, used by advertising agencies to track users from site to site with the goal of serving up targeted, rather than generic, ads. It's this aspect of cookies that has most exercised privacy advocates, and most browsers provide the ability to block cookies - all, third-party, or none, with a provision to make exceptions.

The new rules, however, seem overly broad.

In the EU, the anti-cookie effort began in 2001 (the second-ever net.wars), seemed to go quiet, and then revived in 2009, when I called the legislation "masterfully stupid". That piece goes into some detail about the objections to the anti-cookie legislation, so we won't review that here. At the time, reader email suggested that perhaps making life unpleasant for advertisers would force browser manufacturers to design better privacy controls. 'Tis a consummation devoutly to be wished, but so far it hasn't happened, and in the meantime that legislation has become an EU directive and now UK law.

The chief difference is moving from opt-out to opt-in: users must give consent for cookies to be placed on their machines; the chief flaw is banning a technology instead of regulating undesirable actions and effects. Besides the guidelines above, the ICO refers people to All About Cookies for further information.

Pete Jordan, a Hull-based Web developer, notes that when you focus legislation on a particular technology, "People will find ways around it if they're ingenious enough, and if you ban cookies or make it awkward to use them, then other mechanisms will arise." Besides, he says, "A lot of day-to-day usage is to make users' experience of Web sites easier, more friendly, and more seamless. It's not life-threatening or vital, but from the user's perception it makes a difference if it disappears." Cookies, for example, are what provide the trail of "breadcrumbs" at the top of a Web page to show you the path by which you arrived at that page so you can easily go back to where you were.

"In theory, it should affect everything we do," he says of the legislation. A possible workaround may be to embed tokens in URLs, a strategy he says is difficult to manage and raises the technical barrier for Web developers.

The US, where competing anti-tracking bills are under consideration in both houses of Congress, seems to be taking a somewhat different tack in requiring Web sites to honor the choice if consumers set a "Do Not Track" flag. Expect much more public debate about the US bills than there has been in the EU or UK. See, for example, the strong insistence by What Would Google Do? author Jeff Jarvis that media sites in particular have a right to impose any terms they want in the interests of their own survival. He predicts paywalls everywhere and the collapse of media economics. I think he's wrong.
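Mechanically, Do Not Track is just an HTTP request header: the browser sends DNT: 1, and a cooperating site checks for it before loading trackers. A minimal, framework-free sketch of the server's side (names illustrative):

```python
def tracking_allowed(headers: dict) -> bool:
    """Honor the Do Not Track flag if the browser sends it.
    A DNT header value of '1' means the user opts out of tracking."""
    return headers.get("DNT") != "1"

opted_out = tracking_allowed({"DNT": "1"})   # False: user asked not to be tracked
no_signal = tracking_allowed({})             # True: no preference expressed
```

The simplicity is the point - and the catch: nothing in the protocol forces a site to check the flag, which is why the US bills propose requiring it.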

The thing is, it's not a fair contest between users and Web site owners. It's more or less impossible to browse the Web with all cookies turned off: the complaining pop-ups are just too frequent. But targeting the cookie is not the right approach. There are many other tracking technologies that are invisible to consumers which may have both good and bad effects - even Web bugs are used helpfully some of the time. (The irony is, of course, regulating the cookie but allowing increases in both offline and online surveillance by police and government agencies.)

Requiring companies to behave honestly and transparently toward their customers would have been a better approach for the EU; one hopes it will work better in the US.



May 6, 2011

Double exposure

So finally we know. Ever since Wikileaks began releasing diplomatic cables, copyright activists have been waiting to see if the trove would expose undue influence on national laws. And this week there it was: a 2005 cable from the US Embassy in New Zealand requesting $386,158 to fund start-up costs and the first year of an industry-backed intellectual property enforcement unit and a 2009 cable offering "help" when New Zealand was considering a "three-strikes" law. Much, much more on this story has been presented and analyzed by the excellent Michael Geist, who also notes similar US lobbying pressure on Canada to "improve" its "lax" copyright laws.

My favorite is this bit, excerpted from the cable recounting an April 2007 meeting between Embassy officials and Geist himself:

His acknowledgement that Canada is a net importer of copyrighted materials helps explain the advantage he would like to hold on to with a weaker Canadian IPR protection regime. His unvoiced bias against the (primarily U.S. based) entertainment industry also reflects deeply ingrained Canadian preferences to protect and nurture homegrown artists.

In other words, Geist's disagreement with US copyright laws is due to nationalist bias, rather than deeply held principles. I wonder how they explain to themselves the very similar views of such diverse Americans as MacArthur award winner Pamela Samuelson, John Perry Barlow, and Lawrence Lessig. The latter in fact got so angry over the US's legislative expansion of copyright that he founded a movement for Congressional reform, which later expanded into a Harvard Law School center researching broader questions of ethics.

It's often said that a significant flaw in the US Constitution is that it didn't - couldn't, because they didn't exist yet - take account of the development of multinational corporations. They have, of course, to answer to financial regulations, legal obligations covering health and safety, and public opinion, but in many areas concerning the practice of democracy there is very little to rein them in. They can limit their employees' freedom of speech, for example, without ever falling afoul of the First Amendment, which, contrary to often-expressed popular belief, limits only the power of Congress in this area.

There is also, as Lessig pointed out in his first book, Code: and Other Laws of Cyberspace, no way to stop private companies from making and implementing technological decisions that may have anti-democratic effects. Lessig's example at the time was AOL, which hard-coded a limit of 23 participants per chat channel; try staging a mass protest under those limits. Today's better example might be Facebook, which last week was accused of unfairly deleting the profiles of 51 anti-cuts groups and activists. (My personal guess is that Facebook's claim to have simply followed its own rules is legitimate; the better question might be who supplied Facebook with the list of profiles and why.) Whether or not Facebook is blameless on this occasion, there remains a legitimate question: at what point does a social network become so vital a part of public life that the rules it implements and the technological decisions it makes become matters of public policy rather than questions for it to consider on its own? Facebook, like almost all of the biggest Internet companies, is a US corporation, with its mores and internal culture largely shaped by its home country.

We have often accused large corporate rights holders of being the reason why we see the same proposals for tightening and extending copyright popping up all over the world in countries whose values differ greatly and whose own national interests are not necessarily best served by passing such laws. More recently written constitutions could consider such influences. To the best of my knowledge they haven't, although arguably this is less of an issue in places that aren't headquarters to so many of them and where they are therefore less likely to spend large amounts backing governments likely to be sympathetic to their interests.

What Wikileaks has exposed instead is the unpleasant specter of the US, which likes to think of itself as spreading democracy around the world, behaving internationally in a profoundly anti-democratic way. I suppose we can only be grateful they haven't sent Geist and other non-US copyright reform campaigners exploding cigars. Change Congress, indeed: what about changing the State Department?

It's my personal belief that the US is being short-sighted in pursuing these copyright policies. Yes, the US is currently the world's biggest exporter of intellectual property, especially in, but not limited to, the area of entertainment. But that doesn't mean it always will be. It is foolish to think that down the echoing corridors of time (to borrow a phrase from Jean Kerr) the US will never become a net importer of intellectual property. It is sheer fantasy - even racism - to imagine that other countries cannot write innovative software that Americans want to use or produce entertainment that Americans want to enjoy. Even if you dispute the arguments made by campaigning organizations such as the Electronic Frontier Foundation and the Open Rights Group that laws like "three strikes" unfairly damage the general public, it seems profoundly stupid to assume that the US will always enjoy the intellectual property hegemony it has now.

One of these days, the US policies exposed in these cables are going to bite it in the ass.



April 22, 2011

Applesauce

Modern life is full of so many moments when you see an apparently perfectly normal person doing something that not so long ago was the clear sign of a crazy person. They're walking down the street talking to themselves? They're *on the phone*. They think the inanimate objects in their lives are spying on them? They may be *right*.

Last week's net.wars ("The open zone") talked about the difficulty of finding the balance between usability, on the one hand, and giving users choice, flexibility, and control, on the other. And then, as if to prove this point, along comes Apple and the news that the iPhone has been storing users' location data, perhaps permanently.

The story emerged this week when two researchers at O'Reilly's Where 2.0 conference presented an open-source utility they'd written to allow users to get a look at the data the iPhone was saving. But it really begins last year, when Alex Levinson discovered the stored location data as part of his research on Apple forensics. Based on his months of studying the matter, Levinson contends that it's incorrect to say that Apple is gathering this data: rather, the device is gathering the data, storing it, and backing it up when you sync your phone. Of course, if you sync your phone to Apple's servers, then the data is transferred to your account - and it is also migrated when you purchase a new iPhone or iPad.

So the news is not quite as bad as it first sounded: your device is spying on you, but it's not telling anybody. However: the data is held in unencrypted form and appears never to expire, and this raises a whole new set of risks about the devices that no one had really focused on until now.
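The researchers' utility essentially reads an ordinary SQLite database out of the phone's backup. A hedged sketch of the idea - the table and column names here follow published descriptions of the cache and should be treated as assumptions, and the sketch builds its own stand-in database so it is self-contained:

```python
import sqlite3

# Stand-in for the phone's location cache; with a real backup you would
# open the copied database file instead of ":memory:".
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE CellLocation (Timestamp REAL, Latitude REAL, Longitude REAL)")
db.execute("INSERT INTO CellLocation VALUES (324234234.0, 51.5074, -0.1278)")

# The scandal in one query: every row is a time-stamped position,
# stored unencrypted and apparently never expired.
rows = db.execute(
    "SELECT Timestamp, Latitude, Longitude FROM CellLocation ORDER BY Timestamp"
).fetchall()
```

No exploit, no password, no special tooling: anyone with the backup file and a stock SQLite library gets the whole trail.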

A few minutes after the story broke, someone posted on Twitter that they wondered how many lawyers handling divorce cases were suddenly drafting subpoenas for copies of this file from their soon-to-be-exes' iPhones. Good question (although I'd have phrased it instead as how many script ideas the wonderful, tech-savvy writers of The Good Wife are pitching involving forensically recovered location data). That is definitely one sort of risk; another, ZDNet's Adrian Kingsley-Hughes points out, is that the geolocation may be wildly inaccurate, creating a false picture that may still be very difficult to explain, either to a spouse or to law enforcement, who, as Declan McCullagh writes, know about and are increasingly interested in accessing this data.

There are a bunch of other obvious privacy things to say about this, and Privacy International has helpfully said them in an open letter to Steve Jobs.

"Companies need openness and procedures," PI's executive director, Simon Davies, said yesterday, comparing Apple's position today to Google's a couple of months before the WiFi data-sniffing scandal.

The reason, I suspect, that so many iPhone users feel so shocked and betrayed is that Apple's attention to the details of glossy industrial design and easy-to-understand user interfaces leads consumers to cuddle up to Apple in a way they don't to Microsoft or Google. I doubt Google will get nearly as much anger directed at it for the news that Android phones also collect location data (the Android saves only the last 50 mobile masts and 200 WiFi networks). In either event, the key is transparency: when you post information on Twitter or Facebook about your location or turn on geo-tagging you know you're doing it. In this case, the choice is not clear enough for users to understand what they've agreed to.

The question is: how best can consumers be enabled to make informed decisions? Apple's current method - putting a note saying "Beware of the leopard" at the end of a 15,200-word set of terms and conditions (which are in any case drafted by the company's lawyer to protect the company, not to serve consumers) that users agree to when they sign up for iTunes - is clearly inadequate. It's been shown over and over again that consumers hate reading privacy policies, and you have only to look at Facebook's fumbling attempts to embed these choices in a comprehensible interface to realize that the task is genuinely difficult. This is especially true because, unlike the issue of user-unfriendly systems in the early 1990s, it's not particularly in any of these companies' interests to solve this intransigent and therefore expensive problem. Make it easy for consumers to opt out and they will, hardly an appetizing proposition for companies supported in whole or in part by advertising.

The answer to the question, therefore, is going to involve a number of prongs: user interface design, regulation, contract law, and industry standards, both technical and practical. The key notion, however, is that it should be feasible - even easy - for consumers to tell what information gathering they're consenting to. The most transparent way of handling that is to make opting out the default, so that consumers must take a positive action to turn these things on.

You can say - as many have - that this particular scandal is overblown. But we're going to keep seeing dust-ups like this until industry practice changes to reflect our expectations. Apple, so sensitive to the details of industrial design that will compel people to yearn to buy its products, will have to develop equal sensitivity for privacy by design.



April 8, 2011

Brought to book

JK Rowling is seriously considering releasing the Harry Potter novels as ebooks, while Amanda Hocking, who's sold a million or so ebooks, has signed a $2 million contract with St. Martin's Press. In the same week. It's hard not to conclude that ebooks are finally coming of age.

And in many ways this is a good thing. The economy surrounding the Kindle, Barnes and Noble's Nook, and other such devices is allowing more than one writer to find an audience for works that mainstream publishers might have ignored. I do think hard work and talent will usually out, and it's hard to believe that Hocking would not have found herself a good career as a writer via the usual routine of looking for agents and publishers. She would very likely have many fewer books published at this point, and probably wouldn't be in possession of the $2 million it's estimated she's made from ebook sales.

On the other hand, assuming she had made at least a couple of book sales by now, she might be much more famous: her blog posting explaining her decision notes that a key factor is that she gets a steady stream of complaints from would-be readers that they can't buy her books in stores. She expects to lose money on the St. Martin's deal compared to what she'd make from self-publishing the same titles. To fans of disintermediation, of doing away with gatekeepers and middle men and allowing artists to control their own fates and interact directly with their audiences, Hocking is a self-made hero.

And yet...the future of ebooks may not be so simply rosy.

This might be the moment to stop and suggest reading a little background on book publishing from the smartest author I know on the topic, science fiction writer Charlie Stross. In a series of blog postings he's covered common misconceptions about publishing, why the Kindle's 2009 UK launch was bad news for writers, and misconceptions about ebooks. One of Stross's central points: epublishing platforms are not owned by publishers but by consumer electronics companies - Apple, Sony, Amazon.

If there's one thing we know about the Net and electronic media generally it's that when the audience for any particular new medium - Usenet, email, blogs, social networks - gets to be a certain size it attracts abuse. It's for this reason that every so often I argue that the Internet does not scale well.

In a fascinating posting on Patrick and Theresa Nielsen-Hayden's blog Making Light, Jim Macdonald notes the case of Canadian author S K S Perry, who has been blogging on LiveJournal about his travails with a thief. Perry, having had no luck finding a publisher for his novel Darkside, had posted it for free on his Web site, where a thief copied it and issued a Kindle edition. Macdonald links this sorry tale (which seems now to have reached a happy-enough ending) with postings from Laura Hazard Owen and Mike Essex that predict a near future in which we are awash in recycled ebook...spam. As all three of these writers point out, there is no system in place to do the kind of copyright/plagiarism checking that many schools have implemented. The costs are low; the potential for recycling content vast; and the ease of gaming the ratings system extraordinary. And either way, the ebook retailer makes money.

Macdonald's posting primarily considers this future with respect to the challenge for authors to be successful*: how will good books find audiences if they're tiny islands adrift in a sea of similar-sounding knock-offs and crap? A situation like that could send us all scurrying back into the arms of people who publish on paper. That wouldn't bother Amazon-the-bookseller; Apple and others without a stake in paper publishing are likely to care more (and promising authors and readers due care and diligence might help them build a better, differentiated ebook business).

There is a mythology that those who - like the Electronic Frontier Foundation or the Open Rights Group - oppose the extension and tightening of copyright are against copyright. This is not the case: very few people want to do away with copyright altogether. What most campaigners in this area want is a fairer deal for all concerned.

This week the issue of term extension for sound recordings in the EU revived when Denmark changed tack and announced it would support the proposals. It's long been my contention that musicians would be better served by changes in the law that would eliminate some of the less fair terms of typical contracts, that would provide for the reversion of rights to musicians when their music goes out of commercial availability, and that would alter the balance of power, even if only slightly, in favor of the musicians.

This dystopian projected future for ebooks is a similar case. It is possible to be for paying artists and even publishers and still be against the imposition of DRM and the demonization of new technologies. This moment, where ebooks are starting to kick into high gear, is the time to find better ways to help authors.

*Successful: an author who makes enough money from writing books to continue writing books.


April 1, 2011

Equal access

It is very, very difficult to understand the reasoning behind the not-so-secret plan to institute Web blocking. In a letter to the Open Rights Group (http://www.openrightsgroup.org/blog/2011/minister-confirms-voluntary-site-blocking-discussions), Ed Vaizey, the minister for culture, communications, and creative industries, confirmed that such a proposal emerged from a workshop to discuss "developing new ways for people to access content online". (Orwell would be so proud.)

We fire up Yes, Minister once again to remind everyone of the four characteristics of proposals ministers like: quick, simple, popular, cheap. Providing the underpinnings of Web site blocking is not likely to be very quick, and it's debatable whether it will be cheap. But it certainly sounds simple, and although it's almost certainly not going to be popular among the 7 million people the government claims engage in illegal file-sharing - a number PC Pro has done a nice job of dissecting - it's likely to be popular with the people Vaizey seems to care most about, rights holders.

The four opposing kiss-of-death words are: lengthy, complicated, expensive, and either courageous or controversial, depending how soon the election is. How to convince Vaizey that it's these four words that apply and not the other four?

Well, for one thing, it's not going to be simple, it's going to be complicated. Web site blocking is essentially a security measure. You have decided that you don't want people to have access to a particular source of data, and so you block their access. Security is, as we know, not easy to implement and not easy to maintain. Security, as Bruce Schneier keeps saying, is a process, not a product. It takes a whole organization to implement the much more narrowly defined IWF system. What kind of infrastructure will be required to support the maintenance and implementation of a block list to cover copyright infringement? Self-regulatory, you say? Where will the block list, currently thought to be about 100 sites, come from? Who will maintain it? Who will oversee it to ensure that it doesn't include "innocent" sites? ISPs have other things to do, and other than limiting or charging for the bandwidth consumption of their heaviest users (who are not all file sharers by any stretch) they don't have a dog in this race. Who bears the legal liability for mistakes?

The list is most likely to originate with rights holders, who, because they have shown over most of the last 20 years that they care relatively little if they scoop innocent users and sites into the net alongside infringing ones, no one trusts to be accurate. Don't the courts have better things to do than adjudicate what percentage of a given site's traffic is copyright-infringing and whether it should be on a block list? Is this what we should be spending money on in a time of austerity? Mightn't it be...expensive?

Making the whole thing even more complicated is the obvious (to anyone who knows the Internet) fact that such a block list will start a new arms race - and, according to Torrentfreak, already has.
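The arms race is easy to see in miniature. A blunt hostname blocklist of the kind under discussion might look like the sketch below (the site names are hypothetical), and every trivial variation - a new domain, a raw IP address, a proxy - sails straight past it:

```python
from urllib.parse import urlparse

BLOCKLIST = {"example-torrents.net", "another-blocked-site.org"}  # hypothetical

def is_blocked(url: str) -> bool:
    """Match a request's hostname against the blocklist."""
    host = urlparse(url).hostname or ""
    return host in BLOCKLIST

blocked = is_blocked("http://example-torrents.net/search")   # caught
evaded = is_blocked("http://example-torrents.me/search")     # new domain: missed
evaded2 = is_blocked("http://93.184.216.34/search")          # raw IP: missed
```

Keeping such a list current against sites that can re-register under a new name in minutes is exactly the lengthy, expensive maintenance problem described above.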

And yet another wrinkle: among blocking targets are cyberlockers. And yet this is a service that, like search, is going mainstream: Amazon.com has just launched such a service, which it calls Cloud Drive and for which it retains the right to police rather thoroughly. Encrypted files, here we come.

At least one ISP has already called the whole idea expensive, ineffective, and rife with unintended consequences.

There are other obvious arguments, of course. It opens the way to censorship. It penalizes innocent uses of technology as well as infringing ones; torrent search sites typically have a mass of varied material and there are legitimate reasons to use torrenting technology to distribute large files. It will tend to add to calls to spy on Internet users in more intrusive ways (as Web blocking fails to stop the next generation of file-sharing technologies). It will tend to favor large (often American) services and companies over smaller ones. Google, as IsoHunt told the US Court of Appeals two weeks ago, is the largest torrent search engine. (And, of course, Google has other copyright troubles of its own; last week the court rejected the Google Books settlement.)

But the sad fact is that although these arguments are important they're not a good fit if the main push behind Web blocking is an entrenched belief that the only way to secure economic growth is to extend and tighten copyright while restricting access to technologies and sites that might be used for infringement. Instead, we need to show that this entrenched belief is wrong.

We do not block the roads leading to car boot sales just because sometimes people sell things at them whose provenance is cloudy (at best). We do not place levies on the purchase of musical instruments because someone might play copyrighted music on them. We should not remake the Internet - a medium to benefit all of society - to serve the interests of one industrial group. It would make more sense to put the same energy and financial resources into supporting the games industry which, as Tom Watson (Lab - West Bromwich East) has pointed out, has great potential to lift the British economy.


March 25, 2011

Return to the red page district

This week's agreement to create a .xxx generic top-level domain (generic in the sense of not being identified with a particular country) seems like a quaint throwback. Ten or 15 years ago it might have mattered. Now, for all the stories rehashing the old controversies, it seems to be largely irrelevant to anyone except those who think they can make some money out of it. How can it be a vector for censorship if there is no prohibition on registering pornography sites elsewhere? How can it "validate" the porn industry any more than printers and film producers did? Honestly, if it didn't have sex in the title, who would care?

I think it was about 1995 when a geekish friend said, probably at the Computers, Freedom, and Privacy conference, "I think I have the solution. Just create a top-level domain just for porn."

It sounded like a good idea at the time. Many of the best ideas are simple - with a kind of simplicity mathematicians like to praise with the term "elegant". Unfortunately, many of the worst ideas are also simple - with a kind of simplicity we all like to diss with the term "simplistic". Which this is depends to some extent on when you're making the judgement.

In 1995, the sense was that creating a separate pornography domain would provide an effective alternative to broad-brush filtering. It was the era of Time magazine's Cyberporn cover story, which Netheads thoroughly debunked, and the run-up to the passage of the Communications Decency Act in 1996. The idea that children would innocently stumble upon pornography was entrenched and not wholly wrong. At that time, as PC Magazine points out while outlining the adult entertainment industry's objections to the new domain, a lot of Web surfing was done by guesswork, which is how the domain whitehouse.com became famous.

A year or two later, I heard that one of the problems was that no one wanted to police domain registrations. Sure. Who could afford the legal liability? Besides, limiting who could register what in which domain was not going well: .com, which was intended to be for international commercial organizations, had become the home for all sorts of things that didn't fit under that description, while the .us country code domain had fallen into disuse. Even today, with organizations controlling every top-level domain, the rules keep having to adapt to user behavior. Basically, the fewer people interested in registering under your domain the more likely it is that your rules will continue to work.

No one has ever managed to settle - again - the question of what the domain name system is for, a debate that's as old as the system itself: its inventor, Paul Mockapetris, still carries the scars of the battles over whether to create .com. (If I remember correctly, he was against it, but finally gave in, reasoning: "What harm can it do?") Is the domain name system a directory, a set of mnemonics, a set of brands/labels, a zoning mechanism, or a free-for-all? ICANN began its life, in part, to manage the answers to this particular controversy; many long-time watchers don't understand why it's taken so long to expand the list of generic top-level domains. Fifteen years ago, finding a consensus and expanding the list would have made a difference to the development of the Net. Now it simply does not matter.

I've written before now that the domain name system has faded somewhat in importance as newer technologies - instant messaging, social networks, iPhone/iPad apps - bypass it altogether. And that is true. When the DNS was young, it was a perfect fit for the Internet applications of the day for which it was devised: Usenet, Web, email, FTP, and so on. But the domain name system enables email and the Web, which are typically the gateways through which people make first contact with those services (you download the client via the Web, email your friend for his ID, use email to verify your account).

The rise of search engines - first Altavista, then primarily Google - did away with much of consumers' need for a directory. Also a factor was branding: businesses wanted memorable domain names they could advertise to their customers. By now, though, most people probably don't bother to remember more than a tiny handful of domain names - Google, Facebook, perhaps one or two more. Anything else they either put into a search engine or get from a bookmark or, more likely, their browser history.

Then came sites like Facebook, which take an approach akin to CompuServe's in the old days or mobile networks' now: they want to be your gateway to everything online. (Facebook is going to stream movies now, in competition with Netflix!) If they succeed, would it matter if you had - once - to teach your browser a user-unfriendly long, numbered address?
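That question rests on the mechanics the column keeps circling: a domain name is just a mnemonic that the DNS resolves to a numeric address before a browser ever connects. A minimal sketch in Python of that resolution step (the hostname here is illustrative, and chosen so the lookup works even without network access):

```python
import socket

def resolve(hostname: str) -> str:
    """Look up the numeric IPv4 address behind a domain name -
    the same resolution step a browser performs before connecting."""
    return socket.gethostbyname(hostname)

if __name__ == "__main__":
    # "localhost" resolves locally, without touching the network.
    print(resolve("localhost"))  # typically 127.0.0.1
```

Teaching a browser the numbered address directly just means skipping this lookup, which is possible but, as the column suggests, user-unfriendly.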

It is in this sense that the domain name system competes with Google and Facebook as the gateway to the Net. Of all the potential gateways, it is the only one that is intended as a public resource rather than a commercial company. That has to matter, and we should take seriously the threat that all the Net's entrances could become owned by giant commercial interests. But .xxx missed its moment to make history.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

January 28, 2011

Stuffed

"You don't need this old math work," said my eighth grade geography teacher, paging through my loose-leaf notebook while I watched resentfully. It was 1967, the math work was no more than a couple of months old, and she was ahead of her time. She was an early prototype of that strange, new species littering the media these days: the declutterer.

People like her - they say "professional organizer", I say bully - seem to be everywhere. Their sudden visibility is probably due, at least in part, to the success of the US TV series Hoarders, in which mentally disordered people are forced to confront their pathological addiction to keeping and/or acquiring so much stuff that their houses are impassable, often hazardous. Of course, one person's pathological hoarder is another's more-or-less normal slob, packrat, serious collector, or disorganized procrastinator. Still, Newsweek's study of kids who are stuck with the clean-up after their hoarder parents die is decidedly sad.

But much of what I'm reading seems aimed at perfectly normal people who are being targeted with all the zealotry of an early riser insisting that late sleepers and insomniacs are lazy, immoral slugs who need to be reformed.

Some samples. LifeHacker profiles a book to help you estimate how much your clutter is costing you. The latest middle-class fear is that schools' obsession with art work will turn children into hoarders. The New York Times profiles a professional declutterer who has so little sympathy for attachment to stuff that she tosses out her children's party favors after 24 hours. At least she admits she's neurotic, and is just happy she's made it profitable to the tune of $150 an hour (well, Manhattan prices).

But take this comment from LifeHacker:

For example, look in your bedroom and consider the cost of unworn clothes and shoes, unread books, unworn jewelry, or unused makeup.

And this, from the Newsweek piece:

While he's thrown out, recycled, and donated years' worth of clothing, costume jewelry, and obvious trash, he's also kept a lot--including an envelope of clothing tags from items [his mother] bought him in 1972, hundreds of vinyl records, and an outdated tape recorder with corroded batteries leaking out the back.

OK, with her on the corroded batteries. (What does she mean, outdated? If it still functions for its intended purpose it's just old.) Little less sure about the clothing tags, which might evoke memories. But unread books? Unless you're talking 436 copies of The Da Vinci Code, unread books aren't clutter. Unread books are mental food. They are promises of unknown worlds on a rainy day when the electricity goes bang. They are cultural heritage. Ditto vinyl records. Not all books and LPs are equally valuable, of course, but they should be presumed innocent until proven to be copies of Jeffrey Archer novels. Books are not shoeboxes marked "Pieces of string - too small to save".

Leaving aside my natural defensiveness at the suggestion that thousands of books, CDs, DVDs, and vinyl LPs are "clutter", it strikes me that one reason for this trend is that there is a generational shift taking place. Anyone born before about 1970 grew up knowing that the things they liked might become unavailable at any time. TV shows were broadcast once, books and records went out of print, and the sweater that sold out while you were saving up for it didn't reappear later on eBay. If you had any intellectual or artistic aspirations, building your own library was practically a necessity.

My generation also grew up making and fixing things: we have tools. (A couple of years ago I asked a pair of 20-somethings for a soldering iron; they stared as if I'd asked for a manual typewriter.) Plus, in the process of rebelling against our parents' largely cautious and thrifty lifestyles, Baby Boomers were the first to really exploit consumer credit. Put it together: endemic belief that the availability of any particular item was only temporary, unprecedented array of goods to choose from, extraordinary access to funding. The result: stuff.

To today's economically stressed-out younger generation, raised on reruns and computer storage, the physical manifestations of intellectual property must seem peculiarly unnecessary. Why bother when you can just go online and click a button? One of my 50-something writer friends loves this new world; he gives away or sells books as soon as he's read them, and buys them back used from Amazon or Alibris if he needs to consult them again. Except for the "buying it used" part, this is a business model the copyright industries ought to love, because you can keep selling the same thing over and over again to the same people. Essentially, it's rental, which means it may eventually be an even better business than changing the media format every decade or two so that people have to buy new copies. When 3D printers really get going, I imagine there will be people arguing that you really don't need to keep furniture around - just print it when you need it. Then the truly modern home environment will be just a bare floor and walls. If you want to live like that, fine, but on behalf of my home libraries, I say: ick.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

January 14, 2011

Face time

The history of the Net has featured many absurd moments, but this week was some sort of peak of the art. In the same week I read that a) based on a $450 million round of investment from Goldman Sachs, Facebook is now valued at $50 billion, higher than Boeing's market capitalization and b) Facebook's founder, Mark Zuckerberg, is so tired of the stress of running the service that he plans to shut it down on March 15. As I seem to recall a CS Lewis character remarking irritably, "Why don't they teach logic in these schools?" If you have a company worth $50 billion and you don't much like running it any more, you sell the damn thing and retire. It's not like Zuckerberg even needs to wait to be Time's Man of the Year.

While it's safe to say that Facebook isn't going anywhere soon, it's less clear what its long-term future might be, and the users who panicked at the thought of the service's disappearance would do well to plan ahead. Because: if there's one thing we know about the history of the Net's social media, it's that the party keeps moving. Facebook's half-a-billion-strong user base is, to be sure, bigger than anything else assembled in the history of the Net. But I think the future as seen by Douglas Rushkoff, writing for CNN last week, is more likely: Facebook, he argued based on its arguably inflated valuation, is at the beginning of its end, as MySpace was when Rupert Murdoch bought it in 2005 for $580 million. (Though this says as much about Murdoch's Net track record as it does about MySpace: Murdoch bought the text-based Delphi at its peak moment, in late 1993.)

Back in 1999, at the height of the dot-com boom, the New Yorker published an article (abstract; full text requires subscription) comparing the then-spiking stock price of AOL with that of the Radio Corporation of America back in the 1920s, when radio was the hot, new democratic medium. RCA was selling radios that gave people unprecedented access to news and entertainment (including stock quotes); AOL was selling online accounts that gave people unprecedented access to news, entertainment, and their friends. The comparison, as the article noted, wasn't perfect, but the comparison chart the article was written around was, as the author put it, "jolly". It still looks jolly now, recreated some months later for this analysis of the comparison.

There is more to every company than just its stock price, and there is more to AOL than its subscriber numbers. But the interesting chart to study - if I had the ability to create such a chart - would be the successive waves of rising, peaking, and falling numbers of subscribers of the various forms of social media. In more or less chronological order: bulletin boards, Usenet, Prodigy, GEnie, Delphi, CompuServe, AOL...and now MySpace, which this week announced extensive job cuts.

At its peak, AOL had 30 million subscribers; at the end of September 2010 it had 4.1 million in the US. As subscriber revenues continue to shrink, the company is changing its emphasis to producing content that will draw in readers from all over the Web - that is, it's increasingly dependent on advertising, like many companies. But the broader point is that at its peak a lot of people couldn't conceive that it would shrink to this extent, because of the basic principle of human congregation: people go where their friends are. When the friends gradually start to migrate to better interfaces, more convenient services, or simply sites their more annoying acquaintances haven't discovered yet, others follow. That doesn't necessarily mean death for the service they're leaving: AOL, like CIX, The WELL, and LiveJournal before it, may well find a stable size at which it remains sufficiently profitable to stay alive, perhaps even comfortably so. But it does mean it stops being the growth story of the day.

As several financial commentators have pointed out, the Goldman investment is good for Goldman no matter what happens to Facebook, and may not be ring-fenced enough to keep Facebook private. My guess is that even if Facebook has reached its peak it will be a long, slow ride down the mountain and between then and now at least the early investors will make a lot of money.

But long-term? Facebook is barely five years old. According to figures leaked by one of the private investors, its price-earnings ratio is 141. The good news is that if you're rich enough to buy shares in it you can probably afford to lose the money.

As far as I'm aware, little research has been done studying the Net's migration patterns. From my own experience, I can say that my friends lists on today's social media include many people I've known on other services (and not necessarily in real life) as the old groups reform in a new setting. Facebook may believe that because the profiles on its service are so complex, including everything from status updates and comments to photographs and games, users will stay locked in. Maybe. But my guess is that the next online party location will look very different. If email is for old people, it won't be long before Facebook is, too.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

December 31, 2010

Good, bad, ugly...the 2010 that was

Every year deserves its look back, and 2010 is no exception. On the good side, the younger generation beginning to enter politics is bringing with it a little more technical sense than we've had in government before. On the bad side, the year's many privacy scandals reminded us all how big a risk we take in posting as much information online as we do. The ugly...we'd have to say the scary new trends in malware. Happy New Year.

By the numbers:

$5.3 billion: the Google purchase offer that Groupon turned down. Smart? Stupid? Shopping and social networks ought to mix combustibly (and could hit local newspapers and their deal flyers), but it's a labor-intensive business. The publicity didn't hurt: Groupon has now managed to raise half a billion dollars on its own. They aren't selling anything we want to buy, but that doesn't seem to hurt Wal-Mart or McDonalds.

$497 million: the amount Harvard scientists Tyler Moore and Benjamin Edelman estimate that Google is earning from "typosquatting". Pocket change, really: Google's 2009 revenues were $23 billion. But still.

15 million (estimated): number of iPads sold since its launch in May. It took three decades of commercial failures for someone to finally launch a successful tablet computer. In its short life the iPad has been hailed and failed as the savior of print publications, and halved Best Buy's laptop sales. We still don't want one - but we're keyboard addicts, hardly its target market.

250,000: diplomatic cables channeled to Wikileaks. We mention this solely to enter The Economist's take on Bruce Sterling's take into the discussion. Wikileaks isn't at all the crypto-anarchy that physicist Timothy C. May wrote about in 1992. May's essay imagined the dark uses of encrypted secrecy; Wikileaks is, if anything, the opposite of it.

500: airport scanners deployed so far in the US, at an estimated cost of $80 million. For 2011, Obama has asked for another $88 million for the next round of installations. We'd like fewer scanners and the money instead spent on...well, almost anything else, really. Intelligence, perhaps?

65: Percentage of Americans that Pew Internet says have paid for Internet content. Yeah, yeah, including porn. We think it's at least partly good news.

58: Number of investigations (countries and US states) launched into Google's having sniffed approximately 600Gb of data from open WiFi connections, which the company admitted in May. The progress of each investigation is helpfully tallied by SearchEngineLand. Note that the UK's ICO's reaction was sufficiently weak that MPs are complaining.

24: Hours of Skype outage. Why are people writing about this as though it were the end of Skype? It was a lot more shocking when it happened to AT&T in 1990 - in those days, people only had one phone number!

5: number of years I've wished Google would eliminate useless shopping aggregator sites from its search results listings. Or at least label them and kick them to the curb.

2: Facebook privacy scandals that seem to have ebbed, leaving less behavioral change than we'd like in their wake. In January, Facebook founder and CEO Mark Zuckerberg opined that privacy is no longer a social norm; in May it revamped its privacy settings, meeting an uproar in response (and not for the first time). Still, the service had 400 million users at the beginning of 2010 and has more than 500 million now. Resistance requires considerable anti-social effort, though the cool people have, of course, long fled.

1: Stuxnet worm. The first serious infrastructure virus. You knew it had to happen.

In memoriam:

- Kodachrome. The Atlantic reports that December 30, 2010 saw the last-ever delivery of Kodak's famous photographic film. As they note, the specific hues and light-handling of Kodachrome defined the look of many decades of the 20th century. Pause to admire The Atlantic's selection of the 75 best pictures they could find: digital has many wonderful qualities, but these seem to have a three-dimensional roundness you don't see much any more. Or maybe we just forget to look.

- The 3.5in floppy disk. In April, Sony announced it would stop making the 1.4Mb floppy disk that defined the childhoods of today's 20-somethings. The first video clip I ever downloaded, of the exploding whale in Oregon (made famous by a Web site and a Dave Barry column), required 11 floppy disks to hold it. You can see why it's gone.

- Altavista: A leaked internal memo puts Altavista on Yahoo!'s list of services due for closure. Before Google, Altavista was the best search engine by a long way, and if it had focused on continuing to improve its search algorithms instead of cluttering up its front page in line with the 1995 fad for portals, it might still be. Google's overwhelming success had as much to do with its clean, fast-loading design as it did with its superior ability to find stuff. Altavista also pioneered online translation with its Babelfish (and don't you have to love a search engine that quotes Douglas Adams?).

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

December 10, 2010

Payback

A new word came my way while I was reviewing the many complaints about the Transportation Security Administration and its new scanner toys and pat-down procedures: "Chertoffed". It's how "security theater" (Bruce Schneier's term) has transformed the US since 2001.

The description isn't entirely fair to Chertoff, who was only the *second* head of the Bush II-created Department of Homeland Security and has now been replaced: he served from 2005-2009. But since he's the guy who began the scanner push and also numbers scanner manufacturers among the clients of his consultancy company, The Chertoff Group - it's not really unfair either.

What do you do after defining the travel experience of a generation? A little over a month ago, Chertoff showed up at London's RSA Data Security conference to talk about what he thought needed to happen in order to secure cyberspace. We need, he said, a doctrine to lay out the rules of the road for dealing with cyber attacks and espionage - the sort of thing that only governments can negotiate. The analogy he chose was to the doctrine that governed nuclear armament, which he said (at the press Q&A) "gave us a very stable, secure environment over the next several decades."

In cyberspace, he argued, such a thing would be valuable because it makes clear to a prospective attacker what the consequences will be. "The greatest stress on security is when you have uncertainty - the attacker doesn't know what the consequences will be and misjudges the risk." The kinds of things he wants a doctrine to include are therefore things like defining what is a proportionate response: if your country is on the receiving end of an attack from another country that's taking out the electrical power to hospitals and air traffic control systems with lives at risk, do you have the right to launch a response to take out the platform they're operating from? Is there a right of self-defence of networks?

"I generally take the view that there ought to be a strong obligation on countries, subject to limitations of practicality and legal restrictions, to police the platforms in their own domains," he said.

Now, there are all sorts of reasons many techies are against government involvement - or interference - in the Internet. First and foremost is time: the World Summit on the Information Society and its successor, the Internet Governance Forum, have taken years to do...no one's quite sure what, while the Internet's technology has gone on racing ahead creating new challenges. But second is a general distrust, especially among activists and civil libertarians. Chertoff even admitted that.

"There's a capability issue," he said, "and a question about whether governments put in that position will move from protecting us from worms and viruses to protecting us from dangerous ideas."

This was, of course, somewhat before everyone suddenly had an opinion about Wikileaks. But what has occurred since makes that distrust entirely reasonable: give powerful people a way to control the Net and they will attempt to use it. And the Net, as in John Gilmore's famous aphorism, "perceives censorship as damage and routes around it". Or, more correctly, the people do.

What is incredibly depressing about all this is watching the situation escalate into the kind of behavior that governments have quite reasonably wanted to outlaw and that will give ammunition to those who oppose allowing the Net to remain an open medium in which anyone can publish. The more Wikileaks defenders organize efforts like this week's distributed denial-of-service attacks, the more Wikileaks and its aftermath will become the justification for passing all kinds of restrictive laws that groups like the Electronic Frontier Foundation and the Open Rights Group have been fighting against all along.

Wikileaks itself is staying neutral on the subject, according to the statement on its (Swiss) Web site: Wikileaks spokesman Kristinn Hrafnsson said: "We neither condemn nor applaud these attacks. We believe they are a reflection of public opinion on the actions of the targets."

Well, that's true up to a point. It would be more correct to say that public opinion is highly polarized, and that the attacks are a reflection of the opinion of a relatively small section of the public: people who are at the angriest end of the spectrum and have enough technical expertise to download and install software to make their machines part of a botnet - and not enough sense to realize that this is a risky, even dangerous, thing to do. Boycotting Amazon.com during its busiest time of year to express your disapproval of its having booted Wikileaks off its servers would be an entirely reasonable protest. Vandalism is not. (In fact the announced attack on Amazon's servers seems not to have succeeded, though others have.)

I have written about the Net and what I like to call the border wars between cyberspace and real life for nearly 20 years. Partly because it's fascinating, partly because when something is new you have a real chance to influence its development, and partly because I love the Net and want it to fulfill its promise as a democratic medium. I do not want to have to look back in another 20 years and say it's been "Chertoffed". Governments are already mad about the utterly defensible publication of the cables; do we have to give them the bullets to shoot us with, too?

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

December 3, 2010

Open diplomacy

Probably most people have by now lived through the embarrassment of having a communication that was intended to be private made public. The email your fingers oopsishly sent to the entire office instead of your inamorata; the drunken Usenet postings scooped into Google's archive; the direct Tweet that wound up in the public timeline; the close friend your cellphone pocket-dialed while you were trashing them.

Most of these embarrassments are relatively short-lived. The personal relationships that weren't already too badly damaged recover, if slowly. Most of the people who get the misdirected email are kind enough to delete it and never mention it again. Even the stock market learns to forgive those drunken Usenet postings; you may be a CEO now but you were only a frat boy back then.

But the art of government-level diplomacy is creating understanding, tolerance, and some degree of cooperation among people who fundamentally distrust each other and whose countries may have substantial, centuries-old reasons why that is utterly rational. (Sometimes these internecine feuds are carried to extremes: would you buy from a store that filed Greek and Turkish DVDs in the same bin?) It's hardly surprising if diplomats' private conversations resemble those of Hollywood agents, telling each person what they want to hear about the others and maneuvering them carefully to get the desired result. And a large part of that desired result is avoiding mass destruction through warfare.

For that reason, it's hard to simply judge Wikileaks' behavior by the standard of our often-expressed goal of open data, transparency, accountability, and net.freedoms. Is there a line? And where do you draw it?

In the past, it was well-established news organizations who had to make this kind of decision - the New York Times and the Washington Post regarding the Pentagon Papers, for example. Those organizations, rooted in a known city in a single country, knew that mistakes would see them in court; they had reputations, businesses, and personal liberty to lose. As Jay Rosen has put it, Wikileaks is the world's first stateless news organization: it sits outside any one country's culture, laws, and norms; it contracts with those who have information to submit it, encrypting submissions to disguise sources from itself as well as from others; and it publishes beyond the reach of subpoenas, because it is stateless. Rosen also points to the failure of the watchdog press under George Bush, and to an anxiety on the part of the press that derives from denial of its own death.

Wikileaks wasn't *exactly* predicted by Internet pioneers, but it does have its antecedents and precursors. Before collaborative efforts - wikis - became commonplace on the Web there was already the notion of bypassing the nation-state to create stores of data that could not be subjected to subpoenas and other government demands. There was the Sealand data bunker. There was physicist Timothy May's Crypto Anarchist Manifesto, which posited that, "Crypto anarchy will allow national secrets to be traded freely and will allow illicit and stolen materials to be traded."

Note, however, that a key element of these ideas was anonymity. Julian Assange has told Guardian readers that in fact he originally envisioned Wikileaks as an anonymous service, but eventually concluded that someone must be responsible to the public.

Curiously, the strand of Internet history that is the closest to the current Wikileaks situation is the 1993-1997 wrangle between the Net and Scientology, which I wrote about for Wired in 1995. This particular net.war did a lot to establish the legal practices still in force with respect to user-generated content: notice and takedown, in particular. Like Wikileaks today, those posting the most closely guarded secrets of Scientology found their servers under attack and their material being taken down and, in response, replicated internationally on mirror sites to keep it available. Eventually, sophisticated systems were developed for locating the secret documents wherever they were hosted on a given day as they bounced from server to server (and they had to do all that without the help of Twitter). Today, much of the gist is on Wikipedia. At the time, however, calling it a "flame war with real bullets" wasn't far wrong: some of Scientology's fiercest online critics had their servers and/or homes raided. When Amazon removed Wikileaks from its servers because of "copyright", it operated according to practices defined in response to those Scientology actions.

The arguments over Wikileaks push at many other boundaries that have been hotly disputed over the last 20 years. Are they journalists, hackers, criminals, or heroes? Is Wikileaks important because, as NYU professor Jay Rosen points out, journalism has surrendered its watchdog role? Or because it is posing, as Techdirt says, the kind of challenge to governments that the music and film industries have already been facing? On a technical level, Wikileaks is showing us the extent to which the Internet can still resist centralised control.

A couple of years ago, Stefan Magdalinski noted the "horse-trading in a fairly raw form" his group of civic hackers discovered when they set out to open up the United Nations proceedings - another example of how people behave when they think no one is watching. Ultimately, governments will learn to function in a world in which they cannot trust that anything is secret, just as they had to learn to cope with CNN (PDF).

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

November 12, 2010

Just between ourselves

It is, I'm sure, pure coincidence that a New York revival of Vaclav Havel's wonderfully funny and sad 1965 play The Memorandum was launched while the judge was considering the Paul Chambers "Twitter joke trial" case. "Bureaucracy gone mad," they're billing the play, and they're right, but what that slogan omits is that the bureaucracy in question has gone mad because most of its members don't care and the one who does has been shut out of understanding what's going on. A new language, Ptydepe, has been secretly invented and introduced as a power grab by an underling claiming it will improve the efficiency of intra-office communications. The hero only discovers the shift when he receives a memorandum written in the new language and can't get it translated due to carefully designed circular rules. When these are abruptly changed the translated memorandum restores him to his original position.

It is one of the salient characteristics of Ptydepe that it has a different word for every nuance of the characters' natural language - Czech in the original, but of course English in the translation I read. Ptydepe didn't work for the organization in the play because it was too complicated for anyone to learn, but perhaps something like it that removes all doubt about nuance and context would assist older judges in making sense of modern social interactions over services such as Twitter. Clearly any understanding of how people talk and make casual jokes was completely lacking yesterday when Judge Jacqueline Davies upheld the conviction of Paul Chambers in a Doncaster court.

Chambers' crime, if you blinked and missed those 140 characters, was to post a frustrated message about snowbound Doncaster airport: "Crap! Robin Hood airport is closed. You've got a week and a bit to get your shit together otherwise I'm blowing the airport sky high!" Everyone along the chain of accountability up to the Crown Prosecution Service - the airport duty manager, the airport's security personnel, the Doncaster police - seems to have understood he was venting harmlessly. And yet prosecution proceeded and led, in May, to a conviction that was widely criticized both for its lack of understanding of new media and for its failure to take Chambers' lack of malicious intent into account.

By now, everyone has been thoroughly schooled in the notion that it is unwise to make jokes about bombs, plane crashes, knives, terrorists, or security theater - when you're in an airport hoping to get on a plane. No one thinks any such wartime restraint need apply in a pub or its modern equivalent, the Twitter/Facebook/online forum circle of friends. I particularly like Heresy Corner's complaint that the judgement makes it illegal to be English.

Anyone familiar with online writing style immediately and correctly reads Chambers' Tweet for what it was: a perhaps ill-conceived expression of frustration among friends that happens to also be readable (and searchable) by the rest of the world. By all accounts, the judge seems to have read it as if it were a deliberately written personal telegram sent to the head of airport security. The kind of expert explanation on offer in this open letter apparently failed to reach her.

The whole thing is a perfect example of the growing danger of our data-mining era: that casual remarks are indelibly stored and can be taken out of context to give an utterly false picture. One of the consequences of the Internet's fundamental characteristic of allowing the like-minded and like-behaved to find each other is that tiny subcultures form all over the place, each with its own set of social norms and community standards. Of course, niche subcultures have always existed - probably every local pub had its own set of tropes that were well-known to and well-understood by the regulars. But here's the thing they weren't: permanently visible to outsiders. A regular who, for example, chose to routinely indicate his departure for the Gents with the statement, "I'm going out to piss on the church next door" could be well-known in context never to do any such thing. But if all outsiders saw was a ten-second clip posted to YouTube of that statement and the others' relaxed reaction, they might legitimately assume that pub was a shocking hotbed of anti-religious slobs. Context is everything.

The good news is that the people on the ground whose job it was to protect the airport read the message, understood it correctly, and did not overreact. The bad news is that when the CPS and courts did not follow their lead it opened up a number of possibilities for the future, all bad. One, as so many have said, is that anyone who now posts anything online while drunk, angry, stupid, or sloppy-fingered is at risk of prosecution - with the consequence of wasting huge amounts of police and judicial time that would be better spent spotting and stopping actual terrorists. The other is that everyone up the chain felt required to cover their ass in case they were wrong.

Chambers still may appeal to the High Court; Stephen Fry is offering to pay his fine (the Yorkshire Post puts his legal bill at £3,000), and there's a fund accepting donations.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

November 5, 2010

Suicidal economics

Toxic sludge is GOOD for you, observed John Stauber and Sheldon Rampton in their 1995 book by the same name (or, more completely, Toxic Sludge is Good For You!: Lies, Damn Lies, and the Public Relations Industry). In that brilliantly researched, carefully reasoned, and humorous tome they laid out for inspection the inner workings of the PR industry. After reading it, you never look at the news the same way again.

Including, as we are not the first to say, this week's news that Rupert Murdoch's News International sees extracting subscription money from 105,000 readers of the online versions of the Times and Sunday Times as a success. Nieman Labs' round-up shows how much skepticism greeted this particular characterization elsewhere in the media. (My personal favorite is the analogy to Spinal Tap's manager's defense of the band when it's suggested that its popularity is waning: "I just think...their appeal is becoming more selective.") If any of a few million blogs had 105,000 paying readers they'd be in fabulous shape; but given the uncertainty surrounding the numbers, for an organization the size of the Times it seems like pocket change.

I'm not sure that the huge drop in readership online is the worst news. Everyone predicted that, even Murdoch's own people (although it is interesting that the guy who is thought to have launched this scheme has left before the long-term results are in). The really bad news is that the paper's print circulation has declined in line with everyone else's since the paywall went up. It might have turned out, for example, that faced with paying £1 for a day's access a number of people might decide they'd just as soon have the nicely printed version that is, after all, still easier to read. Instead, what seems likely from these (unclear and incomplete) numbers is that online readers don't care nearly as much as offline ones about news sources. And in many cases they're right not to: it hardly matters which news site or RSS feed supplies you with the day's Reuters stories or which journalist dutifully copies down the quotes at the press briefing.

Today's younger generation also has - again, rightfully - a much deeper cynicism about "MSM" (mainstream media) than previous ones, who had less choice. They trust Jon Stewart and Stephen Colbert far more than CNN (or the Onion more than the Times). They don't have to have read Stauber's and Rampton's detailed analysis to have absorbed the message: PR distortion is everywhere. If that's the case, why bother with the middleman? Why not just read the transparently biased source - a company's own spin - rather than the obscurely biased one? Or pick the opinion-former whose take on things is the most fun?

As Michael Wolff (who himself famously burned through many of someone else's millions in the dot-com boom) correctly points out, Murdoch's history online has been a persistent effort to recreate the traditional one-to-many publishing model. He likes satellite television and print newspapers - things where you control what's published and have to deal only with a handful of competitors and a back channel composed only of the great and the good. That desire is, I think, a fundamental mismatch with the Internet as we currently know it - and the mismatch is not about free! information but about the two-way, many-to-many nature of the medium.

Not so long ago - 2002 - Murdoch's then COO insisted that you can't make money from content on the Internet; more recently, Times editor James Harding called giving away journalism for free "a quite suicidal form of economics". In a similar vein, this week Bruce Eisen, vice-president of online content development and strategy at the US's Dish Network, complained that the online streaming service Hulu is killing the TV industry.

Back in 2002, I argued that you can make money from online content but it needs to be some combination of a) low overheads, b) necessary, c) unusual if not unique, d) timely, and e) correctly priced. From what Slate is saying, it appears that Netflix is getting c, d, and e right and that the mix is giving the company enough of an advantage to let it compete successfully with free-as-in-file-sharing. But is the Times getting enough of those things right? And does it need to?

As Emily Bell points out, Murdoch's interest in the newspapers was more for their influence than their profitability - and that influence, and therefore their importance, has largely waned. "Internationally, it has no voice," she writes. But therein lies a key difference between the Times and, say, the Guardian or the BBC: enlarging the international audience for and importance of the Times means competing with Murdoch's own overseas titles. The Guardian has no such internal conflict of interest, and is therefore free to pursue its mission to become the world's leading liberal voice.

Of course, who knows? In a year's time maybe we'll all be writing the astonishing story of rising paid subscriber numbers and lauding Murdoch's prescience. But if we are, I'll bet that the big winner won't be the Times but Apple.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

October 1, 2010

Duty of care

"Anyone who realizes how important the Web is," Tim Berners-Lee said on Tuesday, "has a duty of care." He was wrapping up a two-day discussion meeting at the Royal Society. The subject: Web science.

What is Web science? Even after two days, it's difficult to grasp, in part because defining it is a work in progress. Here are some of the disciplines that contributed: mathematics, philosophy, sociology, network science, and law, plus a bunch of much more directly Webby things that don't fit easily into categories. Which of course is the point: Web science has to cover much more than just the physical underpinnings of computers and network wires. Computer science or network science can use the principles of mathematics and physics to develop better and faster machines and study architectures and connections. But the Web doesn't exist without the people putting content and applications on it, and so Web science must be as much about human behaviour as about physics.

"If we are to anticipate how the Web will develop, we will require insight into our own nature," Nigel Shadbolt, one of the event's convenors, said on Monday. Co-convenor Wendy Hall has said, similarly, "What creates the Web is us who put things on it, and that's not natural or engineered." Neither natural (like biological systems) nor engineered (like the planned build-out of the telecommunications networks), but something new. If we can understand it better, we can not only protect it better but also guide it toward the most productive outcomes, just as farmers don't haphazardly interbreed species of corn but use their understanding to select for desirable traits.

The simplest contributions to understand, therefore, came (ironically) from the mathematicians. Particularly intriguing was the UK's former chief scientist Robert May, whose approach to removing nodes from a network to render it non-functional applies equally to the Web, epidemiology, and banking risk.

This is all happening despite the recent Wired cover claiming the "Web is dead". Dead? Facebook is a Web site; Skype, the app store, IM clients, Twitter, and the New York Times all reach users first via the Web even if they use their iPhones for subsequent visits (and how exactly did they buy those iPhones, hey?). Saying it's dead is almost exactly like the old joke about how no one goes to a particular restaurant any more because it's too crowded.

People who think the Web is dead have stopped seeing it. But the point of Web science is that for 20 years we've been turning what started as an academic playground into a critical infrastructure, and for government, finance, education, and social interaction to all depend on the Web it must have solid underpinnings. And it has to keep scaling - in a presentation on the state of deployment of IPv6 in China, Jianping Wu noted that Internet penetration in China is expected to jump from 30 percent to 70 percent in the next ten to 20 years. That means adding 400-900 million users. The Chinese will have to design, manage, and operate the largest infrastructure in the world - and finance it.

But that's the straightforward kind of scaling. IBMer Philip Tetlow, author of The Web's Awake (a kind of Web version of the Gaia hypothesis), pointed out that all the links in the world are a finite set; all the eyeballs in the world looking at them are a finite set...but all the contexts surrounding them...well, it's probably finite but it's not calculable (despite Pierre Levy's rather fanciful construct that seemed to suggest it might be possible to assign a URI to every human thought). At that level, Tetlow believes some of the neat mathematical tools, like Jennifer Chayes' graph theory, will break down.

"We're the equivalent of precision engineers," he said, when what's needed are the equivalent of town planners and urban developers. "And we can't build these things out of watches."

We may not be able to build them at all, at least not immediately. Helen Margetts outlined the constraints on the development of egovernment in times of austerity. "Web science needs to map, understand, and develop government just as for other social phenomena, and export back to mainstream," she said.

Other speakers highlighted gaps between popular mythology and reality. MIT's David Carter noted that, "The Web is often associated with the national and international but not the local - but the Web is really good at fostering local initiatives - that's something for Web science to ponder." Noshir Contractor, similarly, called out The Economist over the "death of distance": "More and more research shows we use the Web to have connections with proximate people."

Other topics will be far more familiar to net.wars readers: Jonathan Zittrain explored the ways the Web can be broken by copyright law, by increasing corporate control (there was a lovely moment when he morphed the iPhone's screen into the old CompuServe main menu), and by the loss of uniformity, so that the content a URL points to changes by geographic location. These and others are emerging points of failure.

We'll leave it to an unidentified audience question to sum up the state of Web science: "Nobody knows what it is. But we are doing it."

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

September 24, 2010

Lost in a Haystack

In the late 1990s you could always tell when a newspaper had just gotten online because it would run a story about the Good Times virus.

Pause for historical detail: the Good Times virus (and its many variants) was an email hoax. An email message with the subject heading "Good Times" or, later, "Join the Crew", or "Penpal Greetings", warned recipients that opening email messages with that header would damage their computers or delete the contents of their hard drives. Some versions cited Microsoft, the FCC, or some other authority. The messages also advised recipients to forward the message to all their friends. The mass forwarding and subsequent complaints were the payload.

The point, in any case, is that the Good Times virus was the first example of mass social engineering that spread by exploiting not particularly clever psychology and a specific kind of technical ignorance. The newspaper staffers of the day were very much ordinary new users in this regard, and they would run the story thinking they were serving their readers. To their own embarrassment, of course. You'd usually see a retraction a week or two later.

Austin Heap, the progenitor of Haystack, software he claimed was devised to protect the online civil liberties of Iranian dissidents, seems more likely to have failed to understand what he was doing than to have been conducting an elaborate hoax. Either way, Haystack represents a significant leap upward in successfully taking mainstream, highly respected publications for a technical ride. Evgeny Morozov's detailed media critique underestimates the impact of the recession and staff cuts on an already endangered industry. We will likely see many more mess-equals-technology-plus-journalism stories because so few technology specialists remain in the post-recession mainstream media.

I first heard Danny O'Brien's doubts about Haystack in June, and his chief concern was simple and easily understood: no one was able to get a copy of the software to test it for flaws. For anyone who knows anything about cryptography or security, that ought to have been damning right out of the gate. The lack of such detail is why experienced technology journalists, including Bruce Schneier, generally avoided commenting on it. There is a simple principle at work here: the *only* reason to trust technology that claims to protect its users' privacy and/or security is that it has been thoroughly peer-reviewed - banged on relentlessly by the brightest and best and they have failed to find holes.

As a counter-example, let's take Phil Zimmermann's PGP, email encryption software that really has protected the lives and identities of far-flung dissidents. In 1991, when PGP first escaped onto the Net, interest in cryptography was still limited to a relatively small, though very passionate, group of people. The very first thing Zimmermann wrote in the documentation was this: why should you trust this product? Just in case readers didn't understand the importance of that question, Zimmermann elaborated, explaining how fiendishly difficult it is to write encryption software that can withstand prolonged and deliberate attacks. He was very careful not to claim that his software offered perfect security, saying only that he had chosen the best algorithms he could from the open literature. He also distributed the source code freely for review by all and sundry (who have to this day failed to find substantive weaknesses). He concluded: "Anyone who thinks they have devised an unbreakable encryption scheme either is an incredibly rare genius or is naive and inexperienced." Even the software's name played down its capabilities: Pretty Good Privacy.

When I wrote about it in 1993, PGP was already changing the world by up-ending international cryptography regulations, blocking mooted US legislation that would have banned the domestic use of strong cryptography, and defying patent claims. But no one, not even the most passionate cypherpunks, claimed the two-year-old software was the perfect, the only, or even the best answer to the problem of protecting privacy in the digital world. Instead, PGP was part of a wider argument taking shape in many countries over the risks and rewards of allowing civilians to have secure communications.

Now to the claims made for Haystack in its FAQ:

However, even if our methods were compromised, our users' communications would be secure. We use state-of-the-art elliptic curve cryptography to ensure that these communications cannot be read. This cryptography is strong enough that the NSA trusts it to secure top-secret data, and we consider our users' privacy to be just as important. Cryptographers refer to this property as perfect forward secrecy.

Without proper and open testing of the entire system - peer review - they could not possibly know this. The strongest cryptographic algorithm is only as good as its implementation. And even then, as Clive Robertson writes in Financial Cryptography, technology is unlikely to be a complete solution.
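The point that a strong algorithm is only as good as its implementation is easy to demonstrate with a toy sketch (purely illustrative; this has nothing to do with Haystack's actual code). A one-time pad is, on paper, unbreakable - yet an implementation that reuses the keystream hands an eavesdropper the XOR of the two plaintexts without the key ever being exposed:

```python
# Toy illustration: a textbook-strong primitive (the one-time pad)
# becomes worthless when the implementation reuses the keystream.
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two byte strings, truncated to the shorter length."""
    return bytes(x ^ y for x, y in zip(a, b))

key = os.urandom(32)  # a perfectly random key: the "strong algorithm" part

# The implementation bug: the same keystream encrypts two messages.
m1 = b"attack at dawn, gate two"
m2 = b"retreat at dusk, gate one"
c1 = xor_bytes(m1, key)
c2 = xor_bytes(m2, key)

# An eavesdropper who never sees the key can XOR the two ciphertexts:
# the key cancels out, leaving the XOR of the plaintexts.
leak = xor_bytes(c1, c2)
assert leak == xor_bytes(m1, m2)
```

This kind of flaw lives entirely in the implementation, which is exactly why unreviewed code cannot be trusted no matter how respectable the algorithms it names.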

What a difference a sexy news hook makes. In 1993, the Clinton Administration's response to PGP was an FBI investigation that dogged Zimmermann for two years; in 2010, Hillary Clinton's State Department fast-tracked Haystack through the licensing requirements. Why such a happy embrace of Haystack rather than existing privacy technologies such as Freenet, Tor, or anonymous remailers and proxies remains a question for the reader.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

September 10, 2010

Google, I want a divorce


Jamie: You're dating your mailman?
Lisa: Why not? He comes to see me every day. He's always bringing me things.
Jamie: Mail. He brings you mail.
Lisa: Don't judge him!

- from Mad About You, Season 3, Episode 1, "Escape From New York".

Two years ago, when Google turned ten years old I was called into a BBC studio to talk about the company. Why, I was asked, did people hate Microsoft so much? Would people ever hate Google, too? I said, I think, that because we're only aware of Microsoft when its software fails, our primary impression of the company is frustration: why does this software hate me?

Whereas, I went on to say, to most people Google is like the mailman: it's a nice Web site that keeps bringing you things you really want. Yes, Street View (privacy), Google Books (copyright), and other controversies, but search results! Right out of the oven!

This week I can actually say it: I hate Google. There was the annoying animated Buckyball. There was the enraging exploding animation. And now there's Google Instant - which I can turn off, to be sure, but I can't turn off Google's suggestions. Pause to scream.

I know life is different for normal people, and that people who can't touch type maybe actually like Google's behaving like a long-time spouse who finishes all their sentences, especially if they cannot spell correctly. But neither Instant nor suggestions is a help when your typical search is a weird mix of constraints intended to prod Google into tossing out hits on obscure topics. And you know what else isn't a help? Having stuff change before your eyes and disrupt the brain-fingers continuum. Changing displays, animations, word suggestions all distract you from what you're typing and make it hard to concentrate.

A different problem is the one posed by personalized results: journalists need to find the stuff they - and lots of other people - don't know about. Predictive and personalized results typically show you the stuff you already know about, which is fine if you're trying to find that guy who fixed your garage door that time but terrible if what you're trying to do is put together new information in new ways (like focus groups, as Don Draper said in the recent Mad Men episode "The Rejected").

There are a lot of things Google could do that would save me - and millions of other people - more time than Instant. The company could expunge more of the link farms and useless aggregator shopping sites from its results. Intelligence could be better deployed for disambiguation - this Wendy Grossman or that one? I'd benefit from having the fade-in go away; it always costs me a few seconds.

There are some other small nuisances that also waste my time. On the News and some other pages, for example, you can't right-click on a URL and copy/paste it into a story because a few years ago doing that started returning an enormously long Google-adulterated URL. Simply highlighting and copying the URL into Word puts it in weird fonts you have to change. So the least slow way is to go to the page - which is very nice for the page but you're on deadline. And why can't Google read the page's date of last alteration (at least on static pages) and include that in the search listing? The biggest time-waster for me is having to plough through acres of old stuff because there's no way to differentiate it from the recent material. I also don't like the way the new Images search pages load. You would be this fussy, too, if you spent an hour or two a day on the site.

Lauren Weinstein has turned up some other, more serious, problems with Google Instant and the way it "thinks". Of course, it's still in beta; we all know this. Even though Yahoo! says hey, we had that back in 2005. (And does anyone else think the mention of "intellectual property" in that blog post sounds ominous?) Search Engine Watch has more detail (and a step-by-step critique); it's SEW's commentators' opinion that Yahoo! did not go ahead with its live offering because it had insufficient appetite for product risk - and insufficient infrastructure to support it.

So, for me personally the upshot is that I'm finally, after 11 years, in the market for a replacement search engine. Yahoo! is too cluttered. Ask.com's "question of the day" annoys me because, again, it's distracting. Altavista I abandoned gratefully (clutter!) in 1998 even though it invented the Babelfish. Dogpile has a stupid name, is hideous, and has a horoscope button on the front page. Webcrawler doesn't quick-glance differentiate its sponsored links. Cuil has too few results on a page and no option to increase them. Of course, mostly I want not to have to change.

Perhaps the most likely option is the one I saw recommended on Slashdot: Google near-clone DuckDuckGo, which seems to have a good attitude toward privacy and a lot of nifty shortcuts. I don't really love the shading in and out as you mouse over results, but I love that you can click anywhere in the shading to go to the page. I don't like having to wait for most of the listings to load; I like to skim all 100 listings on a page quickly before choosing anything. But I have to use something. I search to live.
So many options, yet none are really right. It may just be that as the main search engines increasingly compete for the mass market they will become less and less fit for real research. There's an important niche here, folks.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

September 3, 2010

Beyond the zipline

When Aaron Sorkin (The West Wing, Sports Night) was signed to write the screenplay for a movie about Facebook, I think the general reaction was one of more or less bafflement. Sorkin has a great track record, sure, but how do you make a movie about a Web site, even if it's a social network? What are you going to show? People typing to each other?

Now that the movie is closer to coming out (October 1 in the US) and we're beginning to see sneak peek trailers, we can tell a lot more from the draft screenplay that's been floating around the Net. The copy I found is dated March 2009, and you can immediately tell it's the real thing: quality dialogue and construction, and the feel of real screenwriting expertise. It turns out the way you write a screenplay about Facebook is to read the books - primarily the novelistic, not-so-admired Accidental Billionaires by Ben Mezrich - along with other published material, and look for the most dramatic bit of the story: the lawsuits eventually launched by the characters you're portraying. Through those, as a framing device, you can tell the story of the little social network that exploded. Or rather, Sorkin can. The script is a compelling read. (It's not clear to me that it can be improved by actually filming it.)

Judging from other commentaries, everyone seems to agree it's genuine, though there's no telling where in the production process that script was, how many later drafts there were, or how much it changed in filming and post-production. There's also no telling who leaked it or why: if it was intentional it was a brilliant marketing move, since you could hardly ask for more word-of-mouth buzz.

If anyone wanted to design a moral lesson for the guy who keeps saying privacy is dead, it might be this: having your deepest secrets turned out to portray you as a jerk who steals other people's ideas and codes them into the basis for a billion-dollar company, all because you want to stand out at Harvard and, most important, win the admiration of the girl who dumped you. Think the lonely pathos of the socially ostracized, often overlooked Jenny Humphrey in Gossip Girl crossed with the arrogant, obsessive intelligence of Sheldon Cooper in The Big Bang Theory. (Two characters I actually like, but they shouldn't breed.)

Neither the book nor the script is that: they're about as factual as 1978's The Buddy Holly Story or any other Hollywood biopic. Mezrich, who likes to write books about young guys who get rich fast (you can see why; he's gotten several bestsellers out of this approach), had no help from Facebook founder and CEO Mark Zuckerberg. What dialogue there is has been "re-created", and sources other than disaffected co-founder Eduardo Saverin are anonymous. Lacking sourcing (although of course the court testimony is public information), it's unclear how fictional the dramatization is. I'd have no problem with that if the characters weren't real people identified by their real names.

Places, too. Probably the real-life person/place/thing that comes off worst is Harvard, which in the book especially is practically a caricature of the way popular culture likes to depict it: filled with the rich, the dysfunctional, and the terminally arrogant who vie to join secretive, elite clubs that force them to take part in unsavoury hazing rituals. So much so that it was almost a surprise to read in Wikipedia that Mezrich actually went to Harvard.

Journalists and privacy advocates have written extensively about the consequences for today's teens of having their adolescent stupidities recorded permanently on Facebook or elsewhere, but Zuckerberg is already living with having his frat-boy early days of 2004 documented and endlessly repeated. Of course one way to avoid having stupid teenaged shenanigans reported is not to engage in them, but let's face it: how many of us don't have something in our pasts we'd just as soon keep out of the public eye? And if you're that rich that young, you have more opportunities than most people to be a jerk.

But if the only stories people can come up with about Zuckerberg date from before he turned 21, two thoughts occur. First, that Zuckerberg has as much right as anybody to grow up into a mature human being whose early bad judgement should be forgiven. To cite two examples: the tennis player Andre Agassi was an obnoxious little snert at 18 and a statesman of the game at 30; at 30 Bill Gates was criticized for not doing enough for charity but now at 54 is one of the world's most generous philanthropists. It is, therefore, somewhat hypocritical to demand that Zuckerberg protect today's teens from their own online idiocy while constantly republishing his follies.

Second, that outsized, hyperspeed business success might actually have forced him to grow up rather quickly. Let's face it, it's hard to make an interesting movie out of the hard work of coding and building a company.

And a third: by joining the 500 million and counting who are using Facebook we are collectively giving Zuckerberg enough money not to care either way.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

August 13, 2010

Pirate flags

Wednesday's Future Human - The Piracy Panacea event missed out on a few topics, among them network neutrality, an issue I think underlies many net.wars debates: content control, privacy, security. The Google-Verizon proposals sparked much online discussion this week. I can only reiterate my belief that net neutrality should be seen as an anti-trust issue. A basic principle of anti-trust law (Standard Oil, the movie studios) is that content owners should not be allowed to own the means of distribution, and I think this readily applies to cable companies that own TV stations and telephone companies that are carriers for other people's voice services.

But the Future Human event was extraordinary enough without that. Imagine: more than 150 people squished into a hot, noisy pub, all passionately interested in...copyright! It's only a few years ago that entire intellectual property law school classes would fit inside a broom cupboard. The event's key question: does today's "piracy" point the way to future innovation?

The basis of that notion seemed to be that historically pirates have forced large imperial powers to change and weren't just criminals. The event's light-speed introduction whizzed through functionally democratic pirate communities and pirate radio, and a potted history of authorship from Shakespeare and Newton to Lady Gaga. There followed mock trials of a series of escalating copyright infringements in which it became clear that the audience was polarized and more or less evenly divided.

There followed our panel: me, theoretically representing the Open Rights Group; Graham Linehan, creator of Father Ted and The IT Crowd; Jamie King, writer and director of Steal This Film; and economist Thierry Rayna. Challenged, of course, by arguers from the audience, one of whom declined to give her affiliation on the grounds that she'd get lynched (I doubt this). Partway through the panel someone complained on Twitter that we weren't answering the question the event had promised to tackle: how can the creative industries build on file-sharing and social networks to create the business models of the future?

It seems worth trying to answer that now.

First, though, I think it's important to point out that I don't think there's much that's innovative about downloading a TV show or MP3. The people engaged in downloading unauthorized copies of mainstream video/audio, I think, are not doing anything particularly brave. The people on the front lines are the ones running search engines and services. These people are indeed innovators, and some of them are doing it at substantial personal risk. And they cannot, in general, get legal licenses from rights holders, a situation that could be easily changed by the rights holders. Napster, which kicked the copyright wars into high gear and made digital downloads a mainstream distribution method, is now ten years in the past. Yet rights holders are still trying to implement artificial scarcity (to replace real scarcity) and artificial geography (to replace real geography). The death of distance, as Economist writer Frances Cairncross called it in 1997, changes everything, and trying to pretend it doesn't is absurd. The download market has been created by everyone *but* the record companies, who should have benefited most.

Social networks - including the much-demonized P2P networks - provide the greatest mechanism for word of mouth in the history of human culture. And, as we all know, word of mouth is the most successful marketing available, at least for entertainment.

It also seems obvious that P2P and social networks are a way for companies to gauge the audience better before investing huge sums. It was obvious from day one, for example, that despite early low official ratings and mixed reviews, Gossip Girl was a hit. Why? Because tens of thousands of people were downloading it the instant it came online after broadcast. Shouldn't production company accountants be all over this? Use these things as a testbed instead of leaving the fall pilots to the guesses of a handful of the geniuses who commissioned Cavemen and the US version of Coupling and cancelled Better Off Ted. They could have a much clearer picture of what kind of audience a show might find and how quickly.

Trying to kill P2P and other technologies just makes them respawn like the Hydra. The death of Napster (central server) begat Gnutella and eDonkey (central indexes); lawsuits against their software developers begat the even more decentralized BitTorrent. When millions and tens of millions of people are flocking to a new technology, rights holders should be there, too.

The real threat is always going to be artists taking their business into their own hands. For every Lady Gaga there are thousands of artists who, given some basic help, can turn their work into the kind of living wage that allows them to pursue their art full-time and professionally. I would think there is a real business in providing these artists with services - folksingers, who've never had this kind of help, have produced their own recordings for decades, and having done it myself I can tell you it's not easy. This was the impulse behind the foundation of CDBaby, and now of Jamie King's VoDo. In the long run, things like this are the real game-changers.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

June 4, 2010

Return to the hacker crackdown

Probably many people had forgotten about the Gary McKinnon case until the new government reversed their decision to intervene in his extradition. Legal analysis is beyond our expertise, but we can outline some of the historical factors at work.

By 2001, when McKinnon did his breaking and entering into US military computers, hacking had been illegal in the UK for just over ten years - the Computer Misuse Act was passed in 1990 after the overturned conviction of Robert Schifreen and Steve Gold for accessing Prince Philip's Prestel mailbox.

Early 1990s hacking (earlier, the word meant technological cleverness) was far more benign than today's flat-out crimes of identity fraud, money laundering, and raiding bank accounts. The hackers of the era - most famously Kevin Mitnick - were more the cyberspace equivalent of teenaged joyriders: they wandered around the Net rattling doorknobs and playing tricks to get passwords, and occasionally copied some bit of trophy software for bragging rights. Mitnick, despite spending four and a half years in jail awaiting trial, was not known to profit from his forays.

McKinnon's claim that he was looking for evidence that the US government was covering up information about alternative energy and alien visitations seems to me wholly credible. There was and is a definite streak of conspiracy theorists - particularly about UFOs - among the hacker community.

People seemed more alarmed by those early-stage hackers than they are by today's cybercriminals: the fear of new technology was projected onto those who seemed to be its masters. The series of 1990 "Operation Sundevil" raids in the US, documented in Bruce Sterling's book The Hacker Crackdown, inspired the creation of the Electronic Frontier Foundation. Among other egregious confusions, law enforcement seized game manuals from Steve Jackson Games in Austin, Texas, calling them hacking instruction books.

The raids came alongside a controversial push to make hacking illegal around the world. It didn't help when police burst in at the crack of dawn to arrest bright teenagers and hold them and their families (including younger children) at gunpoint while their computers and notebooks were seized and their homes ransacked for evidence.

"I think that in the years to come this will be recognized as the time of a witch hunt approximately equivalent to McCarthyism - that some of our best and brightest were made to suffer this kind of persecution for the fact that they dared to be creative in a way that society didn't understand," 21-year-old convicted hacker Mark Abene ("Phiber Optik") told filmmaker Annaliza Savage for her 1994 documentary, Unauthorized Access (YouTube).

Phiber Optik was an early 1990s cause célèbre. A member of the hacker groups Legion of Doom and Masters of Deception, he had an exceptionally high media profile. In January 1990, he and other MoD members were raided on suspicion of having caused the AT&T crash of January 15, 1990, when more than half of the telephone network ceased functioning for nine hours. Abene and others were eventually charged in 1991, with law enforcement demanding $2.5 million in fines and 59 years in jail. Plea agreements reduced that to a year in prison and 600 hours of community service. The company eventually admitted the crash was due to its own flawed software upgrade.

There are many parallels between these early days of hacking and today's copyright wars. Entrenched large businesses (then AT&T; now RIAA, MPAA, BPI, et al) perceive mostly young, smart Net users as dangerous enemies and pursue them with the full force of the law, claiming exaggerated sums in damages. Isolated, often young, targets were threatened with jail and/or huge damages to make examples of them and deter others. The upshot in the 1990s was an entrenched distrust of and contempt for law enforcement on the part of the hacker community, exacerbated by the fact that back then so few law enforcement officers understood anything about the technology they were dealing with. The equivalent now may be a permanent contempt for copyright law.

In his 1990 essay Crime and Puzzlement examining the issues raised by hacking, EFF co-founder John Perry Barlow wrote of Phiber Optik, whom he met on the WELL: "His cracking impulses seemed purely exploratory, and I've begun to wonder if we wouldn't also regard spelunkers as desperate criminals if AT&T owned all the caves."

When McKinnon was first arrested in March 2002 and then indicted in a Virginia court in October 2002 for cracking into various US military computers - with damage estimated at $800,000 - all this history was still fresh. Meanwhile, the sympathy and good will toward the US engendered by the 9/11 attacks had been dissipated by the Bush administration's reaction: the PATRIOT Act (passed October 2001) expanded US government powers to detain and deport foreign citizens, and the first prisoners arrived at Guantanamo in January 2002. Since then, the US has begun fingerprinting all foreign visitors and has seen many erosions to civil liberties. The 2005 changes to British law that made hacking into an extraditable offense were controversial for precisely these reasons.

As McKinnon's case has dragged on through extradition appeals this emotional background has not changed. McKinnon's diagnosis with Asperger's Syndrome in 2008 made him into a more fragile and sympathetic figure. Meanwhile, the really dangerous cybercriminals continue committing fraud, theft, and real damage, apparently safe from prosecution.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

May 21, 2010

Trial by innocence

I don't think I ever chose a side on the subject of whether Floyd Landis was guilty or innocent. His case raised some legitimate issues about the anti-doping industry (as it's becoming). Given the considerable evidence that doping is endemic in cycling, it's hard to believe any winner in that sport is drug-free whether he's ever failed an anti-doping control or not. On the other hand, I really do believe in the presumption of innocence, and one must always allow for the possibility of technical, logistical, and personal errors. It would have been churlish to proclaim Landis's guilt before the tribunal hearing his case did. The blog Steroid Nation was always skeptical, but not condemning, of Landis's cries of innocence.

But I know how I'd feel if I'd believed in his innocence and contributed to the Floyd Fairness Fund that was set up to accept donations from fans to pay his legal fees: hella angry and betrayed. Of all the athletes who have protested their innocence down the years of anti-doping, Landis was the most vocal, the most insistent, and the most public. Landis even published a book, 2007's Positively False: The Real Story of How I Won the Tour de France, that loudly proclaimed his innocence ("My case should never have happened"), laying out much of the arguments and evidence (some of which he is accused of having obtained by hacking the lab's computer system) he made on the Floyd Fairness Web site. It seems all but certain he'll "write" another, this one telling the blockbuster story of how he fooled family, fans, drug testers, and media for all those years.

I'll make sure to buy it used, so I don't help him profit from his crime.

By "crime" I don't mean his doping - although under the law it is in fact a crime, and it's an example of our cultural double-think on this issue that athletes are not prosecuted for doping the way crack, heroin, or even marijuana users are in most countries. I mean effectively defrauding his fans out of their hard-earned money to help him defend against charges that he now admits were true. If that's not a con trick, what is?

I also know how I'd feel if I were a non-doping athlete wrongfully accused - and however few of these there may be on the planet, the law of truly large numbers says there must be some somewhere. I would be absolutely enraged. High-profile cases like this - see also Marion Jones, Mark McGwire - make it impossible for any athlete to be believed. And, as Agatha Christie wrote long ago in Ordeal by Innocence, "It's not the guilty who matter, it's the innocent." In her example, the innocent servant suffered the most when an expensive bit of jewelry was stolen from her employer's home. In sports, even if there are no false positives (which seems impossible), athletes suffer when they must regard all foods, supplements, and medical treatment with fear.

You may remember that late last year the tennis player Andre Agassi published Open, in which among other revelations (he wore a wig in the early 1990s, he hated tennis) he revealed that the Association of Tennis Professionals had accepted his utterly meretricious explanation of how he came to test positive for crystal meth and let him off any punishment. This humane behavior, although utterly against the rules and deplored by Agassi's competitors, most notably Marat Safin, arguably saved Agassi's career. Frightened out of his wits by his close brush with suspension and endorsement death, Agassi cleaned up his act, got to work, and over the next year or two raised his ranking from the depths of 140 to 1. Had the ATP followed the rules and suspended him, Agassi might now be in the record books as a huge but flaky talent that flamed out after three Slam wins and a gold medal. Instead, he's arguably the most versatile player in tennis history and a member of a tiny, elite handful of players who won everything of significance in the game on every surface at least once.

Crystal meth, of course, was not a performance-enhancing drug; it was a performance-destroying drug. Agassi's ranking plummeted under its influence, and it's arguable that they had no business testing for it. But Safin's key point was that having successfully lied to the ATP, Agassi should now reward the ATP's confidence by keeping his mouth shut.

I'm not entirely sure I agree with that in Agassi's case; at least he produced a rare example of an athlete taking drugs and losing because of them. Also, the ATP is no longer in charge of the tennis tour's doping controls and the people who dealt with Agassi's positive test in 1997 have likely moved on.

But most of these cases, including Landis's, just keep repeating the same old lesson, and it's not the one the anti-doping authorities would like: winners dope. Then they lie about it for fame and glory. If and when they're caught, they lie some more. And then, when people are beginning to forget about them, they 'fess up and justify themselves by accusing their rivals and beginning the cycle anew. Something is badly broken here. Bring on undetectable gene doping.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

April 2, 2010

Not bogus!


"If I lose £1 million it's worth it for libel law reform," the science writer Simon Singh was widely reported as saying this week. That was even before yesterday's ruling in the libel case brought against him by the British Chiropractic Association.

Going through litigation, I was told once, is like having cancer. It is a grim, grueling, rollercoaster process that takes over your life and may leave you permanently damaged. In the first gleeful WE-WON! moments following yesterday's ruling it's easy to forget that. It's also easy to forget that this is only one stage in a complex series.

Yesterday's judgment was the ruling in Singh's appeal (heard on February 22) against the ruling of Justice David Eady last May, which itself was only a preliminary ruling on the meaning of the passage in dispute, with the dispute itself to be resolved in a later trial. In October Singh won leave to appeal Eady's ruling; February's hearing and yesterday's judgment constituted that appeal and its results. It is now two years since the original article appeared, and the real case is yet to be tried. Are we at the beginning of Jarndyce and Jarndyce or SCO versus Everyone?

The time and costs of all this are why we need libel law reform. English libel cases, as Singh frequently reminds us, cost 144 times as much as similar cases in the rest of the EU.

But the most likely scenario is that Singh will lose more than that million pounds. It's not just that he will have to pay the costs of both sides if he loses whatever the final round of this case eventually turns out to be (even if he wins the costs awarded will not cover all his expenses). We must also count what businesses call "opportunity costs".

A couple of weeks ago, Singh resigned from his Guardian column because the libel case is consuming all his time. And, he says, he should have started writing his next book a year ago but can't develop a proposal and make commitments to publishers because of the uncertainty. These withdrawals are not just his loss; we all lose by not getting to read what he'd write next. At a time when politicians can be confused enough to worry that an island can tip over and capsize, we need our best popular science educators to be working. Today's adults can wait, perhaps; but I did some of my best science reading as a teenager: The Microbe Hunters; The Double Helix (despite its treatment of Rosalind Franklin); Isaac Asimov's The Human Body: Its Structure and Operation; and the pre-House true medical detection stories of Berton Roueché. If Singh v BCA takes five years that's an entire generation of teenagers.

Still, yesterday's ruling, in which three of the most powerful judicial figures in the land agreed - eloquently! - with what we all thought from the beginning, deserves to be celebrated, not least for its respect for scientific evidence.

Some favorite quotes from the judgment, which makes fine reading:

Accordingly this litigation has almost certainly had a chilling effect on public debate which might otherwise have assisted potential patients to make informed choices about the possible use of chiropractic.

A similar situation, of course, applies to two other recent cases that pitted libel law against the public interest in scientific criticism. First, Swedish academic Francisco Lacerda, who criticized the voice risk analysis principles embedded in lie detector systems (including one bought by the Department of Work and Pensions at a cost of £2.4 million). Second, British cardiologist Peter Wilmshurst is defending charges of libel and slander over comments he made regarding a clinical trial in which he served as a principal investigator. In all three cases, the public interest is suffering. Ensuring that there is a public interest defense is accordingly a key element of the libel law reform campaign's platform.

The opinion may be mistaken, but to allow the party which has been denounced on the basis of it to compel its author to prove in court what he has asserted by way of argument is to invite the court to become an Orwellian ministry of truth.

This was in fact the gist of Eady's ruling: he categorized Singh's words as fact rather than comment and would have required Singh to defend a meaning his article went on to say explicitly was not what he was saying. We must leave it for someone more English than I am to say whether that is a judicial rebuke.

We would respectfully adopt what Judge Easterbrook, now Chief Judge of the US Seventh Circuit Court of Appeals, said in a libel action over a scientific controversy, Underwager v Salter: "[Plaintiffs] cannot, by simply filing suit and crying 'character assassination!', silence those who hold divergent views, no matter how adverse those views may be to plaintiffs' interests. Scientific controversies must be settled by the methods of science rather than by the methods of litigation."

What they said.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

February 26, 2010

The community delusion

The court clerk - if that's the right term - seemed slightly baffled by the number of people who showed up for Tuesday's hearing in Simon Singh v. British Chiropractic Association. There was much rearrangement, as the principals asked permission to move forward a row to make an extra row of public seating and then someone magically produced eight or ten folding chairs to line up along the side. Standing was not allowed. (I'm not sure why, but I guess something to do with keeping order and control.)

It was impossible to listen to the arguments without feeling a part of history. Someday - ten, 50, 150 years from now - a different group of litigants will be sitting in that same court room or one very like it in the same building and will cite "our" case, just as counsel cited precedents such as Reynolds and Branson v Bower. If Singh's books don't survive, his legal case will, as may the effects of the campaign to reform libel law (sign the petition!) it has inspired and the Culture, Media, and Sport report (Scribd) that was published on Wednesday. And the sheer stature of the three judges listening to the appeal - Lord Chief Justice Lord Judge (to Americans: I am not making this up!), Master of the Rolls Lord Neuberger, and Lord Justice Sedley - ensures it will be taken seriously.

There are plenty of write-ups of what happened in court and better-informed analyses than I can muster to explain what it means. The gist, however: it's too soon to tell which pieces of law will be the crucial bits on which the judges make their decision. They certainly seemed to me to be sympathetic to the arguments Singh's counsel, Adrienne Page QC, made and much less so to the arguments made by the BCA's counsel, Heather Rogers QC. But the case will not be decided on the basis of sympathy; it will be decided on the basis of legal analysis. "You can't read judges," David Allen Green (aka jackofkent) said to me over lunch. So we wait.

But the interesting thing about the case is that this may be the first important British legal case to be socially networked: here is a libel case featuring no pop stars or movie idols, and yet they had to turn some 20 or 30 people away from the courtroom. Do judges read Twitter?

Beginning with Howard Rheingold's 1993 book The Virtual Community, it was clear that the Net's defining characteristic as a medium is its enablement of many-to-many communication. Television, publishing, and radio are all one-to-many (if you can consider a broadcaster/publisher a single gatekeeper voice). Telephones and letters are one-to-one, by and large. By 1997, business minds, most notably John Hagel III and Arthur Armstrong in net.gain, had begun saying that the networked future of businesses would require them to build communities around themselves. I doubt that Singh thinks of his libel case in that light, but today's social networks (which are a reworking of earlier systems such as Usenet and online conferencing systems) are enabling him to do just that. The leverage he's gained from that support is what is really behind both the challenge to English libel law and the increasing demand for chiropractors generally to provide better evidence or shut up.

Given the value everyone else, from businesses to cause organizations to individual writers and artists, places on building an energetic, dedicated, and active fan base, it's surprising to see Richard Dawkins, whose supporters have apparently spent thousands of unpaid hours curating his forums for him, toss away what by all accounts was an extraordinarily successful community supporting his ideas and his work. The more so because apparently Dawkins has managed to attract that community without ever noticing what it meant to the participants. He also apparently has failed to notice that some people on the Net, some of the time, are just the teeniest bit rude and abusive to each other. He must lead a very sheltered life, and, of course, never have moderated his own forums.

What anyone who builds, attracts, or aspires to such a community has to understand from the outset is that if you are successful your users will believe they own it. In some cases, they will be right. It sounds - without having spent a lot of time poring over Dawkins' forums myself - as though in this case in fact the users, or at least the moderators, had every right to feel they owned the place because they did all the (unpaid) work. This situation is as old as the Net - in the days of per-minute connection charges CompuServe's most successful (and economically rewarding to their owners) forums were built on the backs of volunteers who traded their time for free access. And it's always tough when users rediscover the fact that in each individual virtual community, unlike real-world ones, there is always a god who can pull the plug without notice.

Fortunately for the causes of libel law reform and requiring better evidence, Singh's support base is not a single community; instead, it's a group of communities who share the same goals. And, thankfully, those goals are bigger than all of us.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. I would love to hear (net.wars@skeptic.demon.co.uk) from someone who could help me figure out why this blog vapes all non-spam comments without posting them.

February 12, 2010

Light year

This year is going to be the first British general election in which blogging is going to be a factor, someone said on Monday night at the event organized by the Westminster Skeptics on the subject of political blogging: does it make any difference? I had to stop and think: really? Things like the Daily Kos have been part of the American political scene for so long now - Kos was founded in 2002 - that they've been through two national elections already.

But there it was: "2005 was my big break," said Paul Staines, who blogs as Guido Fawkes. "I was the only one covering it. 2010 is going to be much tougher." To stand out, he went on to say, you're going to need a good story. That's what they used to tell journalists.

Due to the wonders of the Net, you can experience the debate for yourself. The other participants were Sunny Hundal (Liberal Conspiracy), Mick Fealty (Slugger O'Toole), Jonathan Isaby (Conservative Home), and the Observer journalist Nick Cohen, there to act as the token nay-sayer. (I won't use skeptic, because although the popular press like to see a "skeptic" as someone who's just there to throw brickbats, I use the term rather differently: skepticism is inquiry and skeptics ask questions and examine evidence.)

All four of the political bloggers have a precise idea of what they're trying to do and who they're writing for. Jonathan Isaby, who claims he's the first British journalist to leave a full-time newspaper job (at the Telegraph) for new media, said he's read almost universally among Conservative candidates. Paul Staines aims Guido Fawkes at "the Westminster bubble". Mick Fealty uses Slugger O'Toole to address a "differentiated audience" that is too small for TV, radio, and newspapers. Finally, Sunny Hundal uses Liberal Conspiracy to try to "get the left wing to become a more coherent force".

Despite the bloggers' various successes, Cohen's basic platform was a defense of newspapers. Blogging, he said, is not replacing the essential core of journalism: investigation and reporting. He's right up to a point. But some bloggers do exactly that. Westminster Skeptics convenor David Allen Green, then standing approximately eight inches away, is one example. But it's probably true that for every blogger with sufficient curiosity and commitment to pick up a phone or bang on someone's door there are a couple of hundred more who write blog postings by draping a couple of hundred words of opinion around a link to a story that appeared in the mainstream media.

Of course, as Cohen didn't say, plenty of journalists, through lack of funding, lack of time, or lack of training, find themselves writing news stories by draping a couple of hundred words of rewritten press release around the PR-provided quotes - and soul-destroying work it is, too. My answer to Cohen, therefore, is to say that commercial publishers have contributed to their own problems, and that one reason blogs have become such an entrenched medium is that they cover things that no newspaper will allow you to write about in any detail. And it's hard to argue with Cohen's claim that almost any blogger finding a really big story will do the sensible thing and sell it to a newspaper.

If you can. Arguably the biggest political story of 2009 was MPs' expenses. That material was released because of the relentless efforts of Heather Brooke, who took up the 2005 entry into force of the UK's Freedom of Information Act as a golden opportunity. It took her nearly five years to force the disclosure of MPs' expenses - and when she finally succeeded the Telegraph wrote its own stories after poring over the details that were disclosed.

The fact is that political blogging has been with us for far longer than one five-year general election cycle. It's just that most of it does not take the same form as the "inside politics" blogs of the US or the traditional Parliamentary sketches in the British newspapers. The push for libel reform began with Jack of Kent (David Allen Green); the push to get the public more engaged with their MPs began with MySociety's Fax Your MP. It was clear as long ago as 2006 that MPs were expert users of They Work For You: it's how they keep tabs on each other. MySociety's sites are not blogs - but they are the source material without which political blogging would be much harder work.

I don't find it encouraging to hear Isaby predict that in the upcoming election (expected in May) blogging "will keep candidates on their toes" because "gaffes will be more quickly reported". Isn't this the problem with US elections? That everyone gets hung up on calumnies such as that Al Gore claimed to have invented the Internet. Serious issues fall by the wayside, and good candidates can be severely damaged by biased reporting that happens to feed an eminently quotable sarcastic joke. Still: anything for a little light into the smoke-filled back rooms where British politics is still made. Even with smoking now banned, it's murky back there.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

February 5, 2010

Getting run down on the infobahn

It's not going out on much of a limb to predict that 2010 is, finally, the year of the ebook. A lot of electrons are going to be spilled trying to predict the winners on this frontier; the most likely, I think, are Apple (iPhone, iPad), Amazon (Kindle), Google (Books), and Ray Kurzweil (Blio). Note something about all those guys? Yes: none of them are publishers. Just like the music industry, publishers have left it to technology companies to invent their new medium for them.

Note something else about what those guys are not? Authors. Almost everything that's created in this world - books, newspapers, magazines, movies, games, advertising, music, even some industrially designed products - eventually goes back to one person sitting in a room with a blank sheet of paper trying to think up a compelling story.

Authors - and writers generally - used to have a hard but simple job: deliver a steady stream of publishable work, and remuneration will probably happen. Publishers sold books; authors just wrote them. One of my friends, a science fiction writer contractually bound to HarperCollins, used to refer to Rupert Murdoch as "the little man who publishes my books for me". That happy division of labor did not, of course, provide all, or even most writers with a full-time living. But the most important thing authors want is for their work to be noticed; publishers could make that happen.

Things have been changing for some time. It's fifteen years since authors of my acquaintance began talking about the need to hire your own publicist because unless you had a very large (six figures and up) advance most mainstream publishers would not consider your book worth spending money and effort to market much beyond sending out a press release. Even copy-editing is falling by the wayside, as a manuscript submitted electronically can now feed straight into a typesetting system without the human intervention that gave pause for second thoughts.

"Everyone's been seeing their royalty statements shrink," a friend observed gloomily last week. He made, 20 years ago, what then seemed an intelligent career decision: to focus on writing reference books because they had a consistent market among people who really needed them, and they would have a continuing market in regular updates. And that worked great until along came Wikipedia online dictionaries and translation engines and government agency Web sites and blogs and picture galleries, and now, he says, "People don't buy reference books any more." I am no exception: all the reference books on the shelves behind my desk are at least 15 years old. About 10 percent are books I'd buy today if I didn't already have them.

So this is also the year in which the more far-seeing authors get to figure out what their future business models are going to be. An author with a business plan? Who ever heard of such a thing? The nearest thing to that in my acquaintance is the science fiction writer Charles Stross; he is smarter about the economic and legal workings of publishing than anyone I've ever met or heard speak at a conference. And even he is asking for suggestions.

First of all, there's the Google Books settlement, which is so complicated that I imagine hardly any of the authors whose works the settlement is a settlement of can stand to read the whole thing. The legal scholar and MacArthur award winner Pamela Samuelson has written a fine explanation of the problems; authors had until January 28 to opt out or object. This isn't over yet: the US Justice Department still doesn't like the terms.

We can also expect more demarcation disputes like this week's spat between Amazon and Macmillan, discussed intelligently by Stross here, here, and here, with an analysis of the scary economics of the Kindle here. The short version: Macmillan wants Amazon to pay more for the Kindle versions of its books, and Amazon threw Macmillan's books out of its .com pram. Caught in the middle are a bunch of very pissed-off authors, who are exercising their rights in the only way they can: by removing links to Amazon and substituting links to the competition: Barnes and Noble and independent booksellers, including the wonderful Portland, Oregon stalwart, Powell's.

To be fair, removing the "buy new" button from all of the Macmillan listings on Amazon.com (Amazon.co.uk seems to be unaffected) doesn't mean you can't buy the books. In general, you simply click on a different link and buy the book from a marketplace seller rather than Amazon itself. Amazon doesn't care: according to its SEC filings, the company makes roughly the same profit whoever sells the book via its site.

It's times like these when you want to remember the Nobel Laureate author Doris Lessing's advice to all writers: "And it does no harm to repeat, as often as you can, 'Without me, the literary industry would not exist: the publishers, the agents, the sub-agents, the sub-sub agents, the accountants, the libel lawyers, the departments of literature, the professors, the theses, the books of criticism, the reviewers, the book pages - all this vast and proliferating edifice is because of this small, patronized, put-down, and underpaid person.'"

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of the earlier columns in this series.

January 22, 2010

Music night

Most corporate annual reports seek to paint a glowing picture of the business's doings for the previous year. By law they have to disclose anything really unfortunate - financial losses, management malfeasance, a change in the regulatory landscape. The International Federation of the Phonographic Industry was caught in a bind writing its Digital Music Report 2010 (PDF) (or see the press release). Paint too glowing a picture of the music business, and politicians might conclude no further legislation is needed to bolster the sector. Paint too gloomy a picture, and ministers might conclude theirs is a lost cause, and better to let dying business models die.

So IFPI's annual report veers between complaining about "competing in a rigged market" (by which they mean a market in which file-sharing exists) and stressing the popularity of music and the burgeoning success of legally sanctioned services. Yay, Spotify! Yay, Sky Songs! Yay, iTunes! You would have to be the most curmudgeonly of commentators to point out that none of these are services begun by music companies; they are services begun by others that music companies have been grudgingly persuaded to make deals with. (I say grudgingly; naturally, I was not present at contract negotiations. Perhaps the music companies were hopping up and down like Easter bunnies in their eagerness to have their product included. If they were, I'd argue that the existence of free file-sharing drove them to it. Without file-sharing there would very likely be no paid subscription services now; the music industry would still be selling everyone CDs and insisting that this was the consumer's choice.)

The basic numbers showed that song downloads increased by 10 percent - but total revenue including CDs fell by 12 percent in the first half of 2009. The top song download: Lady Gaga's "Poker Face".

All this is fair enough - an industry's gotta eat! - and it's just possible to read it without becoming unreasonable. And then you hit this gem:

Illegal file-sharing has also had a very significant, and sometimes disastrous, impact on investment in artists and local repertoire. With their revenues eroded by piracy, music companies have far less to plough back into local artist development. Much has been made of the idea that growing live music revenues can compensate for the fall-off in recorded music sales, but this is, in reality, a myth. Live performance earnings are generally more to the benefit of veteran, established acts, while it is the younger developing acts, without lucrative careers, who do not have the chance to develop their reputation through recorded music sales.

So: digital music is ramping up (mostly through the efforts of non-music industry companies and investors). Investment in local acts and new musicians is down. And overall sales are down. And we're blaming file-sharing? How about blaming at least the last year or so of declining revenues on the recession? How about blaming bean counters at record companies who see a higher profit margin in selling yet more copies of back catalogue tried-and-tested, pure-profit standards like Frank Sinatra and Elvis Presley than in taking risks on new music? At some point, won't everyone have all the copies of the Beatles albums they can possibly use? Er, excuse me, "consume". (The report has a disturbing tendency to talk about "consuming" music; I don't think people have the same relationship with music that they do with food. I'd also question IFPI's whine about live music revenues: all young artists start by playing live gigs, that's how they learn; *radio play* gets audiences in; live gigs *and radio play* sell albums, which help sell live gigs in a virtuous circle, but that's a topic for another day.)

It is a truth rarely acknowledged that all new artists - and all old artists producing new work - are competing with the accumulated back catalogue of the past decades and centuries.

IFPI of course also warns that TV, book publishing, and all other media are about to suffer the same fate as music. The not-so-subtle underlying message: this is why we must implement ferocious anti-file-sharing measures in the Digital Economy Bill, amendments to which, I'm sure coincidentally, were discussed in committee this week, with more to come next Tuesday, January 26.

But this isn't true, or not exactly. As a Dutch report on file-sharing (original in Dutch) pointed out last year, file-sharing, which it noted goes hand-in-hand with buying, does not have the same impact on all sectors. People listen to music over and over again; they watch TV shows fewer but still multiple times; if they don't reread books they do at least often refer back to them; they see most movies only once. If you want to say that file-sharing displaces sales, which is debatable, then clearly music is the least under threat. If you want to say that file-sharing displaces traditional radio listening, well, I'm with you there. But IFPI does not make that argument.

Still, some progress has been made. Look what IFPI says here, on page 4 in the executive summary right up front: "Recent innovations in the à-la-carte sector include...the rollout of DRM-free downloads internationally." Wha-hey! That's what we told them people wanted five years ago. Maybe five years from now they'll be writing how file-sharing helps promote artists who, otherwise, would never find an audience because no one would ever hear their work.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

January 15, 2010

The once and future late-night king

On the face of it, the unexpected renewal of the late-night TV wars is a pretty trivial matter. As The Tonight Show with Conan O'Brien itself points out, there is a lot of real news that's a lot more important - health care, Haiti, Google versus China, network neutrality, and discussions of the Digital Economy bill (my list, not theirs). O'Brien wrote in an open letter a couple of days ago that he has been "absurdly lucky". Even so.

But Conan-versus-Leno is personalization; at heart this story is about the future of broadcasting and its money. Given today's time-shifting choices, few things lure viewers to a particular TV channel at a precise time. Two are live sports and breaking news. A third is the run of talk-variety shows that start in most parts of the US at 11:35pm (10:35 Central) and run until around 2am.

The kingpin of all of these is The Tonight Show, broadcast on NBC every night following the 11 o'clock news for nearly 60 years. For 30 of those years it was presented by a single host, Johnny Carson, probably the biggest star television has ever had - and quite possibly the biggest television ever will have. They make talent like Carson's very infrequently; they don't make broadcasting like that any more. According to Bill Carter in his book The Late Shift: Letterman, Leno, and the Network Battle for the Night, in many years Carson's apparently effortless comedy and guest interviews generated 15 to 20 percent of the network's profits.

Every one of today's late-night hosts grew up watching Carson, and probably all of them dreamed of one day having his job. Carson's job, on The Tonight Show on NBC, not a similar job on a similar show at the same time on another network.

The roots of today's mess go back to 1991, when Carson announced he would retire in May 1992. At the time, David Letterman was hosting NBC's 12:30 show, while Jay Leno was Carson's regular substitute host. In a move that seemed to surprise everyone, NBC appointed Leno Carson's successor, fatally assuming that Letterman wouldn't mind. He did mind. The net result was months of uncertainty, politics, and legal wrangling, not least because Leno's early months in the job were unpromising. By 1993, Letterman had begun a competing show at CBS and every other network had tried putting up an 11:30 talk-variety show, most of them dreadful and quickly canned. Since then, Leno has usually won the ratings - but Letterman the awards. Arguably the biggest beneficiary was O'Brien, who landed Letterman's old 12:30 job with barely any performing experience. After following Leno for 16 years, late last year, as per an agreement announced in 2005 and intended to avoid a repeat of 1992, O'Brien got The Tonight Show.

Now, NBC is doing to O'Brien almost exactly what it did to Letterman, apparently filled with panic over declining revenues and shrinking ratings and completely self-destructing (just as Comcast is trying to buy it from GE). As Kansas City critic Aaron Barnhart writes, late-night is about the long haul. In restoring Leno, NBC is hanging onto its past and at best a couple of years of present at the expense of its future. All hosts - almost all entertainers - eventually find their audience is aging along with them. Even Carson seemed old-fashioned to younger viewers by the time he retired at 66: my parents watched Carson; I watch Letterman and Conan; my 20-something friends watch Conan and Jon Stewart.

In his letter, O'Brien says holding The Tonight Show to 11:35 is vital. He is almost certainly right: people go to bed, watch the news and the opening monologue, and progressively drift off to sleep during the guests. By midnight, half of the Tonight Show's viewers are gone; the latest shows are seen by insomniacs and people without kids and early-morning commutes.

Most likely NBC will shortly find out there is no way back to Leno's ratings of 2008. Diehard Leno fans will stick with him but Conan fans will tune out in protest; if they watch anyone it will be Letterman or Stewart. The younger people the network needs for the future watch online.

You may think none of this matters very much outside the US. The shows themselves have never traveled very well, though the format has been widely copied throughout the world. But of all the businesses having to cope with the digital revolution, in television it may be the broadcast networks who are most under threat. Those who copy and share TV shows buy DVDs; they do not return to watch the broadcast versions or consume advertising. Shows have fans; networks don't. The focus on file-sharing ignores the wide variety of streams copied live from broadcasters all over the world that are readily accessible if you know where to look. It is far cheaper to subscribe directly to the tennis tours than to pay Sky Sports or Eurosport, for example - and often free to pick up a stream.

When the history of the digital revolution is written, historians may pinpoint the day Carson announced his retirement as the broadcasting equivalent of Peak Oil.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

November 13, 2009

Cookie cutters

Sometimes laws sneak up on you while you're looking the other way. One of the best examples was the American Telecommunications Act of 1996: we were so busy obsessing about the freedom of speech-suppressing Communications Decency Act amendment that we failed to pay attention to the implications of the bill itself, which allowed the regional Baby Bells to enter the long distance market and changed a number of other rules regarding competition.

We now have a shiny, new example: we have spent so much time and so many electrons on the nasty three-strikes-and-you're-offline provisions that we, along with almost everyone else, utterly failed to notice that the package contains a cookie-killing provision last seen menacing online advertisers in 2001 (our very second net.wars).

The gist: Web sites cannot place cookies on users' computers unless said users have agreed to receive them, or unless the cookies are strictly necessary - as, for example, when you select something to buy and then head for the shopping cart to check out.
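The rule as described boils down to a simple decision: is this cookie strictly necessary for something the user has asked for, or has the user opted in? A minimal sketch of that logic - with entirely illustrative cookie names; nothing here is drawn from the actual legislative text - might look like this:

```python
# Hypothetical sketch of the consent rule described above: a site may set
# a cookie only if the user has opted in, or if the cookie is "strictly
# necessary" to deliver a service the user has requested (e.g. a shopping
# cart). The cookie names are invented for illustration.
STRICTLY_NECESSARY = {"session_id", "shopping_cart"}

def may_set_cookie(name: str, user_has_consented: bool) -> bool:
    """Return True if the rule, as described, would permit this cookie."""
    if name in STRICTLY_NECESSARY:
        # Exempt: required to provide what the user explicitly asked for.
        return True
    # Everything else - analytics, advertising, affiliate tracking -
    # would need prior opt-in.
    return user_has_consented
```

The annoyance the column describes follows directly: every cookie that falls into the second branch would trigger a consent prompt.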

As the Out-Law blog points out, this proposal - now to become law unless the whole package is thrown out - is absurd. We said it was in 2001 - and made the stupid assumption that because nothing more had been heard about it the idea had been nixed by an outbreak of sanity at the EU level.

Apparently not. Apparently MEPs and others at EU level spend no more time on the Web than they did eight years ago. Apparently none of them have any idea what such a proposal would mean. Well, I've turned off cookies in my browser, and I know: without cookies, browsing the Web is as non-functional as a psychic being tested by James Randi.

But it's worse than that. Imagine browsing with every site asking you to opt in every - pop-up - time - pop-up - it - pop-up - wants - pop-up - to - pop-up - send - pop-up - you - a - cookie - pop-up. Now imagine the same thing, only you're blind and using the screen reader JAWS.

This soon-to-be-law is not just absurd, it's evil.

Here are some of the likely consequences.

As already noted, it will make Web use nearly impossible for the blind and visually impaired.

It will, because such is the human response to barriers, direct ever more traffic toward those sites - aggregators, ecommerce, Web bulletin boards, and social networks - that, like Facebook, can write a single privacy policy for the entire service to which users consent when they join (and later at scattered intervals when the policy changes) that includes consent to accepting cookies.

According to Out-Law, the law will trap everyone who uses Google Analytics, visitor counters, and the like. I assume it will also kill AdSense at a stroke: how many small DIY Web site owners would have any idea how to implement an opt-in form? Both econsultancy.com and BigMouthMedia think affiliate networks generally will bear the brunt of this legislation. BigMouthMedia goes on to note a couple of efforts - HTTP ETags and Flash cookies - intended to give affiliate networks more reliable tracking that may also fall afoul of the legislation. These, as those sources note, are difficult or impossible for users to delete.

It will presumably also disproportionately catch EU businesses compared to non-EU sites. Most users probably won't understand why particular sites are so annoying; they will simply shift to sites that aren't annoying. The net effect will be to divert Web browsing to sites outside the EU - surely the exact opposite of what MEPs would like to see happen.

And, I suppose, inevitably, someone will write plug-ins for the popular browsers that can be set to respond automatically to cookie opt-in requests and that include provisions for users to include or exclude specific sites. Whether that will offer sites a safe harbour remains to be seen.

The people it will hurt most, of course, are the sites - like newspapers and other publications - that depend on online advertising to stay afloat. It's hard to understand how the publishers missed it; but one presumes they, too, were distracted by the need to defend music and video from evil pirates.

The sad thing is that the goal behind this masterfully stupid piece of legislation is a reasonably noble one: to protect Internet users from monitoring and behavioural targeting to which they have not consented. But regulating cookies is precisely the wrong way to go about achieving this goal, not just because it disables Web browsing but because technology is continuing to evolve. The EU would be better to regulate by specifying allowable actions and consequences rather than specifying technology. Cookies are not in and of themselves inherently evil; it's how they're used.

Eight years ago, when the cookie proposals first surfaced, they, logically enough, formed part of a consumer privacy bill. That they're now part of the telecoms package suggests they've been banging around inside Parliament looking for something to attach themselves to ever since.

I probably exaggerate slightly, since Out-Law also notes that in fact the EU did pass a law regarding cookies that required sites to offer visitors a way to opt out. This law is little-known, largely ignored, and unenforced. At this point the Net's best hope looks to be that the new version is treated the same way.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

October 23, 2009

The power of Twitter

It was the best of mobs, it was the worst of mobs.

The last couple of weeks have really seen the British side of Twitter flex its 140-character muscles. First, there was the next chapter of the British Chiropractic Association's ongoing legal action against science writer Simon Singh. Then there was the case of Jan Moir, who wrote a more than ordinarily Daily Mailish piece for the Daily Mail about the death of Boyzone's Stephen Gately. And finally, the shocking court injunction that briefly prevented the Guardian from reporting on a Parliamentary question for the first time in British history.

I am on record as supporting Singh, and I, too, cheered when, ten days ago, Singh was granted leave to appeal Justice Eady's ruling on the meaning of Singh's use of the word "bogus". Like everyone, I was agog when the BCA's press release called Singh "malicious". I can see the point in filing complaints with the Advertising Standards Authority over chiropractors' persistent claims, unsupported by the evidence, to be able to treat childhood illnesses like colic and ear infections.

What seemed to edge closer to a witch hunt was the gleeful take-up of George Monbiot's piece attacking the "hanging judge", Justice Eady. Disagree with Eady's ruling all you want, but it isn't hard to find libel lawyers who think his ruling was correct under the law. If you don't like his ruling, your correct target is the law. Attacking the judge won't help Singh.

The same is not true of Twitter's take-up of the available clues in the Guardian's original story about the gag to identify the Parliamentary Question concerned and unmask Carter-Ruck, the lawyers who served it, and their client, Trafigura. Fueled by righteous and legitimate anger at the abrogation of a thousand years of democracy, Twitterers had the PQ found and published thousands of times in practically no time. Yeah!

Of course, this phenomenon (as I'm so fond of saying) is not new. Every online social medium, going all the way back to early text-based conferencing systems like CIX, the WELL, and, of course, Usenet, when it was the Internet's town square (the function in fact that Twitter now occupies) has been able to mount this kind of challenge. Scientology versus the Net was probably the best and earliest example; for me it was the original net.war. The story was at heart pretty simple (and the skirmishes continue, in various translations into newer media, to this day). Scientology has a bunch of super-secrets that only the initiate, who have spent many hours in expensive Scientology training, are allowed to see. Scientology's attempts to keep those secrets off the Net resulted in their being published everywhere. The dust has never completely settled.

Three people can keep a secret if two of them are dead, said Benjamin Franklin. That was before the Internet. Scientology was the first to learn - nearly 15 years ago - that the best way to ensure the maximum publicity for something is to try to suppress it. It should not have been any surprise to the BCA, Trafigura, or Trafigura's lawyers. Had the BCA ignored Singh's article, far fewer people would know now about science's dim view of chiropractic. Trafigura might have hoped that a written PQ would get lost in the vastness that is Hansard; but they probably wouldn't have succeeded in any case.

The Jan Moir case and the demonstration outside Carter-Ruck's offices are, however, rather different. These are simply not the right targets. As David Allen Green (Jack of Kent) explains, there's no point in blaming the lawyers; show your anger to the client (Trafigura) or to Parliament.

The enraged tweets and Facebook postings about Moir's article helped send more than 25,000 complaints - a record - to the Press Complaints Commission, whose Web site melted down under the strain. Yes, the piece was badly reasoned and loathsome, but isn't that what the Daily Mail lives for? Tweets and links create hits and discussion. The paper can only benefit. In fact, it's reasonable to suppose that in the Trafigura and Moir cases both the Guardian and the Daily Mail manipulated the Net perfectly to get what they wanted.

But the stupid part about let's-get-Moir is that she does not *matter*. Leave aside emotional reactions, and what you're left with is someone's opinion, however distasteful.

This concerted force would be more usefully turned to opposing the truly dangerous. See, for example, the AIDS denialism on parade by Fraser Nelson at The Spectator. The "come-get-us" tone suggests that they saw the attention New Humanist got for Caspar Melville's mistaken - and quickly corrected - endorsement of the film House of Numbers and said, "Let's get us some of that." There is no more scientific dispute about whether HIV causes AIDS than there is about climate change or evolutionary theory.

If we're going to behave like a mob, let's stick to targets that matter. Jan Moir's column isn't going to kill anybody. AIDS denialism will. So: we'll call Trafigura a win, chiropractic a half-win, and Moir a loser.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

October 16, 2009

Unsocial media

"No one under 30 will use email," the convenor objected.

There was a bunch of us, a pre-planning committee for an event, and we were talking about which technology we should have the soon-to-be appointed program committee use for discussions. Email! Convenient. Accessible by computer or phone. Easily archived, forwarded, quoted, or copied into any other online medium. Why are we even talking about this?

And that's when he said it.

Not so long ago, if you had email you were one of the cool kids, the avant-garde who saw the future and said it was electronic. Most of us spent years convincing our far-flung friends and relatives to get email so we didn't have to phone or - gasp - write a letter that required an envelope and a stamp. Being told that "email is for old people" is a lot like a 1960s "Never trust anyone over 30" hippie finding out that the psychedelic school bus he bought to live in to support the original 1970 Earth Day is a gas-guzzling danger to the climate and ought to be scrapped.

Well, what, then? (Aside: we used to have tons of magazines called things like Which PC? and What Micro? to help people navigate the complex maze of computer choices. Why is there no magazine called Which Social Medium??)

Facebook? Clunky interface. Not everyone wants to join. Poor threading. No easy way to export, search, or archive discussions. IRC or other live chat? No way to read discussion that took place before you joined the chat. Private blog with comments and RSS? Someone has to set the agenda. Twitter? Everything is public, and if you're not following all the right people the conversation is disjointed and missing links you can't retrieve. IM? Skype? Or a wiki? You get the picture.

This week, the Wall Street Journal claimed that "the reign of email is over" while saying only a couple of sentences later, "We all still use email, of course." Now that the Journal belongs to Rupert Murdoch, does no one check articles for sense?

Yes, we all still use email. It can be archived, searched, stored locally, read on any device, accessed from any location, replied to offline if necessary, and read and written thoughtfully. Reading that email is dead is like reading, in 2000, that because a bunch of companies went bust the Internet "fad" was over. No one then who had anything to do with the Internet believed that in ten years the Internet would be anything but vastly bigger than it was then. So: no one with any sense is going to believe that ten years from now we'll be sending and receiving less email than we are now. What very likely will be smaller, especially if industrial action continues, is the incumbent postal services.

What "No one under 30 uses email" really means is that it's not their medium of first choice. If you're including college students, the reason is obvious: email is the official stuff they get from their parents and universities. Facebook, MySpace, Twitter, and texting are how they talk to their friends. Come the day they join the workforce, they'll be using email every day just like the rest of us - and checking the post and their voicemail every morning, too.

But that still leaves the question: how do you organize anything if no one can agree on what communications technology to use? It's that question that the new Google Wave is trying to answer. It's too soon, really, to tell whether it can succeed. But at a guess, it lacks one of the fundamental things that makes email such a lowest common denominator: offline storage. Yes, I know everything is supposed to be in "the cloud" and even airplanes have wifi. But for anything that's business-critical you want your own archive where you can access it when the network fails; it's the same principle as backing up your data.

Reviews vary in their take on Wave. LifeHacker sees it as a collaborative tool. ZDNet UK editor Rupert Goodwins briefly called it Usenet 2.0 and then retracted and explained using the phrase "unified comms".

That, really, is the key. Ideally, I shouldn't have to care whether you - or my fellow committee members - prefer to read email, participate in phone calls (via speech-to-text, text-to-speech synthesizers), discuss via Usenet, Skype, IRC, IM, Twitter, Web forums, blogs, or Facebook pages. Ideally, the medium you choose should be automatically translated into the medium I choose. A Babel medium. The odds that this will happen in an age when what companies most want is to glue you to their sites permanently so they can serve you advertising are very small.

Which brings us back to email. Invented in an era when the Internet was commercial-free. Built on open standards, so that anyone can send and receive it using any reader they like. Used, in fact, to alert users to updates they want to know about to their accounts on Facebook/IRC/Skype/Twitter/Web forums. Yes, it's overrun with corporate CYA memos and spam. But it's still the medium of record - and it isn't going anywhere. Whereas: those 20-somethings will turn 30 one day soon.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk (but please turn off HTML).

October 9, 2009

Phantom tollbooths

This was supposed to be the week that the future of Google Books became clear or at least started to; instead, the court ordered everyone to go away and come up with a new settlement (registration required). The revised settlement is due by November 9; the judge will hear objections probably around the turn of the year.

Instead this turned into the Week of the Postcode, after the Royal Mail issued cease-and-desist letters to the postcode API service Ernest Marples (built by Richard Pope and Open Rights Group advisory council member Harry Metcalfe). Marples' sin: giving away postcode data without a license (PDF).

At heart, the Postcode spat and the Google Books suit are the same issue: information that used to be expensive can now be made available on the Internet for free, and people who make money from the data object.

We all expect books to be copyrighted; but postcodes? When I wrote about it, astonished, in 1993 for Personal Computer World, the spokesperson explained that, as an invention of the Royal Mail, they were of course the Royal Mail's property (they've now just turned 50). There are two licensed services, the Postcode Address File (automates filling in addresses) and PostZon, the geolocator database useful for Web mashups. The Royal Mail says it's currently reviewing its terms and licensing conditions for PostZon; based on the recent similar exercise for PAF (PDF) we'll guess that the biggest objections to giving it away will come from people who are already paying for it and want to lock out competitors.

There's just a faint hint that postcodes could become a separate business; the Royal Mail does not allow the postcode database and mail delivery to cross-subsidize (to mollify competitors who use the database). Still, Charles Arthur, in the Guardian, estimates that licensing the postcode database costs us more than it makes.

This is the other sense in which postcodes are like Google Books: it costs money to create and maintain the database. But where postcodes are an operational database for the Royal Mail, books may not be for Google. Wired UK has shown what happens when Google loses economic interest in a database, in this case Google Groups (aka the Usenet archive).

But in the analogy Google plays the parts of both the Royal Mail (investing in creating a database from which it hopes to profit) and the geeks seeking to liberate the data (locked-up, out-of-print books, now on the Web! Yeah!). The publishers are merely an intervening toll booth. This is one reason reactions to Google Books have been so mixed and so confusing: everyone's inner author says, "Google will make money. I want some," while their inner geek says, "Wow! That is so *cool*! I want that!".

The second reason everyone's so confused, of course, is that the settlement is 141 pages of dense legalese with 15 appendices, and nobody can stand to read it. (I'm reliably told that the entire basis for handling non-US authors' works is one single word: "If".) This situation is crying out for a wiki where inte