
January 20, 2023

New music

The news this week that "AI" "wrote" a song "in the style of" Nick Cave (who was scathing about the results) seemed to me about on a par with the news in the 1970s that the self-proclaimed medium Rosemary Brown was able to take dictation of "new works" by long-dead famous composers. In that: neither approach seems likely to break new artistic ground.

In Brown's case, musicologists, psychologists, and skeptics generally converged on the belief that she was channeling only her own subconscious. AI doesn't *have* a subconscious...but it does have historical inputs, just as Brown did. You can say "AI" wrote a set of "song lyrics" if you want, but that "AI" is humans all the way down: people devised the algorithms and wrote the computer code, created the historical archive of songs on which the "AI" was trained, and crafted the prompt that guided the "AI"'s text generation. But "the machine did it by itself" is a better headline.

Meanwhile...

Forty-two years after the first one, I have been recording a new CD (more details later). In the traditional folk world, which is all I know, getting good recordings is typically more about being practiced enough to play accurately while getting the emotional performance you want. It's also generally about very small budgets. And therefore, not coincidentally, a whole lot less about sound effects and multiple overdubs.

These particular 42 years are a long time in recording technology. In 1980, if you wanted to fix a mistake in the best performance you had by editing it in from a different take where the error didn't appear, you had to do it with actual reels of tape, an edit block, a razor blade, splicing tape...and it was generally quicker to rerecord unless the musician had died in the interim. Here in digital 2023, the studio engineer notes the time codes, slices off a bit of sound file, and drops it in. Result! Also: even for traditional folk music, post-production editing has a much bigger role.

Autotune, which has turned many a wavering tone into perfect pitch, was invented in 1997. The first time I heard about it - it alters the pitch of a note without altering the playback speed! - it sounded indistinguishable from magic. How was this possible? It sounded like artificial intelligence - but wasn't.

The big, new thing now, however, *is* "AI" (or what currently passes for it), and it's got nothing to do with outputting phrases. Instead, it's stem splitting - that is, the ability to take a music file that includes multiple instruments and/or voices, and separate out each one so each can be edited separately.

Traditionally, the way you do this sort of thing is you record each instrument and vocal separately, either laying them down one at a time or enclosing each musician/singer into their own soundproof booth, from where they can play together by listening to each other over headphones. For musicians who are used to singing and playing at the same time in live performance, it can be difficult to record separate tracks. But in recording them together, vocal and instrumental tracks tend to bleed into each other - especially when the instrument is something like an autoharp, where the instrument's soundboard is very close to the singer's mouth. Bleed means you can't fix a small vocal or instrumental error without messing up the other track.

With stem splitting, now you can. You run your music file through one of the many services that have sprung up, and suddenly you have two separated tracks to work with. It's being described to me as a "game changer" for recording. Again: sounds indistinguishable from magic.

This explanation makes it sound less glamorous. Vocals and instruments whose frequencies don't overlap can be split out using masking techniques. Where there is overlap, splitting relies on a model that has been trained on human-split tracks and that improves with further training. Still a black box, but now one that sounds like so many other applications of machine learning. Nonetheless, heard in action it's startling: I tried LALAL_AI on a couple of tracks, and the separation seemed perfect.
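For the non-overlapping case, the masking idea is simple enough to demonstrate. Here is a toy Python sketch (mine, not anything a commercial service actually runs): mix two pure tones at different frequencies, then recover each one by zeroing out the other's region of the spectrum. Real stem splitters need trained models precisely because voices and instruments do overlap.

```python
import numpy as np

# Toy frequency masking: two "sources" that occupy different
# frequency bands can be separated by zeroing FFT bins.
sr = 8000                              # sample rate, Hz
t = np.arange(sr) / sr                 # one second of "audio"
bass = np.sin(2 * np.pi * 110 * t)     # 110 Hz "instrument"
voice = np.sin(2 * np.pi * 880 * t)    # 880 Hz "vocal"
mix = bass + voice                     # what the microphone hears

spectrum = np.fft.rfft(mix)
freqs = np.fft.rfftfreq(len(mix), d=1 / sr)

low_mask = freqs < 400                 # below 400 Hz -> "instrument" stem
bass_est = np.fft.irfft(spectrum * low_mask)
voice_est = np.fft.irfft(spectrum * ~low_mask)

# With no spectral overlap, each stem comes back essentially intact.
print(np.max(np.abs(bass_est - bass)) < 1e-6)   # True
print(np.max(np.abs(voice_est - voice)) < 1e-6)  # True
```

The moment the two sources share frequency bins - a voice and an autoharp, say - a hard mask like this mangles both, which is where the machine-learning models come in.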

There are some obvious early applications of this. As the explanation linked above notes, stem splitting enables much finer sampling and remixing. A singer whose voice is failing - or who is unavailable - could nonetheless issue new recordings by laying their old vocal over a new instrumental track. And vice-versa: when, in 2002, Paul Justman wanted to recreate the Funk Brothers' hit-making session work for Standing in the Shadows of Motown, he had to rerecord from scratch to add new singers. Doing that had the benefit of highlighting those musicians' ability and getting them royalties - but it also meant finding replacements for the ones who had died in the intervening decades.

I'm far more impressed by the potential of this AI development than of any chatbot that can put words in a row so they look like lyrics. This is a real thing with real results that will open up a world of new musical possibilities. By contrast, "AI"-written song lyrics rely on humans' ability to conceive meaning where none exists. It's humans all the way up.


Illustrations: Nick Cave in 2013 (by Amelia Troubridge, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

December 2, 2022

Hearing loss

Some technologies fail because they aren't worth the trouble (3D movies). Some fail because the necessary infrastructure and underlying technologies aren't good enough yet (AI in the 1980s, pen computing in the 1990s). Some fail because the world goes another, simpler, more readily available way (Open Systems Interconnection). Some fail because they are beset with fraud (the fate that appears to be unfolding with respect to cryptocurrencies). And some fail even though they work as advertised and people want them and use them, because they make no money for their inventors and manufacturers to sustain their development.

The latter appears to be the situation with smart speakers, which in 2015 were going to take over the world, and today, in 2022, are installed in 75% of US homes. Despite this apparent success, they are losing money even for market leaders Amazon (third) and Google (second), as Business Insider reported this week. Amazon's Worldwide Digital division, which includes Prime Video as well as Echo smart speakers and Alexa voice technology, lost $3 billion in the first quarter of this year alone, primarily due to Alexa and other devices. The division will now be the biggest target for the layoffs the company announced last week.

The gist: they thought smart speakers would be like razors or inkjet printers, where you sell the hardware at or below cost and reap a steady income stream from selling razor blades or ink cartridges. Amazon thought people would buy their smart speakers, see something they liked, and order the speaker to put through the purchase. Instead, judging from the small sample I have observed personally, people use their smart speakers as timers, radios, and enhanced remote controls, and occasionally to get a quick answer from Wikipedia. And that's it. The friends I watched order their smart speaker to turn on the basement lights and manage their shopping list have, as far as I could tell on a recent visit, developed no new uses for their voice assistant in three years of being locked up at home with it.

The system has developed a new feature, though. It now routinely puts the shopping list items on the wrong shopping list. They don't know why.

In raising this topic at The Overspill, Charles Arthur referred back to a 2016 Wired article summarizing venture capitalist Mary Meeker's assessment in her annual Internet Trends report that voice was going to take over the world and the iPhone had peaked. In slides 115-133, Meeker outlined her argument: improving accuracy would be a game-changer.

Even without looking at recent figures, it's clear voice hasn't taken over. People do use speech when their hands are occupied, especially when driving or when the alternative is to type painfully into their smartphone - but keyboards still populate everyone's desks, and the only people I know who use speech for data entry are people for whom typing is exceptionally difficult.

One unforeseen deterrent may be that privacy has emerged as a larger issue than early prognosticators expected. Repeated stories have raised awareness that the price of being able to use a voice assistant at will is that microphones in your home listen to everything you say waiting for their cue to send your speech to a distant server to parse. Rising consciousness of the power of the big technology companies has made more of us aware that smart speakers are designed more to fulfill their manufacturers' desires to intermediate and monetize our lives than to help us.

The notion that consumers would want to use Amazon's Echo for shopping appears seriously deluded with hindsight. Even the most dedicated voice users I know want to see what they're buying. Years ago, I thought that as TV and the Internet converged we'd see a form of interactive product placement in which it would be possible to click to buy a copy of the shirt a football player was wearing during a game or the bed you liked in a sitcom. Obviously, this hasn't happened; instead a lot of TV has moved to streaming services without ads, and interactive broadcast TV is not a thing. But in *that* integrated world voice-activated shopping would work quite well, as in "Buy me that bed at the lowest price you can find", or "Send my brother the closest copy you can find of Novak Djokovic's dark red sweatshirt, size large, as soon as possible, all cotton if possible."

But that is not our world, and in our world we have to make those links and look up the details for ourselves. So voice does not work for shopping beyond adding items to lists. And if that doesn't work, what other options are there? As Ron Amadeo writes at Ars Technica, the queries where Alexa is frequently used can't be monetized, and customers showed little interest in using Alexa to interact with other companies such as Uber or Domino's Pizza. And even Google, which is also cutting investment in its voice assistant, can't risk alienating consumers by using its smart speaker to play ads. Only Apple appears unaffected.

"If you build it, they will come," has been the driving motto of a lot of technological development over the last 30 years. In this case, they built it, they came, and almost everyone lost money. At what point do they turn the servers off?


Illustrations: Amazon Echo Dot.


November 18, 2022

Being the product

The past week of Twitter has been marked by a general sense of waiting for the crash, heightened because no one knows when the bad thing will happen or what form it will take. On Twitter itself, I see everyone mourning the incoming loss and setting up camp elsewhere; in professional media, journalists are frantically trying to report on what's going on at HQ, where there is now no communications team and precious few engineers.

As noted here last week, it is definitely not so simple as Twitter's loss is Mastodon's/Discord's/SomeOtherSite's gain.

The general sense of anxiety feels like a localized version of the years of the Trump presidency - that is, people logging in constantly to check, "What's he done now?" Only the "he" is of course new owner Elon Musk, and the "what" is stuff like firing a team, a crucial resignation, a new order to employees ("check this box by 5pm or you're fired!"), yet another change to the system of blue ticks that may or may not verify a person's identity, or the apparent disabling of two-factor authentication via SMS shortly after announcing the shutdown of "20% of microservices". This kind of thing makes everyone jumpy. Every tiny glitch could be the first sign that Twitter is crumbling around the edges before cascading into failure. Will the process look like HAL losing its marbles in the movie 2001: A Space Odyssey? Or will it just go black like the end of The Sopranos?

I have never felt so conscious of my data: 15 years of tweets and direct messages all held hostage inside a system with a renegade owner no one trusts. Deleting it feels like killing my past; leaving it in place teems with risks.

The risk level has been abruptly raised by the departure of various security and privacy personnel from Twitter's staff, which led Michael Veale to warn that the platform should be regarded as dangerously vulnerable and insecure. Veale went on to provide instructions for using the law (that is, the General Data Protection Regulation) rather than just Twitter's tools, to delete your data.

Some of my more cautious friends have been regularly deleting their data all along - at the end of every couple of weeks, or every six months, mostly to ensure they can't suddenly become a pariah for something they posted casually five years ago. (It turns out this is a function that Mastodon will automate through user settings.) But, as Veale asks, how do you know Twitter is really deleting the data? Hence his suggestion of applying the law: it gives your request teeth. But is there anyone left at Twitter to respond to legal requests?

The general sense of uncertainty is heightened by things like the reports I saw of strange behavior in response to requests to download account archives: instead of just asking for two-factor authentication before proceeding, the site sent these users to the help center and a form demanding government ID. There seem to be a number of these little weirdnesses, and they're raising users' overall distrust of the system and the sense that we're all just waiting for the thing to break and our data to become an asset in a fire sale - or for a major hack in which all our data gets auctioned on the dark web.

"If you're not paying for the product, you're the product," goes the saying (attribution uncertain). Right now, it feels like we're waiting to find out our product status.

Meanwhile, Apple has spent years now promoting its products by claiming they provide better privacy than the alternatives. It is currently helping destroy the revenue base of Meta (owner of Instagram, Facebook, and WhatsApp) by allowing users to opt to block third-party trackers on its devices. At The Drum, Chris Sutcliffe cites estimates that 62% of Apple users have done so; at Forbes Daniel Newman reported in February that Meta projected that the move would cost the company $10 billion in lost ad sales this year. The financial results it's announced since have been accordingly grim.

Part of the point of this is that Apple's promise appeared to be that the money its customers pay for hardware and services also buys them privacy. This week, Tom Germain reported at Gizmodo that Apple's own apps continue to harvest data about users' every move even when those users have - they thought - turned data collection off.

"Even if you're paying for the product, you're the product," Cory Doctorow wrote on discovering this. Double-dipping is familiar in other contexts. But here Apple has broken the pay-with-data bargain that made the web. It may live to regret this; collecting data to which it has exclusive access while shutting down competitors has attracted the attention of German antitrust regulators.

If that's where the commercial world is going, the appeal of something like Mastodon, where we are *not* the product, and where accounts can be moved to other interoperable servers at any time, is obvious. But, as I've written before about professional media, the money to pay for services and servers has to come from *somewhere*. If we're not going to pay with data, then...how?


Illustrations: Twitter flies upside down.


November 11, 2022

Moving day

"On the Internet, your home always leaves you," someone observed on Twitter some months back.

Probably everyone who's been online for any length of time has had this experience. That site you visit every day, that's full of memories and familiar people, suddenly is no more. Usually, the problem is a new owner, who buys it and closes it down (Television Without Pity, Geocities), or alters it beyond recognition (CompuServe). Or its paradigm falls out of fashion and users leach away until the juice is gone, the fate of many of the early text-based systems.

As the world and all have been reporting - because so many journalists make their online homes there - Twitter is in trouble. It has a new owner with poor impulse control and a new idea every day - Twitter will be a financial service! (like WeChat?) Twitter will be the world's leading source of accurate information! (like Wikipedia?) Twitter can do multimedia! (like TikTok?) - and he is driving out what staff he hasn't fired.

The result, Chris Stokel-Walker predicts, will be escalating degradation of the infrastructure - and possibly, Mike Masnick writes, violations of the company's 2011 20-year consent decree with the US Federal Trade Commission, which could ultimately cost the company billions, in addition to the $13 billion in debt Musk added to the company's existing debt load in order to purchase it.

All of that - and the unfolding sequelae Maria Farrell details - will no doubt be a widely used case study at business schools someday.

For me, Twitter has been a fantastic resource. In the 15 years since I created my account, Twitter is where I've followed breaking news, connected with friends, found expert communities. Tight clusters are, Peter Coy finds at the New York Times, why Twitter has been unexpectedly resilient despite its lack of profitability.

But my use of Twitter has nothing in common with its use by those with millions of followers. At that level, it's a broadcast medium. My own experience of chatting with friends or responding randomly to strangers' queries is largely closed to them. Like traveling on the subway, they *can* do it, but not the way the rest of us can. For someone in that position, Twitter is a large audience that fortuitously includes journalists, politicians, and entertainers. The writer Stephen King had the right reaction to the suggestion that verified accounts should pay $20 a month (since reduced to $8) for the privilege: screw *that*. Though even average Twitter users will resist paying to be sold to the advertisers who ultimately fund the service.

Unusually, a number of alternative platforms are ready and waiting for disaffected Twitter users to experiment with. Chief among them is Mastodon, which looks enough like Twitter to suggest an easy learning curve. There are, however, profound differences, most of them good. Mastodon is a protocol, not a site; like the web, email, or Usenet, anyone can set up a server ("instance") using open source software and connect to other instances. You can form a community on a local instance - or you can use your account as merely a convenient address from which to access postings by users at dozens of other instances. One consequence of this is that hashtags are very much more important in helping people find each other and the postings they're interested in.

Over the last week, I've seen a lot of people trying to be considerate of the natives and their culture, most particularly that they are much more sensitive about content warnings. The reality remains, though, that Mastodon's user base has doubled in a week, and that level of influx will inevitably bring change - if they stay and post, and particularly if many of them adopt a bit of software that allows automated cross-posting between the two services.

All of this has happened without a commercial interest: no one owns Mastodon, it has no ads, and no one is recruiting Twitter users. But that right there may be the biggest problem: the huge influx of new users doesn't bring revenue or staff to help manage it. This will be a big, unplanned test of the system's resilience.

Many are now predicting Twitter's total demise, not least because new owner Elon Musk himself has told employees that the company may become bankrupt due to its burn rate (some of which is his own fault, as previously noted). Barring the system going offline, though, habit is a strong motivator, and it's more likely that many people will treat the new accounts they've set up as "in case of need".

But some will move, because unlike other such situations, whole communities can move together to Mastodon, aided by its ability to ingest lists. I'm seeing people compile lists of accounts in various academic fields, of journalists, of scientists. There are even tools that scan the bios of your Twitter contacts for Mastodon addresses and compile them into a personal list, which, again, can be easily imported.

If Mastodon works for Twitter's hundreds of millions, there is a big upside: communities don't have to depend for their existence on the grace and favor of a commercial owner. Ultimately, the reason Musk now owns Twitter is he offered shareholders a lucrative exit. They didn't have to care about *us*. And they didn't.

Illustrations: Twitter versus lettuce (via Sheon Han on Twitter).


August 26, 2022

Good-enough

A couple of months on from Amazon's synthesized personal voices, it was intriguing to read this week, in the Financial Times ($) (thanks to Charles Arthur's The Overspill), that several AI startups are threatening voice actors' employment prospects. Actors Equity is campaigning to extend legal protection to the material computers synthesize from actors' voices and likenesses that, as Equity puts it, "reproduces performances without generating a 'recording' or a 'copy'." The union's survey found that 65% of performance artists and 93% of audio artists thought AI voices pose a threat to their livelihood.

Voices gives a breakdown of their assignments. Fortuitously, most jobs seek "real person" acting - exactly where voice synthesizers fail. For many situations, though - railway announcements, customer service, marketing campaigns - "real person" is overkill. Plus, AI voices, the FT notes, "can be made to say anything at the push of a button". No moral qualms need apply.

We have seen this movie before. This is a more personalized version of appropriating our data in order to develop automated systems - think Google's language translation, developed from billions of human-translated web pages, or the cooption of images posted on Flickr to build facial recognition systems later used to identify deportees. More immediately pertinent are the stories of Susan Bennett, the actress whose voice became Siri in 2011, and Jen Taylor, the voice of Microsoft's Cortana. Bennett reportedly had no idea that the phrases and sentences she'd spent so many hours recording were in use until a friend emailed. Shouldn't she have the right to object - or to royalties?

Freelance writers have been here: the 1990s saw an industry-wide shift from first-rights contracts under which we controlled our work and licensed one-time use to all-rights contracts that awarded ownership in perpetuity to a shrinking number of conglomerating publishers. Photographers have been here, watching as the ecosystem of small, dedicated agencies that cared about them got merged into Corbis and Getty while their work opportunities shrank under the confluence of digital cameras, smartphones, and social media. Translators, especially, have been here: while the most complex jobs require humans, for many uses machine translation is good enough. It's actors' "good-enough" ground that is threatened.

Like so many technologies, personalized voice synthesis started with noble intentions - to help people who'd lost their own voices to injury or illness. The new crop of companies the FT identifies are profit-focused; as so often, it's not the technology itself, but the rapidly decreasing cost that's making trouble.

First historical anecdote: Steve Williams, animation director for the 1991 film Terminator 2, warned the London Film Festival that it would soon be impossible to distinguish virtual reality from physical reality. Dead presidents would appear live on the news and Cary Grant would make new movies. Obvious result: just as musicians compete against the entire back catalogue of recorded music, might actors now be up against long-dead stars when auditioning for a role?

Second historical anecdote: in 1993, Silicon Graphics, then leading the field of computer graphics, in collaboration with sensor specialist SimGraphics, presented VActor, a system that captured measurements of body movements from live actors and turned them into computer simulations. Creating a few minutes of the liquid metal man (Robert Patrick) in Terminator 2, although a similar process, took 50 animators a year. VActor was faster and much cheaper at producing a reusable library of "good-enough" expressions and body movements. At the time, the company envisioned the system's use for presentations at exhibitions and trade shows and even talk shows. Prior art: Max Headroom, 1987-1988. In 2022, SimGraphics is still offering "real-time interactive characters" - these days, for the metaverse. Its website says VActor, now "AI-VActor", is successfully animating Mario.

Third historical anecdote: in 1997, Fred Astaire, despite being dead at the time, appeared in ads performing some of his most memorable dance moves with a Dirt Devil vacuum cleaner. The ad used CGI to replace two of his dance partners - a mop, a hat rack. If old Cary Grant did have career prospects, they were now lost: the public *hated* the ad. Among the objectors was Astaire's daughter, who returned one of the company's vacuum cleaners with a letter that said, in part, "Yes, he did dance with a mop but he wasn't selling that mop and it was his own idea." The public at large agreed: Astaire's extraordinary artistry deserved better than an afterlife as a shill.

Today, voice actors really could find themselves competing for work against synthesized versions of themselves. Equity's approach seems to be to push to extend copyright so that performers will get royalties for future reuse. Actors might, however, be better served by personality rights as granted in some jurisdictions (not the UK). This right helped Cheers actors George Wendt and John Ratzenberger win when they sued a company that created robots that looked like them, and it is the one Bette Midler used when a soundalike singer in an ad fooled people into thinking she herself was singing.

The bottom line: a tough profession looks like getting even tougher. As Michael (Dustin Hoffman) says in Tootsie (written by Murray Schisgal and Larry Gelbart), "I don't believe in Hell. I believe in unemployment, but I don't believe in Hell."


Illustrations: The Big Bang Theory's Rajesh (Kunal Nayyar) tries to date Siri (Becky O'Donahue).


August 12, 2022

Nebraska story

This week saw the arrest of a Nebraska teenager and her mother, who are charged with multiple felonies for terminating the 17-year-old's pregnancy at 28 weeks and burying (and, apparently, trying to burn) the fetus. Allegedly, this was a home-based medication abortion...and the reason the authorities found out is that following a tip-off the police got a search warrant for the pair's Facebook accounts. There, the investigators found messages suggesting the mother had bought the pills and instructed her daughter how to use them.

Cue kneejerk reactions. "Abortion" is a hot button. Facebook privacy is a hot button. Result: in reporting these gruesome events most media have chosen to blame this horror story on Facebook for turning over the data.

As much as I love a good reason to bash Facebook, this isn't the right take.

Meta - Facebook's parent - has responded to the stories with a "correction" that says the company turned over the women's data in response to valid legal warrants issued by the Nebraska court *before* the Supreme Court ruling. The company adds, "The warrants did not mention abortion at all."

What the PR folks have elided is that both the Supreme Court's Dobbs decision, which overturned Roe v. Wade, and the wording of the warrants are entirely irrelevant. It doesn't *matter* that this case was about an abortion. Meta/Facebook will *always* turn over user data in compliance with a valid legal warrant issued by a court, especially in the US, its home country. So will every other major technology company.

You may dispute the justice of Nebraska's 2019 Pain-Capable Unborn Child Act, under which abortion is illegal after 20 weeks from fertilization (22 weeks in normal medical parlance). But that's not Meta's concern. What Meta cares about is legal compliance and the technical validity of the warrant. Meta is a business, not a social justice organization, and while many want Mark Zuckerberg to use his personal judgment and clout to refuse to do business with oppressive regimes (by which they usually mean China, or Myanmar), do you really want him and his company to obey only laws they agree with?

There will be many much worse cases to come, because states will enact and enforce the vastly more restrictive abortion laws that Dobbs enables, and there will be many valid legal warrants forcing technology companies to hand data to police bent on prosecuting people in excruciating pregnancy-related situations - and in many more countries. Even in the UK, where (except for Northern Ireland) abortion has been mostly non-contentious for decades, lurking behind the 1967 law that legalized abortion up to 24 weeks is an 1861 statute under which abortion is criminal. That law, as Shanti Das recently wrote at the Guardian, has been used to prosecute dozens of women and a few men in the last decade. (See also Skeptical Inquirer.)

So if you're going to be mad at Facebook, be mad that the platform hadn't turned on end-to-end encryption for its messaging. That, as security engineer Alec Muffett has been pointing out on Twitter, would have protected the messages against access by both the system itself and by law enforcement. At the Guardian, Johana Bhuiyan reports the company is now testing turning on end-to-end encryption by default. Doubtless, soon to be followed by law enforcement and governments demanding special access.

Others advocate switching to encrypted messaging platforms that, like Signal, provide a setting that ensures messages automatically vaporize after a specified number of days. Such systems retain no data that can be turned over.
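To make that mechanism concrete, here is a minimal sketch - assuming nothing about Signal's actual implementation, and with an invented class name - of how a disappearing-messages store guarantees there is nothing to hand over: every message carries an expiry time, and a purge pass runs before any data is read out.

```python
import time

# Toy model (not any real platform's implementation) of disappearing
# messages: each message carries an expiry timestamp, and a purge pass
# removes anything past it, so a later data request finds nothing.

class EphemeralStore:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.messages = []  # list of (expires_at, text)

    def send(self, text, now=None):
        now = time.time() if now is None else now
        self.messages.append((now + self.ttl, text))

    def purge(self, now=None):
        now = time.time() if now is None else now
        self.messages = [(exp, t) for exp, t in self.messages if exp > now]

    def dump(self, now=None):
        """What a data request would receive: only unexpired messages."""
        self.purge(now)
        return [t for _, t in self.messages]

store = EphemeralStore(ttl_seconds=7 * 24 * 3600)  # seven-day timer
store.send("See you Monday at two", now=0)
print(store.dump(now=0))              # message still present
print(store.dump(now=8 * 24 * 3600))  # a week later: nothing to turn over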

It's good advice, up to a point. For one thing, it ignores most people's preference for using the familiar services their friends use. Adopting a second service just for, say, medical contacts adds complications; getting everyone you know to switch is almost impossible.

Second, it's also important to remember the power of metadata - data about data, which includes everything from email headers to search histories. "We kill people based on metadata," former NSA head Michael Hayden said in 2014 in a debate on the constitutionality of the NSA. (But not, he hastened to add, metadata collected from *Americans*.)

Logs of who has connected to whom, and how frequently, are often more revealing than the content of the messages sent back and forth. For example: the message content may be essentially meaningless to an outsider ("I can make it on Monday at two") until the system logs tell you that the sender is a woman of childbearing age and the recipient is an abortion clinic. This is why so many governments have favored retaining Internet connection data. Governments cite the usual use cases - organized crime, drug dealers, child abusers, and terrorists - when pushing for data retention, and they are helped by the fact that most people instinctively quail at the thought of others reading the *content* of their messages but overlook metadata's significance. That failure to grasp the importance of metadata has helped enable mass Internet surveillance.
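A toy illustration of that point - the numbers, names, and directory below are all invented: given nothing but connection records, a few lines of code pick out the sensitive relationship without reading a single message.

```python
# Hypothetical connection log: (caller, callee, timestamp) tuples only -
# no message content anywhere.

call_log = [
    ("555-0142", "555-0100", "2022-08-01 09:14"),
    ("555-0142", "555-0177", "2022-08-01 12:30"),
    ("555-0142", "555-0100", "2022-08-03 16:02"),
]

# A watcher only needs a reverse directory of *known* endpoints.
known_endpoints = {"555-0100": "abortion clinic", "555-0177": "pizza shop"}

def sensitive_contacts(log, directory, keyword):
    """Count, per caller, how often each contacted a flagged endpoint."""
    hits = {}
    for caller, callee, _ in log:
        if keyword in directory.get(callee, ""):
            hits[caller] = hits.get(caller, 0) + 1
    return hits

print(sensitive_contacts(call_log, known_endpoints, "clinic"))
# -> {'555-0142': 2}: two clinic contacts, inferred from metadata alone
```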

The net result of all this is to make surveillance capitalism-driven technology services dangerous for the 65.5 million women of childbearing age in the US (2020). That's a fair chunk of their most profitable users, a direct economic casualty of Dobbs.


Illustrations: Facebook.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 5, 2022

Painting by numbers

My camera can see better than I can. I don't mean that it can take better pictures than I can because of its automated settings, although this is also true. I mean it can capture things I can't *see*. The heron above, captured on a grey day along the Thames towpath, was pretty much invisible to me. I was walking with a friend, who pointed and said, "Look. A heron." I pointed the camera more or less where she indicated, pushed zoom to maximum, hit the button, and when I got home there it was.

If the picture were a world-famous original, there might be a squabble about who owned the copyright. I pointed the camera and pushed the button, so in our world the copyright belongs to me. But my friend could stake a reasonable claim: without her, I wouldn't have known where or when to point the camera. The camera company (Sony) could argue, quite reasonably, that the camera and its embedded software, which took years to design and build, did all the work, while my entire contribution took but a second.

I imagine, however, that at the beginning of photography artists who made their living painting landscapes and portraits might have seen reason to be pretty scathing about the notion that photography deserved copyright at all. Instead of working for months to capture the right light and nuances...you just push a button? Where's the creative contribution in that?

This thought was inspired by a recent conversation on Twitter between two copyright experts - Lilian Edwards and Andres Guadamuz - who have been thinking for years about the allocation of intellectual property rights when an AI system creates or helps to create a new work. The proximate cause was Guadamuz's stunning experiments generating images using Midjourney.

If you try out Midjourney's image-maker via the bot on its Discord server, you quickly find that each detail you add to your prompt adds detail and complexity to the resulting image; an expert at "prompt-craft" can come extraordinarily close to painting with the generation system. Writing prompts to control these generation systems and shape their output is becoming an art in its own right, an expertise that will become highly valuable. Guadamuz calls it "AI whispering".

Guadamuz touches on this in a June 2022 blog posting, in which he asks about the societal impact of being able to produce sophisticated essays, artworks, melodies, or software code based on a few prompts. The best human creators will still be the crucial element - I don't care how good you are at writing prompts, unless you're the human known as Vince Gilligan you+generator are not going to produce Breaking Bad or Better Call Saul. However, generation systems *might*, as Guadamuz says, produce material that's good enough for many contexts, given that it's free (ish).

More recently, Guadamuz considers the subject he and Edwards were mulling on Twitter: the ownership of copyright in generated images. Guadamuz had been reading the generators' terms and conditions. OpenAI, owner of DALL-E, specifies that users assign the copyright in all "Generations" its system produces, which it then places in the public domain while granting users a permanent license to do whatever they want with the Generations their prompts inspire. Midjourney takes the opposite approach: the user owns the generated image and licenses it back to Midjourney.

What Guadamuz found notable was the trend toward assuming that generated images are subject to copyright, even though lawyers have argued that they can't be and fall into the public domain. Earlier this year, the US Copyright Office rejected a request to allow an AI to copyright a work. The UK is an outlier, awarding copyright in computer-generated works to the "person by whom the arrangements necessary for the creation of the work are undertaken". This is ambiguous: is that person the user who wrote the prompt or the programmers who trained the model and wrote the code?

Much of the discussion revolved around how that copyright might be divided up. Should it be shared between the user and the company that owns the generating tool? We don't assign copyright in the words we write to our pens or word processors; but as Edwards suggested, the generator tool is more like an artist for hire than a pen. Of course, if you hire a human artist to create an image for you, contract terms specify who owns the copyright. If it's a work made for hire, the artist retains no further interest.

So whatever copyright lawyers say, the companies who produce and own these systems are setting the norms as part of choosing their business model. The business of selling today's most sophisticated cameras derives from an industry that grew up selling physical objects. In a more recent age, they might have grown up selling software add-on tools on physical media. Today, they may sell subscriptions and tiers of functionality. Nonetheless, if a company's leaders come to believe there is potential for a low-cost revenue stream of royalties for reusing generated images, it will go for it. Corbis and Getty have already pioneered automated copyright enforcement.

For now, these terms and conditions aren't about developing legal theory; the companies just don't want to get sued. These are cover-your-ass exercises, like privacy policies.


Illustrations: Grey heron hanging out by the Thames in spring 2021.


July 15, 2022

Online harms

An unexpected bonus of the gradual-then-sudden disappearance of Boris Johnson's government, followed by his own resignation, is that the Online Safety bill is being delayed until after Parliament's September return with a new prime minister and, presumably, cabinet.

This is a bill almost no one likes: child safety campaigners think it doesn't go far enough; digital and human rights campaigners - Big Brother Watch, Article 19, Electronic Frontier Foundation, Open Rights Group, Liberty, a coalition of 16 organizations (PDF) - oppose it because it threatens freedom of expression and privacy while failing to tackle genuine harms such as the platforms' business model; and technical and legal folks object because it's largely unworkable.

The DCMS Parliamentary committee sees it as wrongly conceived. The UK Independent Reviewer of Terrorism Legislation, Jonathan Hall QC, says it's muzzled and confused. Index on Censorship calls it fundamentally broken, and The Economist says it should be scrapped. The minister whose job it has been to defend it, Nadine Dorries (C-Mid Bedfordshire), remains in place at the Department for Culture, Media, and Sport, but her insistence that resigning-in-disgrace Johnson was brought down by a coup probably won't do her any favors in the incoming everything-that-goes-wrong-was-Johnson's-fault era.

In Wednesday's Parliamentary debate on the bill, the most interesting speaker was Kirsty Blackman (SNP-Aberdeen North), whose Internet usage began 30 years ago, when she was younger than her children are now. Among her passionate pleas that her children should be protected from some of the high-risk encounters she experienced was this: "Every person, nearly, that I have encountered talking about this bill who's had any say over it, who continues to have any say, doesn't understand how children actually use the Internet." She called this the bill's biggest failing. "They don't understand the massive benefits of the Internet to children."

This point has long been stressed by academic researchers Sonia Livingstone and Andy Phippen, both of whom actually do talk to children. "If the only horse in town is the Online Safety bill, nothing's going to change," Phippen said at last week's Gikii, noting that Dorries' recent cringeworthy TikTok "rap" promoting the bill focused on platform liability. "The liability can't be only on one stakeholder." His suggestion: a multi-pronged harm reduction approach to online safety.

UK politicians have publicly wished to make "Britain the safest place in the world to be online" all the way back to Tony Blair's 1997-2007 government. It's a meaningless phrase. Online safety - however you define "safety" - is like public health; you need it everywhere to have it anywhere.

Along those lines, "Where were the regulators?" Paul Krugman asked in the New York Times this week, as the cryptocurrency crash continues to unfold. The cryptocurrency market, which is now down to $1 trillion from its peak of $3 trillion, is recapitulating all the reasons why we regulate the financial sector. Given the ongoing collapses, it may yet fully vaporize. Krugman's take: "It evolved into a sort of postmodern pyramid scheme". The crash, he suggests, may provide the last, best opportunity to regulate it.

The wild rise of "crypto" - and the now-defunct Theranos - was partly fueled by high-trust individuals who boosted the apparent trustworthiness of dubious claims. The same, we learned this week, was true of Uber from 2014 to 2017. Based on the Uber files, 124,000 documents provided by whistleblower Mark MacGann, a lobbyist for Uber from 2014 to 2016, the Guardian exposes the falsity of Uber's claims that its gig economy jobs were good for drivers.

The most startling story - which transport industry expert Hubert Horan had already published in 2019 - is the news that the company paid academic economists six-figure sums to produce reports it could use to lobby governments to change the laws it disliked. Other things we knew about - for example, Greyball, the company's technology for denying rides to regulators and police so they couldn't document Uber's regulatory violations, and Uber staff's abuse of customer data - are now shown to have been more widely used than we knew. Further appalling behavior, such as that of former CEO Travis Kalanick, who was ousted in 2017, has been thoroughly documented in Mike Isaac's 2019 book, Super Pumped, and the 2022 TV series based on it.

But those scandals - and Thursday's revelation that 559 passengers are suing the company for failing to protect them from rape and assault by drivers - aren't why Horan described Uber as a regulatory failure in 2019. For years, he has been indefatigably charting Uber's eternal unprofitability. In his latest, he notes that Uber has lost over $20 billion since 2015 while cutting driver compensation by 40%. The company's share price today is less than half its 2019 IPO price of $45 - and a third of its 2021 peak of $60. The "misleading investors" kind of regulatory failure.

So, returning to the Online Safety bill, if you undermine existing rights and increase the large platforms' power by devising requirements that small sites can't meet *and* do nothing to rein in the platforms' underlying business model...the regulatory failure is built in. This pause is a chance to rethink.

Illustrations: Boris Johnson on his bike (European Cyclists Federation via Wikimedia).


April 22, 2022

The new cable

"It's become the new cable," Andrew Lawrence writes about Netflix at the Guardian. He is expressing the theory that it's now Netflix's turn to suffer the fate of TV cable packages, which consumers have been cutting in favor of streaming, because, in his view, its content library has become stale, flat, and unprofitable. Ouch.

Lawrence was responding to Netflix's Tuesday evening announcement that the first quarter brought a loss of 200,000 subscribers and a projected second quarter loss of 2 million. About 700,000 of those were sacrificed when the company quit Russia as part of economic sanctions (some Russian subscribers are suing over this).

Overnight, Netflix's shares dropped by 35%; having peaked at $700.99 on November 17, 2021, Thursday they closed around $218. One hedge fund sold off its 7% stake. Puncturing expectations of an ever-expanding future shrank Netflix's shares to something closer to their real value.

In the US especially, Netflix's trials may signal the beginning of a new industry phase. In Britain, Netflix's biggest competitors remain the free-to-air BBC (for which we all must pay), ITV, and Channel 4, all of which commission world-class programming, plus, especially among younger people, YouTube. In the US, veteran screenwriter Ken Levine commented last week, broadcast networks are moving flagship content to their streaming arms. Eventually, he predicted, broadcast networks will "become the equivalent of the old neighborhood cineplex showing first run films a month after they've run everywhere else."

Another maybe-signal: on Thursday, Warner Bros Discovery (following a just-completed merger) announced it will close its month-old streaming platform CNN+ on April 30. At Axios, Sara Fischer reports that as of Tuesday the service had 150,000 subscribers, and that new owner WBD prefers to build HBO Max as a unified service.

I see this as a signal because the underlying question is: how many streaming services can people afford? Most of the cable cord-cutting Lawrence alluded to is for cost/value reasons.

Last week, Mark Sweney reported at the Guardian that due to the cost-of-living crisis the number of UK households that pay for at least one streaming service fell by 215,000 in the first quarter. Many still see Netflix as a "must-have"; first chopped are newer arrivals - Disney+ in particular. Amazon subscribers are also more likely to stay, perhaps because of Prime delivery. We'd guess also that the removal of pandemic restrictions coupled with warmer weather means people are going out more, which eats into both available time and entertainment budgets, and resuming commuters are rediscovering being time-stressed and cash-strapped.

Netflix has plans for recovery: it intends to create lower-priced subscription tiers part-subsidized by advertising and to crack down on the 100 million households it believes are sharing passwords instead of buying their own subs. The latter sounds like the next phase of the file-sharing wars; companies' reputations never came out well. In any event, it's unlikely Netflix will ever again see the adoption rates of the last ten years. It can put prices up for its ad-free tiers; it can (and almost certainly will at some point) pay artists less. In 2019, their outlays on talent led monopoly specialist Matt Stoller to call Amazon and Netflix predatory.

In order to build its own library of original content (the stuff Lawrence complained about), Netflix loaded up with as much as $16 billion in debt (at peak), apparently successfully. In January 2021 it announced an end to further borrowing because its subscriber revenues were now enough to support both operating costs and content investment. However, the company remains vulnerable to interest rate rises, given it still owes $14.5 billion.

At the Guardian, Alex Hern notes that Netflix, unlike competitors Amazon, Apple, and Disney, offers no news or sports, which people *will* pay to consume in real time, but adds that it has a gaming service for subscribers. Based on the complaints I see from subscribers, Netflix could also make its customers happier by improving its interface, particularly to aid content discovery.

The moment of peak streaming was always going to come. It's sooner because of the pandemic; it's later because the traditional broadcasters and media companies took so long to catch up with the technology companies who were the first movers.

For now, content is king, and all these companies hope their exclusive catalogues are sufficiently unique selling points to build their subscriber base. Anyone who was drawn to Netflix by Friends or The Office must now go elsewhere. Making new hits is *hard*. As Jeff Bezos recently learned, you can't make a new Game of Thrones by following a checklist.

Longer-term, the problem they all have is that no one cares about them. But we do care if every new series requires an extensive search and a new subscription. Even given apps like JustWatch, which find the best-priced option, piracy's single interface is far easier.

At a guess, there are three main future possibilities: the streaming services can consolidate, partner into something like cable packages, or open up content licensing and compete on pricing, features, absence of ads, interface design, and technical quality. Whatever its competitors do, Netflix's wild growth phase is over.


Illustrations: Lily Tomlin and Jane Fonda in the Netflix series Grace and Frankie.

net.wars does not accept guest posts, and does not accept payment, even in kind, to include links or "share resources".

March 25, 2022

Dangerous corner

If there is one thing the Western world has near-universally agreed in the last month, it's that in the Russian invasion of Ukraine, the Ukrainians are the injured party. The good guys.

If there's one thing that privacy advocates and much of the public agree on, it's that Clearview AI, which has amassed a database of (it claims) 10 billion facial images by scraping publicly accessible social media without the subjects' consent and sells access to it to myriad law enforcement organizations, is one of the world's creepiest companies. This assessment is exacerbated by the fact that the company and its CEO refuse to see anything wrong about their unconsented repurposing of other people's photos; it's out there for the scraping, innit?

Last week, Reuters reported that Clearview AI was offering Ukraine free access to its technology. Clearview's suggested uses: vetting people at checkpoints; debunking misinformation on social media; reuniting separated family members; and identifying the dead. Clearview's CEO, Hoan Ton-That, told Reuters that the company has 2 billion images of Russians scraped from Russian Facebook clone VKontakte.

This week, it's widely reported that Ukraine is accepting the offer. At Forbes, Tom Brewster reports that Ukraine is using the technology to identify the dead.

Clearview AI has been controversial ever since January 2020, when Kashmir Hill reported its existence in the New York Times, calling it "the secretive company that might end privacy as we know it". Social media sites LinkedIn, Twitter, and YouTube all promptly sent cease-and-desist notices. A month later, Kim Lyons reported at The Verge that its 2,200 customers included the FBI, Interpol, the US Department of Justice, Immigration and Customs Enforcement, a UAE sovereign wealth fund, the Royal Canadian Mounted Police, and college campus police departments.

In May 2021, Privacy International filed complaints in five countries. In response, Canada, Australia, the UK, France, and Italy have all found Clearview to be in breach of data protection laws and ordered it to delete all the photos of people that it has collected in their territories. Sweden, Belgium, and Canada have declared law enforcement use of Clearview's technology to be illegal.

Ukraine is its first known use in a war zone. In a scathing blog posting, Privacy International says, "...the use of Clearview's database by authorities is a considerable expansion of the realm of surveillance, with very real potential for abuse."

Brewster cites critics, who lay out familiar privacy issues. Misidentification in a war zone could lead to death if a live soldier's nationality is wrongly assessed (especially common when the person is non-white) and unnecessary heartbreak for dead soldiers' families. Facial recognition can't distinguish civilians and combatants. In addition, the use of facial recognition by the "good guys" in a war zone might legitimize the technology. This last seems to me unlikely; we can all distinguish between what's acceptable in peacetime and in an extreme context. The issue here is the *company*, not the technology, as PI accurately pinpoints: "...it seems no human tragedy is off-limits to surveillance companies looking to sanitize their image."

Jack McDonald, a senior lecturer in war studies at King's College London who researches the relationship between ethics, law, technology, and war, sees the situation differently.

Some of the fears Brewster cites, for example, are far-fetched. "They're probably not going to be executing people at checkpoints." If facial recognition finds a match in those situations, they'll more likely make an arrest and do a search. "If that helps them to do this, there's a very good case for it, because Russia does appear to be flooding the country with saboteurs." Cases of misidentification will be important, he agrees, but consider the scale of harm in the conflict itself.

McDonald notes, however, that the use of biometrics to identify refugees is an entirely different matter and poses huge problems. "They're two different contexts, even though they're happening in the same space."

That leaves the use Ukraine appears to be most interested in: identifying dead bodies. This, McDonald explains, represents a profound change from established norms, under which identification of the dead is embedded in social and institutional structures and has typically been closely guarded. Facial recognition offers the possibility of doing identification at scale, even though its standard of certainty is much lower. Either way, the people making the identification typically have to rely on photographs taken elsewhere in other contexts, along with dental records and, if all else fails, public postings.

The reality of social media is already changing the norms. In this first month of the war, Twitter users posting pictures of captured Russian soldiers are typically reminded that it is technically against the Geneva Convention to do so. The extensive documentation - video clips, images, first-person reports - that is being posted from the conflict zones on services like TikTok and Twitter is a second front in its own right. In the information war, using facial recognition to identify the dead is strategic.

This is particularly true because of censorship in Russia, where independent media have almost entirely shut down and citizens have only very limited access to foreign news. Dead bodies are among the only incontrovertible sources of information that can break through the official denials. The risk that inaccurate identification could fuel Russian propaganda remains, however.

Clearview remains an awful idea. But if I thought it would help save my country from being destroyed, would I care?


Illustrations: War damage in Mariupol, Ukraine (Ministry of Internal Affairs of Ukraine, via Wikimedia).


February 4, 2022

Consent spam

This week the system of adtech that constantly shoves banners in our faces demanding consent to use tracking cookies was ruled illegal by the Belgian Data Protection Authority, acting on behalf of 28 EU data protection authorities. The Internet Advertising Bureau, whose Transparency and Consent Framework formed the basis of the complaint that led to the decision, now has two months to redesign its system to bring it into compliance with the General Data Protection Regulation.

The ruling marks a new level of enforcement that could begin to see the law's potential fulfilled.

Ever since May 2018, when GDPR came into force, people have been complaining that so far all we've really gotten from it is bigger! worse! more annoying! cookie banners, while the invasiveness of the online advertising industry has done nothing but increase. In a May 2021 report, for example, Access Now examined the workings of GDPR and concluded that so far the law's potential had yet to be fulfilled and daily violations were going unpunished - and unchanged.

There have been fines, some of them eye-watering, such as Amazon's 2021 fine of $877 million for its failure to get proper consent for cookies. But even Austrian activist lawyer Max Schrems' repeated European court victories have so far failed to force structural change, despite requiring the US and EU to rethink the basis of allowing data transfers.

To "celebrate" last week's data protection day, Schrems documented the situation: since the first data protection laws were passed, enforcement has been rare. Schrems' NGO, noyb, has plenty of its own experience to draw on. Of the 51 individual cases noyb has filed in Europe since its founding in 2018, only 15% have been decided within a year, none of them pan-European. Four cases filed with the Irish DPA in May 2018, the day after GDPR came into force, have yet to be given a final decision.

Privacy International, which filed seven complaints against adtech companies in 2018, also has an enforcement timeline. Only one, against Experian, resulted in an investigation, and even in that case no action has been taken since Experian's appeal in 2021. A recent study of diet sites showed that they shared the sensitive information they collect with unspecified third parties, PI senior technologist Eliot Bendinelli told last week's Privacy Camp. PI's complaint has yet to be acted on, though it has led some companies to change their practices.

Bendinelli was speaking on a panel trying to learn from GDPR's enforcement issues in order to ensure better protection of fundamental rights from the EU's upcoming Digital Services Act. Among the complaints with respect to GDPR: the lack of deadlines to spur action and inconsistencies among the different national authorities.

The complaint at the heart of this week's judgment began in 2018, when Open Rights Group director Jim Killock, UCL researcher Michael Veale, and Irish Council on Civil Liberties senior fellow Johnny Ryan took the UK Information Commissioner's Office to court over the ICO's lack of action regarding real-time bidding, which the ICO itself had found illegal under the UK's Data Protection Act (2018), the UK's post-Brexit GDPR clone. In real-time bidding, your visit to a participating web page launches an instant mini-auction to find the advertiser willing to pay the most to fill the ad space you're about to see. Your value is determined by crunching all the data the site and its external sources have or can get about you.
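As a rough sketch of those mechanics - a drastic simplification of OpenRTB, with invented bidder names and prices - the auction amounts to: broadcast the user profile, collect a bid from each advertiser, and sell the ad slot to the highest bidder.

```python
# Toy model of a real-time-bidding auction. Real exchanges send JSON bid
# requests over HTTP and settle everything in tens of milliseconds; the
# bidders and pricing rules below are invented for illustration.

def run_auction(user_profile, bidders):
    """Each bidder prices the impression from the user profile;
    the highest bid wins the ad slot."""
    bids = [(bidder(user_profile), name) for name, bidder in bidders.items()]
    price, winner = max(bids)
    return winner, price

profile = {"age_band": "25-34", "recent_searches": ["running shoes"]}

bidders = {
    "shoe_brand": lambda p: 2.50 if "running shoes" in p["recent_searches"] else 0.10,
    "car_maker": lambda p: 0.40,
    "grocer": lambda p: 0.25,
}

winner, price = run_auction(profile, bidders)
print(winner, price)  # -> shoe_brand 2.5
```

The point the sketch makes is that your value to the winning bidder depends entirely on the profile data assembled about you before the page even finishes loading.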

If all this sounds like it oughtta be illegal under GDPR, well, yes. Enter the IAB's TCF, which extracts your permission via those cookie consent banners. With many of these, dark-pattern design makes "consent" instant and rejection painfully slow. The Big Tech sites, of course, handle all this by using logins; you agree to the terms and conditions when you create your account and then helpfully forget how much they learn about you every time you use the site.

In December 2021, the UK's Upper Tribunal refused to require the ICO to reopen the complaint, though it did award Killock and Veale concessions they hope will make the ICO more accountable in future.

And so back to this week's judgment that the IAB's TCF, which is used on 80% of the European Internet, is illegal. The Irish DPA is also investigating Google's similar system, as well as Quantcast's consent management system. On Twitter, Ryan explained the gist: cookie-consent pop-ups don't give publishers adequate user consent, and everyone must delete all the data they've collected.

Ryan and the Open Rights Group also point out that the judgment spikes the UK government's claim that revamping data protection law is necessary to get rid of cookie banners (at the expense of some of the human rights enshrined in the law). Ryan points to DuckDuckGo as an example of the non-invasive alternative: contextual advertising. He also observed that all that "consent spam" makes GDPR into merely "compliance theater".

Meanwhile, other moves are also making their mark. Also this week, Facebook (Meta)'s latest earnings showed that Apple's new privacy controls, which let users opt out of tracking, will cost it $10 billion this year. Apparently 75% of Apple users opt out.

Moral: given the tools and a supportive legal environment, people will choose privacy.

Illustrations: Diagram of OpenRTB, from the Belgian decision.


January 21, 2022

Power plays

We are still catching up on updates and trends.

Two days before the self-imposed deadline, someone blinked in the game of financial chicken between Amazon UK and Visa. We don't know which one it was, but on January 17 Amazon said it wouldn't stop accepting Visa credit cards after all. Negotiations are reportedly ongoing.

Ostensibly, the dispute was about the size of Visa's transaction fees. At Quartz, Ananya Bhattacharya quotes Banked.com's Ben Goodall's alternative explanation: the dispute allowed Amazon to suck up a load of new data that will help it build "the super checkout for the future". For Visa, she concludes, resolving the dispute has relatively little value beyond PR: Amazon accounts for only 1% of its UK credit card volume. For the rest of us, it remains disturbing that our interests matter so little. If you want proof of market dominance, look no further.

In June 2021, the Federal Trade Commission tried to bring an antitrust suit against Facebook, and lost when the court ruled that in its complaint the FTC had failed to prove its most basic assumption: that Facebook had a dominant market position. Facebook was awarded the dismissal it requested. This week, however, the same judge ruled that the FTC's amended complaint, which was filed in August, will be allowed to go ahead, though he suggests in his opinion that the FTC will struggle to substantiate some of its claims. Essentially, the FTC accuses Facebook of a "buy or bury" policy when faced with a new and innovative competitor, saying it needed to make up for its own inability to adapt to the mobile world.

We will know if Facebook (or its newly-renamed holding company owner, Meta) is worried if it starts claiming that damaging the company is bad for America. This approach began as satire, Robert Heller explained in his 1994 book The Fate of IBM. Heller cites a 1990 PC Magazine column by William E. Zachmann, who used it as the last step in an escalating list of how the "IBMpire" would respond to antitrust allegations.

This week, Google came close to a real-life copy in a blog posting opposing an amendment to the antitrust bill currently going through the US Congress. The goal behind the bill is to make it easier for smaller companies to compete by prohibiting the major platforms from advantaging their own products and services. Google argues, however, that if the bill goes through Americans might get worse service from Google's products, American technology companies could be placed at a competitive disadvantage, and America's national security could be threatened. Instead of suggesting ways to improve the bill, however, Google concludes with the advice that Congress should delay the whole thing.

To be fair, Google isn't the only one that dislikes the bill. Apple argues its provisions might make it harder for users to opt out of unwanted monitoring. Free Press Action argues that it will make it harder to combat online misinformation and hate speech by banning the platforms from "discriminating" against "similarly situated businesses" (the bill's language), competitor or not. EFF, on the other hand, thinks copyright is a bigger competition issue. All better points than Google's.

A secondary concern is the fact that these US actions are likely to leave the technology companies untouched in the rest of the world. In Africa, Nesrine Malik writes at the Guardian, Facebook is indispensable and the only Internet most people know because its zero-rating allows its free use outside of (expensive) data plans. Most African Internet users are mobile-only, and most data users are on pay-as-you-go plans. So while Westerners deleting their accounts is a real threat to the company's future - not least because, as Frances Haugen testified, they produce the most revenue - the company owns the market in Africa. There, it is literally the only game in town for both businesses and individuals. Twenty-five years ago, we thought the Internet would be a vehicle for exporting the First Amendment. Instead...

Much of the discussion about online misinformation focuses on content moderation. In a new report the Royal Society asks how to create a better information environment. Despite the harm misinformation does, the report comes down against simply removing it. Like Charles Arthur in his 2021 book Social Warming, the report's authors argue for slowing its spread by various methods - adding friction to social media sharing, reconfiguring algorithms, in a few cases de-platforming superspreaders. I like the scientists' conclusion that simple removal doesn't work; in science you must show your work, and deletion fuels conspiracy theories. During this pandemic, Twitter has been spectacular at making it possible to watch scientists grapple with uncertainty in real time.

The report also disputes some of our longstanding ideas about how online interaction works. A literature review finds that the filter bubbles and echo chambers Eli Pariser posited in 2011 are less important than we generally think. Instead most people have "relatively diverse media diets" and the minority who "inhabit politically partisan online news echo chambers" is about 6% to 8% of users.

Keeping it that way, however, depends on having choices, which leads back to these antitrust cases. The bigger and more powerful the platforms are, the less we - as both individuals and societies - matter to them.


Illustrations: The Thames at an unusually quiet moment, in January 2022.


November 12, 2021

Third wave

It seems like only yesterday that we were hearing that Web 2.0 was the new operating system of the Internet. Pause to look it up. It was 2008, in the short window between the founding of today's social media giants (2004-2006) and their smartphone-accelerated explosion (2010).

This week a random tweet led me to discover Web3. As Aaron Mak explains at Slate, "Web3" is an idea for running a next-generation Internet on public blockchains in the interests of decentralization (which net.wars has long advocated). To date, the aspect getting the most attention is decentralized finance (DeFi, or, per Mary Branscombe, deforestation finance), a plan for bypassing banks and governments by conducting financial transactions on the blockchain.

At freeCodeCamp, Nader Dabit goes into more of the technical underpinnings. At Fabric Ventures (Medium), Max Mersch and Richard Muirhead explain its importance. Web3 will bring a "borderless and frictionless" native payment layer (upending mediator businesses like Paypal and Square), bring the "token economy" to support new businesses (upending venture capitalists), and tie individual identity to wallets (bypassing authentication services like OAuth, email plus password, and technology giant logins), thereby enabling multiple identities, among other things. Also interesting is the Cloudflare blog, where Thibault Meunier states that as a peer-to-peer system Web3 will use cryptographic identifiers and allow users to selectively share their personal data at their discretion. Some of this - chiefly the robustness of avoiding central points of failure - is a return to the Internet's original design goals.

Standards-setter W3C is working on at least one aspect - cryptographically verifiable Decentralized Identifiers - and it's running into opposition from Google, Apple, and Mozilla, whose browsers control 87% of the market.
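For the curious, the W3C DID Core specification defines an identifier's basic shape as "did:" plus a method name plus a method-specific identifier. A minimal sketch of a syntax check is below; note the regex is a deliberate simplification of the spec's full ABNF (which also covers percent-encoding and other details), not a conformant parser.

```python
import re

# Simplified check of W3C DID syntax: "did:" + a lowercase method
# name + ":" + a method-specific identifier. The real ABNF also
# allows percent-encoding; this regex is a rough sketch only.
DID_RE = re.compile(r"^did:[a-z0-9]+:[A-Za-z0-9._:-]+$")

def is_did(s):
    return bool(DID_RE.match(s))

print(is_did("did:example:123456789abcdefghi"))  # True - the spec's own example
print(is_did("https://example.com/profile"))     # False - a URL, not a DID
```

The point of the three-part shape is that resolution is delegated: the method name tells software *how* to look up the identifier, without any central registry - which is exactly what makes browser vendors nervous.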

Let's review a little history.

The 20th century Internet was sorta, kinda decentralized, but not as much as people like to think. The technical and practical difficulties of running your own server at home fueled the growth of portals and web farms to do the heavy lifting. Web design went from plain text to hosted platforms; see, for example, LiveJournal and Blogspot (now owned by Google). You can argue about how exactly it was that a lot of blogs died off circa 2010, but I'd blame Twitter: writers found it easier to craft a sentence or two and skip writing the hundreds of words that make a blog post. Tim O'Reilly and Clay Shirky described the new era as interactive, and moving control "up the stack" from web browsers and servers to the services they enabled. Data, O'Reilly predicted, was the key enabler, and the "long tail" of niche sites and markets would be the winner. He was right about data, and largely wrong about the long tail. He was also right about this: "Network effects from user contributions are the key to market dominance in the Web 2.0 era." Nearly 15 years later, today's web feels like a landscape of walled cities encroaching on all the public pathways leading between them.

Point Network (Medium) has a slightly different version of this history; they call Web 1.0 the "read-only web"; Web 2.0 the "server/cloud-based social Web", and Web3 the "decentralized web".

The pattern here is that every phase began with a "Cambrian" explosion of small sites and businesses and ended with a consolidated and centralized ecosystem of large businesses that have eaten or killed everyone else. The largest may now be so big that they can overwhelm further development to ensure their future dominance; at least, that's one way of looking at Mark Zuckerberg's metaverse plan.

So the most logical outcome from Web3 is not the pendulum swing back to decentralization that we may hope, but a new iteration of the existing pattern, which is at least partly the result of network effects. The developing plans will have lots of enemies, not least governments, who are alert to anything that enables mass tax evasion. But the bigger issue is the difficulty of becoming a creator. TikTok is kicking ass, according to Chris Stokel-Walker, because it makes it extremely easy for users to edit and enhance their videos.

I spy five hard problems. One: simplicity and ease of use. If it's too hard, inconvenient, or expensive for people to participate as equals, they will turn to centralized mediators. Two: interoperability and interconnection. Right now, anyone wishing to escape the centralization of social media can set up a Discord or Mastodon server, yet these remain decidedly minority pastimes because you can't message from them to your friends on services like Facebook, WhatsApp, Snapchat, or TikTok. A decentralized web in which it's hard to reach your friends is dead on arrival. Three: financial incentives. It doesn't matter if it's venture capitalists or hundreds of thousands of investors each putting up $10, they want returns. As a rule of thumb, decentralized ecosystems benefit all of society; centralized ones benefit oligarchs - so investment flows to centralized systems. Four: sustainability. Five: how do we escape the power law of network effects?

Gloomy prognostications aside, I hope Web3 changes everything, because in terms of its design goals, Web 2.0 has been a bust.


Illustrations: Tag cloud from 2007 of Web 2.0 themes (Markus Angermeier and Luca Cremonini, via Wikimedia).


October 15, 2021

The future is hybrid

Every longstanding annual event-turned-virtual these days has a certain tension.

"Next year, we'll be able to see each other in person!" says the host, beaming with hope. Excited nods in the Zoom windows and exclamation points in the chat.

Unnoticed, about a third of the attendees wince. They're the folks in Alaska, New Zealand, or Israel, who in normal times would struggle to attend this event in Miami, Washington DC, or London because of costs or logistics.

"We'll be able to hug!" the hosts say longingly.

Those of us who are otherwhere hear, "It was nice having you visit. Hope the rest of your life goes well."

When those hosts are reminded of this geographical disparity, they immediately say how much they'd hate to lose the new international connections all these virtual events have fostered and the networks they have built. Of course they do. And they mean it.

"We're thinking about how to do a hybrid event," they say, still hopefully.

At one recent event, however, it was clear that hybrid won't be possible without considerable alterations to the event as it's historically been conducted - at a rural retreat, with wifi available only in the facility's main building. With concurrent sessions in probably six different rooms and only one with the basic capability to support remote participants, it's clear that there's a problem. No one wants to abandon the place they've used every year for decades. So: what then? Hybrid in just that one room? Push the facility whose selling point is its woodsy distance from modern life to upgrade its broadband connections? Bring a load of routers and repeaters and rig up a system for the weekend? Create clusters of attendees in different locations and do node-to-node Zoom calls? Send each remote participant a hugging pillow and a note saying, "Wish you were here"?

I am convinced that the future is hybrid events, if only because businesses sound so reluctant to resume paying for so much international travel, but the how is going to take a lot of thought, collaboration, and customization.

***

Recent events suggest that the technology companies' own employees are a bigger threat to business-as-usual than pending regulation and legislation. Facebook has had two major whistleblowers - Sophie Zhang and Frances Haugen - in the last year, and basically everyone wants to fix the site's governance. But Facebook is not alone...

At Uber, a California court ruled in August that drivers are employees; a black British driver has filed a legal action complaining that Uber's driver identification face-matching algorithm is racist; and Kenyan drivers are suing over contract changes they say have cut their takehome pay to unsustainably low levels.

Meanwhile, at Google and Amazon, workers are demanding the companies pull out of contracts with the Israeli military. At Amazon India, a whistleblower has handed Reuters documents showing the company has exploited internal data to copy marketplace sellers' products and rig its search engine to display its own versions first. *And* Amazon's warehouse workers continue to consider unionizing - and some cities back them.

Unfortunately, the legislation being proposed in the US, UK, New Zealand, and Canada is *also* a bigger threat to the rest of the Internet than to the big technology companies. For example, in reading the US legislation Mike Masnick finds intractable First Amendment problems. Last week I liked the idea of focusing on the content social media companies' algorithms amplify, but Masnick persuasively argues it's not so simple, citing Daphne Keller, who has thought more critically about the First Amendment problems that will arise in implementing that idea.

***

The governor of Missouri, Mike Parson, has accused Josh Renaud, a journalist with the St Louis Post-Dispatch, of hacking into a government website to view several teachers' social security numbers. From the governor's description, it sounds like Renaud hit CTRL-U or F12, looked at the HTML code, saw startlingly personal data, and decided correctly that the security flaw was newsworthy. (He also responsibly didn't publish his article until he had notified the website administrators and they had fixed the issue.)

Parson disagrees about the legitimacy of all this, and has called for a criminal investigation into this incident of "hacking" (see also scraping). The ability to view the code that makes up a web page and tells the browser how to display it is a crucial building block of the web; when it was young and there were no instruction manuals, that was how you learned to make your own page by copying. A few years ago, the Guardian even posted technical job ads in its pages' HTML code, where the right applicants would see them. No password, purloined or otherwise, is required. The code is just sitting there in plain sight on a publicly accessible server. If it weren't, your web page would not display.

Twenty-five years ago, I believed that by now governments would be filled with 30-somethings who grew up with computers and the 2000-era exploding Internet and could restrain this sort of overreaction. I am very unhappy to be wrong about this. And it's only going to get worse: today's teens are growing up with tablets, phones, and closed apps, not the open web that was designed to encourage every person to roll their own.


Illustrations: Exhibit from Ben Grosser's "Software for Less", reimagining Facebook alerts, at the Arebyte Gallery until end October.


October 8, 2021

The inside view

So many lessons, so little time.

We have learned a lot about Facebook in the last ten days, at least some of it new. Much of it is from a single source, the documents exfiltrated and published by Frances Haugen.

We knew - because Haugen is not the first to say so - that the company is driven by profits and a tendency to view its systemic problems as PR issues. We knew less about the math. One of the more novel points in Haugen's Senate testimony on Tuesday was her explanation of why Facebook will always be poorly moderated outside the US: safety does not scale. Safety costs the same for each new country Facebook adds - but each new country is also a progressively smaller market than the last. Consequence: the cost-benefit analysis fails. Currently, Haugen said, Facebook only covers 50 of the world's approximately 500 languages, and even in some of those cases the country does not have local experts to help understand the culture. What hope for the rest?

Additional data: at the New York Times, Kate Klonick checks Facebook's SEC filings to find that average revenue per North American user per *quarter* was $53.56 in the last quarter of 2020, compared to $16.87 for Europe, $4.05 for Asia, and $2.77 for the rest of the world. Therefore, Klonick said at In Lieu of Fun, most of its content moderation money is spent in the US, which has less than 10% of the service's users. All those revenue numbers dropped slightly in Q1 2021.

We knew that in some countries Facebook is the only Internet people can afford to access. We *thought* that it only represented a single point of failure in those countries. Now we know that when Facebook's routing goes down - its DNS and BGP routing were knocked out by a "maintenance error" - the damage can spread to other parts of the Internet. The whole point of the Internet was to provide communications in case of a bomb outage. This is bad.
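A toy model of why the damage spreads (a dict standing in for the global DNS; the addresses and the news site are invented, and real DNS/BGP behavior is far more involved): when the routes to a company's nameservers vanish, every third-party page that embeds one of its resources degrades too.

```python
# Toy resolver: a dict standing in for the global DNS. Withdrawing
# the BGP routes to the nameservers is modeled as deleting entries;
# resolution then fails for every name they served.
dns = {
    "facebook.com": "157.240.0.35",        # illustrative address only
    "connect.facebook.net": "157.240.0.18",
    "example-news-site.com": "93.184.216.34",
}

def resolve(name):
    try:
        return dns[name]
    except KeyError:
        raise LookupError(f"SERVFAIL: {name}")

def load_page(page_deps):
    """Return the embedded dependencies that fail to resolve."""
    return [d for d in page_deps if d not in dns]

# Before the outage: a news site embedding a Facebook widget works.
assert resolve("facebook.com") == "157.240.0.35"
assert load_page(["example-news-site.com", "connect.facebook.net"]) == []

# "Maintenance error": the nameservers become unreachable.
for name in list(dns):
    if "facebook" in name:
        del dns[name]

# Now the third-party page degrades too - the damage spreads
# beyond Facebook's own properties.
print(load_page(["example-news-site.com", "connect.facebook.net"]))
```

The sketch also shows why retries make things worse: every client that gets SERVFAIL asks again, hammering the resolvers that are still up.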

As a corollary, the panic over losing connections to friends and customers even in countries where social pressure, not data plans, ties people to Facebook is a sign of monopoly. Haugen, like Kevin Roose in the New York Times, sees signs of desperation in the documents she leaked. This company knows its most profitable audiences are aging; Facebook is now for "old people". The tweens are over at Snapchat, TikTok, and even Telegram, which added 70 million signups in the six hours Facebook was out.

We already knew Facebook's business model was toxic, a problem it shares with numerous other data-driven companies not currently in the spotlight. A key difference: Zuckerberg's unassailable control of his company's voting shares. The eight SEC complaints Haugen has filed are the first potential dent in that.

Like Matt Stoller, I appreciate a lot of Haugen's ideas for remediation: pushing people to open links before sharing, and modifying Section 230 to make platforms responsible for their algorithmic amplification, an idea also suggested by fellow data scientist Roddy Lindsay and British technology journalist Charles Arthur in his new book, Social Warming. For Stoller, these are just tweaks to how Facebook works. Haugen says she wants to "save" Facebook, not harm it. Neither her changes nor Zuckerberg's call for government regulation touch its concentrated power. Stoller wants "radical decentralization". Arthur wants to cap social network size.

One fundamental mistake may be to think of Facebook as *a* monopoly rather than several at once. As an economic monopoly, businesses all over the world depend on Facebook and subsidiaries to reach their customers, and advertisers have nowhere else to go. Despite last year's pledged advertising boycott over hate speech on Facebook, since Haugen's revelations began, advertisers have been notably silent. As a social monopoly, Facebook's outage was disastrous in regions where both humanitarians and vulnerable people rely on it for lifesaving connections; in richer countries, the inertia of established connections leaves Facebook in control of large swaths of our social and community networks. This week taught us that its size also threatens infrastructure. Each of these calls for a different approach.

Stoller has several suggestions for crashing Facebook's monopoly power, one of which is to ban surveillance advertising. But he rejects regulation and downplays the crucial element of interoperability; create a standard so that messaging can flow between platforms, and you've dismantled customer lock-in. The result would be much more like the decentralized Internet of the 1990s.
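What would such a messaging standard look like in practice? A hypothetical sketch: each platform keeps its internal format but must convert to and from a shared interchange schema. Every platform, format, and field name below is invented for illustration.

```python
# Hypothetical sketch of interoperable messaging: platforms keep
# their own formats but convert through a common schema. All
# names here are invented; no real platform API is implied.

def from_platform_a(msg):
    # "Platform A" (invented) nests the author under "meta".
    return {"sender": msg["meta"]["author"],
            "recipient": msg["to"],
            "text": msg["body"]}

def to_platform_b(common):
    # "Platform B" (invented) uses handle/target/content.
    return {"handle": common["sender"],
            "target": common["recipient"],
            "content": common["text"]}

a_msg = {"meta": {"author": "alice"}, "to": "bob", "body": "hi"}
b_msg = to_platform_b(from_platform_a(a_msg))
print(b_msg)  # {'handle': 'alice', 'target': 'bob', 'content': 'hi'}
```

The design point: with a common schema, each of n platforms writes two converters instead of n-1 pairwise translators, and no single platform's format becomes the lock-in point.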

Greater transparency would help; just two months ago Facebook shut down independent research into content interactions and its political advertising - and tried to blame the Federal Trade Commission.

This is *not* a lesson. Whatever we have learned, Mark Zuckerberg has not. At CNN, Donie O'Sullivan fact-checks Zuckerberg's response.

A day after Haugen's testimony, Zuckerberg wrote (on Facebook, requiring a login): "I think most of us just don't recognize the false picture of the company that is being painted." Cue Robert Burns: "O wad some Pow'r the giftie gie us | To see oursels as ithers see us!" But really, how blinkered do you have to be to not recognize that if your motto is Move fast and break things people are going to blame you for the broken stuff everywhere?


Illustrations: Slide showing revenue by Facebook user geography from its Q1 2021 SEC filing.


April 23, 2021

Fast, free, and frictionless

"I want solutions," Sinan Aral challenged at yesterday's Social Media Summit, "not a restatement of the problems". Don't we all? How many person-millennia have we spent laying out the issues of misinformation, disinformation, harassment, polarization, platform power, monopoly, algorithms, accountability, and transparency? Most of these have been debated for decades. The big additions of the last decade are the privatization of public speech via monopolistic social media platforms, the vastly increased scale, and the transmigration from purely virtual into physical-world crises like the January 6 Capitol Hill invasion and people refusing vaccinations in the middle of a pandemic.

Aral, who leads the MIT Initiative on the Digital Economy and is author of the new book The Hype Machine, chose his panelists well enough that some actually did offer some actionable ideas.

The issues, as Aral said, are all interlinked (see also 20 years of net.wars). Maria Ressa connected the spread of misinformation to system design that enables distribution and amplification at scale. These systems are entirely opaque to us even while we are open books to them, as Guardian journalist Carole Cadwalladr noted, adding that while US press outrage is the only pressure that moves Facebook to respond, it no longer even acknowledges questions from anyone at her newspaper. Cadwalladr also highlighted the Securities and Exchange Commission's complaint, which says clearly: Facebook misled journalists and investors. This dismissive attitude also shows in the leaked email in which Facebook plans to "normalize" the leak of 533 million users' data.

This level of arrogance is the result of concentrated power, and countering it will require antitrust action. That in turn leads back to questions of design and free speech: what can we constrain while respecting the First Amendment? Where is the demarcation line between free speech and speech that, like crying "Fire!" in a crowded theater, can reasonably be regulated? "In technology, design precedes everything," Roger McNamee said; real change for platforms at global or national scale means putting policy first. His Exhibit A of the level of cultural change that's needed was February's fad, Clubhouse: "It's a brand-new product that replicates the worst of everything."

In his book, Aral opposes breaking up social media companies as was done in cases such as Standard Oil and AT&T. Zephyr Teachout agreed in seeing breakup, whether horizontal (Facebook divests WhatsApp and Instagram, for example) or vertical (Google forced to sell Maps), as just one tool.

The question, as Joshua Gans said, is, what is the desired outcome? As Federal Trade Commission nominee Lina Khan wrote in 2017, assessing competition by the effect on consumer pricing is not applicable to today's "pay-with-data-but-not-cash" services. Gans favors interoperability, saying it's crucial to restoring consumers' lost choice. Lock-in is your inability to get others to follow when you want to leave a service, a problem interoperability solves. Yes, platforms say interoperability is too difficult and expensive - but so did the railways and telephone companies, once. Break-ups were a better option, Albert Wenger added, when infrastructures varied; today's universal computers and data mean copying is always an option.

Unwinding Facebook's acquisition of WhatsApp and Instagram sounds simple, but do we want three data hogs instead of one, like cutting off one of the Lernaean Hydra's heads? One idea that emerged repeatedly is slowing "fast, free, and frictionless"; Yael Eisenstat wondered why we allow technology to experiment at global scale while policy must be painfully perfected before it can act.

MEP Marietje Schaake (Democrats 66-NL) explained the EU's proposed Digital Markets Act, which aims to improve fairness by preempting the too-long process of punishing bad behavior by setting rules and responsibilities. Current proposals would bar platforms from combining user data from multiple sources without permission; self-preferencing; and spying (say, Amazon exploiting marketplace sellers' data), and would require data portability and interoperability for ancillary services such as third-party payments.

The difficulty with data portability, as Ian Brown said recently, is that even services that let you download your data offer no way to use data you upload. I can't add the downloaded data from my current electric utility account to the one I switch to, or send my Twitter feed to my Facebook account. Teachout finds that interoperability isn't enough because "You still have acquire, copy, kill" and lock-in via existing contracts. Wenger argued that the real goal is not interoperability but programmability, citing open banking as a working example. That is also the open web, where a third party can write an ad blocker for my browser, but Facebook, Google, and Apple built walled gardens. As Jared Sine told this week's antitrust hearing, "They have taken the Internet and moved it into the app stores."

Real change will require all four of the levers Aral discusses in his book, money, code, norms, and laws - which Lawrence Lessig's 1996 book, Code and Other Laws of Cyberspace called market, software architecture, norms, and laws - pulling together. The national commission on democracy and technology Aral is calling for will have to be very broadly constituted in terms of disciplines and national representation. As Safiya Noble said, diversifying the engineers in development teams is important, but not enough: we need "people who know society and the implications of technologies" at the design stage.


Illustrations: Sinan Aral, hosting the summit.


March 26, 2021

Curating the curators

One of the longest-running conflicts on the Internet surrounds whether and what restrictions should be applied to the content people post. These days, those rules are known as "platform governance", and this week saw the first conference by that name. In the background, three of the big four CEOs returned to Congress for more questioning; the EU is planning the Digital Services Act; the US looks serious about antitrust action; debate about revising Section 230 of the Communications Decency Act continues even though few understand what it does; and the UK continues to push "online harms".

The most interesting thing about the Platform Governance conference is how narrow it makes those debates look. The second-most interesting thing: it was not a law conference!

For one thing, which platforms? Twitter may be the most-studied, partly because journalists and academics use it themselves and data is more available; YouTube, Facebook, and subsidiaries WhatsApp and Instagram are the most complained-about. The discussion here included not only those three but less "platformy" things like Reddit, Tumblr, Amazon's livestreaming subsidiary Twitch, games like Roblox, India's ShareChat, labor platforms UpWork and Fiverr, edX, and even VPN apps. It's unlikely that the problems of Facebook, YouTube, and Twitter that governments obsess over are limited to them; they're just the most visible and, especially, the most *here*. Granting differences in local culture, business model, purpose, and platform design, human behavior doesn't vary that much.

For example, Jenny Domino reminded - again - that the behaviors now sparking debates in the West are not new or unique to this part of the world. What most agree *almost* happened in the US on January 6 *actually* happened in Myanmar with far less scrutiny despite a 2018 UN fact-finding mission that highlighted Facebook's role in spreading hate. We've heard this sort of story before, regarding Cambridge Analytica. In Myanmar and, as Sandeep Mertia said, India, the Internet of the 1990s never existed. Facebook is the only "Internet". Mertia's "next billion users" won't use email or the web; they'll go straight to WhatsApp or a local or newer equivalent, and stay there.

Mehitabel Glenhaber, whose focus was Twitch, used it to illustrate another way our usual discussions are too limited: "Moderation can escape all up and down the stack," she said. Near the bottom of the "stack" of layers of service, after the January 6 Capitol invasion Amazon denied hosting services to the right-wing chat app Parler; higher up the stack, Apple and Google removed Parler's app from their app stores. On Twitch, Glenhaber found a conflict between the site's moderation decision and the handling of that decision by two browser extensions that replace text with graphics, one of which honored the site's ruling and one of which overturned it. I had never thought of ad blockers as content moderators before, but of course they are, and few of us examine them in detail.

Separately, in a recent lecture on the impact of low-cost technical infrastructure, Cambridge security engineer Ross Anderson also brought up the importance of the power to exclude. Most often, he said, social exclusion matters more than technical exclusion: taking out a scammer's email address and disrupting their whole social network is more effective than taking down their more easily replaced website. That's an important principle if we look at misinformation as a form of cybersecurity challenge - as we should.

One recurring frustration is our general lack of access to the insider view of what's actually happening. Alice Marwick is finding from interviews that members of Trust and Safety teams at various companies have a better and broader view of online abuse than even those who experience it. Their data suggests that, rather than being gender-specific, harassment affects all groups of people; in niche groups the forms disagreements take can be obscure to outsiders. Most important, each platform's affordances are different; you cannot generalize from a peer-to-peer site like Facebook or Twitter to Twitch or YouTube, where the site's relationships are less equal and more creator-fan.

A final limitation in how we think about platforms and abuse is that the options are so limited: a user is banned or not, content stays up or is taken down. We never think, Sarita Schoenebeck said, about other mechanisms or alternatives to criminal justice such as reparative or restorative justice. "Who has been harmed?" she asked. "What do they need? Whose obligation is it to meet that need?" And, she added later, who is in power in platform governance, and what harms have they overlooked and how?

In considering that sort of issue, Bharath Ganesh found three separate logics in his tour through platform racism and the governance of extremism: platform, social media, and free speech. Mark Zuckerberg offers a prime example of the last: the Silicon Valley libertarian insistence that the marketplace of ideas will solve any problems, and that sees the First Amendment freedom of expression as an absolute right, not one that must be balanced against others - such as "freedom from fear". Watching the end of yesterday's Congressional hearings after the conference closed, you couldn't help thinking about that as Zuckerberg embarked on yet another pile of self-serving "Congressman..." rather than the simple "yes or no" he was asked to deliver.


Illustrations: Mark Zuckerberg, testifying in Congress on March 25, 2021.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 15, 2021

One thousand

net.wars-the-book.gifIn many ways, this 1,000th net.wars column is much like the first (the count is somewhat artificial, since net.wars began as a 1998 book, itself presaged by four years of news analysis pieces for the Daily Telegraph and followed by another book in 2001...and a lot of my other writing also fits under "computers, freedom, and privacy"; *however*). That November 2001 column was sparked by former Home Office minister Jack Straw's smug assertion that after 9/11 those of us who had defended access to strong cryptography must be feeling "naive". Here, just over a week after the Capitol invasion, three long-running issues are pertinent: censorship; security and the intelligence failures that enabled the attack; and human rights when demands for increased surveillance capabilities surface, as they surely will.

Censorship first. The US First Amendment only applies to US governments (a point that apparently requires repeating). Under US law, private companies can impose their own terms of service. Most people expected Twitter would suspend Donald Trump's account approximately one second after he ceased being a world leader. Trump's incitement of the invasion moved that up, and led Facebook, including its subsidiaries Instagram and WhatsApp, Snapchat, and, a week after the others, YouTube to follow suit. Less noticeably, a Salesforce-owned email marketing company ceased distributing emails from the Republican National Committee.

None of these social media sites is a "public square", especially outside the US, where they've often ignored local concerns. They are effectively shopping malls, and ejecting Trump is the same as throwing out any other troll. Trump's special status kept him active when many others were unjustly banned, but ultimately the most we can demand from these services is clearly stated rules, fairly and impartially enforced. This is a tough proposition, especially when you are dependent on social media-driven engagement.

Last week's insurrection was planned on numerous openly accessible sites, many of which are still live. After Twitter suspended 70,000 accounts linked to QAnon, numerous Republicans complaining they had lost followers seemed to be heading to Parler, a relatively new and rising alt-right Twitterish site backed by Rebekah Mercer, among others. Moving elsewhere is an obvious outcome of these bans, but in this crisis short-term disruption may be helpful. The cost will be longer-term adoption of channels that are harder to monitor.

By January 9 Apple was removing Parler from the App Store, quickly followed by Google Play (albeit less comprehensively, since Android allows side-loading). Amazon then kicked Parler off its host, Amazon Web Services. It is unknown when, if ever, the site will return.

Parler promptly sued Amazon claiming an antitrust violation. AWS retaliated with a crisp brief that detailed examples of the kinds of comments the site felt it was under no obligation to host and noted previous warnings.

Whether or not you think Parler should be squashed - stipulating that the imminent inauguration requires an emergency response - the fact remains that three large Silicon Valley platforms have combined to destroy a social media company. This is, as Jillian C. York, Corynne McSherry, and Danny O'Brien write at EFF, a more serious issue. The "free speech stack", they write, requires the cooperation of numerous layers of service providers and other companies. Twitter's decision to ban one - or 70,000 - accounts has limited impact; companies lower down the stack can ban whole populations. If you were disturbed in 2010, when, shortly after the diplomatic cables release, Paypal effectively defunded Wikileaks after Amazon booted it off its servers, then you should be disturbed now. These decisions are made at obscure layers of the Internet where we have little influence. As the Internet continues to centralize, we do not want just these few oligarchs making these globally significant decisions.

Security. Previous attacks - 9/11 in particular - led to profound damage to the sense of ownership with which people regard their cities. In the UK, the early 1990s saw the ease of walking into an office building vanish, replaced by demands for identification and appointments. The same happened in New York and some other US cities after 9/11. Meanwhile, CCTV monitoring proliferated. Within a year of 9/11, the US passed the PATRIOT Act, and the UK had put in place a series of expansions to surveillance powers.

Currently, residents report that Washington, DC is filled with troops and fences. Clearly, it can't stay that way permanently. But DC is highly unlikely to return to the openness of just ten days ago. There will be profound and permanent changes, starting with decreased access to government buildings. This will be Trump's most visible legacy.

Which leads to human rights. Among the videos of insurrectionists shocked to discover that the laws do apply to them were several in which prospective airline passengers discovered they'd been placed preemptively on the controversial no-fly list. Many others who congregated at the Capitol were on a (separate) terrorism watch list. If the post-9/11 period is any guide, the fact that the security agencies failed to connect any of the dots available to them into actionable intelligence will be elided in favor of insisting that they need more surveillance powers. Just remember: eventually, those powers will be used to surveil all the wrong people.


Illustrations: net.wars, the book at the beginning.


October 2, 2020

Searching for context

skyler-gundason-social-dilemma.pngIt's meant, I think, to be a horror movie. Unfortunately, Jeff Orlowski's The Social Dilemma comes across as too impressed with itself to scare as thoroughly as it would like.

The plot, such as it is: a group of Silicon Valley techies who have worked on Google, Facebook, Instagram, Palm (!), and so on present mea culpas. "I was co-inventor...of the Like button," Tristan Harris says by way of introduction. It seems such a small thing to include. I'm sure it wasn't that easy, but Slashdot was upvoting messages when Mark Zuckerberg was 14. The techies' thoughts are interspersed with those of outside critics. Intermittently, the film inserts illustrative scenarios using actors, a technique better handled in The Big Short. In these, Vincent Kartheiser plays a multiplicity of evil algorithmic masterminds doing their best to exploit their target, a fictional teenage boy (Skyler Gisondo) who has accepted the challenge of giving up his phone for a week with the predictable results of an addiction film. As he becomes paler and sweatier, you expect him to crash out in a grotty public toilet, like Julia Ormond's character in Traffik. Instead, he face-plants when the police arrest him at Charlottesville.

The first half of the movie is predominantly a compilation of favorite social media nightmares: teens are increasingly suffering from depression and other mental health issues; phone addiction is a serious problem; we are losing human connection; and so on. As so often, causality is unclear. The fact that these Silicon Valley types consciously sought to build addictive personal tracking and data crunching systems and change the world does not automatically tie every social problem to their products.

I say this because so much of this has a long history the movie needs for context. The too-much-screen-time of my childhood was TV, though my (older) parents worried far more about the intelligence-drainage perpetrated by comic books. Girls who now seek cosmetic surgery in order to look more like filter-enhanced Instagram images were preceded by girls who starved themselves to look like air-brushed, perfect models in teen magazines. Today's depressed girls could have been those profiled in Mary Pipher's 1994 Reviving Ophelia, and she, too, had forerunners. Claims about Internet addiction go back more than 20 years, and until very recently were focused on gaming. Finally, though data does show that teens are going out less, less interested in learning to drive, and having less sex and doing fewer drugs, is social media the cause or the compensation for a coincidental overall loss of physical freedom? Even pre-covid they were growing up into a precarious job market and a badly damaged planet; depression might just be the sane response.

In the second half the film moves on to consider social media divisions as an assault on democracy. Here, it's on firmer ground, but really only because the much better film The Great Hack has already exposed how Facebook (in particular) was used to spark violence and sway elections even before 2016. And then it wraps up: people are trapped, the companies have no incentive to change, and (says Jaron Lanier) the planet will die. As solutions, the film's many spokespeople suggest familiar ideas: regulation, taxation, withdrawal. Shoshana Zuboff is the most radical: outlaw them. (Please don't take Twitter! I learn so much from Twitter!)

"We are allowing technologists to frame this as a problem that they are equipped to solve," says data scientist Cathy O'Neil. "That's a lie." She goes on to say that AI can't distinguish truth. Even if it could, truth is not part of the owners' business model.

Fair enough, but remove Facebook and YouTube, and you still have Fox News, OANN, and the Daily Mail inciting anger and division with expertise honed over a century of journalistic training - and amoral world leaders. This week a study from Cornell University found that Donald Trump is implicated in 38% of the coronavirus misinformation circulating in online and traditional media. Knock out a few social media sites...and that still won't change, because his pulpit is too powerful.

Most of the film's speakers eventually close by recommending we delete our social media accounts. It seems a weak response, in part because the movie does a poor job of disentangling the dangers of algorithmic manipulation from the myriad different reasons why people use phones and social media: they listen to music, watch TV, connect with their friends, play games, take pictures, and navigate unfamiliar locations. It's absurd to ask them to give that up without suggesting alternatives for fulfilling those functions.

A better answer may be that offered this week by the 25-odd experts who have formed an independent Facebook oversight board (the actual oversight board Facebook announced months ago is still being set up and won't begin meeting until after the US presidential election). The expertise assembled is truly impressive, and I hope that, like the Independent SAGE group of scientists who have been pressuring the UK government into doing a better job on coronavirus, they will have a mind-focusing effect on our Facebook overlords, perhaps later to be copied for other sites. The problem - an aspect also omitted from The Social Dilemma - is that under the company's shareholder structure Zuckerberg is under no requirement to listen.


Illustrations: Skyler Gisondo as Ben, in The Social Dilemma.


September 11, 2020

Autofail

sfo-fires-hasbrouck.jpegA new complaint surfaced on Twitter this week. Anthony Ryan may have captured it best: "In San Francisco everyone is trying unsuccessfully to capture the hellish pall that we're waking up to this morning but our phone cameras desperately want everything to be normal." In other words: as in these pictures, the wildfires have turned the Bay Area sky dark orange ("like dusk on Mars," says one friend), but people attempting to capture it on their phone cameras are finding that the automated white-balance correction algorithms recalibrate the color to wash out the orange in favor of grey daylight.
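Phone imaging pipelines are proprietary, but the classic "gray-world" heuristic - which assumes the average color of any scene is neutral gray - illustrates the failure mode: hand it a frame that really is orange, and it dutifully rescales the channels until the cast disappears. A toy sketch (the pixel values are invented):

```python
import numpy as np

def gray_world(img):
    """Gray-world white balance: scale each RGB channel so its mean
    equals the overall mean, on the assumption the scene averages to gray."""
    means = img.reshape(-1, 3).mean(axis=0)   # per-channel means
    scale = means.mean() / means              # correction factor per channel
    return np.clip(img * scale, 0, 255)

# A 4x4 frame of uniform dark-orange "wildfire sky": R=200, G=100, B=30
sky = np.full((4, 4, 3), [200.0, 100.0, 30.0])
balanced = gray_world(sky)
print(balanced[0, 0])   # -> [110. 110. 110.]: the orange cast is "corrected" to gray
```

The algorithm is behaving exactly as designed; its design assumption - that heavy color casts are sensor error, not reality - is what fails under an orange sky.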

At least that's something the computer is actually doing, even if it's counter-productive. Also this week, the Guardian ran an editorial that it boasted had been "entirely" written by OpenAI's language generator, GPT-3. Here's what they mean by "written" and "entirely": the AI was given a word length, a theme, and the introduction, from which it produced eight unique essays, which the Guardian editors chopped up and pieced together into a single essay, which they then edited in the usual way, cutting lines and rearranging paragraphs as they saw fit. Trust me, human writers don't get to submit eight versions of anything; we'd be fired when the first one failed. But even if we did, editing, as any professional writer will tell you, is the most important part of writing anything. As I commented on Twitter, the whole thing sounds like a celebrity airily claiming she's written her new book herself, with "just some help with the organizing". I'd advise that celebrity (name withheld) to have a fire extinguisher ready for when her ghostwriter reads that and thinks of all the weeks they spent desperately rearranging giant piles of rambling tape transcripts into a (hopefully) compelling story.

The Twitter discussion of this little foray into "AI" briefly touched on copyright. It seems to me hard to argue that the AI is the author given the editors' recombination of its eight separately-generated pieces (which likely took longer than if one of them had simply written the piece). Perhaps you could say - if you're willing to overlook the humans who created, coded, and trained the AI - that the AI is the author of the eight pieces that became raw material for the essay. As things are, however, it seems clear that the Guardian is the copyright owner, just as it would be if the piece had been wholly staff-written (by humans).

Meanwhile, the fallout from Max Schrems' latest win continues to develop. The Irish Data Protection Authority has already issued a preliminary order to suspend data transfers to the US; Facebook is appealing. The Swiss data protection authority has issued a notice that the Swiss-US Privacy Shield is also void. During a September 3 hearing before the European Parliament Committee on Civil Liberties, Justice, and Home Affairs, MEP Sophie in't Veld said that by bringing the issue to the courts Schrems is doing the job data protection authorities should be doing themselves. All agreed that a workable - but this time "Schrems-proof" - solution must be found to the fundamental problem, which Gwendolyn Delbos-Corfield summed up as "how to make trade with a country that has decided to put mass surveillance as a rule in part of its business world". In't Veld appeared to sum up the entire group's feelings when she said, "There must be no Schrems III."

Of course we all knew that the UK was going to get caught in the middle between being able to trade with the EU, which requires a compatible data protection regime (either the continuation of the EU's GDPR or a regime that is ruled equal), and the US, which wants data to be free-flowing and which has been trying to use trade agreements to undermine the spread of data protection laws around the world (latest newcomer: Brazil). What I hadn't quite focused on (although it's been known for a while) is that, just like the US surveillance system, the UK's own surveillance regime could disqualify it from the adequacy ruling it needs to allow data to go on flowing. When the UK was an EU member state, this didn't arise as an issue because EU data protection law permits member states to claim exceptions for national security. Now that the UK is out, that exception no longer applies. It was a perk of being in the club.

Finally, the US Senate, not content with blocking literally hundreds of bills passed by the House of Representatives over the last few years, has followed up July's antitrust hearings with the GAFA CEOs with a bill that's apparently intended to answer Republican complaints that conservative voices are being silenced on social media. This is, as Eric Goldman points out in disgust, one of several dozen bits of legislation intended to modify various pieces of S230 or scrap it altogether. On Twitter, Tarleton Gillespie analyzes the silliness of this latest entrant into the fray. While modifying S230 is probably not the way to go about it, right now curbing online misinformation seems like a necessary move - especially since Facebook CEO Mark Zuckerberg has stated outright that Facebook won't remove anti-vaccine posts. Even in a pandemic.


Illustrations: The San Francisco sky on Wednesday ("full sun, no clouds, only smoke"), by Edward Hasbrouck; accurate color comparison from the San Francisco Fire Department.



May 29, 2020

Tweeted

sbisson-parrot-49487515926_0c97364f80_o.jpgAnyone who's ever run an online forum has at some point grappled with a prolific poster who deliberately spreads division, takes over every thread of conversation, and aims for outraged attention. When your forum is a few hundred people, one alcohol-soaked obsessive bent on suggesting that anyone arguing with him should have their shoes filled with cement before being dropped into the nearest river is enormously disruptive, but the decision you make about whether to ban, admonish, or delete their postings matters only to you and your forum members. When you are a public company, your forum is several hundred million people, and the poster is a world leader...oy.

Some US Democrats have been calling Donald Trump's outrage this week over having two tweets labeled with a fact-check an attempt to distract us all from the terrible death toll of the pandemic under his watch. While this may be true, it's also true that the tweets Trump is so fiercely defending form part of a sustained effort to spread misinformation that effectively acts as voter suppression for the upcoming November election. In the 12 hours since I wrote this column, Trump has signed an Executive Order to "prevent online censorship", and Twitter has hidden, for "glorifying violence", Trump tweets suggesting shooting protesters in Minneapolis. It's clear this situation will escalate over the coming week. Twitter has a difficult balance to maintain: it's important not to hide the US president's thoughts from the public, but it's equally important to hold the US president to the same standards that apply to everyone else. Of course he feels unfairly picked on.

Rewind to Tuesday. Twitter applied its recently-updated rules regarding election integrity by marking two of Donald Trump's tweets. The tweets claimed that conducting the November presidential election via postal ballots would inevitably mean electoral fraud. Trump, who moved his legal residence to Florida last year, voted by mail in the last election. So did I. Twitter added a small, blue line to the bottom of each tweet: "! Get the facts about mail-in ballots". The link leads to numerous articles debunking Trump's claim. At OneZero, Will Oremus explains Twitter's decision making process. By Wednesday, Trump was threatening to "shut them down" and sign an Executive Order on Thursday.

Thursday morning, a leaked draft of the proposed executive order had surfaced, and Daphne Keller had color-coded it to show which bits matter. In a fact-check of what power Trump actually has for Vox, Shirin Ghaffary quotes a tweet from Laurence Tribe, who calls Trump's threat "legally illiterate". Unlike Facebook, Twitter doesn't accept political ads that Trump can threaten to withdraw, and unlike Facebook and Google, Twitter is too small for an antitrust action. Plus, Trump is addicted to it. At the Washington Post, Tribe adds that Trump himself *is* violating the First Amendment by continuing to block people who criticize his views, a direct violation of a 2019 court order.

What Trump *can* do - and what he appears to intend to do - is push the FTC and Congress to tinker with Section 230 of the Communications Decency Act (1996), which protects online platforms from liability for third-party postings spreading lies and defamation. S230 is widely credited with having helped create the giant Internet businesses we have today; without liability protection, it's generally believed that everything from web comment boards to big social media platforms will become non-viable.

On Twitter, US Senator Ron Wyden (D-OR), one of S230's authors, explains what the law does and does not do. At the New York Times, Peter Baker and Daisuke Wakabayashi argue, I think correctly, that the person a Trump move to weaken S230 will hurt most is...Trump himself. Last month, the Washington Post put the count of Trump's "false or misleading claims" while in office at 18,000 - and the rate has grown over time. Probably most of them have been published on Twitter.

As the lawyer Carrie A. Goldberg points out on Twitter, there are two very different sets of issues surrounding S230. The victims she represents cannot sue the platforms where they met the serial rapists who preyed on them, or that continue to tolerate the revenge porn their exes have posted. Compare that very real damage to the victimhood conservatives are claiming: that the social media platforms are biased against them and disproportionately censor their posts. Goldberg wants access to justice for the victims she represents, who are genuinely harmed, and warns against altering S230 for purposes such as "to protect the right to spread conspiracy theory and misinformation".

However, while Goldberg's focus on her own clients is understandable, Trump's desire to tweet unimpeded about mail-in ballots or shooting protesters is not trivial. We are going to need to separate the issue of how and whether S230 should be updated from Trump's personal behavior and his clearly escalating war with the social medium that helped raise him from joke to viable presidential candidate. The S230 question and how it's handled in Congress is important. Calling out Trump when he flouts clearly stated rules is important. Trump's attempt to wield his power for a personal grudge is important. Trump versus Twitter, which unfortunately is much easier to write about, is a sideshow.


Illustrations: Drunk parrot in a Putney garden (by Simon Bisson; used by permission).


May 22, 2020

The pod exclusion

Vintage_Gloritone_Model_27_Cathedral-Tombstone_Style_Vacuum_Tube_Radio,_AM_Band,_TRF,_Circa_1930_(14663394535).jpgThis week it became plain that another bit of the Internet is moving toward the kind of commercialization and control the Internet was supposed to make difficult in the first place: podcasts. The announcement that one of the two most popular podcasts, the Joe Rogan Experience, will move both new episodes and its 11-year back catalogue to Spotify exclusively in a $100 million multiyear deal is clearly a step change. Spotify has also been buying up podcast networks, and at the Verge, Ashley Carman suggests the podcast world will bifurcate into twin ecosystems, Spotify versus Everyone Else.

Like a few hundred million other people, I am an occasional Rogan listener, my interest piqued by a web forum mention of his interview with Jeff Novitzky, the investigator in the BALCO doping scandal. Other worth-the-time interviews from his prolific output include Lawrence Lessig, epidemiologist Michael Osterholm (particularly valuable because of its early March timing), Andrew Yang, and Bernie Sanders. Parts of Twitter despise him; Rogan certainly likes to book people (usually, but not always, men - for example Roseanne Barr) who are being pilloried in the news and jointly chew over their situation. Even his highest-profile interviewees rarely find, anywhere else, the two to three hours Rogan spends letting them talk quietly about their thinking. He draws them out by not challenging them much, and his predilection for conspiracy theories and interest in unproven ideas about nutrition make it advisable to be selective and look for countervailing critiques.

It's about 20 years since I first read about Dave Winer's early experiments in "audio blogging", renamed "podcast" after the 2001 release of the iPod eclipsed all previously existing MP3 players. The earliest podcasts tended to be the typical early-stage is-this-thing-on? that leads the unimaginative to dismiss the potential. But people with skills honed in radio were obviously going to do better, and within a few years (to take one niche example) the skeptical world was seeing weekly podcasts like Skepchick (beginning 2005) and The Pod Delusion (2009-2014). By 2014, podcast networks were forming, and an estimated 20% of Americans were listening to podcasts at least once a month.

That era's podcasts, although high-quality, were - and in some cases still are - produced by people seeking to educate or promote a cause, and were not generally money-making enterprises in their own right. The change seems to have begun around 2010, as the accelerating rise of smartphones made podcasts as accessible as radio for mobile listening. I didn't notice until late 2016, when the veteran screenwriter and former radio announcer and DJ Ken Levine announced on his daily 11-year-old blog that he was starting up Hollywood & Levine and I discovered the ongoing influx of professional comedians, actors, and journalists into podcasting. Notably, they all carried ads for the same companies - at the minimum, SquareSpace and Blue Apron. Like old-time radio, these minimal-production ads were read by the host, sometimes making the whole affair feel uncomfortably fake. Per the Wall Street Journal, US advertising revenue from podcasting was $678.7 million last year, up 42% over 2018.

No wonder advertisers like podcasts: users can block ads on a website or read blog postings via RSS, but no matter how you listen to a podcast the ads remain in place, and if you, like most people, listen to podcasts (like radio) when your hands are occupied, you can't easily skip past them. For professional communicators, podcasts therefore provide direct access to revenues that blogging had begun to offer before it was subsumed by social media and targeted advertising.
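The mechanics explain why: a podcast feed hands every client the same enclosure URL, and the host-read ads live inside that one audio file. A minimal sketch (the feed and URL here are invented):

```python
import xml.etree.ElementTree as ET

# A minimal, made-up podcast RSS feed: an episode is just a pointer to one MP3.
FEED = """<rss version="2.0"><channel>
  <title>Example Podcast</title>
  <item>
    <title>Episode 1</title>
    <enclosure url="https://example.com/ep1.mp3" type="audio/mpeg"/>
  </item>
</channel></rss>"""

root = ET.fromstring(FEED)
episodes = {
    item.findtext("title"): item.find("enclosure").get("url")
    for item in root.iter("item")
}
print(episodes)  # {'Episode 1': 'https://example.com/ep1.mp3'}
# Every app, browser, and smart speaker resolves the same MP3; the ads are
# mixed into its audio, so there is no separate resource for a blocker to refuse.
```

Contrast a web page, where each ad is a distinct request an extension can simply decline to make.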

The Rogan deal seems a watershed moment that will take all this to a new level. The key element really isn't the money, as impressive as it sounds at first glance; it's the exclusive licensing. Rogan built his massive audience by publishing his podcast in both video and audio formats widely on multiple platforms, primarily his own websites and YouTube; go to any streaming site and you're likely to find it listed. Now, his audience is big enough that Spotify apparently thinks that paying for exclusivity will net the company new subscribers. If you prefer downloads to streaming, however, you'll need a premium subscription. Rogan himself apparently thinks he will lose no control over his show; he distrusts YouTube's censorship.

At his blog on corporate competition, Matt Stoller proclaims that the Rogan deal means the death of independent podcasting. While I agree that podcasts circa 2017-2020 are in a state similar to the web in the 2000s, I don't agree this means the death of all independent podcasting - but it will be much harder for their creators to find audiences and revenues as Spotify becomes the primary gatekeeper. This is what happened with blogs between 2008 and 2015 as social media took over.

Both Carman's and Stoller's predictions are grim: that podcasts will go the way of today's web and become a vector for data collection and targeted advertising. Carman, however, imagines some survival for a privacy-protecting, open ecosystem of podcasts. I want to believe this. But, like blogging now, that ecosystem will likely have to find a new business model.


Illustrations: 1930s vacuum tube radio (via Joe Haupte).


April 2, 2020

Uncontrolled digital unlending

800px-Books_HD_(8314929977).jpg
The Internet has made many aspects of intellectual property contentious at the best of times. In this global public health emergency, it seems inarguable that some of them should be set aside. Who can seriously object to copying ventilator parts so they can be used to save lives in this crisis? Similarly, if there were ever a moment for scientific journals to open up access to all paywalled research on coronaviruses to aid scientists all over the world, this is it.

But what about book authors, the vast majority of whom make only modest sums from their writing? This week, National Public Radio set off a Twitter storm when it highlighted the Internet Archive's "National Emergency Library". On Twitter, authors demanded to know why NPR was promoting a "pirate site". One wrote, "They stole [my book]." Another called it "Flagrant and wilful stealing." Some didn't mind: "Thrilled there's 15 of my books". Longtime open access campaigner Cory Doctorow endorsed it.

The background: the Internet Archive's Open Library originally launched in 2006 with a plan to give every page of every book its own URL. Early last year, public conflict over the project built enough for net.wars to notice, when dozens of authors', creators', and publishers' organizations accused the site of mass copyright violation and demanded it cease distributing copyrighted works without permission.

The Internet Archive finds self-justification in a novel argument: that because the state of California has accepted it as a library it can buy and scan books and "lend" the digital copies without requiring explicit permission. On this basis, the Archive offers anyone two weeks to read any of the 1.4 million copyrighted books in its collection either online as images or downloaded as copy-protected Adobe Digital Editions. Meanwhile, the book is unavailable to others, who wait on a list, as in a physical library. The Archive's white paper by lawyers David Hansen and Kyle K. Courtney argues that this "controlled digital lending" is legal.

Enter the coronavirus. On the basis that the emergency has removed access to library books from both school kids and adults for teaching, research, scholarship, and "intellectual stimulation", the Archive is dropping the controls - "suspending waitlists" - and is presenting those 1.4 million books as the globally accessible National Emergency Library. "An opportunistic attack", the Association of American Publishers calls it.

The anger directed at the Archive has led it to revise its FAQ (Google Doc) and publish a blog posting. In both it explains that you can still only "borrow" a book for 14 days, but no waitlists means others can, too, and you can renew immediately if you want more time. The change will last until June 30, 2020 or the end of the US national emergency, whichever is later. It claims support "from across the library and educational communities". According to the FAQ, the collection includes very few current textbooks; the collection is primarily ordinary books published between 1922 and the early 2000s.

The Archive still justifies all this as "fair use" by saying it's what libraries do: buy (or accept as donations) and lend books. Outside the US, however, library lending pays authors a small but real royalty on those loans, payments the Archive ignores. For the National Writers Union, Edward Hasbrouck objects strenuously: besides not paying authors or publishers, the Archive takes no account of whether the works are still in print or available elsewhere in authorized digital editions. Authors who have updated digital editions specifically for the current crisis have no way to annotate the holdings to redirect people. Authors *can* opt out - but opt-out is the opposite of how copyright law works. "Do librarians and archivists really want to kick authors while our incomes are down?" he asks, pointing to the NWU's 2019 explanation of why CDL is a harmful divergence from traditional library lending. Instead, he suggests that public funds should be spent to purchase or license the books for public use.

Other objectors make similar points: many authors make very little in the first place; authors with new books, the result of years of work, are seeing promotional tours and paid speaking engagements collapse. Others' books are being delayed or canceled. Everyone else involved in the project is being paid - just not the people who created the works in the first place.

At the New Yorker, writer Jill Lepore again cites Courtney, who argues that in exigent circumstances libraries have "superpowers" that allow them to grant exceptional access "for research, scholarship, and study". This certainly seems a reason for libraries of scientific journal articles, like JSTOR, to open up their archives. But is the Archive's collection comparable?

Overall, it seems to me there are two separate issues. The first is the service itself - the unique legal claim, the service's poor image quality and typo-ridden uncorrected ebooks, and the refusal to engage with creators and publishers. The second - that it's an emergency stop-gap - is more defensible; no one expected the abrupt closure of libraries and schools. A digital service is ideally placed to fill the resulting gaps, and ensuring universal access to books should be part of our post-crisis efforts to rebuild with better resilience. For the first, however, the Internet Archive should engage with authors and publishers. The result could be a better service for all sides.


Illustrations: Books (Abhi Sharma, via Wikimedia).


February 6, 2020

Mission creep

Haystack-Cora.png"We can't find the needles unless we collect the whole haystack," a character explains in the new play The Haystack, written by Al Blyth and in production at the Hampstead Theatre through March 7. The character is Hannah (Sarah Woodward), and she is director of a surveillance effort being coded and built by Neil (Oliver Johnstone) and Zef (Enyi Ororonkwo), familiarly geeky types whose preferred day-off activities are the cinema and the pub, rather than catching up on sleep and showers, as Hannah pointedly suggests. Zef has a girlfriend (and a "spank bank" of downloaded images) and is excited to work in "counter-terrorism". Neil is less certain, less socially comfortable, and, we eventually learn, more technically brilliant; he must come to grips with all three characteristics in his quest to save Cora (Rona Morison). Cue Fleabag: "This is a love story."

The play is framed by an encrypted chat between Neil and Denise, Cora's editor at the Guardian (Lucy Black). We know immediately from the technological checklist they run down in making contact that there has been a catastrophe, which we soon realize surrounds Cora. Even though we're unsure what it is, it's clear Neil is carrying a load of guilt, which the play explains in flashbacks.

As the action begins, Neil and Zef are waiting to start work as a task force seconded to Hannah's department to identify the source of a series of Ministry of Defence leaks that have led to press stories. She is unimpressed with their youth, attire, and casual attitude - they type madly while she issues instructions they've already read - but changes abruptly when they find the primary leaker in seconds. Two stories remain; because both bear Cora's byline she becomes their new target. Both like the look of her, but Neil is particularly smitten, and when a crisis overtakes her, he breaks every rule in the agency's book by grabbing a train to London, where, calling himself "Tom Flowers", he befriends her in a bar.

Neil's surveillance-informed "god mode" choices of Cora's favorite music, drinks, and food when he meets her recall the movie Groundhog Day, in which Phil (Bill Murray) slowly builds up, day by day, the perfect approach to the woman he hopes to seduce. In another cultural echo, the tense beginning is sufficiently reminiscent of the opening of Laura Poitras's film about Edward Snowden, CitizenFour, that I assumed Neil was calling from Moscow.

The requirement for the haystack, Hannah explains at the beginning of Act Two, is because the terrorist threat has changed from organized groups to home-grown "lone wolves", and threats can come from anywhere. Her department must know *everything* if it is to keep the nation safe. The lone-wolf theory is the one surveillance justification Blyth's characters don't chew over in the course of the play; for an evidence-based view, consult the VOX-Pol project. In a favorite moment, Neil and Hannah demonstrate the frustrating disconnect between technical reality and government targets. Neil correctly explains that terrorists are so rare that, given the UK's 66 million population, no matter how much you "improve" the system's detection rate it will still be swamped by false positives. Hannah, however, discovers he has nonetheless delivered. The false positive rate is 30% less! Her bosses are thrilled! Neil reacts like Alicia Florrick in The Good Wife after one of her morally uncomfortable wins.
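Neil's objection is the classic base-rate problem. A minimal sketch, with made-up illustrative numbers (the play doesn't specify any beyond the UK's population):

```python
# Base-rate sketch: why "improving" a terrorist-detection system still
# swamps its operators with false positives. All rates and the count of
# real threats are hypothetical, chosen only to show the arithmetic.

population = 66_000_000   # UK population, as cited in the play
terrorists = 100          # assumed number of actual threats

def flagged(true_positive_rate, false_positive_rate):
    """Return (real threats flagged, innocents flagged) for given rates."""
    hits = terrorists * true_positive_rate
    false_alarms = (population - terrorists) * false_positive_rate
    return hits, false_alarms

# Even a startlingly accurate system - 99% detection, 0.1% false positives:
hits, false_alarms = flagged(0.99, 0.001)
print(f"real threats flagged: {hits:.0f}")          # ~99
print(f"innocents flagged:    {false_alarms:.0f}")  # ~66,000

# Hundreds of innocent people flagged for every real threat - and cutting
# the false positive rate by 30% still leaves tens of thousands of them.
print(f"flags per real threat: {false_alarms / hits:.0f}")
```

Because the base rate of real threats is so tiny, the output is dominated by the false positives on the other 66 million people, which is exactly Neil's point.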

Related: it is one of the great pleasures of The Haystack that its three female characters (out of a total of five) are smart, tough, self-reliant, ambitious, and good at their jobs.

The Haystack is impressively directed by Roxana Silbert. It isn't easy to make typing look interesting, but this play manages it, partly by the well-designed use of projections to show both the internal and external worlds they're seeing, and partly by carefully-staged quick cuts. In one section, cinema-style cross-cutting creates a montage that fast-forwards the action through six months of two key relationships.

Technically, The Haystack is impressive; Zef and Neil speak fluent Python, algorithms, and Bash scripts, and laugh realistically over a journalist's use of Hotmail and Word with no encryption ("I swear my dad has better infosec"), while the projections of their screens are plausible pieces of code, video games, media snippets, and maps. The production designers and Blyth, who has a degree in econometrics and a background as a research economist, have done well. There were just a few tiny nitpicks: Neil can't trace Cora's shut-down devices "without the passwords" (huh?); and although Neil and Zef also use Tor, at one point they use Firefox (maybe) and Google (doubtful). My companion leaned in: "They wouldn't use that." More startling, for me, the actors who play Neil and Zef pronounce "cache" as "cachet"; but this is the plaint of a sound-sensitive person. And that's it, for the play's 1:50 length (trust me; it flies by).

The result is extraordinary: a well-plotted comic thriller that shows the personal and professional costs of both being watched and being the watcher. What's really remarkable is how many of the touchstone digital rights and policy issues Blyth manages to pack in. If you can, go see it, partly because it's a fine introduction to the debates around surveillance, but mostly because it's great entertainment.


Illustrations: Rona Morison, as Cora, in The Haystack.


December 13, 2019

Becoming a science writer

JamesRandi-Florida2016.jpg
As an Association of British Science Writers board member, I occasionally speak to science PhD students and postdocs about science writing. Since the most recent of these excursions was just this week, I thought I'd summarize some of what I've said.

To trained scientists aiming to make the switch: you are starting from a more knowledgeable place than the arts graduates who mostly populate this field. You already know how to investigate and add to a complex field of study, have a body of knowledge from which to reliably evaluate new claims, and know the significant contributors to your field and adjacent ones. What you need to learn are basic journalism skills such as interviewing, identifying stories, pitching them to venues where they might fit, remaining on the right side of libel law, and journalistic ethics and culture. Your new deadlines will seem really short!

Figuring out what kind of help you need is where an organization like the ABSW (and its counterparts in other countries) can help, first by offering opportunities for networking with other science writers, and second by providing training and resources. ABSW maintains, for example, a page that includes some basics and links.

Besides that, if you put "So You Want to Be a Science Writer" into your favorite search engine, you will find many guides from reputable sources such as other science writers' associations and university programs. I particularly like Ivan Oransky's talk for the National Association of Science Writers, because he begins with "my first failures".

Every career path is idiosyncratic enough that no one can copy its specifics. I began my writing career by founding The Skeptic magazine in 1987. Through the skeptics, I met all sorts of people, including one who got me my first writing-related job as a temporary subeditor on a computer magazine. Within weeks, I knew the editors of all the other magazines on its floor, and began writing features for them. In 1991, when I got online and sent my first email, I decided to specialize in the Internet because it was obviously the future of communication. A friend advises that if you find a fast-moving field, there will always be people willing to pay you to explain it to them.

So: I self-published, networked early and often - I joined the ABSW as soon as I was qualified - and luckily landed on a green field at the beginning of a complex and far-reaching social, cultural, political, and technological revolution. Today's early-career science writers will have to work harder to build their own networks than in the early 1990s, when we all met regularly at press conferences and shows - but they have vastly further reach than we had.

I have never had a job, so I can't tell people how to get one. I can, however, observe that if you focus solely on traditional media you will be aiming at a shrinking number of slots. Think more broadly about what science communication is, who does it, and in what context. The kind of journalism that used to be the sole province of newspapers and news magazines now has a home in NGOs, who also hire people who can do solid research, crunch data, and think creatively about new areas for investigation. You should also broaden your idea of "media" and "science communication". Few can be Robin Ince or Richard Wiseman, who combine comedy, magic, and science into sell-out shows, but everyone can find non-traditional contexts in which to communicate science.

At the moment, commercial money is going into podcasts; people are building big followings for niche interests on YouTube and through self-publishing ebooks; and constant tweeters are important communicators, as botanist James Wong proves every day. Edward Hasbrouck, at the National Writers Union, has published solid advice on writing for digital formats: look to build revenue streams. The Internet offers many opportunities, but, as Hasbrouck writes, many are invisible to traditional publishing; as he also writes, traditional employment is just one of writers' many business models.

The big difficulty for trained academics is rethinking how you approach telling a story. Forget the academic structure of: 1) here is what I am going to say; 2) this is what I'm saying; 3) this is my summary of what I just said. Instead, when writing for the general public, put your most important findings first and tell your specific audience why it matters to *them*. Then show why they can have confidence in your claim by explaining your methods and how your findings fit into the rest of the relevant body of scientific knowledge. (Do not use net.wars as your model!)

Over time, you will probably want to branch out into other fields. Do not fear this; you know how to learn a complex field, and if you can learn one you can learn another.

It's inevitable that you will make some mistakes. When it happens, do your best to correct them, learn from how you made them, and avoid making the same one again.

Finally, just a couple of other resources. My favorite book on writing is William Goldman's Adventures in the Screen Trade. He has solid advice for story structure no matter what you're writing. A handout I wrote for a blogging workshop for scientists (PDF) has some (I hope) useful writing tips. Good luck!


Illustrations: Magician James Randi communicates science, Florida 2016.


October 25, 2019

When we were

zittrain-cim-iphone.jpg
"These people changed the world," said Jeff Wilkins, looking out across a Columbus, Ohio ballroom filled with more than 400 people. "And they know it, and are proud of it."

At one time, all this was his.

Wilkins was talking about...CompuServe, which he co-founded in 1969. How does it happen, he asked, that more than 400 people show up to celebrate a company that hasn't really existed for the last 23 years? I can't say, but a group of people happier to see each other (and random outsiders) again would be hard to find. "This is the only reunion I go to," one woman said.

It's easy to forget - or to never have known - CompuServe's former importance. Circa 1993, where business cards and slides now display a Twitter handle, they carried a numbered CompuServe ID. My inclusion of mine (70007,5537) at the end of a Guardian article led a reader to complain that I should instead promote the small ISPs it would kill when broadband arrived. In 1994, Aerosmith released a single on CompuServe, the first time a major label tried online distribution. It probably took five hours to download.

In Wilkins' story, he was studying electrical engineering at the University of Arizona when his father-in-law asked for help with data processing for his new insurance company. Wilkins and fellow grad students Sandy Trevor, John Goltz, Larry Shelley, and Doug Chinnock soon relocated to Columbus. It was, Wilkins said, Shelley who suggested starting a time-sharing company - "or should I say cloud computing?" Wilkins quipped, to applause and cheers.

Yes, he should. Everything new is old again.

In time-sharing, the fledgling company competed with GE and IBM. The information service started in 1979, as a way to occupy the computers during the empty evenings when the businesses had gone home. For the next 20 years, CompuServers invented everything for themselves: "GO" navigation commands, commercial email (first customer: HJ Heinz), live chat ("CB"), news wires, online games and virtual worlds (partnering with Fujitsu on a graphical MUD), shopping... The now-ubiquitous GIF was the brainchild of Steve Wilhite (it's pronounced "JIF"). The legend of CompuServe inventions is kept alive by Sandy Trevor and Dave Eastburn, whose Nuvocom "software archeology" business holds archives that have backed expert defense against numerous patent claims on technologies that CompuServe provably pioneered.

A panel reminisced about the CIS shopping mall. "We had an online stockbroker before anyone else thought about it," one said. Another remembered a call asking for a 30-minute meeting from the then-CEO of the nationwide flower delivery service FTD. "I was too busy." (The CEO was Meg Whitman.) For CompuServe's 25th anniversary, the mall's travel agency collaborated on a three-day cruise with, as invited guests, the film critic Roger Ebert, who disseminated his movie reviews through the service and hosted the "Ask Roger Ebert" section in the Movies Forum, and his wife, Chaz. "That may have been the peak."

Mall stores paid an annual fee; curation ensured there weren't too many of any one category of store. Banners advertising products were such a novelty at the time - and often the liveliest, most visually attractive thing on the page - that as many as 25% of viewers clicked on them. Today, Amazon takes a percentage of transactions instead. "If we could have had a universal shopping cart, like Amazon," lamented one, "what might have been?"

Well, what? Could CompuServe now be under threat of a government-mandated breakup to separate its social media business, search, cloud provider, and shopping? Both CompuServe and AOL, whose speed to embrace graphical interfaces and aggressive marketing led it to first outstrip and then buy and dismantle CompuServe in the 1990s, would have had to cannibalize their existing businesses. Used to profits from access fees, both resisted the Internet's monthly subscription model.

One veteran openly admitted how profoundly he underestimated the threat of the Internet after surveying the rickety infrastructure designed by/for academics and students. "I didn't think that the Internet could survive in the reality of a business..." Instead, the information services saw their competition as each other. A contemporary view of the challenges is visible in this 1995 interview with Barry Berkov, the vice-president in charge of CIS.

However, CompuServe's closed approach left no opening for individuals' self-expression. The 1990s rising Internet stars, Geocities and MySpace, were all about that, as are today's social media.

So many shifts have changed social media since then: from topic-centered to person-centered forums, from proprietary to open to centralized, from dial-up modems to pervasive connections, the massive ramp-up of scale and, mobile-fueled, speed, along with the reconfiguration of business models and technical infrastructure. Some things have degraded: past postings on Twitter and Facebook are much harder to find, and unwanted noise is everywhere. CompuServe would have had to navigate each of those shifts without error. As we know now, they didn't make it.

And yet, for 20-odd years, a company of early 20-somethings 2,500 miles from Silicon Valley invented a prototype of today's world, at first unaware of the near-simultaneous first ARPAnet connection, the beginnings of the network they couldn't imagine would ever be trustworthy enough for businesses and governments to rely on. They may yet be proven right about that.

cis50-banner.jpg

Illustrations: Jonathan Zittrain's mockup of the CompuServe welcome screen (left, with thanks) next to today's iPhone showing how little things have changed; the reunion banner.


October 11, 2019

The China syndrome

800px-The_Great_wall_-_by_Hao_Wei.jpgAbout five years ago, a friend commented that despite the early belief - promulgated by, among others, then-US president Bill Clinton and vice-president Al Gore - that the Internet would spread democracy around the world, so far the opposite seemed to be the case. I suggested perhaps it's like the rising sea level, where local results don't give the full picture.

Much longer ago, I remember wondering how Americans would react when large parts of the Internet were in Chinese. My friend shrugged. Why should they care? They don't have to read them.

This week's news shows that we may both have been wrong. The reality, as the veteran technology journalist Charles Arthur suggested in the Wednesday and Thursday editions of his weekday news digest, The Overspill, is that the Hong Kong protests are exposing and enabling the collision between China's censorship controls and Western standards for free speech, aided by companies anxious to access the Chinese market. We may have thought we were exporting the First Amendment, but it doesn't apply to non-government entities.

It's only relatively recently that it's become generally acknowledged that governments can harness the Internet themselves. In 2008, the New York Times thought there was a significant domestic backlash against China's censors; by 2018, the Times was admitting China's success, first in walling off its own edited version of the Internet, and second in building rival giant technology companies and speeding past the US in areas such as AI, smartphone payments, and media creation.

So, this week. On Saturday, Demos researcher Carl Miller documented an ongoing edit war at Wikipedia: 1,600 "tendentious" edits across 22 articles on topics such as Taiwan, Tiananmen Square, and the Dalai Lama to "systematically correct what [officials and academics from within China] argue are serious anti-Chinese biases endemic across Wikipedia".

On Sunday, the general manager of the Houston Rockets, an American professional basketball team, withdrew a tweet supporting the Hong Kong protesters after it caused an outcry in China. Who knew China was the largest international market for the National Basketball Association? On Tuesday, China responded that it wouldn't show NBA pre-season games, and Chinese fans may boycott the games scheduled for Shanghai. The NBA commissioner eventually released a statement saying the organization would not regulate what players or managers say. The Americanness of basketball: restored.

Also on Tuesday, Activision Blizzard suspended Chung Ng Wai, a professional player of the company's digital card game, Hearthstone, after he expressed support for the Hong Kong protesters in a post-win official interview; the company also fired the two interviewers. Chung's suspension is set to last for a year, and includes forfeiting his thousands of dollars of 2019 prize money. A group of the company's employees walked out in protest, and the gamer backlash against the company was such that the moderators briefly took the Blizzard subreddit private in order to control the flood of angry posts (it was reopened within a day). By Wednesday, EU-based Hearthstone gamers were beginning to consider mounting a denial-of-service attack against Blizzard by sending so many subject access requests under the General Data Protection Regulation that complying with the legal requirement to fulfill them would swamp the company's resources.

On Wednesday, numerous media outlets reported that in its latest iOS update Apple has removed the Taiwan flag emoji from the keyboard for users who have set their location to Hong Kong or Macau - you can still use the emoji, but the procedure for doing so is more elaborate. (We will save the rant about the uselessness of these unreadable blobs for another time.)

More seriously, also on Wednesday, the New York Times reported that Apple has withdrawn the HKmap.live app that Hong Kong protesters were using to track police, after China's state media accused the company of protecting the protesters.

Local versus global is a long-standing variety of net.war, dating back to the 1991 Amateur Action bulletin board case. At Stratechery, Ben Thompson discusses the China-US cultural clash, with particular reference to TikTok, the first Chinese company to reach a global market; a couple of weeks ago, the Guardian revealed the site's censorship policies.

Thompson argues that, "Attempts by China to leverage market access into self-censorship by U.S. companies should also be treated as trade violations that are subject to retaliation." Maybe. But American companies can't win at this game.

In her recent book, The Big Nine, Amy Webb discusses China's AI advantage as it pours resources and, above all, data into becoming the world leader via Baidu, Alibaba, and Tencent, which have grown to rival Google, Amazon, and Facebook without ever needing to leave home. Beyond that, China has been spreading its influence by funding telecommunications infrastructure. The Belt and Road initiative has projects in 152 countries. In this, China is taking advantage of the present US administration's inward turn and worldwide loss of trust.

After reviewing the NBA's ultimate decision, Thompson writes, "I am increasingly convinced this is the point every company dealing with China will reach: what matters more, money or values?" The answer will always be money; whose values count will depend on which market they can least afford to alienate. This week is just a coincidental concatenation of early skirmishes; just wait for the Internet of Things.

Illustrations: The Great Wall of China (by Hao Wei, via Wikimedia).


September 13, 2019

Purposeful dystopianism

Truman-Show-exist.pngA university comparative literature class on utopian fiction taught me this: all utopias are dystopias underneath. I was reminded of this at this week's Gikii, when someone noted the converse, that all dystopias contain within themselves the flaw that leads to their destruction. Of course, I also immediately thought of the bare patch on Smaug's chest in The Hobbit because at Gikii your law and technology come entangled with pop culture. (Write-ups of past years: 2018; 2016; 2014; 2013; 2008.)

Granted, as was pointed out to me, fictional utopias would have no dramatic conflict without dystopian underpinnings, just as dystopias would have none without their misfits plotting to overcome. But the context for this subdiscussion was the talk by Andres Guadamuz, which he began by locating "peak Cyber-utopianism" at 2006 to 2010, when Time magazine celebrated the power the Internet had brought each of us, Wikileaks was doing journalism, bitcoin was new, and social media appeared to have created the Arab Spring. "It looked like we could do anything." (Ah, youth.)

Since then, serially, every item on his list has disappointed. One startling statistic Guadamuz cited: streaming now creates more carbon emissions than airplanes. Streaming online video generates as much carbon dioxide per year as Belgium; bitcoin uses as much energy as Austria. By 2030, the Internet is projected to account for 20% of all energy consumption. Cue another memory, from 1995, when MIT Media Lab founder Nicholas Negroponte was feted for predicting in Being Digital that wired and wireless would switch places: broadcasting would move to the Internet's series of tubes, and historically wired connections such as the telephone network would become mobile and wireless. Meanwhile, all physical forms of information would become bits. No one then queried the sense of doing this. This week, the lab Negroponte was running then is in trouble, too. This has deep repercussions beyond any one institution.

Twenty-five years ago, in Tainted Truth, journalist Cynthia Crossen documented the extent to which funders get the research results they want. Successive generations of research have backed this up. What the Media Lab story tells us is that they also get the research they want - not just, as in the cases of Big Oil and Big Tobacco, the *specific* conclusions they want promoted but the research ecosystem. We have often told the story of how the Internet's origins as a cooperative have been coopted into a highly centralized system with central points of failure, a process Guadamuz this week called "cybercolonialism". Yet in focusing on the drivers of the commercial world we have paid insufficient attention to those driving the academic underpinnings that have defined today's technological world.

To be fair, fretting over centralization was the most mundane topic this week: presentations skittered through cultural appropriation via intellectual property law (Michael Dunford, on Disney's use of Māui), a case study of moderation in a Facebook group that crosses RuPaul and Twin Peaks fandom (Carolina Are), and a taxonomy of lying and deception intended to help decode deepfakes of all types (Andrea Matwyshyn and Miranda Mowbray).

It is especially hard for a non-lawyer to do justice to the discussions of how and whether data protection rights persist after death, led by Edina Harbinja, Lilian Edwards, Michael Veale, and Jef Ausloos. You can't libel the dead, they explained, because under common law, personal actions die with the person: your obligation not to lie about someone dies when they do. This conflicts with information rights that persist as your digital ghost: privacy versus property, a reinvention of "body" and "soul". The Internet is *so many* dystopias.

Centralization captured so much of my attention because it is ongoing and threatening. One example is the impending rollout of DNS-over-HTTPS. We need better security for the Internet's infrastructure, but DoH further concentrates control. In his presentation Derek MacAuley noted that individuals who need the kind of protection DoH is claimed to provide would do better to just use Tor. It, too, is imperfect, but it's here and it works. This is one more of many historical examples where improving the working technology we already had would have spared us the level of control now exercised by the largest technology companies.
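The centralization worry is easy to see in the mechanics. Under DoH, a name lookup stops being a query to whatever resolver your local network provides and becomes an HTTPS request to one large provider. A minimal sketch (the endpoint is Cloudflare's public DNS-over-HTTPS JSON API; the helper function is my own illustration):

```python
# Sketch of a DNS-over-HTTPS lookup URL. The structural point: every name
# lookup from every application becomes an HTTPS request to one endpoint,
# run by one company, rather than a query to a locally chosen resolver.
from urllib.parse import urlencode

DOH_ENDPOINT = "https://cloudflare-dns.com/dns-query"  # a single central resolver

def doh_url(hostname: str, record_type: str = "A") -> str:
    """Build the URL a DoH client would fetch for one DNS lookup."""
    return f"{DOH_ENDPOINT}?{urlencode({'name': hostname, 'type': record_type})}"

# Every one of these queries flows to the same provider:
for name in ("example.com", "wikipedia.org", "mozilla.org"):
    print(doh_url(name))
```

However many browsers and apps adopt it, the hostname in that URL is the bottleneck: whoever runs it can see, log, or lose everyone's lookups at once.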

Centralization completely undermines the Internet's original purpose: to withstand outages, even those caused by bombs. Mozilla and Google surely know this. The third DoH partner, Cloudflare, the content delivery network in the middle, certainly does: when it goes down, as it did for 15 minutes in July, millions of websites become unreachable. The only sensible response is to increase resilience with multiple pathways. Instead, we have Facebook proposing to further entrench its central role in many people's lives with its nascent Libra cryptocurrency. "Well, *I*'m not going to use it" isn't an adequate response when in some countries Facebook effectively *is* the Internet.

So where are the flaws in our present Internet dystopias? We've suggested before that advertising saturation may be one; the fakery that runs all the way through the advertising stack is probably another. Government takeovers and pervasive surveillance provide motivation to rebuild alternative pathways. The built-in lack of security is, as ever, a growing threat. But the biggest flaw built into the centralized Internet may be this: boredom.


Illustrations: The Truman Show.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 2, 2019

Unfortunately recurring phenomena

JI-sunrise--2-20190107_071706.jpgIt's summer, and the current comprehensively bad news is all stuff we can do nothing about. So we're sweating the smaller stuff.

It's hard to know how seriously to take it, but US Senator Josh Hawley (R-MO) has introduced the Social Media Addiction Reduction Technology (SMART) Act, intended as a disruptor to the addictive aspects of social media design. *Deceptive* design - which figured in last week's widely criticized $5 billion FTC settlement with Facebook - is definitely wrong, and the dark patterns site has long provided a helpful guide to those practices. But the bill is too feature-specific (ban infinite scroll and autoplay) and fails to recognize that one size of addiction disruption cannot possibly fit all. Spending more than 30 minutes at a stretch reading Twitter may be a dangerous pastime for some but a business necessity for journalists, PR people - and Congressional aides.

A better approach might be to require sites to replay the first video someone chooses at regular intervals until they get sick of it and turn off the feed. This is about how I feel about the latest regular reiteration of the demand for back doors in encrypted messaging. The fact that every new home secretary - in this case, Priti Patel - calls for this suggests there's an ancient infestation in their office walls that needs to be found and doused with mathematics. Don't Patel and the rest of the Five Eyes realize the security services already have bulk device hacking?

Ever since Microsoft announced it was acquiring the software repository Github, it should have been obvious the community would soon be forced to change. And here it is: Microsoft is blocking developers in countries subject to US trade sanctions. The formerly seamless site supporting global collaboration and open source software is being fractured at the expense of individual PhD students, open source developers, and others who trusted it, and everyone who relies on the software they produce.

It's probably wrong to solely blame Microsoft; save some for the present US administration. Still, throughout Internet history the communities bought by corporate owners wind up destroyed: CompuServe, Geocities, Television without Pity, and endless others. More recently, Verizon, which bought Yahoo and AOL for its Oath subsidiary (now Verizon Media), de-porned Tumblr. People! Whenever the online community you call home gets sold to a large company it is time *right then* to begin building your own replacement. Large companies do not care about the community you built, and this is never gonna change.

Also never gonna change: software is forever, as I wrote in 2014, when Microsoft turned off life support for Windows XP. The future is living with old software installations that can't, or won't, be replaced. The truth of this resurfaced recently, when a survey by Spiceworks (PDF) found that a third of all businesses' networks include at least one computer running XP and 79% of all businesses are still running Windows 7, which dies in January. In the 1990s the installed base updated regularly because hardware was upgraded so rapidly. Now, a computer's lifespan exceeds the length of a software generation, and the accretion of applications and customization makes updating hazardous. If Microsoft refuses to support its old software, it should at least open the code to third parties. Now, *that* would be a law we could use.

The last few years have seen repeated news about the many ways that machine learning and AI discriminate against those with non-white skin, typically because of the biased datasets they rely on. The latest such story is startling: Wearables are less reliable in detecting the heart rate of people with darker skin. This is a "huh?" until you read that the devices use colored light and optical sensors to measure the volume of your blood in the vessels at your wrist. Hospital-grade monitors use infrared. Cheaper devices use green light, which melanin tends to absorb. I know it's not easy for people to keep up with everything, but the research on this dates to 1985. Can we stop doing the default white thing now?

Meanwhile, at the Barbican exhibit AI: More than Human...In a video, a small, medium-brown poodle turns his head toward the camera with a - you should excuse the anthropomorphism - distinct expression of "What the hell is this?" Then he turns back to the immediate provocation and tries again. This time, the Sony Aibo he's trying to interact with wags its tail, and the dog jumps back. The dog clearly knows the Aibo is not a real dog: it has no dog smell, and although it attempts a play bow and moves its head in vaguely canine fashion, it makes no attempt to smell his butt. The researcher begins gently stroking the Aibo's back. The dog jumps in the way. Even without a thought bubble you can see the injustice forming, "Hey! Real dog here! Pet *me*!"

In these two short minutes the dog perfectly models the human reaction to AI development: 1) what is that?; 2) will it play with me?; 3) this thing doesn't behave right; 4) it's taking my job!

Later, I see the Aibo slumped, apparently catatonic. Soon, a staffer strides through the crowd clutching a woke replacement.

If the dog could talk, it would be saying "#Fail".


Illustrations: Sunrise from the 30th floor.


July 26, 2019

Hypothetical risks

Great Hack - data connections.png"The problem isn't privacy," the cryptography pioneer Whitfield Diffie said recently. "It's corporate malfeasance."

This is obviously right. Viewed that way, when data profiteers claim that "privacy is no longer a social norm", as Facebook CEO Mark Zuckerberg did in 2010, the correct response is not to argue about privacy settings or plead with users to think again, but to find out if they've broken the law.

Diffie was not, but could have been, talking specifically about Facebook, which has blown up the news this week. The first case grabbed most of the headlines: the US Federal Trade Commission fined the company $5 billion. As critics complained, the fine was insignificant to a company whose Q2 2019 revenues were $16.9 billion and whose quarterly profits are approximately equal to the fine. Medium-term, such fines have done little to dent Facebook's share prices. Longer-term, as the cases continue to mount up...we'll see. Also this week, the US Department of Justice launched an antitrust investigation into Apple, Amazon, Alphabet (Google), and Facebook.

The FTC fine and ongoing restrictions have been a long time coming; EPIC executive director Marc Rotenberg has been arguing ever since the Cambridge Analytica scandal broke that Facebook had violated the terms of its 2011 settlement with the FTC.

If you needed background, this was also the week when Netflix released the documentary The Great Hack, in which directors Karim Amer and Jehane Noujaim investigate the role Cambridge Analytica and Facebook played in the 2016 EU referendum and US presidential election votes. The documentary focuses primarily on three people: David Carroll, who mounted a legal action against Cambridge Analytica to obtain his data; Brittany Kaiser, a director of Cambridge Analytica who testified against the company; and Carole Cadwalladr, who broke the story. In his review at the Guardian, Peter Bradshaw notes that Carroll's experience shows it's harder to get your "voter profile" out of Facebook than from the Stasi, as per Timothy Garton Ash. (Also worth viewing: the 2006 movie The Lives of Others.)

Cadwalladr asks, in her own piece about The Great Hack and in her 2019 TED talk, whether we can ever have free and fair elections again. It's a difficult question to answer because although it's clear from all these reports that the winning side of both the US and UK 2016 votes used Facebook and Cambridge Analytica's services, unless we can rerun these elections in a stack of alternative universes we can never pinpoint how much difference those services made. In a clip taken from the 2018 hearings on fake news, Damian Collins (Conservative, Folkestone and Hythe), the chair of the Digital, Culture, Media, and Sport Committee, asks Chris Wylie, a whistleblower who worked for Cambridge Analytica, that same question (The Great Hack, 00:25:51). Wylie's response: "When you're caught doping in the Olympics, there's not a debate about how much illegal drug you took or, well, he probably would have come in first, or, well, he only took half the amount, or - doesn't matter. If you're caught cheating, you lose your medal. Right? Because if we allow cheating in our democratic process, what about next time? What about the time after that? Right? You shouldn't win by cheating."

Later in the film (1:08:00), Kaiser, testifying to DCMS, sums up the problem this way: "The sole worth of Google and Facebook is the fact that they own and possess and hold and use the personal data from people all around the world." In this statement, she unknowingly confirms the prediction made by the veteran Australian privacy advocate Roger Clarke, who commented in a 2009 interview about his 2004 paper, Very Black "Little Black Books", warning about social networks and privacy: "The only logical business model is the value of consumers' data."

What he got wrong, he says now, was that he failed to appreciate the importance of micro-pricing, highlighted in 1999 by the economist Hal Varian. In his 2017 paper on the digital surveillance economy, Clarke explains the connection: large data profiles enable marketers to gauge the precise point at which buyers begin to resist and pitch their pricing just below it. With goods and services, this approach allows sellers to extract greater overall revenue from the market than pre-set pricing would; with politics, you're talking about a shift from public sector transparency to private sector black-box manipulation. Or, as someone puts it in The Great Hack, a "full-service propaganda machine". Load, aim at "persuadables", and set running.
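The arithmetic behind that claim is easy to sketch. Here is a toy model (my own illustration, with invented numbers, not drawn from Clarke's or Varian's papers) comparing one posted price against per-buyer pricing pitched just below each buyer's resistance point:

```python
# Toy model of micro-pricing: a seller who knows each buyer's resistance
# point can charge each one just below it, beating any single fixed price.

def fixed_price_revenue(willingness, price):
    """Revenue at one posted price: only buyers willing to pay that much buy."""
    return sum(price for w in willingness if w >= price)

def personalized_revenue(willingness, discount=1):
    """Revenue when each profiled buyer is charged just under their own limit."""
    return sum(w - discount for w in willingness)

# Hypothetical resistance points (in dollars) for five profiled buyers.
buyers = [10, 20, 30, 40, 50]

best_fixed = max(fixed_price_revenue(buyers, p) for p in buyers)
tailored = personalized_revenue(buyers)

print(best_fixed)  # 90: posting $30, three buyers pay $30 each
print(tailored)    # 145: each buyer pays $1 under their limit
```

Any single posted price loses sales at the top or bottom of the range; per-profile pricing captures nearly everything, which is why the profiles themselves are worth so much.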

Less noticed than either of these is the Securities and Exchange Commission settlement with Facebook, also announced this week. While the fine is relatively modest - a mere $100 million - the SEC has nailed the company's conflicting statements. On Twitter, Jason Kint has helpfully highlighted the SEC's statements laying out the case that Facebook knew in 2016 that it had sold Cambridge Analytica some of the data underlying the 30 million personality profiles CA had compiled - and then "misled" both the US Congress and its own investors. Besides the fine, the SEC has permanently enjoined Facebook from further violations of the laws it broke in continuing to refer to actual risks as "hypothetical". The mills of trust have been grinding exceeding slow; they may yet grind exceeding small.


Illustrations: Data connections in The Great Hack.


July 12, 2019

Public access

WestWing-Bartlet-campaign-phone.pngIn the fantasy TV show The West Wing, when fictional US president Jed Bartlet wants to make campaign phone calls, he departs the Oval Office for the "residence", a few feet away, to avoid confusing his official and political roles. In reality, even before the show began in 1999, the Internet was altering the boundaries between public and private; the show's end in 2006 coincided with the founding of Twitter, which is arguably completing the job.

The delineation of public and private is at the heart of a case filed in 2017 by seven Twitter users backed by the Knight First Amendment Institute against US president Donald Trump. Their contention: Trump violated the First Amendment by blocking them for responding to his tweets with criticism. That Trump is easily offended is not news. But, their lawyers argued, because Trump uses his Twitter account in his official capacity as well as for personal and campaign purposes, barring their access to his feed means effectively barring his critics from participating in policy. I liked their case. More important, lawyers liked their case; the plaintiffs cited many instances where Trump or members of his administration had characterized his tweets as official policy.

In May 2018, Trump lost in the Southern District of New York. This week, the US Court of Appeals for the Second Circuit unanimously upheld the lower court. Trump is perfectly free to block people from a personal account where he posts his golf scores as a private individual, but not from an account he uses for public policy announcements, however improvised and off-the-cuff they may be.

At The Volokh Conspiracy, Stuart Benjamin finds an unexplored tension between the government's ability to designate a space as a public forum and the fact that a privately-owned company sets the forum's rules. Here, as Lawrence Lessig showed in 1999, system design is everything. The government's lawyers contended that Twitter's lack of tools for account-holders leaves Trump with the sole option of blocking them. Benjamin's answer is: Trump didn't have to choose Twitter for his forum. True, but what other site would so reward his particular combination of impulsiveness and desperate need for self-promotion? A moderated blog, as Benjamin suggests, would surely have all the life sucked out of it by being ghost-written.

Trump's habit of posting comments that would get almost anyone else suspended or banned has been frequently documented - see for example Cory Scarola at Inverse in November 2016. In 2017, Jack Moore at GQ begged Twitter to delete his account to keep us all safer after a series of tweets in which Trump appeared to threaten North Korea with nuclear war. The site's policy team defended its decision not to delete the tweets on the grounds of "public interest". At the New York Times, Kara Swisher (heralding the piece on Twitter with the neat twist on Sartre, Hell is other tweeters) believes that the ruling will make a full-on Trump ban less likely.

Others have wondered whether the case gives Americans that Twitter has banned for racism and hate speech the right to demand readmission by claiming that they are being denied their First Amendment rights. Trump was already known to be trying to prove that social media sites are systemically biased towards banning far-right voices; those are the people he invited to the White House this week for a summit on social media.

It seems to me, however, that the judges in this case have correctly understood the difference between being banned from a public forum because of your own behavior and being banned because the government doesn't like your kind. The first can and does happen in every public space anywhere; as a privately-owned space, Twitter is free to make such decisions. But when the government decides to ban its critics, that is censorship, and the First Amendment is very clear about it. It's logical enough, therefore, to feel that the court was right.

Female politicians, however, probably already see the downside. Recently, Amnesty International highlighted the quantity and ferocity of abuse they get. No surprise that within a day the case was being cited by a Twitter user suing Alexandria Ocasio-Cortez for blocking him. How this case resolves will be important; we can't make soaking up abuse the price of political office, while the social media platforms are notoriously unresponsive to such complaints.

No one needs an account to read any Twitter user's unprotected tweets. Being banned costs the right to interact, not the right to read. But because many tweets turn into long threads of public discussion it makes sense that the judges viewed the plaintiffs' loss as significant. One consequence, though, is that the judgment conceptually changes Trump's account from a stream through an indivisible pool into a subcommunity with special rules. Simultaneously, the company says it will obscure - though not delete - tweets from verified accounts belonging to politicians and government officials with more than 100,000 followers that violate its terms and conditions. I like this compromise: yes, we need to know if leaders are lighting matches, but it shouldn't be too easy to pour gasoline on them - and we should be able to talk (non-abusively) back.


Illustrations: The West Wing's Jed Bartlet making phone calls from the residence.


July 5, 2019

Legal friction

ny-public-library-lions.JPGWe normally think of the Internet Archive, founded in 1996 by Brewster Kahle, as doing good things. With a mission of "universal access to all knowledge", it archives the web (including many of my otherwise lost articles), archives TV news footage and live concerts, and provides access to all sorts of information that would otherwise be lost.

Equally, authors usually love libraries. Most grew up burrowed into the stacks, and for many libraries are an important channel to a wider public. A key element of the Archive's position in what follows rests on the 2007 California decision officially recognizing it as a library.

Early this year, myriad authors and publishers organizations - including the UK's Society of Authors and the US's Authors Guild - issued a joint statement attacking the Archive's Open Library project. In this "controlled digital lending" program, borrowers - anyone, via an Archive account - get two weeks to read ebooks, either online in the Archive's book reader or offline in a copy-protected format in Adobe Digital Editions.

What offends rights holders is that unlike the Gutenberg Project, which offers downloadable copies of works in the public domain, Open Library includes still-copyrighted modern works (including net.wars-the-book). The Archive believes this is legal "fair use".

You may, like me, wonder if the Archive is right. The few precedents are mixed. In 2000, "My MP3.com" let users stream CDs after proving ownership of a physical copy by inserting it in their CD drive. In the resulting lawsuit the court ruled MP3.com's database of digitized CDs an infringement, partly because it was a commercial, ad-supported service. Years later, Amazon does practically the same thing.

In 2004, Google Books began scanning libraries' book and magazine collections into a giant database that allows searchers to view scraps of interior text. In 2015, the authors lost their lawsuit. Google is a commercial company - but Google Books carries no ads (though it presumably does collect user data), and directs users to source copies from libraries or booksellers.

A third precedent, cited by the Authors Guild, is Capitol Records v. ReDigi. In that case, rulings have so far held that ReDigi's resale process, which transfers music purchased on iTunes from old to new owners, makes new and therefore infringing copies. Since the same is true of everything from cochlear implants to reading a web page, this reasoning seems wrong.

Cambridge University Press v. Patton, filed in 2008 and still ongoing, has three publishers suing Georgia State University over its e-reserves system, which loans out course readings on CDL-type terms. In 2012, the district court ruled that most of this is fair use; appeal courts have so far mostly upheld that view.

The Georgia case is cited by David R. Hansen and Kyle K. Courtney in their white paper defending CDL. They argue that CDL, as "format-shifting", is fair use because it replicates existing library lending. In their view, authors don't lose income because the libraries already bought copies, and it's all covered by fair use, no permission needed. One section of their paper focuses on helping libraries assess and minimize their legal risk. They concede their analysis is US-only.

From a geek standpoint, deliberately introducing friction into ebook lending in order to replicate the time it takes the book to find its way back into the stacks (for example) is silly, like requiring a guy with a flag on a horse to escort every motor car. And it doesn't really resolve the authors' main complaints: lack of permission and no payment. Of equal concern ought to be user complaints about zillions of OCR errors. The Authors Guild's complaint that saved ebooks "can be made readable by stripping DRM protection" is true, but it's just as true of publishers' own DRM - so, wash.

To this non-lawyer, the white paper appears to make a reasonable case - for the US, where libraries enjoy wider fair use protection and there is no public lending right, which elsewhere pays royalties on borrowing that collection societies distribute proportionately to authors.

Outside the US, the Archive is probably screwed if anyone gets around to bringing a case. In the UK, for example, the "fair dealing" exceptions allowed in the Copyright, Designs, and Patents Act (1988) are narrowly limited to "private study", and unless CDL is limited to students and researchers, its claim to legality appears much weaker.

The Authors Guild also argues that scanning in physical copies allows libraries to evade paying for library ebook licenses. The Guild's preference, extended collective licensing, has collection societies negotiating on behalf of authors. So that's at least two possible solutions to compensation: ECL, PLR.

Differentiating the Archive from commercial companies seems to me fair, even though the ask-forgiveness-not-permission attitude so pervasive in Silicon Valley is annoying. No author wants to be an indistinguishable bunch of bits in an undifferentiated giant pool of knowledge, but we all consume far more knowledge than we create. How little authors earn in general is sad, but not a legal argument: no one lied to us or forced us into the profession at gunpoint. Ebook lending is a tiny part of the challenges facing anyone in the profession now, and my best guess is that whatever the courts decide now, eventually this dispute will just seem quaint.

Illustrations: New York Public Library (via pfhlai at Wikimedia).


May 3, 2019

Reopening the source

SphericalCow2.gif
"There is a disruption coming." Words of doom?

Several months back we discussed Michael Salmony's fear that the Internet is about to destroy science. Salmony reminded me that his comments came in a talk on the virtues of the open economy, and then noted the following dangers:

- Current quality-assurance methods (peer-review, quality editing, fact checking etc) are being undermined. Thus potentially leading to an avalanche of attention-seeking open garbage drowning out the quality research;
- The excellent high-minded ideals (breaking the hold of the big controllers, making all knowledge freely accessible etc) of OA are now being subverted by models that actually ask authors (or their funders) to spend thousands of dollars per article to get it "openly accessible". Thus again privileging the rich and well connected.

The University of Bath associate professor Joanna Bryson rather agreed with Salmony, also citing the importance of peer review. So I stipulate: yes, peer review is crucial for doing good science.

In a posting deploring the death of the monograph, Bryson notes that, like other forms of publishing, many academic publishers are small and struggle for sustainability. She also points to a Dutch presentation arguing that open access costs more.

Since she, as an academic researcher, has skin in this game, we have to give weight to her thoughts. However, many researchers dissent, arguing that academic publishers like Elsevier and Springer profit from an unfair and unsustainable business model. Either way, an existential crisis is rolling toward academic publishers like a giant spherical concrete cow.

So to yesterday's session on the ten-year future of research, hosted by European Health Forum Gastein and sponsored by Elsevier. The quote of doom we began with was voiced there.

The focal point was a report (PDF), the result of a study by Elsevier and Ipsos MORI. Their efforts eventually generated three scenarios: 1) "brave open world", in which open access publishing, collaboration, and extensive data sharing rule; 2) "tech titans", in which technology companies dominate research; 3) "Eastern ascendance", in which China leads. The most likely is a mix of the three. This is where several of us agreed that the mix is already our present. We surmised, cattily, that this was more an event looking for a solution to Elsevier's future. That remains cloudy.

The rest does not. For the last year I've been listening to discussions about how academic work can find greater and more meaningful impact. While journal publication remains essential for promotions and tenure within academia, funders increasingly demand that research produce new government policies, change public conversations, and provide fundamentally more effective practice.

Similarly, is there any doubt that China is leading innovation in areas like AI? The country is rising fast. As for "tech titans", while there's no doubt that these companies lead in some fields, it's not clear that they are following the lead of the great 1960s and 1970s corporate labs like Bell Labs, Xerox PARC and IBM Watson, which invested in fundamental research with no connection to products. While Google, Facebook, and Microsoft researchers do impressive work, Google is the only one publicly showing off research, that seems unrelated to its core business">.

So how long is ten years? A long time in technology, sure: in 2009, Twitter, Android, and "there's an app for that" were new(ish), the iPad was a year from release, smartphones got GPS, netbooks were rising, and 3D was poised to change the world of cinema. "The academic world is very conservative," someone at my table said. "Not much can change in ten years."

Despite Sci-Hub, the push to open access is not just another Internet plot to make everything free. Much of it is coming from academics, funders, librarians, and administrators. In the last year, the University of California dropped Elsevier rather than modify its open access policy or pay extra for the privilege of keeping it. Research consortia in Sweden, Germany, and Hungary have had similar disputes; a group of Norwegian institutions recently agreed to pay €9 million a year to cover access to Elsevier's journals and the publishing costs of its expected 2,000 articles.

What is slow to change is incentives within academia. Rising scholars are judged much as they were 50 years ago: how much have they published, and where? The conflict means that younger researchers whose work has immediate consequences find themselves forced to choose between prioritizing career management - via journal publication - or more immediately effective efforts such as training workshops and newspaper coverage to alert practitioners in the field of new problems and solutions. Choosing the latter may help tens of thousands of people - at a cost of a "You haven't published" stall to their careers. Equally difficult, today's structure of departments and journals is poorly suited for the increasing range of multi-, inter-, and trans-disciplinary research. Where such projects can find publication remains a conundrum.

All of that is without considering other misplaced or perverse incentives in the present system: novel ideas struggle to emerge; replication largely either does not happen or fails; and journal impact factors are overvalued. The Internet has opened up beneficial change: Ben Goldacre's COMPare project identifies dubious practices such as outcome switching and misreported findings; the push to publish data sets is growing; and preprint servers give much wider access to new work. It may not be all good, but it certainly isn't all bad.


Illustrations: A spherical cow jumping over the moon (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 8, 2019

Pivot

parliament-whereszuck.jpgWould you buy a used social media platform from this man?

"As I think about the future of the internet, I believe a privacy-focused communications platform will become even more important than today's open platform," Mark Zuckerberg wrote this week at the Facebook blog, also summarized at the Guardian.

Zuckerberg goes on to compare Facebook and Instagram to "the digital equivalent of a town square".

So many errors, so little time. Neither Facebook nor Instagram is open. "Open information," Rufus Pollock explained last year in The Open Revolution, "...can be universally and freely used, built upon, and shared." While, "In a Closed world information is exclusively 'owned' and controlled, its attendant wealth and power more and more concentrated".

The alphabet is open. I do not need a license from the Oxford English Dictionary to form words. The web is open (because Tim Berners-Lee made it so). One of the first social media, Usenet, is open. Particularly in the early 1990s, Usenet really was the Internet's town square.

*Facebook* is *closed*.

Sure, anyone can post - but only in the ways that Facebook permits. Running apps requires Facebook's authorization, and if Facebook makes changes, SOL. Had Zuckerberg said - as some have paraphrased him - "town hall", he'd still be wrong, but less so: even small town halls have metal detectors and guards to control what happens inside. However, they're publicly owned. Under the share structure Zuckerberg devised when the company went public, even the shareholders have little control over Facebook's business decisions.

So, now: this week Zuckerberg announced a seeming change of direction for the service. Slate, the Guardian, and the Washington Post all find skepticism among privacy advocates that Facebook can change in any fundamental way, and they wonder about the impact on Facebook's business model of the shift to focusing on secure private messaging instead of the more public newsfeed. Facebook's former chief security officer Alex Stamos calls the announcement a "judo move" that both removes the privacy complaints (Facebook now can't read what you say to your friends) and allows the site to say that complaints about circulating fake news and terrorist content are outside its control (Facebook now can't read what you say to your friends *and* doesn't keep the data).

But here's the thing. Facebook is still proposing to unify the WhatsApp, Instagram, and Facebook user databases. Zuckerberg's stated intention is to build a single unified secure messaging system. In fact, as Alex Hern writes at the Guardian, that's the one concrete action Zuckerberg has committed to, and it was announced back in January, to immediate privacy queries from the EU.

The point that can't be stressed enough is that although Facebook is trading away the ability to look at the content of what people post, it will retain oversight of all the traffic data. We have known for decades that metadata is even more revealing than content; I remember the late Caspar Bowden explaining the issues in detail in 1999. Even if Facebook's promise to vape the messages extends to keeping no copies for itself (a stretch, given that we found out in 2013 that the company keeps every character you type), it will be able to keep its insights into the connections between people and the conclusions it draws from them. Or, as Hern also writes, Zuckerberg "is offering privacy on Facebook, but not necessarily privacy from Facebook".

Siva Vaidhyanathan, author of Antisocial Media, seems to be the first to get this, and to point out that Facebook's supposed "pivot" is really just a decision to become more dominant, like China's WeChat. WeChat thoroughly dominates Chinese life: it provides messaging, payments, and a de facto identity system. This is where Vaidhyanathan believes Facebook wants to go, and if encrypting messages means it can't compete in China...well, WeChat already owns that market anyway. Let Google get the bad press.

Facebook is making a tradeoff. The merged database will give it the ability to inspect redundancy - are these two people connected on all three services or just one? - and therefore far greater certainty about which contacts really matter and to whom. The social graph that emerges from this exercise will be smaller because duplicates will have been merged, but far more accurate. The "pivot" does, however, look like it might enable Facebook to wriggle out from under some of its numerous problems - uh, "challenges". The calls for regulation and content moderation focus on the newsfeed. "We have no way to see the content people write privately to each other" ends both discussions, quite possibly along with any liability Facebook might have if the EU's copyright reform package passes with Article 11 (the "link tax") intact.
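That redundancy check is easy to sketch in principle. As a toy illustration only - these names and this logic are my own invention for the example, not Facebook's actual systems - merging per-service contact graphs and counting how many services confirm each link:

```python
# Toy illustration: merge contact graphs from several services and
# rank each connection by how many services confirm it.
from collections import Counter

def merge_graphs(*service_graphs):
    """Each graph is a set of (person_a, person_b) edges.
    Returns a Counter mapping edge -> number of confirming services."""
    confirmations = Counter()
    for graph in service_graphs:
        for a, b in graph:
            # Normalize so (a, b) and (b, a) count as the same link.
            confirmations[tuple(sorted((a, b)))] += 1
    return confirmations

# Hypothetical users on three services:
facebook = {("alice", "bob"), ("alice", "carol")}
whatsapp = {("bob", "alice")}
instagram = {("alice", "bob"), ("dave", "alice")}

merged = merge_graphs(facebook, whatsapp, instagram)
# The alice-bob link is confirmed by all three services, so that
# contact almost certainly matters; single-service links less so.
# Duplicates collapse, so the merged graph is smaller but sharper.
```

The point of the sketch is the tradeoff described above: the merged graph has fewer edges than the three inputs combined, but each surviving edge carries a confidence score the separate databases could never provide.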

Even calls that the company should be broken up - appropriate enough, since the EU only approved Facebook's acquisition of WhatsApp when the company swore that merging the two databases was technically impossible - may founder against a unified database. Plus, as we know from this week's revelations, the politicians calling for regulation depend on it for re-election, and in private they accommodate it, as Carole Cadwalladr and Duncan Campbell write at the Guardian and Bill Goodwin writes at Computer Weekly.

Overall, then, no real change.


Illustrations: The international Parliamentary committee, with Mark Zuckerberg's empty seat.


February 14, 2019

Copywrong

Anti-copyright.svg.pngJust a couple of weeks ago it looked like the EU's proposed reform of the Copyright Directive, last updated in 2001, was going to run out of time. In the last three days, it's revived, and it's heading straight for us. As Joe McNamee, the outgoing director of European Digital Rights (EDRi), said last year, the EU seems bent on regulating Facebook and Google by creating an Internet in which *only* Facebook and Google can operate.

We'll start with copyright. As previously noted, the EU's proposed reforms include two particularly contentious clauses: Article 11, the "link tax", which would require anyone using more than one or two words to link to a news article elsewhere to get a license, and Article 13, the "upload filter", which requires any site older than three years *or* earning more than €10,000,000 a year in revenue to ensure that no user posts anything that violates copyright, and sites that allow user-generated content must make "best efforts" to buy licenses for anything they might post. So even a tiny site - like net.wars, which is 13 years old - that hosted comments would logically be required to license all copyrighted content in the known universe, just in case. In reviewing the situation at TechDirt, Mike Masnick writes, "If this becomes law, I'm not sure Techdirt can continue publishing in the EU." Article 13, he continues, makes hosting comments impossible, and Article 11 makes their own posts untenable. What's left?

Julia Reda-wg-2016-06-24-cropped.jpgTo these known evils, the German Pirate Party MEP Julia Reda finds that the final text adds two more: limitations on text and data mining that allow rights holders to opt out under most circumstances, and - wouldn't you know it? - the removal of provisions that would have granted authors the right to proportionate remuneration (that is, royalties) instead of continuing to allow all-rights buy-out contracts. Many younger writers, particularly in journalism, now have no idea that as recently as 1990 limited contracts were the norm; the ability to resell and exploit their own past work was one reason the writers of the mid-20th century made much better livings than their counterparts do now. Communia, an association of digital rights organizations, writes that at least this final text can't get any *worse*.

Well, I can hear Brexiteers cry, what do you care? We'll be out soon. No, we won't - at least, we won't be out from under the Copyright Directive. For one thing, the final plenary vote is expected in March or April - before the May European Parliament general election. The good side of this is that UK MEPs will have a vote, and can be lobbied to use that vote wisely; from all accounts the present agreed final text settled differences between France and Germany, against which the UK could provide some balance. The bad side is that the UK, which relies heavily on exports of intellectual property, has rarely shown any signs of favoring either Internet users or creators against the demands of rights holders. The ugly side is that presuming this thing is passed before the UK brexits - assuming that happens - it will be the law of the land until or unless the British Parliament can be persuaded to amend it. And the direction of travel in copyright law for the last 50 years has very much been toward "harmonization".

Plus, the UK never seems to be satisfied with the amount of material its various systems are blocking, as the Open Rights Group documented this week. If the blocks in place weren't enough, Rebecca Hill writes at the Register that under the just-passed Counter-Terrorism and Border Security Act, clicking on a link to information likely to be useful to a person committing or preparing an act of terrorism is an offense. It seems to me that could be almost anything - automotive listings on eBay, chemistry textbooks, a *dictionary*.

What's infuriating about the copyright situation in particular is that no one appears to be asking the question that really matters, which is: what is the problem we're trying to solve? If the problem is how the news media will survive, this week's Cairncross Review, intended to study that exact problem, makes some suggestions. Like them or loathe them, they involve oversight and funding; none involve changing copyright law or closing down the Internet.

Similarly, if the problem is market dominance, try anti-competition law. If the problem is the increasing difficulty of making a living as an author or creator, improve their rights under contract law - the very provisions that Reda notes have been removed. And, finally, if the problem is the future of democracy in a world where two companies are responsible for poisoning politics, then delving into campaign finances, voter rights, and systemic social inequality pays dividends. None of the many problems we have with Facebook and Google are actually issues that tightening copyright law solves - nor is their role in spreading anti-science, such as this, just in from Twitter, anti-vaccination ads targeted at pregnant women.

All of those are problems we really do need to work on. Instead, the only problem copyright reform appears to be trying to solve is, "How can we make rights holders happier?" That may be *a* problem, but it's not nearly so much worth solving.


Illustrations: Anti-copyright symbol (via Wikimedia); Julia Reda MEP in 2016.


January 25, 2019

Reversal of fortunes

Seabees_remove_corroded_zinc_anodes_from_an_undersea_cable._(28073762161).jpgIt may seem unfair to keep busting on the explosion of the Internet's origin myths, but documenting what happens to the beliefs surrounding the beginning of a new technology may help foster more rational thinking next time.

Today's two cherished early-Internet beliefs: 1) the Internet was designed to withstand a bomb outage; 2) the Internet is impossible to censor. The first of these is true - the history books are clear on this - but it was taken to mean that the Internet could withstand all damage. That's just not true; it can certainly be badly disrupted on a national or regional basis.

While the Internet was new, a favorite route to overload was introducing a new application - the web, for example. Around 1996, Peter Dawe, the founder of one of Britain's first two ISPs, predicted that video would kill the Internet. For "kill" read "slow down horribly". Bear in mind that this was BB - before broadband - so an 11MB video file took hours to trickle in. Stream? Ha!

In 1995, Bob Metcalfe, the co-inventor of ethernet, predicted that the Internet would start to collapse in 1996. In 1997, he literally ate his column as penance for being wrong.

It was weird: with one part of their brains people were staking their livelihoods on online businesses, yet with another they believed the Internet was perpetually vulnerable. My favorite was Simson Garfinkel, who, writing "Fifty Ways to Kill the Internet" for Wired in 1997, nailed the best killswitch: "Buy ten backhoes." Underneath all the rhetoric about virtuality the Internet remains a physical network of cables. You'd probably need more than ten backhoes today, but it's still a finite number.

People have given up these worries even though parts of the Internet are actually being blacked out - by governments. In the acute form either access providers (ISPs, mobile networks) are ordered to shut down, or the government orders blocks on widely-used social media that people use to distribute news (and false news) and coordinate action, such as Twitter, Facebook, or WhatsApp.

In 2018, governments shutting down "the Internet" became an increasingly frequent fixture of the fortnightly Open Society Foundation Information Program News Digest. The list for 2018 is long, as Access Now says. At New America, Justin Sherman predicts that 2019 will see a rise in Internet blackouts - and I doubt he'll have to eat his pixels. The Democratic Republic of Congo was first, on January 1, soon followed by Zimbabwe.

There's general agreement that Internet shutdowns are bad for both democracy and the economy. In a 2016 study, the Brookings Institution estimated that Internet shutdowns cost countries $2.4 billion in 2015 (PDF), an amount that surely rises as the Internet becomes more deeply embedded in our infrastructure.

But the less-worse thing about the acute form is that it's visible to both internal and external actors. The chronic form, the second of our "things they thought couldn't be done in 1993", is long-term and less visible, and for that reason is the more dangerous of the two. The notion that censoring the Internet is impossible was best expressed by EFF co-founder John Gilmore in 1993: "The Internet perceives censorship as damage and routes around it". This was never a happy anthropomorphization of a computer network; more correctly, it's *people* on the Internet who perceive censorship as damage and route around it. Even today, ejected Twitterers head to Gab; disaffected 4chan users create 8chan. But "routing around the damage" only works as long as open protocols permit anyone to build a new service. No one suggests that *Facebook* regards censorship as damage and routes around it; instead, Facebook applies unaccountable censorship we don't see or understand. The shift from hundreds of dial-up ISPs to a handful of broadband providers is part of this problem: centralization.

The country that has most publicly and comprehensively defied Gilmore's aphorism is China; in the New York Times, Raymond Zhong recently traced its strategy. At Technology Review, James Griffiths reports that the country is beginning to export its censorship via malware infestations and DDoS attacks, while Abdi Latif Dahir writes at Quartz that it is also exporting digital surveillance to African countries such as Morocco, Egypt, and Libya inside the infrastructure it's helping them build as part of its digital Silk Road.

The Guardian offers a guide to what Internet use is like in Russia, Cuba, India, and China. Additional insight comes from Chinese professor Bai Tongdong, who complains in the South China Morning Post that Westerners opposing Google's Dragonfly censored search engine project do not understand the "paternalism" they are displaying in "deciding the fate of Chinese Internet users" without considering their opinion.

Mini-shutdowns are endemic in democratic countries: unfair copyright takedowns, the UK's web blocking, and EU law limiting hate speech. "From being the colonizers of cyberspace, Americans are now being colonized by the standards adopted in Brussels and Berlin," Jacob Mchangama complains at Quillette.

In the mid-1990s, Americans could believe they were exporting the First Amendment. Another EFF co-founder, John Perry Barlow, was more right than he'd have liked when, in a January 1992 column for Communications of the ACM, he called the US First Amendment "a local ordinance". That is much less true of the control being built into our infrastructure now.


Illustrations: The old threat model: Seabees remove corroded zinc anodes from an undersea cable (via Wikimedia, from the US Navy site.)


January 17, 2019

Misforgotten

European_Court_of_Justice_(ECJ)_in_Luxembourg_with_flags.jpg"It's amazing. We're all just sitting here having lunch like nothing's happening, but..." This was on Tuesday, as the British Parliament was getting ready to vote down the Brexit deal. This is definitely a form of privilege, but it's hard to say whether it's confidence born of knowing your nation's democracy is 900 years old, or aristocrats-on-the-verge denial as when World War I or the US Civil War was breaking out.

Either way, it's a reminder that for many people historical events proceed in the background while they're trying to get lunch or take the kids to school. This despite the fact that all of us in the UK and the US are currently hostages to a paralyzed government. The only winner in either case is the politics of disgust, and the resulting damage will be felt for decades. Meanwhile, everything else is overshadowed.

One of the more interesting developments of the past digital week is the European advocate general's preliminary opinion that the right to be forgotten, part of data protection law, should not be enforceable outside the EU. In other words, Google, which brought the case, should not have to prevent access to material to those mounting searches from the rest of the world. The European Court of Justice - one of the things British prime minister Theresa May has most wanted the UK to leave behind since her days as Home Secretary - typically follows these preliminary opinions.

The right to be forgotten is one piece of a wider dispute that one could characterize as the Internet versus national jurisdiction. The broader debate includes who gets access to data stored in another country, who gets to crack crypto, and who gets to spy on whose citizens.

This particular story began in France, where the Commission Nationale de l'Informatique et des Libertés (CNIL), the French data protection regulator, fined Google €100,000 for selectively removing a particular person's name from its search results on just its French site. CNIL argued that instead the company should delink it worldwide. You can see their point: otherwise, anyone can bypass the removal by switching to .com or .co.jp. On the other hand, following that logic imposes EU law on other countries, overriding protections such as the US First Amendment. Americans in particular tend to regard the right to be forgotten with the sort of angry horror of Lady Bracknell contemplating a handbag. Google applied to the European Court of Justice to override CNIL and vacate the fine.

A group of eight digital rights NGOs, led by Article 19 and including Derechos Digitales, the Center for Democracy and Technology, the Clinique d'intérêt public et de politique d'Internet du Canada (CIPPIC), the Electronic Frontier Foundation, Human Rights Watch, Open Net Korea, and Pen International, welcomed the ruling. Many others would certainly agree.

The arguments about jurisdiction and censorship were, like so much else, foreseen early. By 1991 or thereabouts, the question of whether the Internet would be open everywhere or devolve to lowest-common-denominator censorship was frequently debated, particularly after the United States v. Thomas case that featured a clash of community standards between Tennessee and California. If you say that every country has the right to impose its standards on the rest of the world, it's unclear what would be left other than a few Disney characters and some cat videos.

France has figured in several of these disputes: in (I think) the first international case of this kind, in 2000, it was a French court that ruled that the sale of Nazi memorabilia on Yahoo!'s site was illegal; after trying to argue that France was trying to rule over something it could not control, Yahoo! banned the sales on its French auction site and then, eventually, worldwide.

Data protection law gave these debates a new and practical twist. The origins of this particular case go back to 2014, when the European Court of Justice ruled in Google Spain v AEPD and Mario Costeja González that search engines must remove links to web pages that turn up in a name search and contain information that is irrelevant, inadequate, or out of date. The ruling arguably sought to redress the imbalance of power between individuals and the corporations publishing information about them, while weighing that against free expression. Finding this kind of difficult balance, the law scholar Judith Rauhofer argued at that year's Computers, Freedom, and Privacy, is what courts *do*. The court required search engines to remove from the results that show up in a *name* search the link to the original material; it did not require the original websites to remove the material entirely or require the link's removal from other search results. The ruling removed, if you like, a specific type of power amplification, but not the signal.

How far the search engines have to go is the question the ECJ is now trying to settle. This is one of those cases where no one gets everything they want because the perfect is the enemy of the good. The people who want their past histories delinked from their names don't get a complete solution, and no one country gets to decide what people in other countries can see. Unfortunately, the real winner appears to be geofencing, which everyone hates.


Illustrations:


December 28, 2018

Opening the source

Participants_at_Budapest_meeting,_December_1,_2001.jpegRecently, Michael Salmony, who has appeared here before, seemed horrified to discover open access, the movement for publishing scientific research so it's freely accessible to the public (who usually paid for it) instead of closed to subscribers. In an email, he wrote, "...looks like the Internet is now going to destroy science as well".

This is not my view.

The idea about science that I grew up with is that good science is a self-correcting process that depends on scientists being able to build on, critique, and replicate each other's work. So the question we should ask is: does the business model of traditional publishing support that process? Are there other models that would support it better? Science spawns businesses, serves businesses, and may even be a business itself, but good-quality science first serves the public interest.

There are three separate issues here. The first is the process of science itself: how best to fund, support, and nurture it. The second is the business model of scientific *publishing*. The third, which relates to both of those, is how to combat abuse. Obviously, they're interlinked.

The second of these is the one that resonates with copyright battles past. Salmony: "OA reminds me warmly of Napster disrupting music publishing, but in the end iTunes (another commercial, quality controlled) model has won."

iTunes and the music industry are not the right models. No one dies of lack of access to Lady Gaga's latest hit. People *have* died through being unable to afford access to published research.

Plus, the push is coming from an entirely different direction. Napster specifically and file-sharing generally were created by young, anti-establishment independents who coded copyright bypasses because they could. The open access movement began with a statement of principles codified by university research types - mavericks, sure, but representing the Public Library of Science, Open Society Institute, BioMed Central, and universities in Montreal, London, and Southampton. My first contact with the concept was circa 1993, when World Health Organization staffer Christopher Zielinski raised the deep injustice of pricing research access out of developing countries' reach.

Sci-Hub is a symptom, not a cause. Another symptom: several months ago, 60 German universities canceled their subscriptions to Elsevier journals to protest the high fees and restricted access. Many scientists are offended at the journals' expectation that they will write papers for free and donate their time for peer review while then charging them to read the published results. One way we know this is that Sci-Hub builds its giant cache via educational institution proxies that bypass the paywalls. At least some of these are donated by frustrated people inside those institutions. Many scientists use it.

As I understand it, publication costs are incorporated into research grants; there seems no reason why open access should impede peer review or indexing. Why shouldn't this model become financially sustainable and assure quality control as before?

A more difficult issue is that one reason traditional journals still matter is that academic culture has internalized their importance in determining promotions and tenure. Building credibility takes time, and many universities have been slow to adapt. However, governments and research councils in Germany, the UK, and South Africa are all pushing open access policies via their grant-making conditions.

Plus, the old model is no longer logistically viable in many fields as the pace of change accelerates. Computer scientists were first to ignore it, relying instead on conference proceedings and trading papers and research online.

Back to Salmony: "Just replacing one bad model with another one that only allows authors who can afford to pay thousands of dollars (or is based on theft, like Sci Hub) and that threatens the quality (edited, peer review, indexed etc) sounds less than convincing." In this he's at odds with scientists such as Ben Goldacre, who in 2007 called open access "self-evidently right and good".

This is the first issue. In 1992, Marcel C. LaFollette's Stealing into Print: Fraud, Plagiarism, and Misconduct in Scientific Publishing documented many failures of traditional peer review. In 2010, the Greek researcher John Ioannidis established how often medical research is retracted. At Retraction Watch, science journalist Ivan Oransky documents endemic sloppiness and outright fraud. Admire the self-correction, but the reality is that journals have little interest in replication, preferring newsworthy new material - though not *too* new.

Ralph Merkle - the "third man", alongside Whit Diffie and Martin Hellman, in inventing public key cryptography - has complained that journals favor safe, incremental steps. Merkle's cryptography idea was dismissed with: "There is nothing like this in the established literature." True. But it was crucial for enabling ecommerce.

Salmony's third point: "[Garbage] is the plague of the open Internet", adding a link to a Defcon 26 talk. Sarah Jeong's Internet of Garbage applies.

Abuse and fakery are indeed rampant, but a lot is due to academic incentives. For several years, my 2014 article for IEEE Security & Privacy explaining the Data Retention and Investigatory Powers Act (2014) attracted invitations to speak at (probably) fake conferences and publish papers in (probably) fake journals. Real researchers tell me this is par for the course. But this is a problem of human predators, not "the open Internet", and certainly not open access.


Illustrations: Participants in drafting the Budapest principles (via Wikimedia).


November 30, 2018

Digital rights management

parliament-whereszuck.jpg"I think we would distinguish between the Internet and Facebook. They're not the same thing." With this, the MP Damian Collins (Conservative, Folkestone and Hythe) closed Tuesday's hearing on fake news, in which representatives of nine countries, combined population 400 million, posed questions to Facebook VP for policy Richard Allan, proxying for non-appearing CEO Mark Zuckerberg.

Collins was correct when you're talking about the countries present: UK, Ireland, France, Belgium, Latvia, Canada, Argentina, Brazil, and Singapore. However, the distinction is without a difference in numerous countries where poverty and no-cost access to Facebook or its WhatsApp subsidiary keep the population within its boundaries. Foreseeing this probable outcome, India's regulator banned Facebook's Free Basics on network neutrality grounds.

Much less noticed, the nine also signed a set of principles for governing the Internet. Probably the most salient point is the last one, which says technology companies "must demonstrate their accountability to users by making themselves fully answerable to national legislatures and other organs of representative democracy". They could just as well have phrased it, "Hey, Zuckerberg: start showing up."

This was, they said, the first time multiple parliaments have joined together in the House of Commons since 1933, and the first time ever that so many nations assembled - and even that wasn't enough to get Zuckerberg on a plane. Even if Allan was the person best-placed to answer the committee's questions, it looks bad, like you think your company is above governments.

The difficulty that has faced would-be Internet regulators from the beginning is this: how do you get 200-odd disparate cultures to agree? China would openly argue for censorship; many other countries would openly embrace freedom of expression while happening to continue expanding web blocking, filtering, and other restrictions. We've seen the national disparities in cultural sensitivities played out for decades in movie ratings and TV broadcasting rules. So what's striking about this declaration is that nine countries from three continents have found some things they can agree on - and that is that libertarian billionaires running the largest and most influential technology companies should accept the authority of national governments. Hence, the group's first stated principle: "The internet is global and law relating to it must derive from globally agreed principles". It took 22 years, but at last governments are responding to John Perry Barlow's 1996 Declaration of the Independence of Cyberspace: "Not bloody likely."

Even Allan, a member of the House of Lords and a former MP (LibDem, Sheffield Hallam), admitted, when Collins asked how he thought it looked that Zuckerberg had sent a proxy to testify, "Not great!"

The governments' principles, however, are a statement of authority, not a bill of rights for *us*, a tougher proposition that many have tried to meet. In 2010-2012, there was a flurry of attempts. Then-US president Barack Obama published a list of privacy principles; the 2010 Computers, Freedom, and Privacy conference, led by co-chair Jon Pincus, brainstormed a bill of rights mostly aimed at social media; UK deputy Labour leader Tom Watson ran for his seat on a platform of digital rights (now gone from his website); and US Congressman Darrell Issa (R-CA) had a try.

Then a couple of years ago, Cybersalon began an effort to build on all these attempts to draft a bill of rights hoping it would become a bill in Parliament. Labour drew on it for its Digital Democracy Manifesto (PDF) in 2016 - though this hasn't stopped the party from supporting the Investigatory Powers Act.

The latest attempt came a few weeks ago, when Tim Berners-Lee launched a contract for the web, which has been signed by numerous organizations and individuals. There is little to object to: universal access, respect for privacy, free expression and human rights, and civil discourse. Granted, the contract is, like the Bishop of Oxford's ten commandments for artificial intelligence, aspirational rather than practically prescriptive. The civil discourse element is reminiscent of Tim O'Reilly's 2007 Code of Conduct, which many, net.wars included, felt was unworkable.

The reality is that it's unlikely that O'Reilly's code of conduct or any of its antecedents and successors will ever work without rigorous human moderatorial intervention. There's a similar problem with the government pledges: is China likely to abandon censorship? Next year half the world will be online - but alongside the Contract, a Web Foundation study finds that the rate at which people are getting online has fallen sharply since 2015. Particularly excluded are women and the rural poor, and getting them online will require significant investment not only in broadband but in education - in other words, commitments from both companies and governments.

Popular Mechanics calls the proposal 30 years too late; a writer on Medium calls it communist; and Bloomberg, among others, argues that the only entities that can rein in the big technology companies are governments. Yet the need for them to do this appears nowhere in the manifesto. "...The web is long past attempts at self-regulation and voluntary ethics codes," Bloomberg concludes.

Sadly, this is true. The big design error in creating both the Internet and the web was omitting human psychology and business behavior. Changing today's situation requires very big gorillas. As we've seen this week, even nine governments together need more weight.


Illustrations: Zuckerberg's empty chair in the House of Commons.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 23, 2018

Phished

I regularly get Friend requests on Facebook from things I doubt are real people. They are always male and, at a guess, 40-something, have no Friends in common with me, and don't bother to write a message explaining how I know them. If I take the trouble to click through to their profiles, their Friends lists are empty. This week's request, from "Smith Thomson", is muscled, middle-aged, and slightly brooding. He lists his workplace as a US Army base and his birthplace as Houston. His effort is laughably minimal: zero Friends and the only profile content is the cover photograph plus a second photo with a family in front of a Disney castle, probably Photoshopped. I have a nasty, suspicious mind, and do not accept the request.

One of the most interesting projects under the umbrella of the Research Institute for Science of Cyber Security is Detecting and Preventing Mass-Marketing Fraud, led from the University of Warwick by Monica Whitty, and explained here. We tend to think of romance scams in particular, less so advance-fee fraud, as one-to-one rip-offs. Instead, the reality behind them is highly organized criminals operating at scale.

This is a billion-dollar industry with numerous victims. On Monday, the BBC news show Panorama offered a carefully worked example. The journalists followed the trail of these "catfish" by setting up a fake profile and awaiting contact, which quickly arrived. Following clues and payment instructions led the journalists to the scammer himself, in Lagos, Nigeria. One of the victims in particular displays reactions Whitty has seen in her work, too: even when you explain the fraud, some victims still don't recognize the same pattern when they are victimized again. Panorama's saddest moment features an older man who was clearly being retargeted after having already been fleeced of £100,000, his life savings. The new scammer was using exactly the same methodology, and yet the man justified sending his new "girlfriend" £500 on the basis that it was comparatively modest, though at least he sounded disinclined to send more. He explained his thinking this way: "They reckon that drink and drugs are big killers. Yeah, they are, but loneliness is a bigger killer than any of them, and trying to not be lonely is what I do every day."

I doubt Panorama had to look very hard to find victims. They pop up a lot at security events, where everyone seems to know someone who's been had: the relative whose computer they had to clean after they'd been taken in by a tech support scam, the friend they'd had to stop from sending money. Last year, one friend spent several months seeking restitution for her mother, who was at least saved from the worst by an alert bank teller at her local branch. The loss of those backstops - people in local bank branches and other businesses who knew you and could spot when you were doing something odd - is a largely unnoticed piece of why these scams work.

In a 2016 survey, Microsoft found that two-thirds of US consumers had been exposed to a tech support scam in the previous year. In the UK in 2016, a report by the US Better Business Bureau says (PDF), there were more than 34,000 complaints about this type of fraud alone - and it's known that fewer than 10% of victims complain. Each scam has its preferred demographic. Tech support fraud doesn't typically catch older people, who have life experience and have seen other scams even if not this particular one. The biggest victims of this type of scam are millennials aged 18 to 34 - with no gender difference.

DAPM's meeting mostly focused on dating scams, a particular interest of Whitty's because the emotional damage, on top of the financial damage, is so fierce. From her work, I've learned that the military connection "Smith Thomson" claimed is a common pattern. Apparently some people are more inclined to trust a military background, and claiming that they're located on a military base makes it easy for scammers to dodge questions about exactly what they're doing and where they are and resist pressure to schedule a real-life meeting.

Whitty and her fellow researchers have already discovered that the standard advice we give people doesn't work. "If something looks too good to be true it usually is" is only meaningful at the beginning - and that's not when the "too good to be true" manifests itself. Fraudsters know to establish trust before ratcheting up the emotions and starting to ask - always urgently - for money. By then, requests that would raise red flags at the beginning seem like merely the natural next steps in a developed relationship. Being scammed once gets you onto a "suckers list", ripe for retargeting - like Panorama's victim. Such lists are not new, either; they have been passed around among fraudsters for at least a century.

The point of DAPM's research is to develop interventions. They've had some statistically significant success with instructions teaching people to recognize scams. However, this method requires imparting a lot of information, which means the real conundrum is how you motivate people to participate when most believe they're too smart to get caught. The situation is very like the paranormal claims The Skeptic deals with: no matter how smart you are or how highly educated, you, too, can be fooled. And, unlike in other crimes, DAPM finds, 52% of these victims blame themselves.


Illustrations: Cupid's Message (via Missouri Historical Society).


November 16, 2018

Septet

This week catches up on some things we've overlooked. Among them, in response to a Twitter comment: two weeks ago, on November 2, net.wars started its 18th unbroken year of Fridays.

Last year, the writer and documentary filmmaker Astra Taylor coined the term "fauxtomation" to describe things that are hyped as AI but that actually rely on the low-paid labor of numerous humans. In The Automation Charade she examines the consequences: undervaluing human labor and making it both invisible and insecure. Along these lines, it was fascinating to read that in Kenya, workers drawn from one of the poorest places in the world are paid to draw outlines around every object in an image in order to help train AI systems for self-driving cars. How many of us look at a self-driving car and see someone tracing every pixel?

***

Last Friday, Index on Censorship launched Demonising the media: Threats to journalists in Europe, which documents journalists' diminishing safety in western democracies. Italy takes the EU prize, with 83 verified physical assaults, followed by Spain with 38 and France with 36. Overall, the report found 437 verified incidents of arrest or detention and 697 verified incidents of intimidation. It's tempting - as in the White House dispute with CNN's Jim Acosta - to hope for solidarity in response, but it's equally likely that years of politicization have left whole sectors of the press as divided as any bullying politician could wish.

***

We utterly missed the UK Supreme Court's June decision in the dispute pitting ISPs against "luxury" brands including Cartier, Mont Blanc, and International Watch Company. The goods manufacturers wanted to force BT, EE, and the three other original defendants, which jointly provide 90% of Britain's consumer Internet access, to block more than 46,000 websites that were marketing and selling counterfeits. In 2014, the High Court ordered the blocks. In 2016, the Court of Appeal upheld that on the basis that without ISPs no one could access those websites. The final appeal was solely about who pays for these blocks. The Court of Appeal had said: ISPs. The Supreme Court decided instead that under English law innocent bystanders shouldn't pay for solving other people's problems, especially when solving them benefits only those others. This seems a good deal for the rest of us, too: being required to pay may constrain blocking demands to reasonable levels. It's particularly welcome after years of expanded blocking for everything from copyright, hate speech, and libel to data retention and interception that neither we nor ISPs much want in the first place.

***

For the first time the Information Commissioner's Office has used the Computer Misuse Act rather than data protection law in a prosecution. Mustafa Kasim, who worked for Nationwide Accident Repair Services, will serve six months in prison for using former colleagues' logins to access thousands of customer records and spam the owners with nuisance calls. While the case reminds us that the CMA still catches only the small fry, we see the ICO's point.

***

In finally catching up with Douglas Rushkoff's Throwing Rocks at the Google Bus, the section on cashless societies and local currencies reminded us that in the 1960s and 1970s, New Yorkers considered it acceptable to tip with subway tokens, even in the best restaurants. Who now would leave a Metro Card? Currencies may be local or national; cashlessness is global. It may be great for those who don't need to think about how much they spend, but it means all transactions are intermediated, with a percentage skimmed off the top for the middlefolk. The costs of cash have been invisible to us, as Dave Birch says, but cash is public infrastructure. Cashlessness privatizes that without any debate about the social benefits or costs. How centralized will this new infrastructure become? What happens to sectors that aren't commercially valuable? When do those commissions start to rise? What power will we have to push back? Even on-the-brink Sweden is reportedly rethinking its approach for just these reasons. In a survey, only 25% wanted a fully cashless society.

***

Incredibly, 18 years after chad hung and people disposed in Bush versus Gore, ballots are still being designed in ways that confuse voters, even in Broward County, which should have learned better. The Washington Post tells us that in both New York and Florida ballot designs left people confused (seeing them, we can see why). For UK voters accustomed to a bit of paper with big names and boxes to check with a stubby pencil, it's baffling. Granted, the multiple federal races, state races, local officers, judges, referendums, and propositions in an average US election make ballot design a far more complex problem. There is advice available, from the US Election Assistance Commission, which publishes design best practices, but I'm reliably told it's nonetheless difficult to do well. On Twitter, Dana Chisnell provides a series of links that taken together explain some background. Among them is this one from the Center for Civic Design, which explains why voting in the US is *hard* - and not just because of the ballots.

***

Finally, a word of advice. No matter how cool it sounds, you do not want a solar-powered, radio-controlled watch. Especially not for travel. TMOT.

Illustrations: Chad 2000.


November 2, 2018

The Brother proliferation

There's this about having one or two big threats: they distract attention from the copycat threats forming behind them. Unnoticed by most of us - the notable exception being Jeff Chester and his Center for Digital Democracy - the landscape of data brokers is both consolidating and expanding in new and alarming ways. Facebook and Google remain the biggest data hogs, but lining up behind them are scores of others embracing the business model of surveillance capitalism. For many, it's an attempt to refresh their aging business models; no one wants to become an unexciting solid business.

The most obvious group is the telephone companies - we could call them "legacy creepy". We've previously noted their moves into TV. For today's purposes, Exhibit A is Verizon's 2015 acquisition of AOL, which Fortune magazine attributed to AOL's collection of advertising platforms, particularly in video, as well as its more visible publishing sites (which include the Huffington Post, Engadget, and TechCrunch). Verizon's 2016 acquisition of Yahoo! and its 3 billion user accounts and long history also drew notice, most of it negative. Yahoo!, the reasoning went, was old and dying - plus there were the data breaches eventually found to have affected all 3 billion Yahoo! accounts. Oath, Verizon's name for the division that owns AOL and Yahoo!, also owns MapQuest and Tumblr. For our purposes, though, the notable factor is that with these content sites Verizon gets a huge historical pile of their users' data that it can combine with what it knows about its subscribers in truly disturbing ways. This is a company that only two years ago was fined $1.35 million for secretly tracking its customers.

Exhibit B is AT&T, which was barely finished swallowing Time-Warner (and presumably its customer database along with it) when it announced it would acquire the adtech company AppNexus, a deal Forrester's Joanna O'Connell calls a material alternative to Facebook and Google. Should you feel insufficiently disturbed by that prospect, in 2016 AT&T was caught profiting from handing off data to federal and local drug officials without a warrant. In 2015, the company also came up with the bright idea of charging its subscribers not to spy on them via deep packet inspection. For what it's worth, AT&T is also the longest-serving campaigner against network neutrality.

In 2017, Verizon and AT&T were among the biggest lobbyists seeking to up-end the Federal Communications Commission's privacy protections.

The move into data mining appears likely to be copied by legacy telcos internationally. As evidence, we can offer Exhibit C, Telenor, which in 2016 announced its entry into the data mining business by buying the marketing technology company Tapad.

Category number two - which we can call "you-thought-they-had-a-different-business-model creepy" - is a surprise, at least to me. Here, Exhibit A is Oracle, which is reinventing itself from enterprise software company to cloud and advertising platform supplier. Oracle's list of recent acquisitions is striking: the consumer spending tracker Datalogix, the "predictive intelligence" company DataFox, the cross-channel marketing company Responsys, the data management platform BlueKai, the cross-channel machine learning company Crosswise, and audience tracker AddThis. As a result, Oracle claims it can link consumers' activities across devices, online and offline, something just about everyone finds creepy except, apparently, the people who run the companies that do it. It may surprise you to find Adobe is also in this category.

Category number three - "newtech creepy" - includes data brokers like Acxiom, perhaps the best-known of the companies that have everyone's data but that no one's ever heard of. It, too, has been scooping up competitors and complementary companies, for example LiveRamp, which it acquired from fellow profiling company RapLeaf, and which is intended to help it link online and offline identities. The French company Criteo uses probabilistic matching to send ads following you around the web and into your email inbox. My favorite in this category is Quantcast, whose advertising and targeting activities include "consent management". In other words, they collect your consent or lack thereof to cookies and tracking at one website and then follow you around the web with it. Um...you have to opt into tracking to opt out?

Meanwhile, the older credit bureaus Experian and Equifax - "traditional creepy" - have been buying enhanced capabilities and expanded geographical reach and partnering with telcos. One of Equifax's acquisitions, TALX, gave the company employment and payroll information on 54 million Americans.

All this detail amounts to one thing: big companies with large resources are moving into the business of identifying us across devices, linking our offline purchases to our online histories, and packaging us into audience segments to sell to advertisers. They're all competing for the same zircon ring: our attention and our money. Doesn't that make you feel like a valued member of society?

At the 2000 Computers, Freedom, and Privacy conference, the science fiction writer Neal Stephenson presciently warned that focusing solely on the threat of Big Brother was leaving us open to invasion by dozens of Little Brothers. It was good advice. Now, Very Large Brothers are proliferating all around us. GDPR is supposed to redress this imbalance of power, but it only works when you know who's watching you so you can mount a challenge.


Illustrations: "Security Monitoring Centre" (via Wikimedia).


September 27, 2018

We know where you should live

In the memorable panel "We Know Where You Will Live" at the 1996 Computers, Freedom, and Privacy conference, the science fiction writer Pat Cadigan startled everyone, including fellow panelists Vernor Vinge, Tom Maddox, and Bruce Sterling, by suggesting that some time in the future insurance companies would levy premiums for "risk purchases" - beer, junk foods - in supermarkets in real time.

Cadigan may have been proved right sooner than she expected. Last week, John Hancock, a 156-year-old US insurance company, announced it would discontinue underwriting traditional life insurance policies. Instead, in future all its policies will be "interactive"; that is, they will come with the "Vitality" program, under which customers supply data collected by their wearable fitness trackers or smartphones. John Hancock promotes the program, which it says is already used by 8 million customers in 18 countries, as providing discounts and a sort of second reward for "living healthy". In the company's depiction, everyone wins - you get lower premiums and a healthier life, and John Hancock gets your data, enabling it to make more accurate risk assessments and increase its efficiency.

Even then, Cadigan was not the only one with the idea that insurance companies would exploit the Internet and the greater availability of data. A couple of years later, a smart and prescient friend suggested that we might soon be seeing insurance companies offer discounts for mounting a camera on the hood of your car so they could mine the footage to determine blame when accidents occurred. This was long before smartphones and GoPros, but the idea of small, portable cameras logging everything goes back at least to 1945, when Vannevar Bush wrote As We May Think, an essay that imagined something a lot like the web, if you make allowances for storing the whole thing on microfilm.

This "interactive" initiative is clearly a close relative of all these ideas, and is very much the kind of thing University of Maryland professor Frank Pasquale had in mind when writing his book The Black Box Society. John Hancock may argue that customers know what data they're providing, so it's not all that black a box, but the reality is that you only know what you upload. Just like when you download your data from Facebook, you do not know what other data the company matches it with, what else is (wrongly or rightly) in your profile, or how long the company will keep penalizing you for the month you went bonkers and ate four pounds of candy corn. Surely it's only a short step to scanning your shopping cart or your restaurant meal with your smartphone to get back an assessment of how your planned consumption will be reflected in your insurance premium. And from there, to automated warnings, and...look, if I wanted my mother lecturing me in my ear I wouldn't have left home at 17.

There has been some confusion about how much choice John Hancock's customers have about providing their data. The company's announcement is vague about this. However, it does make some specific claims: Vitality policy holders so far have been found to live 13-21 years longer than the rest of the insured population; generate 30% lower hospitalization costs; take nearly twice as many steps as the average American; and "engage with" the program 576 times a year.

John Hancock doesn't mention it, but there are some obvious caveats about these figures. First of all, the program began in 2015. How does the company have data showing its users live so much longer? Doesn't that suggest that these users were living longer *before* they adopted the program? Which leads to the second point: the segment of the population that has wearable fitness trackers and smartphones tends to be more affluent (which tends to favor better health already) and more focused on their health to begin with (ditto). I can see why an insurance company would like me to "engage with" its program twice a day, but I can't see why I would want to. Insurance companies are not my *friends*.

At the 2017 Computers, Privacy, and Data Protection conference, one of the better panels discussed the future of the insurance industry in the big data era. For the insurance industry to make sense, it requires an element of uncertainty: insurance is about pooling risk. For individuals, it's a way of managing the financial cost of catastrophes. Continuously feeding our data into insurance companies so they can more precisely quantify the risk we pose to their bottom line will eventually mean a simple equation: being able to get insurance at a reasonable rate is a pretty good indicator you're unlikely to need it. The result, taken far enough, will be to undermine the whole idea of insurance: if everything is known, there is no risk, so what's the point? Betting on a sure thing is cheating in insurance just as surely as it is in gambling. In the panel, both Katja De Vries and Mireille Hildebrandt noted the sinister side of insurance companies acting as "nudgers" to improve our behavior for their benefit.

So, less "We know where you will live" and more "We know where and how you *should* live."


Illustrations: Pat Cadigan (via Wikimedia).


September 21, 2018

Facts are screwed

"Fake news uses the best means of the time," Paul Bernal said at last week's gikii conference, an annual mingling of law, pop culture, and technology. Among his examples of old media turned to propaganda purposes: hand-printed woodcut leaflets, street singers, plays, and pamphlets stuck in cracks in buildings. The big difference today is data mining, profiling, targeting, and the real-time ability to see what works and improve it.

Bernal's most interesting point, however, is that like a magician's plausible diversion the surface fantasy story may stand in front of an earlier fake news story that is never questioned. His primary example was Vlad the Impaler, the historical figure who is thought to have inspired Dracula. Vlad's fame as a vicious and profligate killer derives from those woodcut leaflets. Bernal suggests the reasons: a) Vlad had many enemies who wrote against him, some of it true, most of it false; b) most of the stories were published ten to 20 years after he died; and c) there was a whole complicated thing about the rights to Transylvanian territory.

"Today, people can see through the vampire to the historical figure, but not past that," he said.

His main point was that governments' focus on content to defeat fake news is relatively useless. A more effective approach would have us stop getting our news from Facebook. Easy for me personally, but hard to turn into public policy.

Soon afterwards, Judith Rauhofer outlined a related problem: because Russian bots are aimed at exacerbating existing divisions, almost anyone can fall for one of the fake messages. Spurred on by a message from the Tumblr powers that be advising that she had shared a small number of messages that were traced to now-closed Russian accounts, Rauhofer investigated. In all, she had shared 18 posts - and these had been reblogged 2.7 million times, and are still being recirculated. The focus on paid ads means there is relatively little research on organic and viral sharing of influential political messages. Yet these reach vastly bigger audiences and are far more trusted, especially because people believe they are not being influenced by them.

In the particular case Rauhofer studied, "There are a lot of minority groups under attack in the US, the UK, Germany, and so on. If they all united in their voting behavior and political activity they would have a chance, but if they're fighting each other that's unlikely to happen." Divide and conquer, in other words, works as well as it ever has.

The worst part of the whole thing, she said, is that looking over those 18 posts, she would absolutely share them again and for the same reason: she agreed with them.

Rauhofer's conclusion was that the combination of prioritization - that is, the ordering of what you see according to what the site believes you're interested in - and targeting form "a fail-safe way of creating an environment where we are set against each other."

So in Bernal's example, an obvious fantasy masks an equally untrue - or at least wildly exaggerated - story, while in Rauhofer's the things you actually believe can be turned into weapons of mass division. Both scenarios require much more nuance and, as we've argued here before, many more disciplines to solve than are currently being deployed.

Andrea Matwyshyn provided five mini-fables illustrating five problems to consider when designing AI - or, as she put it, five stories of "future AI failure". These were:

- "AI inside" a product can mean sophisticated machine learning algorithms or a simple regression analysis; you cannot tell from the outside what is real and what's just hype, and the specifics of design matter. When Google's algorithm tagged black people as "gorillas", the company "fixed" the algorithm by removing "gorilla" from its list of possible labels. The algorithm itself wasn't improved.

- "Pseudo-AI" has humans doing the work of bots. There are lots of historical examples for this one, most notably the mechanical Turk; Matwyshyn chose the fake automaton the Digesting Duck.

- Decisions that bring short-term wins may also bring long-term losses in the form of unintended negative consequences that haven't been thought through. Among Matwyshyn's examples were a number of cases where human interaction changed the analysis, such as the failure of Google Flu Trends and Microsoft's Tay bot.

- Minute variations or errors in implementation or deployment can produce very different results than intended. Matwyshyn's prime example was a pair of electronic hamsters she thought could be set up to repeat each other's words to form a recursive loop. Perhaps responding to harmonics less audible to humans, they instead screeched unintelligibly at each other. "I thought it was a controlled experiment," she said, "and it wasn't."

- There will always be system vulnerabilities and unforeseen attacks. Her example was squirrels that eat power lines, but the backhoe is the traditional example.

To prevent these situations, Matwyshyn emphasized disclosure about code; verification, in the form of third-party audits; substantiation, in the form of evidence to back up the claims that are made; anticipation, that is, liability and good corporate governance; and remediation, again a function of good corporate governance.

"Fail well," she concluded. Words for our time.


Illustrations: Woodcut of Vlad, with impaled enemies.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 14, 2018

Hide by default

Beeban-Kidron-Dubai-2016.jpgLast week, defenddigitalme, a group that campaigns for children's data privacy and other digital rights, and Livingstone's group at the London School of Economics assembled a discussion of the Information Commissioner's Office's consultation on age-appropriate design for information society services, which is open for submissions until September 19. The eventual code will be used by the Information Commissioner when she considers regulatory action, may be used as evidence in court, and is intended to guide website design. It must take into account both the child-related provisions of the General Data Protection Regulation and the United Nations Convention on the Rights of the Child.

There are some baseline principles: data minimization, comprehensible terms and conditions and privacy policies. The last is a design question: since most adults either can't understand or can't bear to read terms and conditions and privacy policies, what hope of making them comprehensible to children? The summer's crop of GDPR notices is not a good sign.

There are other practical questions: when is a child not a child any more? Do age bands make sense when the capabilities of one eight-year-old may be very different from those of another? Capacity might be a better approach - but would we want Instagram making these assessments? Also, while we talk most about the data aggregated by commercial companies, government and schools collect much more, including biometrics.

Most important, what is the threat model? What you implement and how is very different if you're trying to protect children's spaces from ingress by abusers than if you're trying to protect children from commercial data aggregation or content deemed harmful. Lacking a threat model, "freedom", "privacy", and "security" are abstract concepts with no practical meaning.

There is no formal threat model - as the Yes, Minister episode The Challenge (series 3, episode 2) would predict, it comes too close to setting "failure standards". The lack is particularly dangerous here, because "protecting children" means such different things to different people.

The other significant gap is research. We've commented here before on the stratification of social media demographics: you can practically carbon-date someone by the medium they prefer. This poses a particular problem for academics, in that research from just five years ago is barely relevant. What children know about data collection has markedly changed, and the services du jour have different affordances. Against that, new devices have greater spying capabilities, and, the Norwegian Consumer Council finds (PDF), Silicon Valley pays top-class psychologists to deceive us with dark patterns.

Seeking to fill the research gap are Sonia Livingstone and Mariya Stoilova. In their preliminary work, they are finding that children generally care deeply about their privacy and the data they share, but often have little agency and think primarily in interpersonal terms. The Cambridge Analytica scandal has helped inform them about the corporate aggregation that's taking place, but they may, through familiarity, come to trust people such as their favorite YouTubers and constantly available things like Alexa in ways their adults dislike. The focus on Internet safety has left many thinking that's what privacy means. In real-world safety, younger children are typically more at risk than older ones; online, the situation is often reversed because older children are less supervised, explore further, and take more risks.

The breath of passionate fresh air in all this is Beeban Kidron, an independent - that is, appointed - member of the House of Lords who first came to my attention by saying intelligent and measured things during the post-referendum debate on Brexit. She refuses to accept the idea that oh, well, that's the Internet, there's nothing we can do. However, she *also* genuinely seems to want to find solutions that preserve the Internet's benefits and incorporate the often-overlooked child's right to develop and make mistakes. But she wants services to incorporate the idea of childhood: if all users are equal, then children are treated as adults, a "category error". Why should children have to be resilient against systemic abuse and indifference?

Kidron, who is a filmmaker, began by doing her native form of research: in 2013 she made the full-length documentary InRealLife, which studied a number of teens using the Internet. While the film concludes on a positive note, many of the stories depressingly confirm some parents' worst fears. Even so, it's a fine piece of work because it's clear she was able to gain the trust of even the most alienated of the young people she profiles.

Kidron's 5Rights framework proposes five essential rights children should have: remove, know, safety and support, informed and conscious use, digital literacy. To implement these, she proposes that the industry should reverse its current pattern of defaults which, as is widely known, 95% of users never change (while 98% never read terms and conditions). Companies know this, and keep resetting the defaults in their favor. Why shouldn't it be "hide by default"?
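In software terms, "hide by default" just means shipping the settings object with every disclosure switched off, so that sharing requires an explicit opt-in rather than a buried opt-out. A toy sketch of the idea (all field names here are hypothetical, not drawn from any real service):

```python
from dataclasses import dataclass, fields

@dataclass
class ChildAccountSettings:
    # "Hide by default": every disclosure starts off and must be
    # explicitly opted into - the reverse of the industry pattern
    # of defaults that 95% of users never change.
    profile_public: bool = False
    location_shared: bool = False
    activity_tracked: bool = False
    messages_from_strangers: bool = False

def hidden_by_default() -> bool:
    """True if a freshly created account discloses nothing."""
    s = ChildAccountSettings()
    return not any(getattr(s, f.name) for f in fields(s))
```

The point of the dataclass defaults is that a company "resetting the defaults in their favor" is a one-line change in the other direction - which is exactly why Kidron argues the defaults should be regulated rather than left to the platforms.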

This approach sparked ideas. A light that tells a child they're being tracked or recorded so they can check who's doing it? Collective redress is essential: what 12-year-old can bring their own court case?

The industry will almost certainly resist. Giving children the transparency and tools with which to protect themselves, resetting the defaults to "hide"...aren't these things adults want, too?


Illustrations: Beeban Kidron (via Wikimedia)

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 7, 2018

Watching brief

Amazon-error-message-usopen-2018.png"Hope the TV goes out at the same time," the local cable company advised me regarding outages when they supplied my Internet service circa 2001. "Because then so many people complain that it gets fixed right away."

Amazon is discovering the need to follow this principle. As the Guardian reported last week, this year's US Open tennis is one of Amazon Prime's first forays into live sports streaming, and tennis fans are unhappy.

"Please leave tennis alone," says one of the more polite user reviews.

It seems like only yesterday that being able to watch grainy, stuttering video in a corner of one's computer screen was like a miracle (and an experience no one would ever want to repeat unless they had to). Now, streaming is so well established that people complain about the quality, the (lack of) features, and even the camera angles. People! Only ten years ago you'd have been *grateful*!

A friend, seeing the Guardian's story, emailed: "Are you seeing this?" Well, yes. Most of it. On my desktop machine the picture looks fine to me, but it's a 24-inch monitor, not a giant HD TV, and as long as I can pick out the ball consistently, who cares whether it's 1080p? However, on two Windows laptops both audio and video stutter badly. That was a clue: my Linux-based desktop has a settings advisory: "HD TV Not Available - Why?" It transpires that because Linux machines lack the copy protection built into HDMI, Amazon doesn't send HD. I'm guessing that the smaller amount of data means smoother reception and a better experience, even if the resolution is lower. That said, even on the Linux machines the stream fails regularly. Reload window, click play.

The camera angle is indeed annoying, but for that you have to blame the USTA and the new Armstrong stadium design. There's only one set of cameras, and the footage is distributed by the host broadcaster to everyone else. Whine to Amazon all you want; but all the company can do is forward the complaints.

One reason tennis fans are so picky is that the tennis tours adopted streaming years ago, as did Eurosport, as a way of reaching widely dispersed fans: tennis is a global minority sport. So they are experienced, and they have expectations. On the ATP (men's) tour's own site, TennisTV, if you're getting a stuttering picture you can throttle the bitrate; the scores and schedule are ready to hand; and you can pause a match and resume it later, or step back to the beginning or any point in between. Replays are available very soon after a match ends. On Amazon, there's an icon to click to replay the last ten seconds, but you can't pause and resume, and you can only go back about half an hour. Lest you think that's trivial: US Open night sessions, which generally feature the most popular matches, start at 7pm New York time - and therefore midnight in the UK.

In general, it's clear that Amazon hasn't really thought through the realities of the way fans embrace the US Open. Instead of treating the US Open as an *event*, Amazon treats replays, live matches, and highlights compilations as separate "items". The replays Amazon began posting after a couple of days are particularly well-hidden, in that they're not flagged from either the highlights page or the live page, and they're called "match of the day". When I did find them, they refused to play.

I would probably have been more annoyed about all this if UK coverage of the US Open hadn't been so frequently frustrating in the past (I remember "watching" the 1990 men's final by watching the Teletext scores update, and the frustrations of finding live matches when Sky scattered them across four premium channels). Watching the US Open in Britain is like boarding a plane for a long flight in economy: you don't ask if you're going to be uncomfortable. Instead, you assemble a toolkit and then ask which components you're going to need to make it as tolerable as possible within the constraints. So: I know where the Internet hides recordings of recently played matches and free streams. The US Open site has the scores and schedule of who's playing where. All streams bomb out at exactly the wrong moment. Unlike the USTA, however, it only took a day or two for Amazon to respond to viewer complaints by labeling the streams with who was playing. I *have* liked hearing some different commentators for a change. But I do not want to be a Prime subscriber.

Amazon will likely get better at this over the next four years of its five-year, $40 million contract and the course of its £50 million, five-year contract to show the ATP Tour. Nonetheless, sports are almost the only programming viewers are guaranteed to want to watch in real time, and fans, broadcasters, and the sports themselves are unlikely to be well-served in the long term by a company that uses live sports as a loss-leader - like below-cost pricing on milk in a grocery store - to build platform loyalty and subscribers for its delivery service. Sports are a strategy for the company, not its business. Book publishers welcomed Amazon, too, once.

Illustrations: Amazon error message.


August 30, 2018

Ghosted

GDPR-LATimes.pngThree months after Europe's General Data Protection Regulation came into force, Nieman Lab finds that more than 1,000 US newspapers are still blocking EU visitors.

"We are engaged on the issue", says the placard that blocks access to even the front pages of the New York Daily News and the Chicago Tribune, both owned by Tronc, as well as the Los Angeles Times, which was owned by Tronc until very recently. Ironically, Wikipedia tells us that the silly-sounding name "Tronc" was derived from "Tribune Online Content"; you'd think a company whose name includes "online" would grasp the illogic of blocking 500 million literate readers. Nieman Lab also notes that Tronc is for sale, so I guess the company has more urgent problems.

Also apparently unable to cope with remediating its systems, despite years of notice, is Lee Enterprises, which owns numerous newspapers including the Carlisle, PA Sentinel and the Arizona Daily Star; these return "Error 451: Unavailable due to legal reasons", and blame GDPR as the reason "access cannot be granted at this time". Even the giant retail chain Williams-Sonoma has decided GDPR is just too hard, redirecting would-be shoppers to a UK partner site that is almost, but not quite, entirely unlike Williams-Sonoma - and useless if you want to ship a gift to someone in the US.
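The "Error 451" those papers return is a real HTTP status code - 451 Unavailable For Legal Reasons, standardized in RFC 7725 (and named for Bradbury's novel). The mechanics of the blocking are simple; a minimal sketch of the server-side logic, assuming the visitor's country has already been looked up (real servers use a GeoIP database, and this country list is abbreviated and illustrative):

```python
# Sketch of GDPR geo-blocking as described: serve HTTP 451
# ("Unavailable For Legal Reasons", RFC 7725) to visitors whose
# IP address geolocates to the EU, and the page to everyone else.
EU_COUNTRIES = {"AT", "BE", "DE", "ES", "FR", "IE", "IT", "NL", "PL", "SE"}

def respond(country_code: str) -> tuple[int, str]:
    """Return an (HTTP status, body) pair for a visitor's country."""
    if country_code in EU_COUNTRIES:
        return 451, "Unavailable due to legal reasons"
    return 200, "<html>front page</html>"
```

The triviality of the check is part of the point: blocking an entire continent was the cheap option, not the necessary one.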

If you're reading this in the US, and you want to see what we see, try any of those URLs in a free proxy such as Hide Me, setting your location to Amsterdam. Fun!

Less humorously, shortly after GDPR came into force a major publisher issued new freelance contracts that shift the liability for violations onto freelances. That is, if I do something that gets the company sued for GDPR violations, in their world I indemnify them.

And then there are the absurd and continuing shenanigans of ICANN, which is supposed to be a global multi-stakeholder modeling a new type of international governance, but seems so unable to shake its American origins that it can't conceive of laws it can't bend to its will.

Years ago, I recall that the New York Times, which now embraces being global, paywalled non-US readers because we were of no interest to their advertisers. For that reason, it seems likely that Tronc and the others see little profit in a European audience. They're struggling already; it may be hard to justify the expenditure on changing their systems for a group of foreign deadbeats. At the same time, though, their subscribers are annoyed that they can't access their home paper while traveling.

On the good news side, the 144 local daily newspapers and hundreds of other publications belonging to GateHouse Media seem to function perfectly well. The most fun was NPR, which briefly offered two alternatives: accept cookies or view in plain text. As someone commented on Twitter, it was like time-traveling back to 1996.

The intended consequence has been to change a lot of data practices. The Reuters Institute finds that the use of third-party cookies is down 22% on European news sites in the three months GDPR has been in force - and 45% on UK news sites. A couple of days after GDPR came into force, web developer Marcel Freinbichler did a US-vs-EU comparison on USA Today: load time dropped from 45 seconds to three, from 124 JavaScript files to zero, and more than 500 requests to 34.
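The kind of comparison Freinbichler ran can be reproduced from a browser's HAR export (the "Save as HAR" network log in developer tools): count how many requests go to hosts outside the first-party domain. A sketch, with the entry shape following the HAR 1.2 format and the sample data made up for illustration:

```python
from urllib.parse import urlparse

def third_party_requests(entries, first_party="usatoday.com"):
    """Count requests to hosts outside the first-party domain,
    given HAR-style entries (each carrying a request URL)."""
    count = 0
    for entry in entries:
        host = urlparse(entry["request"]["url"]).hostname or ""
        if not host.endswith(first_party):
            count += 1
    return count

# Made-up sample entries in HAR 1.2 shape
sample = [
    {"request": {"url": "https://www.usatoday.com/index.html"}},
    {"request": {"url": "https://tracker.example/pixel.gif"}},
    {"request": {"url": "https://ads.example/ad.js"}},
]
print(third_party_requests(sample))  # → 2
```

Run against a real pre- and post-GDPR HAR capture, the two counts give you exactly the 500-versus-34 contrast Freinbichler reported.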

gdpr-unbalanced-cookingsite.jpgBut many (and not just US sites) are still not getting the message, or are mangling it. For example, numerous sites now display boxes listing the many types of cookies they use and offering the chance to opt in or out. A very few of these are actually well-designed, so you can quickly opt out of whole classes of cookies (advertising, tracking...) and get on with reading whatever you came to the site for. Others are clearly designed to make it as difficult as possible to opt out; these sites want you to visit a half-dozen other sites to set controls. Still others say that if you click the button or continue using the site your consent will be presumed. Another group say here's the policy ("we collect your data"), click to continue, and offer no alternative other than to go away. Not a lawyer - but sites are supposed to obtain explicit consent for collecting data on an opt-in basis, not assume consent on an opt-out basis while making it onerous to object.

The reality is that it is far, far easier to install ad blockers - such as EFF's Privacy Badger - than to navigate these terrible user interfaces. In six months, I expect to see surveys coming from American site owners saying that most people agree to accept advertising tracking, and what they will mean is that people clicked OK, trusting their ad blockers would protect them.

None of this is what GDPR was meant to do. The intended consequence is to protect citizens and redress the balance of power; exposing exploitative advertising practices and companies' dependence on "surveillance capitalism" is a good thing. Unfortunately, many Americans seem to be taking the view that if they just refuse service the law will go away. That approach hasn't worked since Usenet.


Illustrations: Personally collected screenshots.


August 24, 2018

Cinema surveillant

Dragonfly-Eyes_poster_3-web-460.jpgThe image is so low-resolution that it could be old animation. The walking near-cartoon figure has dark, shoulder-length hair and a shape that suggests: young woman. She? stares at a dark oblong in one hand while wandering ever-closer to a dark area. A swimming pool? A concrete river edge? She wavers away, and briefly it looks like all will be well. Then another change of direction, and in she falls, with a splash.

This scene opens Dragonfly Eyes, which played this week at London's Institute of Contemporary Arts. All I knew going in was that the movie had been assembled from fragments of imagery gathered from Chinese surveillance cameras. The scene described above wasn't *quite* the beginning - first, the filmmaker, Chinese artist Xu Bing, provides a preamble explaining that he originally got the idea of telling a story through surveillance camera footage in 2013, but it was only in 2015, when the cameras began streaming live to the cloud, that it became a realistic possibility. There was also, if I remember correctly, a series of random images and noise that in retrospect seem like an orchestra tuning up before launching into the main event, but at the time were rather alarming. Alarming as in, "They're not going to do this for an hour and a half, are they?"

They were not. It was when the cacophony briefly paused to watch a bare-midriffed young woman wriggle suggestively on a chair, pushing down on the top of her jeans (I think) that I first thought, "Hey, did these guys get these people's permission?" A few minutes later, watching the phone?-absorbed woman ambling along the poolside seemed less disturbing, as her back was turned to the camera. Until: after she fell the splashing became fainter and fainter, and after a little while she did not reappear and the water calmed. Did we just watch the recording of a live drowning?

Apparently so. At various times during the rest of the movie we return to a police control room where officers puzzle over that same footage much the way we in the audience were puzzling over Xu's film. Was it suicide? the police ponder while replaying the footage.

Following the plot was sufficiently confusing that I'm grateful that Variety explains it. Ke Fan, an agricultural technician, meets a former Buddhist-in-training, Qing Ting, while they are both working at a dairy farm, and follows her when she moves to a new city. There, she gets fired from her job at a dry cleaner's for failing to be sufficiently servile to an unpleasant, but wealthy and valuable, customer. Angered by the situation, Ke Fan repeatedly rams the unpleasant customer's car; this footage is taken from inside the car being rammed, so he appears to be attacking you directly. Three years later, when he gets out of prison, he finds (or possibly just believes he finds) that Qing Ting has had plastic surgery and under a new name is now a singing webcam celebrity who makes her living by soliciting gifts and compliments from her viewers, who turn nasty when she insults a more popular rival...

The characters and narration are voiced by Chinese actors, but the pictures, as one sees from the long list of camera locations and GPS coordinates included in the credits, are taken from 10,000 hours of real-world found imagery, which Xu and his assistants edited down to 81 minutes. Given this patchwork, it's understandably hard to reliably follow the characters through the storyline; the cues we usually rely on - actors and locations that become familiar - simply aren't clear. Some sequences are tagged with the results of image recognition and numbering; very Person of Interest. About a third of the way through, however, the closer analogue that occurred to me is Woody Allen's 1966 movie What's Up, Tiger Lily?, which Allen constructed by marrying the footage from a Japanese spy film to his own unrelated dialogue. It was funny, in 1966.

While Variety calls the storyline "run-of-the-mill melodramatic", in reality the plot is supererogatory. Much more to the point - and indicated in the director's preamble - is that all this real-life surveillance footage can be edited into any "reality" you want. We sort of knew this from reality TV, but the casts of those shows signed up to perform, even if they didn't quite expect the extent to which they'd be exploited. The people captured on Xu's extracts from China's estimated 200 million surveillance cameras are...just living. The sense of that dissonance never leaves you at any time during the movie.

I can't spoil the movie's ending by telling you whether Ke Fan finds Qing Ting because it matters so little that I don't remember. The important spoiler is this: the filmmaker has managed to obtain permission from 90% of the people who appear in the fragments of footage that make up the film (how he found them would be a fascinating story in itself), and advertises a contact address for the rest to seek him out. In one sense, whew! But then: this is the opt-out, "ask forgiveness, not permission" approach we're so fed up with from Silicon Valley. The fact that Chinese culture is different and the camera streams were accessible via the Internet doesn't make it less disturbing. Yes, that is the point.


Illustrations: Dragonfly Eyes poster.



August 17, 2018

Redefinition

Robber-barons2-bosses-senate.pngOnce upon a nearly-forgotten time, the UK charged for all phone calls via a metered system that added up frighteningly fast when you started dialing up to access the Internet. The upshot was that early Internet services like the now-defunct Demon Internet could charge a modest amount (£10) per month, secure that the consciousness of escalating phone bills would drive subscribers to keep their sessions short. The success of Demon's business model, therefore, depended on the rapaciousness of strangers.

I was reminded of this sort of tradeoff by a discussion in the LA Times (proxied for EU visitors) of cable-cutters. Weary of paying upwards of $100 a month for large bundles of TV channels they never watch, Americans are increasingly dumping them in favor of cheaper streaming subscriptions. As a result, ISPs that depend on TV package revenues are raising their broadband prices to compensate, claiming that the money is needed to pay for infrastructure upgrades. In the absence of network neutrality requirements, those raised prices could well be complemented by throttling competitors' services.

They can do this, of course, because so many areas of the US are lucky if they have two choices of Internet supplier. That minimalist approach to competition means that Americans pay more to access the Internet than many other countries - for slower speeds. It's easy to raise prices when your customers have no choice.

The LA Times holds out hope that technology will save them; that is, the introduction of 5G, which promises better speeds and easier build-out, will enable additional competition from AT&T, Verizon, and Sprint - or, writer David Lazarus adds, Google, Facebook, and Amazon. In the sense of increasing competition, this may be the good news Lazarus thinks it is, even though he highlights AT&T's and Verizon's past broken promises. I'm less sure: physics dictates that despite its greater convenience the fastest wireless will never be as fast as the fastest wireline.

5G has been an unformed mirage on the horizon for years now, but apparently no longer: CNBC says Verizon's 5G service will begin late this year in Houston, Indianapolis, Los Angeles, and Sacramento and give subscribers TV content in the form of an Apple TV and a YouTube subscription. A wireless modem will obviate the need for cabling.

The potential, though, is to entirely reshape competition in both broadband and TV content, a redefinition that began with corporate mergers such as Verizon's acquisition of AOL and Yahoo (now gathered into its subsidiary, "Oath") and AT&T's whole-body swallowing of Time Warner, which includes HBO. Since last year's withdrawal of privacy protections passed during the Obama administration, ISPs have greater latitude to collect and exploit their customers' online data trails. Their expansion into online content makes AT&T and Verizon look more like competitors to the online behemoths. For consumers, greater choice in bandwidth provider is likely to be outweighed by the would-you-like-spam-with-that complete lack of choice about data harvesting. If the competition 5G opens up is provided solely by avid data miners who all impose the same terms and conditions...well, which robber baron would you like to pay?

There's a twist. The key element that's enabled Amazon and, especially, Netflix to succeed in content development is being able to mine the data they collect about their subscribers. Their business models differ - for Amazon, TV content is a loss-leader to sell subscriptions to its premium delivery service; for Netflix, TV production is a bulwark against dependence on third-party content creators and their licensing fees - but both rely on knowing what their customers actually watch. Their ambitions, too, are changing. Amazon has canceled much of its niche programming to chase HBO-style blockbusters, while Netflix is building local content around the world. Meanwhile, AT&T wants HBO to expand worldwide and focus less on its pursuit of prestige; Apple is beginning TV production; and Disney is pulling its content from Netflix to set up its own streaming service.

The idea that many of these companies will be directly competing in all these areas is intriguing, and its impact will be felt outside the US. It hardly matters to someone in London or Siberia how much Internet users in Indianapolis pay for their broadband service or how good it is. But this reconfiguration may well end the last decade's golden age of US TV production, particularly but not solely for drama. All the new streaming services began by mining the back catalogue to build and understand an audience and then using creative freedom to attract talent frustrated by the legacy TV networks' micromanagement of every last detail, a process the veteran screenwriter Ken Levine has compared to being eaten to death by moths.

However, one last factor could provide an impediment to the formation of this landscape: on June 28, California adopted the Consumer Privacy Act, which will come into force in 2020. As Nick Confessore recounts in the New York Times Magazine, this "overnight success" required years of work. Many companies opposed the bill: Amazon, Google, Microsoft, Uber, Comcast, AT&T, Cox, Verizon, and several advertising lobbying groups; Facebook withdrew its initial opposition. EFF calls it "well-intentioned but flawed", and is proposing changes. ISPs and technology companies also want (somewhat different) changes. EPIC's Mark Rotenberg called the bill's passage a "milestone moment". It could well be.


Illustrations: Robber barons overseeing the US Congress (via Wikimedia).


July 20, 2018

Competing dangerously

Thumbnail image for Conversation_with_Margrethe_Vestager,_European_Commissioner_for_Competition_(17222242662).jpgIt is just over a year since the EU fined Google what seemed a huge amount, and here we are again: this week the EU commissioner for competition, Margrethe Vestager, levied an even bigger €4.34 billion fine over "serious illegal behavior". At issue was Google's licensing terms for its Android apps and services, which essentially leveraged its ownership of the operating system to ensure its continued market dominance in search as the world moved to mobile. Google has said it will appeal; it is also appealing the 2017 fine. The present ruling gives the company 90 days to change behavior or face further fines of up to 5% of daily worldwide turnover.

Google's response is that its rules have enabled it to offer Android to manufacturers free of charge, have made Android phones easier to use, and are efficient for both developers and consumers. The ruling, writes CEO Sundar Pichai, will "upset the balance of the Android ecosystem".

Google's claim that users are free to install other browsers and search engines and are used to downloading apps is true but specious. It's widely known that 95% of users never change default settings. Defaults *matter*, and Google certainly knows this. When you reach a certain size - Android holds 80% of European and worldwide smart mobile devices, and 95% of the licensable mobile market outside of China - the decisions you make about choice architecture determine the behavior of large populations.

Also, the EU's ruling isn't about a user's specific choice on their individual smartphone. Instead, it's based on three findings: 1) Google's licensing terms made access to the Play Store contingent on pre-installing Google's search app and Chrome; 2) Google paid some large manufacturers and network operators to exclusively pre-install Google's search app; 3) Google prevented manufacturers that pre-install Google apps from selling *any* devices using non-Google-approved ("forked") versions of Android. It puts the starting date at 2011, "when Google became dominant".

There are significant similarities here to the US's 1998 ruling against Microsoft over tying Internet Explorer to Windows. Back then, Microsoft was the Big Evil on the block, and there were serious concerns that it would use Internet Explorer as a vector for turning the web into a proprietary system under its control. For a good account, see Charles H. Ferguson's 1999 book, High St@kes, No Prisoners. Ferguson would know: his web page design start-up, Vermeer, was the subject of an acquisition battle between Microsoft and Netscape. Google, which was founded in 1998, ultimately benefited from this ruling, because it helped keep the way open for "alternative" browsers such as Google's own Chrome.

There are also similarities to the EU's 2004 ruling against Microsoft, which required the company to stop bundling its media player with Windows and to disclose the information manufacturers needed to integrate non-Microsoft networking and streaming software. The EU's fine was the largest-ever at the time: €497 million. At that point, media players seemed like important gateways to content. The significant gateway drug turned out to be Web browsers; either way, Microsoft and streaming have both prospered.

Since 1998, however, in another example of EU/US divergence, the US has largely abandoned enforcing anti-competition law. As Lina M. Khan pointed out last year, it's no longer the case that waiting will produce two guys in a garage with a new technology that up-ends the market and its biggest players. The EU explains carefully in its announcement that Android is different from Apple's iOS or Blackberry because as vertically integrated companies that do not license their products they are not part of the same market. In the Android market, however, it says, "...it was Google - and not users, app developers, and the market - that effectively determined which operating systems could prosper."

Too little, too late, some are complaining, and more or less correctly: the time for this action was 2009; even better, says the New York Times, block in advance the mergers that are creating these giants. Antitrust actions against technology companies are almost always a decade late. Others buy Google's argument that consumers will suffer, but Google is a smart company full of smart engineers who are entirely capable of figuring out well-designed yet neutral ways to present choices, just as Microsoft did before it.

There's additional speculation that Google might have to recoup lost revenues by charging licensing fees; that Samsung might be the big winner, since it already has its own full competitive suite of apps; and that the EU should fine Apple, too, on the basis that the company's closed system bars users from making *any* unapproved choices.

Personally, I wish the EU had applied more attention to the ways Google leverages the operating system to enable user tracking to fuel its advertising business. The requirement to tie every phone to a Gmail address is an obvious candidate for regulatory disruption; so is the requirement to use it to access the Play Store. The difficulty of operating a phone without being signed into Google has ratcheted up over time - and it seems wholly unnecessary *unless* the purpose is to make it easier to do user tracking. This issue may yet find focus under GDPR.

Illustrations: Margrethe Vestager.


Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 6, 2018

This is us

After months of anxiety among digital rights campaigners such as the Open Rights Group and the Electronic Frontier Foundation, the European Parliament has voted 318-278 against fast-tracking a particularly damaging set of proposed changes to copyright law.

There will be a further vote on September 10, so as a number of commentators are reminding us on Twitter, it's not over yet.

The details of the European Commission's alarmingly wrong-headed approach have been thoroughly hashed out for the last year by Glyn Moody. The two main bones of contention are euphoniously known as Article 11 and Article 13. Article 11 (the "link tax") would give publishers the right to require licenses (that is, payment) for the text accompanying links shared on social media, and Article 13 (the "upload filter") would require sites hosting user content to block uploads of copyrighted material.

Muffett quite rightly points out the astonishing characterization, in a Billboard interview with MEP Helga Trüpel, of the objections to Articles 11 and 13 as "pro-Google". There's a sudden outburst of people making a similar error: even the Guardian's initial report saw the vote as letting tech giants (specifically, YouTube) off the hook for sharing their revenues. Paul McCartney's last-minute plea hasn't helped this perception. What was an argument about the open internet is now being characterized as a tussle over revenue share between a much-loved billionaire singer/songwriter and a greedy tech giant that exploits artists.

Yet, the opposition was never about Google. In fact, probably most of the active opponents of this expansion of copyright and liability would be lobbying *against* Google on subjects like privacy, data protection, tax avoidance, and market power. We just happen to agree with Google on this particular topic because we are aware that forcing all sites to assume liability for the content their users post will damage the internet for everyone *else*. Google - and its YouTube subsidiary - has both the technology and the financing to play the licensing game.

But licensing and royalties are a separate issue from mandating that all sites block unauthorized uploads. The former is about sharing revenues; the latter is about copyright enforcement, and conflating them helps no one. The preventive "copyright filter" that appears essential for compliance with Article 13 would fail the "prior restraint" test of the US First Amendment - not that the EU needs to care about that. As copyright-and-technology consultant Bill Rosenblatt writes, licensing is a mess that this law will do nothing to fix. If artists and their rights holders want a better share of revenues, they could make it a *lot* easier for people to license their work. This is a problem they have to fix themselves, rather than requiring lawmakers to solve it for them by placing the burden on the rest of us. The laws are what they are because, for generations, it was the rights holders who wrote them.

Article 11, which is or is not a link tax depending on who you listen to, is another matter. Germany (2013) and Spain (2014) have already tried something similar, and in both cases it was widely acknowledged to have been a mistake. So much so that one of the opponents of this new attempt is the Spanish newspaper El País.

My guess is that those who want these laws passed are focusing on Google's role in lobbying against them - for example, Digital Music News reports that Google spent more than $36 million on opposing Article 13 - in preparation for the next round in September. Google and Facebook are increasingly the targets people focus on when they're thinking about internet regulation. The thinking seems to be that if the battle can be recast as one between deserving artists and a couple of greedy American big businesses, it will be an easier sell to legislators.

But there are two of them and billions of us, and the opposition to Articles 11 and 13 was never about them. The 2012 SOPA and PIPA protests and the street protests against ACTA were certainly not about protecting Google or any other large technology company. No one goes out on the street or dresses up their website in protest banners in order to advocate for *Google*. They do it because what's been proposed threatens to affect them personally.

There's even a sound economic argument: had these proposed laws been in place in 1998, when Sergey Brin and Larry Page were meeting in dorm rooms, Google would not exist. Nor would thousands of other big businesses. Granted, most of these have not originated in the EU, but that's not a reason to wreck the open internet. Instead, that's a reason to find ways to make the internet hospitable to newcomers with bright ideas.

This debate is about the rest of us and our access to the internet. We - for some definition of "we" - were against these kinds of measures when they first surfaced in the early 1990s, when there were no tech giants to oppose them, and for the same reasons: the internet should be open to all of us.

Let the amendments begin.

Illustrations: Protesters against ACTA in London, 2012 (via Wikimedia)


April 6, 2018

Leverage

Well, what's 37 million or 2 billion scraped accounts more or less among friends? The exploding hairball of the Facebook/Cambridge Analytica scandal keeps getting bigger. And, as Rana Dasgupta writes in the Guardian, we are complaining now because it's happening to us, but we did not notice when these techniques were tried out first in third-world countries. Dasgupta has much to say about how nation-states will have to adapt to these conditions.

Given that we will probably never pin down every detail of how much data and where it went, it's safest to assume that all of us have been compromised in some way. The smug "I've never used Facebook" population should remember that they almost certainly exist in the dataset, by either reference (your sister posts pictures of "my brother's birthday") or inference (like deducing the existence, size, and orbit of an unseen planet based on its gravitational pull on already-known objects).

Downloading our archives tells us far less than people recognize. My own archive had no real surprises (my account dates to 2007, but I post little and adblock the hell out of everything). The shock many people have experienced of seeing years of messages and photographs laid out in front of them, plus the SMS messages and call records that Facebook shouldn't have been retaining in the first place, hides the fact that these archives are a very limited picture of what Facebook knows about us. They show us nothing about information posted about us by others, photos others have posted and tagged, or comments made in response to things we've posted.

The "me-ness" of the way Facebook and other social media present themselves was called out by Christian Fuchs in launching his book Digital Demagogue: Authoritarian Capitalism in the Age of Trump and Twitter. "Twitter is a me-centred medium. 'Social media' is the wrong term, because it's actually anti-social, Me media. It's all about individual profiles, accumulating reputation, followers, likes, and so on."

Saying that, however, plays into Facebook's own public mythology about itself. Facebook's actual and most significant holdings about us are far more extensive, and the company derives its real power from the complex social graphs it has built and the insights that can be gleaned from them. None of that is clear from the long list of friends. Even more significant is how Facebook matches up user profiles to other public records and social media services and with other brokers' datasets - but the archives give us no sense of that either. Facebook's knowledge of you is also greatly enhanced - as is its ability to lock you in as a user - if you, like many people, have opted to use Facebook credentials to log into third-party sites. Undoing that is about as easy and as much fun as undoing all your direct debit payments in order to move your bank account.

Facebook and the other tech companies are only the beginning. There are a few people out there trying to suggest Google is better, but Zeynep Tufekci discovered it had gone on retaining her YouTube history even though she had withdrawn permission to do so. As Tufekci then writes, if a person with a technical background whose job it is to study such things could fail to protect her data, how could others hope to do so?

But what about publishers and the others dependent on that same ecosystem? As Doc Searls writes, the investigative outrage on display in many media outlets glosses over the fact that they, too, are compromised. Third party trackers, social media buttons, Google analytics, and so on all deliver up readers to advertisers in increasing detail, feeding the business plans of thousands of companies all aimed at improving precision and targeting.

And why stop with publishers? At least they have the defense of needing to make a living. Government sites, libraries, and other public services do the same thing, without that justification. The Richmond Council website shows no ads - but it still uses Google Analytics, which means sending a steady stream of user data Google's way. Eventbrite, which everyone now uses for event sign-ups, is constantly exhorting me to post my attendance to Facebook. What benefit does Eventbrite get from my complying? It never says.

Meanwhile, every club, member organization, and creative endeavor begs its adherents to "like my page on Facebook" or "follow me on Twitter". While they see that as building audience and engagement, the reality is that they are acting as propagandists for those companies. When you try to argue against doing this, people will say they know, but then shrug helplessly and say they have to go where the audience is. If the audience is on Facebook, and it takes page likes to make Facebook highlight your existence, then what choice is there? Very few people are willing to contemplate the hard work of building community without shortcuts, and many seem to have come to believe that social media engagement as measured in ticks of approval is community, as Mark Zuckerberg tried to say last year.

For all these reasons, it's not enough to "fix Facebook". We must undo its leverage.


Illustrations: Facebook logo.


March 23, 2018

Aspirational intelligence

"All commandments are ideals," he said. He - Steven Croft, the Bishop of Oxford - had just finished reading out to the attendees of Westminster Forum's seminar (PDF) his proposed ten commandments for artificial intelligence. He's been thinking about this on our behalf. Someone had objected that you can't expect malware writers not to adopt AI enhancements. Hence the reply.

The first problem is: what counts as AI? Anders Sandberg has quipped that it's only called AI until it starts working, and then it's called automation. Right now, though, to many people "AI" seems to mean "any technology I don't understand".

Croft's commandment number nine seems particularly ironic: this week saw the first pedestrian killed by a self-driving car. Early guesses are that the likely weakest links were the underemployed human backup driver and the vehicle's faulty LIDAR interpretation of a person walking a bicycle. Whatever the jaywalking laws are in Arizona, most of us instinctively believe that in a cage match between a two-ton automobile and an unprotected pedestrian the car is always the one at fault.

Thinking locally, self-driving cars ought to be the most ethics-dominated use of AI, if only because people don't like being killed by machines. Globally, however, you could argue that AI might be better turned to finding the best ways to phase out cars entirely.

We may have better luck persuading criminal justice systems either to require transparency, fairness, and accountability in the machine learning systems that predict recidivism and decide who can be helped, or to drop them entirely.

The less-tractable issues with AI are on display in the still-developing Facebook and Cambridge Analytica scandals. You may argue that Facebook is not AI, but the platform certainly uses AI in fraud detection, in determining what we see, and in deciding which parts of our data to use on behalf of advertisers. All on its own, Facebook is a perfect exemplar of all the problems Australian privacy advocate Roger Clarke foresaw in 2004 after examining the first social networks. In 2012, Clarke wrote, "From its beginnings and onward throughout its life, Facebook and its founder have demonstrated privacy-insensitivity and downright privacy-hostility." The same could be said of other actors throughout the tech industry.

Yonatan Zunger is undoubtedly right when he argues in the Boston Globe that computer science has an ethics crisis. However, just fixing computer scientists isn't enough if we don't fix the business and regulatory environment built on "ask forgiveness, not permission". Matt Stoller writes in the Atlantic about the decline since the 1970s of American political interest in supporting small, independent players and limiting monopoly power. The tech giants have widely exported this approach; now, the only other government big enough to counter it is the EU.

The meetings I've attended of academic researchers considering ethics issues with respect to big data have demonstrated all the careful thoughtfulness you could wish for. The November 2017 meeting of the Research Institute in Science of Cyber Security provided numerous worked examples in talks from Kat Hadjimatheou at the University of Warwick, C Marc Taylor from the UK Research Integrity Office, and Paul Iganski of the Centre for Research and Evidence on Security Threats (CREST). Their explanations of the decisions they've had to make about the practical applications and cases that have come their way are particularly valuable.

On the industry side, the problem is not just that Facebook has piles of data on all of us but that the feedback loop from us to the company is indirect. Since the Cambridge Analytica scandal broke, some commenters have indicated that being able to do without Facebook is a luxury many can't afford and that in some countries Facebook *is* the internet. That in itself is a global problem.

Croft's is one of at least a dozen efforts to come up with an ethics code for AI. The Open Data Institute has its Data Ethics Canvas framework to help people working with open data identify ethical issues. The IEEE has published some proposed standards (PDF) that focus on various aspects of inclusion - language, cultures, non-Western principles. Before all that, in 2011, Danah Boyd and Kate Crawford penned Six Provocations for Big Data, which included a discussion of the need for transparency, accountability, and consent. The World Economic Forum published its top ten ethical issues in AI in 2016. Also in 2016, a Stanford University group published a report trying to fend off regulation by saying it was impossible.

If the industry proves to be right and regulation really is impossible, it won't be because of the technology itself but because of the ecosystem that nourishes amoral owners. "Ethics of AI", as badly as we need it, will be meaningless if the necessary large piles of data to train it are all owned by just a few very large organizations and well-financed criminals; it's equivalent to talking about "ethics of agriculture" when all the seeds and land are owned by a child's handful of global players. The pre-emptive antitrust movement of 2018 would find a way to separate ownership of data from ownership of the AI, algorithms, and machine learning systems that work on them.


Illustrations: HAL.


March 9, 2018

Signaling intelligence

Last month, the British Home Office announced that it had a tool that can automatically detect 94% of Daesh propaganda with 99.995% accuracy. Sophos summarizes the press release to say that only 50 out of 1 million videos would require human review.

"It works by spotting subtle patterns in the extremist videos that distinguish them from normal content..." Marc Warner, CEO of London-based ASI Data Science, the company that developed the classifier, told Buzzfeed.

Yesterday, ASI, which numbers Skype co-founder Jaan Tallinn among its investors, presented its latest demo day in front of a packed house. Most of the lightning presentations focused on various projects its Fellows have led using its tools in collaboration with outside organizations such as Rolls Royce and the Financial Conduct Authority. Warner gave a short presentation of the Home Office extremism project that included little more detail than the press reports a month ago, to which my first reaction was: it sounds impossible.

That reaction is partly due to the many problems with AI, machine learning, and big data that have surfaced over the last couple of years. Either there are hidden biases, or the media reports are badly flawed, or the system appears to be telling us only things we already know.

Plus, it's so easy - and so much fun! - to mock the flawed technology. This week, for example, neural network trainer Janelle Shane showed off the results of some of her pranks. After confusing image classifiers with sheep that don't exist, goats in trees (birds! or giraffes!) and sheep painted orange (flowers!), she concludes, "...even top-notch algorithms are relying on probability and luck." Even more than humans, it appears that automated classifiers decide what they see based on what they expect to see and apply probability. If a human is holding it, it's probably a cat or dog; if it's in a tree it's not going to be a goat. And so on. The experience leads Shane to surmise that surrealism might be the way to sneak something past a neural net.

Some of this is probably what ASI's classifier does too (we were shown no details). As Sophos suggests, a lot of the signals ASI's algorithm is likely to use have nothing to do with the computer "seeing" or "interpreting" the images. Instead, it likely looks for known elements such as logos and facial images matched against known terrorism photos or videos. In addition, it can assess the cluster of friends surrounding the account that posted the video and look for profile information showing that the source has posted such material in the past. And some will be based on analyzing the language used in the video. From what ASI was saying, the claim the company is making is fairly specific: the algorithm is supposed to be able to detect (specifically) Daesh videos, with a false positive rate of 0.005% and a true positive rate of 94%.

These numbers - assuming they're not artifacts of computerish misunderstanding about what it's looking for - of course represent tradeoffs, as Patrick Ball explained to us last year. Do we want the algorithm to block all possible Daesh videos? Or are we willing to allow some through in the interests of honoring the value of freedom of expression and not blocking masses of perfectly legal and innocent material? That policy decision is not ASI's job.
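The base-rate arithmetic behind those headline numbers is easy to sketch. In this minimal check, the 94% true positive and 0.005% false positive rates are the claimed figures; the one-million volume and the prevalence of genuine Daesh material are illustrative assumptions, not anything ASI disclosed:

```python
# Base-rate sanity check for the claimed Daesh-video classifier.
# TPR and FPR are the figures from the press reports; everything
# else here is assumed for illustration.

def review_load(total_videos, prevalence, tpr=0.94, fpr=0.00005):
    """Return (true hits, false alarms) among flagged videos."""
    positives = total_videos * prevalence          # actual Daesh videos
    negatives = total_videos - positives           # everything else
    true_hits = positives * tpr                    # correctly flagged
    false_alarms = negatives * fpr                 # wrongly flagged
    return true_hits, false_alarms

# Suppose 1 in 10,000 of a million uploads is actually Daesh propaganda.
hits, alarms = review_load(1_000_000, 0.0001)
print(round(hits), round(alarms))   # prints "94 50"
```

The "50 out of 1 million" figure in the press release is, on this reading, just the false alarms: 0.005% of a million (mostly innocent) videos is about 50, regardless of how much real propaganda is in the mix.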

What was more confusing in the original reports is that the training dataset was said to have been "over 1,000 videos". That seems an incredibly small sample for testing a classifier that's going to be turned loose on a dataset of millions. At the demonstration, Warner's one new piece of information is that because that training set was indeed small, the project developed "synthetic data" to enlarge the training set to sufficient size. As gaming-the-system as that sounds, creating synthetic data to augment training data is a known technique. Without knowing more about the techniques ASI used to create its synthetic data it's hard to assess that work.

We would feel a lot more certain of all of these claims if the classifier had been through independent peer review. The sensitivity of the material involved makes this tricky, and if there has been an outside review we haven't been told about it.

But beyond that, the project to remove this material rests on certain assumptions. As speakers noted at the first conference run by VOX-Pol, an academic research network studying violent online political extremism, the "lone wolf" theory posits that individuals can be radicalized at home by viewing material on the internet. The assumption that this is true underpins the UK's censorship efforts. Yet this theory is contested: humans are highly social animals. Radicalization seems unlikely to take place in a vacuum. What - if any - is the pathway from viewing Daesh videos to becoming a terrorist attacker?

All these questions are beyond ASI's purview to answer. They'd probably be the first to say: they're only a hill of technology beans being asked to solve a mountain of social problems.

Illustrations: Slides from the demonstration (Sam Smith).


March 2, 2018

In sync

Until Wednesday, I was not familiar with the use of "sync" to stand for a music synchronization license - that is, a license to use a piece of music in a visual setting such as a movie, video game, or commercial. The negotiations involved can be Byzantine and very, very slow, in part because the music's metadata is so often wrong or missing. In one such case, described at Music 4.5's seminar on developing new deals and business models for sync (Flash), it took ten years to get the wrong answer from a label to the apparently simple question: who owns the rights to this track on this compilation album?

The surprise: this portion of the music business is just as frustrated as activists with the state of online copyright enforcement. They don't love the Digital Millennium Copyright Act (1998) any more than we do. We worry about unfair takedowns of non-infringing material and bans on circumvention tools; they hate that the Act's Safe Harbor grants YouTube and Facebook protection from liability as long as they remove content when told it's infringing. Google's automated infringement detection software, ContentID, I heard Wednesday, enables the "value gap", which the music industry has been fretting about for several years now because the sites have no motivation to create licensing systems. There is some logic there.

However, where activists want to loosen copyright, enable fair use, and restore the public domain, they want to dump Safe Harbor, whether by developing a technological bypass, by changing the law, or by getting FaceTube to devise a fairer, more transparent revenue split. "Instagram," said one, "has never paid the music industry but is infringing copyright every day."

To most of us, "online music" means subscription-based streaming services like Spotify or download services like Amazon and iTunes. For many younger people, though, especially Americans, YouTube is their jukebox. Pex estimates that 84% of YouTube videos contain at least ten seconds of music. Google says ContentID matches 99.5% of those, and then they are either removed or monetized. But, Pex argues, 65% of those videos remain unclaimed and therefore provide no revenue. Worse, as streaming grows, downloads are crashing. There's a detectable attitude that if they can fix licensing on YouTube they will have cracked it for all sites hosting "creator-generated content".

It's a fair complaint that ContentID was built to protect YouTube from liability, not to enable revenues to flow to rights holders. We can also all agree that the present system means millions of small-time creators are locked out of using most commercial music. The dancing baby case took eight years to decide that the background existence of a Prince song in a 29-second home video of a toddler dancing was fair use. But sync, too, was designed for businesses negotiating with businesses. Most creators might indeed be willing to pay to legally use commercial music if licensing were quick, simple, and cheap.

There is also a question of whether today's ad revenues are sustainable; a graphic I can't find showed that the payout per view is shrinking. Bloomberg finds that, increasingly, the winning YouTubers take all, with little left for the very long tail.

The twist in the tale is this. MP3 players unbundled albums into songs as separate marketable items. Many artists were frustrated by the loss of control inherent in enabling mix tapes at scale. Wednesday's discussion heralded the next step: unbundling the music itself, breaking it apart into individual beats, phrases and bars, each licensable.

One speaker suggested scenarios. The "content" you want to enjoy is 42 minutes long but your commute is only 38 minutes. You might trim some "unnecessary dialogue" and rearrange the rest so now it fits! My reaction: try saying "unnecessary dialogue" to Aaron Sorkin and let's see how that goes.

I have other doubts. I bet "rearranging" will take longer than watching the extra four minutes. Speeding up the player slightly achieves the same result, and you can do that *now* for free. More useful was the suggestion that hearing-impaired people could benefit from being able to tweak the mix to fade the background noise and music in a pub scene to make the actors easier to understand. But there, too, we actually already have closed captions. It's clear, however, that the scenarios may be wrong, but the unbundling probably isn't.

In this world, we won't be talking about music, but "music objects". Many will be very low-value...but the value of the total catalogue might rise. The BBC has an experiment up already: The Mermaid's Tears, an "object-based radio drama" in which you can choose to follow any one of the three characters to experience the story.

Smash these things together, and you see a very odd world coming at us. It's hard to see how fair use survives a system that aims to license "music objects" rather than "music". In 1990, Pamela Samuelson warned about copyright maximalism. That agenda does not appear to have gone away.


Illustrations: King David dancing before the Ark of the Covenant, 'Maciejowski Bible', Paris ca. 1240 (via Discarding Images).


December 8, 2017

Pastures of plenty

It was while I was listening to Isabella Henriques talk about children and consumerism at this week's Children's Global Media Summit that it occurred to me that where most people see life happening advertisers see empty space.

Henriques, like Kathryn Montgomery earlier this year, is concerned about abusive advertising practices aimed at children. So much UK rhetoric around children and the internet focuses on pornography and extremism - see, for example, this week's Digital Childhood report calling for a digital environment that is "fit for childhood" - that it's refreshing to hear someone talk about other harms. Such as: teaching kids "consumerism". Under 12, Henriques said, children do not understand the persuasiveness and complexity of advertising. Under six, they don't identify ads (like the toddler who watched 12 minutes of Geico commercials). And even things that are *effectively* ads aren't necessarily easily identifiable as such, even by adults: unboxing videos, product placement, YouTube kids playing with branded toys, and in-app "opportunities" to buy stuff. Henriques' research finds that children influence family purchases by up to 80%. That's not a baby you're expecting; it's a sales promoter.

When we talk about the advertising arms race, we usually mean the expanding presence and intrusiveness of ads in places where we're already used to seeing them. That escalation has been astonishing.

To take one example: a half-hour sitcom episode on US network television in 1965 - specifically, the deservedly famous Coast to Coast Big Mouth episode of The Dick Van Dyke Show - was 25:30 minutes long. A 2017 episode of the top-rated US comedy, The Big Bang Theory, barely ekes out 18. That's nearly a third less content, more than double the share of time spent watching ads, or simply seven and a half extra minutes. No wonder people realized automatic ad marking and fast-forwarding would sell.
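As a quick back-of-envelope sketch of that arithmetic (the runtimes are approximate, and a standard 30-minute slot is assumed):

```python
# Rough arithmetic for the sitcom example: how much of a 30-minute
# US network slot is content versus ads, 1965 vs. 2017.
SLOT = 30.0            # minutes in a "half hour" broadcast slot
runtime_1965 = 25.5    # "Coast to Coast Big Mouth" (1965), about 25:30
runtime_2017 = 18.0    # a 2017 Big Bang Theory episode, barely 18 minutes

content_lost = (runtime_1965 - runtime_2017) / runtime_1965
ads_1965 = SLOT - runtime_1965   # 4.5 minutes of ads per slot
ads_2017 = SLOT - runtime_2017   # 12 minutes of ads per slot

print(f"content lost: {content_lost:.0%}")                       # ~29%
print(f"ad share of slot: {ads_1965/SLOT:.0%} -> {ads_2017/SLOT:.0%}")  # 15% -> 40%
```

So the ad share of the slot went from 15% to 40% - well over double - while the content shrank by nearly a third.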

The internet kicked this into high gear. The lack of regulation and the uncertainty about business models led to legitimate experimentation. But it also led to today's complaints, both about maximally intrusive and attention-demanding ads and the data mining advertisers and their agencies use to target us, and also to increasingly powerful ad blockers - and ad blocker blockers.

The second, more subtle version of the arms race is the one where advertisers see every open space where people congregate as theirs to target. This was summed up for me once at a lunchtime seminar run by the UK's Internet Advertising Bureau in 2003, when a speaker gave an enthusiastic tutorial on marketing via viral email: "It gets us into the office. We've never been able to go there before." You could immediately see what office inboxes looked like to them: vast green fields just waiting to be cultivated. You know, the space we thought of as "work". And we were going to be grateful.

Childhood, as listening to Henriques, Montgomery, and the Campaign for a Commercial-Free Childhood makes plain, is one of those green fields advertisers have long fought to cultivate. On broadcast media, regulators were able to exercise some control. Even online, the Children's Online Privacy Protection Act has been of some use.

Advertisers, like some religions, aim to capture children's affections young, on the basis that the tastes and habits you acquire in childhood are the hardest for an interloper to disrupt. The food industry has long been notorious for finding ways around regulations that limit how unhealthy foods are targeted at children on broadcast and physical-world media. But the internet offers new options: "Smart" toys are one set of examples; Facebook's new Messenger Kids app is another. This arms race variant will escalate as the Internet of Things offers advertisers access to new areas of our lives.

Part of this story is the vastly increased quantities of data that will be available to sell to advertisers for data mining. On the web, "free" has long meant "pay with data". With the Internet of Things, no device will be free, but we will pay with data anyway. The cases we wrote about last week are early examples. As hardware becomes software, replacement life cycles become the manufacturer's choice, not yours. "My" mobile phone is as much mine as "my library book" - and a Tesla is a mobile phone with a chassis and wheels. Think of the advertising opportunities when drivers are superfluous to requirements, beginning with the self-driving car's dashboard and windshield. The voice-operated Echo/Home/Dot/whatever is clearly intended to turn homes into marketplaces.

A more important part is the risk of turning our homes into walled gardens, as Geoffrey A. Fowler writes in the Washington Post of his trial of Amazon Key. During the experiment, Fowler found strangers entering his house less disturbing than his sense of being "locked into an all-Amazon world". The Key experiment is, in Fowler's estimation, the first stab at Amazon's goal of becoming "the operating system for your home". Will Amazon, Google, and Apple homes be interoperable?

Henriques is calling for global regulation to limit the targeting of children for food and other advertising. It makes sense: every country is dealing with the same multinational companies, and most of us can agree on what "abusive advertising" means. But then you have to ask: why do they get a pass on the rest of us?


Illustrations: Windows XP start-up screen

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 23, 2017

Twister

"We were kids working on the new stuff," said Kevin Werbach. "Now it's 20 years later and it still feels like that."

Werbach was opening last weekend's "radically interdisciplinary" (Geoffrey Garrett) After the Digital Tornado, at which a roomful of internet policy veterans tried to figure out how to fix the internet. As Jaron Lanier showed last week, there's a lot of this where-did-we-all-go-wrong happening.

The Digital Tornado in question was a working paper Werbach wrote in 1997, when he was at the Federal Communications Commission. In it, Werbach sought to pose questions for the future, such as what the role of regulation would be around...well, around now.

Some of the paper is prescient: "The internet is dynamic precisely because it is not dominated by monopolies or governments." Parts are quaint now. Then, the US had 7,000 dial-up ISPs and AOL was the dangerous giant. It seemed reasonable to think that regulation was unnecessary because public internet access had been solved. Now, with minor exceptions, the US's four ISPs have carved up the country among themselves to such an extent that most people have only one ISP to "choose" from.

To that, Gigi Sohn, the co-founder of Public Knowledge, named the early mistake from which she'd learned: "Competition is not a given." Now, 20% of the US population still have no broadband access. Notably, this discussion was taking place days before current FCC chair Ajit Pai announced he would end the network neutrality rules adopted in 2015 under the Obama administration.

Everyone had a pet mistake.

Tim Wu, regarding decisions that made sense for small companies but are damaging now they're huge: "Maybe some of these laws should have sunsetted after ten years."

A computer science professor bemoaned the difficulty of auditing protocols for fairness now that commercial terms and conditions apply.

Another wondered if our mental image of how competition works is wrong. "Why do we think that small companies will take over and stay small?"

Yochai Benkler argued that the old way of reining in market concentration, by watching behavior, no longer works; we understood scale effects but missed network effects.

Right now, market concentration looks like Google-Apple-Microsoft-Amazon-Facebook. Rapid change has meant that the past Big Tech we feared would break the internet has typically been overrun. Yet we can't count on that. In 1997, market concentration meant AOL and, especially, desktop giant Microsoft. Brett Frischmann paused to reminisce that in 1997 AOL's then-CEO Steve Case argued that Americans didn't want broadband. By 2007 the incoming giant was Google. Yet, "Farmville was once an enormous policy concern," Christopher Yoo reminded; so was Second Life. By 2007, Microsoft looked overrun by Google, Apple, and open source; today it remains the third largest tech company. The garage kids can only shove incumbents aside if the landscape lets them in.

"Be Facebook or be eaten by Facebook", said Julia Powles, reflecting today's venture capital reality.

Wu again: "A lot of mergers have been allowed that shouldn't have been." On his list, rather than AOL and Time-Warner, cause of much 1999 panic, was Facebook and Instagram, which the Office of Fair Trading approved because Facebook didn't have cameras and Instagram didn't have advertising. Unrecognized: they were competitors in the Wu-dubbed attention economy.

Both Bruce Schneier, who considered a future in which everything is a computer, and Werbach, who found early internet-familiar rhetoric hyping the blockchain, saw more oncoming gloom. Werbach noted two vectors: remediable catastrophic failures, and creeping recentralization. His examples of the DAO hack and the Parity wallet bug led him to suggest the concept of governance by design. "This time," Werbach said, adding his own entry onto the what-went-wrong list, "don't ignore the potential contributions of the state."

Karen Levy's "overlooked threat" of AI and automation is a far more intimate and intrusive version of Shoshana Zuboff's "surveillance capitalism"; it is already changing the nature of work in trucking. This resonated with Helen Nissenbaum's "standing reserves": an ecologist sees a forest; a logging company sees lumber-in-waiting. Zero hours contracts are an obvious human example of this, but look how much time we spend waiting for computers to load so we can do something.

Levy reminded that surveillance has a different meaning for vulnerable groups, linking back to Deirdre Mulligan's comparison of algorithmic decision-making in healthcare and the judiciary. The first is operated cautiously with careful review by trained professionals who have closely studied its limits; the second is off-the-shelf software applied willy-nilly by untrained people who change its use and lack understanding of its design or problems. "We need to figure out how to ensure that these systems are adopted in ways that address the fact that...there are policy choices all the way down," Mulligan said. Levy, later: "One reason we accept algorithms [in the judiciary] is that we're not the ones they're doing it to."

Yet despite all this gloom - cognitive dissonance alert - everyone still believes that the internet has been and will be positively transformative. Julia Powles noted, "The tornado is where we are. The dandelion is what we're fighting for - frail, beautiful...but the deck stacked against it." In closing, Lauren Scholz favored a return to basic ethical principles following a century of "fallen gods" including really big companies, the wisdom of crowds, and visionaries.

Sohn, too, remains optimistic. "I'm still very bullish on the internet," she said. "It enables everything important in our lives. That's why I've been fighting for 30 years to get people access to communications networks."


Illustrations: After the Digital Tornado's closing panel (left to right): Kevin Werbach, Karen Levy, Julia Powles, Lauren Scholz; tornado (Justin1569 at Wikipedia)

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 17, 2017

Counterfactuals

On Tuesday evening, virtual reality pioneer and musician Jaron Lanier, in London to promote his latest book, Dawn of the New Everything, suggested the internet took a wrong turn in the 1990s by rejecting the idea of combating spam by imposing a tiny - "homeopathic" - charge to send email. Think where we'd be now, he said. The mindset of paying for things would have been established early, and instead of today's "behavior modification empires" we'd have a system where people were paid for the content they produce.

Lanier went on to invoke the ghost of Ted Nelson who began his earliest work on Project Xanadu in 1960, before ARPAnet, the internet, and the web. The web fosters copying. Xanadu instead gave every resource a permanent and unique address, and linking instead of copying meant nothing ever lost its context.

The problem, as Nelson's 2011 autobiography Possiplex and a 1995 Wired article made plain, is that trying to get the thing to work was a heartbreaking journey filled with cycles of despair and hope that was increasingly orthogonal to where the rest of the world was going. While efforts continue, Xanadu is still difficult to comprehend, no matter how technically visionary and conceptually advanced it was. The web wins on simplicity.

But the web also won because it was free. Tim Berners-Lee is very clear about the importance he attaches to deciding not to patent the web and charge licensing fees. Lanier, whose personal stories about internetworking go back to the 1980s, surely knows this. When the web arrived, it had competition: Gopher, Archie, WAIS. Each had its limitations in terms of user interface and reach. The web won partly because it unified all their functions and was simpler - but also because it was freer than the others.

Suppose those who wanted minuscule payments for email had won? Lanier believes today's landscape would be very different. Most of today's machine learning systems, from IBM Watson's medical diagnostician to the various quick-and-dirty translation services rely on mining an extensive existing corpus of human-generated material. In Watson's case, it's medical research, case studies, peer review, and editing; in the case of translation services it's billions of side-by-side human-translated pages that are available on the web (though later improvements have taken a new approach). Lanier is right that the AIs built by crunching found data are parasites on generations of human-created and curated knowledge. By his logic, establishing payment early as a fundamental part of the internet would have ensured that the humans that created all that data would be paid for their contributions when machine learning systems mined it. Clarity would result: instead of the "cruel" trope that AIs are rendering humans unnecessary, it would be obvious that AI progress relied on continued human input. For that we could all be paid rather than being made "wards of the state".

Consider a practical application. Microsoft's LinkedIn is in court opposing HiQ, a company that scrapes LinkedIn's data to offer employers services that LinkedIn might like to offer itself. The case, which was decided in HiQ's favor in August but is appeal-bound, pits user privacy (argued by EPIC) against innovation and competition (argued by EFF). Everyone speaks for the 500 million whose work histories are on LinkedIn, but no one speaks for our individual ownership of our own information.

Let's move to Lanier's alternative universe and say the charge had been applied. Spam dropped out of email early on. We developed the habit of paying for information. Publishers and the entertainment industry would have benefited much sooner, and if companies like Facebook and LinkedIn had started, their business models would have been based on payments for posters and charges for readers (he claims to believe that Facebook will change its business model in this direction in the coming years; it might, but if so I bet it keeps the advertising).

In that world, LinkedIn might be our broker or agent negotiating terms with HiQ on our behalf rather than in its own interests. When the web came along, Berners-Lee might have thought pay-to-click logical, and today internet search might involve deciding which paid technology to use. If, that is, people found it economic to put the information up in the first place. The key problem with Lanier's alternative universe: there were no micropayments. A friend suggests that China might be able to run this experiment now: Golden Shield has full control, and everyone uses WeChat and AliPay.

I don't believe technology has a manifest destiny, but I do believe humans love free and convenient, and that overwhelms theory. The globally spreading all-you-can-eat internet rapidly killed the existing paid information services after commercial access was allowed in 1994. I'd guess that the more likely outcome of charging for email would have been the rise of free alternatives to email - instant messaging, for example, which happened in our world to avoid spam. The motivation to merge spam with viruses and crack into people's accounts to send spam would have arisen earlier than it did, so security would have been an earlier disaster. As the fundamental wrong turn, I'd instead pick centralization.

Lanier noted the culminating irony: "The left built this authoritarian network. It needs to be undone."

The internet is still young. It might be possible, if we can agree on a path.


Illustrations: Jaron Lanier in conversation with Luke Robert Mason (Eva Pascoe).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


October 13, 2017

Cost basis

There's plenty to fret about in the green paper released this week outlining the government's Internet Safety Strategy (PDF) under the Digital Economy Act (2017). The technical working group is predominantly made up of child protection folks, with just one technical expert and no representatives of civil society or consumer groups. It lacks definitions: what qualifies as "social media"? And issues discussed here before persist, such as age verification and the mechanisms to implement it. Plus there are picky details, like requiring parental consent for the use of information services by children under 13, which apparently fails to recognize how often parents help their kids lie about their ages. However.

The attention-getting item we hadn't noticed before is the proposal of an "industry-wide levy which could in the future be underpinned with legislation" in order to "combat online harms". This levy is not, the paper says, "a new tax on social media" but instead "a way of improving online safety that helps businesses grow in a sustainable way while serving the wider public good".

The manifesto commitment on which this proposal is based compares this levy to those in the gambling and alcohol industries. The Gambling Act 2005 provides for legislation to support such a levy, though to date the industry's contributions, most of which go to GambleAware to help problem gamblers, are still voluntary. Similarly, the alcohol industry funds the Drinkaware Trust.

The problem is that these industries aren't comparable in business model terms. Alcohol producers and retailers make and sell a physical product. The gambling industry's licensed retailers also sell a product, whether it's physical (lottery tickets or slot machine rolls) or virtual (online poker). Either way, people pay up front and the businesses pay their costs out of revenues. When the government raises taxes or adds a levy or new restriction that has to be implemented, the costs are passed on directly to consumers.

No such business model applies in social media. Granted, the profits accruing to Facebook and Google (that is, Alphabet) look enormous to us, especially given the comparatively small amounts of tax they pay to the UK - 5% of UK profits for Facebook and a controversial but unclear percentage for Alphabet. But no public company adds costs without planning how to recoup them, so then the question is: how do companies that offer consumers a pay-with-data service do that, given that they can't raise prices?

The first alternative is to reduce costs. The problem is how. Reducing staff won't help with the kinds of problems we're complaining about, such as fake news and bad behavior, which require humans to solve. Machine learning and AI are not likely to improve enough to provide a substitute in the near term, though no doubt the companies hope they will in the longer term.

The second is to increase revenues, which would mean either raising prices to advertisers or finding new ways to exploit our data. The need to police user behavior doesn't seem like a hot selling point to convince advertisers that it's worth paying more. That leaves the likelihood that applying a levy will create a perverse incentive to gather and crunch yet more user data. That does not represent a win; nor does it represent "taking back control" in any sense.

It's even more unclear who would be paying the levy. The green paper says the intention is to make it "proportionate" and ensure that it "does not stifle growth or innovation, particularly for smaller companies and start-ups". It's not clear, however, that the government understands just how vast and varied "social media" are. The term includes everything from the services people feel they have little choice about using (primarily Facebook, but also Google to some extent) to the web boards on news and niche sites, to the comments pages on personal blogs, to long-forgotten precursors of the web like Usenet and IRC. Designing a levy to take account of all business models and none while not causing collateral damage is complex.

Overall, there's sense in the principle that industries should pay for the wider social damage they cause to others. It's a long-standing approach for polluters, for example, and some have suggested there's a useful comparison to make between privacy and the environment. The Equifax breach will be polluting the privacy waters for years to come as the leaked data feeds into more sophisticated phishing attacks, identity fraud, and other widespread security problems. Treating Equifax the way we treat polluters makes sense.

It's less clear how to apply that principle to sites that vary from self-expression to publisher to broadcaster to giant data miners. Since the dawn of the internet any time someone's created a space for free expression someone else has come along and colonized a corner of it where people could vent and be mean and unacceptable; 4chan has many ancestors. In 1994, Wired captured an early example: The War Between alt.tasteless and rec.pets.cats. Those Usenet newsgroups created revenue for no one, while Facebook and Google have enough money to be the envy of major governments.

Nonetheless, that doesn't make them fair targets for every social problem the government would like to dump off onto someone else. What the green paper needs most is a clear threat model, because it's only after you have one that you can determine the right tools for solving it.


Illustrations: Social network diagram.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 24, 2015

When content wanted to be free

A long-running motif on the TV show Mad Men has been the conflict between the numbers guys - Harry Crane (Rich Sommer) and Jim Cutler (Harry Hamlin) - and the creative folks - Don Draper (Jon Hamm) and Peggy Olson (Elisabeth Moss) - who want to do inventive work that inspires emotional connection. As the discussion on the WELL concluded, the success of Google's all-text contextual ads says the numbers guys have won. For now.

This week, two German publishers lost in court against the creator of the browser plug-in Adblock Plus, which, like you'd think, blocks web ads for an increasing number of users worldwide. The publishers' contention: that Adblock Plus is "illegal" and "anti-competitive". Adblock Plus's project manager, Ben Williams, welcomed the precedent on his blog, hoping it will help his company avoid future expense and resource drain "defending what we feel is an obvious consumer right: giving people the ability to control their own screens by letting them block annoying ads and protect their privacy".

Williams concludes by suggesting that publishers should work with Adblock Plus to develop non-intrusive forms of advertising and "create a more sustainable Internet ecosystem for everyone". Adblock Plus implements this by whitelisting sites (the largest of which pay for the privilege) that run acceptable ads. Cue the arms race: the fork Adblock Edge still removes all ads. As of June 2014, PageFair counted 150 million ad blocker users (PDF), up 69% from 2013.

I have to admit to some inner conflict here, because those who argue that blocking ads is theft have a point. I am indeed accessing content whose existence (and whose writers) is being financed by advertisers without the quid pro quo of my attention. If everyone does this, the whole shebang - including a chunk of how I make my own living - is unsustainable. I should be wracked with guilt. It's just that the ads make me hate the companies that pay for them, and I can't read a web page full of fine print with animations in my face. Similarly, it's hard to enjoy - or even follow - a US television show when it's interrupted by eight minutes of ads per half hour and each one is delivered at a volume easily 1/3 higher than the program I'm there to see. I plead in return that I buy DVDs, magazine subscriptions, and books, and contribute my own share of free content to the web, but that doesn't pay the same content providers. What seems particularly unreasonable to me is double-dipping: ads in situations where we already pay for admission. That would include DVDs; movie theaters; premium TV channels; the Transport for London phone app; sports stadia during live events; and on purchased clothing.

So the question remains: for the large chunk of the web that is financed solely by advertising, do we want professional content or not? If we do, how do we propose to pay people to create it?

It turns out that this question was considered in 2012 by Tim Hwang (last seen at We Robot 2015) and Adi Kamdar in their paper: Peak Advertising (PDF). The paper makes the explicit analogy between the diminishing effectiveness of online advertising and the diminishing returns after peak oil, when the energy required for extraction exceeds the energy retrieved. The authors consider four indications that we might have reached the point of diminishing returns, and go on to speculate about how content on the Internet would have to evolve if it can no longer rely on advertising support as its dominant financial model. I found it a few months ago when I had the same thought: for many quarters now Google's revenues per click have been dropping (its latest results, released yesterday, continue the trend), and overall it seems impossible that there can be enough advertising in the world to pay for all the things people want to support that way.

Hwang and Kamdar highlighted three problems with the status quo in addition to the constant rise in ad blocking: demographics - advertising tends to reach the oldest (read: least desirable) customers; click fraud; and escalating ad density (the kind of saturation that sends Americans to fast-forwarding DVRs rather than watch eight minutes of ads per TV half hour). Hwang and Kamdar predicted that over the next decade falling revenues will encourage consolidation and monopolistic markets for online services because only the largest vendors will have sufficient inventory to remain profitable. In addition, they predicted an increasing interest on the part of advertisers in collecting more and more (and more privacy-invasive) data about users. Finally, they predicted a rise in essentially unblockable content - that is, "sponsored" stories and product placement. As evidence they were on the right track, I offer the UK Internet Advertising Bureau's discussion of "native ads" ("make advertising part [of] the content experience").

"The end of the Internet as we know it," they said on Usenet when the first ad went up. Recalcitrant users: a disruptive technology.


Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 23, 2012

Democracy theater

So Facebook is the latest to discover that it's hard to come up with a governance structure online that functions in any meaningful way. This week, the company announced plans to disband the system of voting on privacy changes that it put in place in 2009. To be honest, I'm surprised it took this long.

TechCrunch explains the official reasons. First, with 1 billion users, it's now too easy to hit the threshold of 7,000 comments that triggers a vote on proposed changes. Second, with 1 billion users, amassing the 30 percent of the user base necessary to make the vote count has become...pretty much impossible. (Look, if you hate Facebook's policy changes, it's easier to simply stop using the system. Voting requires engagement.) The company also complained that the system as designed encourages comments' "quantity over quality". Really, it would be hard to come up with an online system that didn't unless it was so hard to use that no one would bother anyway.
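The asymmetry between the two thresholds is stark when you run the numbers (a sketch using the round figures above):

```python
# Facebook's two governance thresholds, using the round numbers above:
# a tiny fraction of users can trigger a vote, but an enormous absolute
# turnout is needed to make the result binding.
users = 1_000_000_000
comment_trigger = 7_000      # comments needed to trigger a vote on a change
binding_turnout = 0.30       # share of users needed for the vote to count

voters_needed = round(users * binding_turnout)
print(f"share of users needed to trigger a vote: {comment_trigger / users:.4%}")
print(f"voters needed to make the result binding: {voters_needed:,}")  # 300,000,000
```

Seven thousand comments is well under a thousandth of a percent of the user base; 300 million binding votes is more than the population of almost every country on Earth.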

The fundamental problem for any kind of online governance is that no one except some lawyers thinks governance is fun. (For an example of tedious meetings producing embarrassing results, see this week's General Synod.) Even online, where no one can tell you're a dog watching the Outdoor Channel while typing screeds of debate, it takes strong motivation to stay engaged. That in turn means that ultimately the people who participate, once the novelty has worn off, are either paid, obsessed, or awash in free time.

The people who are paid - either because they work for the company running the service or because they work for governments or NGOs whose job it is to protect consumers or enforce the law - can and do talk directly to each other. They already know each other, and they don't need fancy online governmental structures to make themselves heard.

The obsessed can be divided into two categories: people with a cause and troublemakers - trolls. Trolls can be incredibly disruptive, but they do eventually get bored and go away, IF you can get everyone else to starve them of the oxygen of attention by just ignoring them.

That leaves two groups: those with time (and patience) and those with a cause. Both tend to fall into the category Mark Twain neatly summed up in: "Never argue with a man who buys his ink by the barrelful." Don't get me wrong: I'm not knocking either group. The cause may be good and righteous and deserving of having enormous amounts of time spent on it. The people with time on their hands may be smart, experienced, and expert. Nonetheless, they will tend to drown out opposing views with sheer volume and relentlessness.

All of which is to say that I don't blame Facebook if it found the comments process tedious and time-consuming, and as much of a black hole for its resources as the help desk for a company with impenetrable password policies. Others are less tolerant of the decision. History, however, is on Facebook's side: democratic governance of online communities does not work.

Even without the generic problems of online communities which have been replicated mutatis mutandis since the first modem uploaded the first bit, Facebook was always going to face problems of scale if it kept growing. As several stories have pointed out, how do you get 300 million people to care enough to vote? As a strategy, it's understandable why the company set a minimum percentage: so a small but vocal minority could not hijack the process. But scale matters, and that's why every democracy of any size has representative government rather than direct voting, like Greek citizens in the Acropolis. (Pause to imagine the complexities of deciding how to divvy up Facebook into tribes: would the basic unit of membership be nation, family, or circle of friends, or should people be allocated into groups based on when they joined or perhaps their average posting rate?)

The 2009 decision to allow votes came at a time when Facebook was under recurring and frequent pressure over a multitude of changes to its privacy policies, all going one way: toward greater openness. That was the year, in fact, that the system effectively turned itself inside out. EFF has a helpful timeline of the changes from 2005 to 2010. Putting the voting system in place was certainly good PR: it made the company look like it was serious about listening to its users. But, as the Europe vs Facebook site says, the choice was always constrained to old policy or new policy - never new policy, old policy, or an entirely different policy proposed by users.

Even without all that, the underlying issue is this: what company would want democratic governance to succeed? The fact is that, as Roger Clarke observed before Facebook even existed, social networks have only one business model: to monetize their users. The pressure to do that has only increased since Facebook's IPO, even though founder Mark Zuckerberg created a dual-class structure that means his decisions cannot be effectively challenged. A commercial company - especially a *public* commercial company - cannot be run as a democracy. It's as simple as that. No matter how much their engagement makes them feel they own the place, the users are never in charge of the asylum. Not even on the WELL.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series.

August 3, 2012

Social advertising

It only takes two words to sum up Facebook's sponsored stories, the program under which you click the "Like" button on a brand's page and the system picks up your name and photograph and includes it in ads seen by your friends. The two words: social engineering.

The cooption of that phrase into the common language and the workings of time mean that its origins are beginning to be lost. In fact, it came from 1980s computer hacking, and was, to the best of my knowledge, created by Kevin Mitnick in the days when the New York Times was calling him the world's most dangerous hacker. (Compared to today's genuinely criminal hacking enterprises, Mitnick was almost absurdly harmless; but he scared the wrong people at the wrong time.) The thing itself, of course, is basically the confidence game that is probably as old as consciousness: you, the con man, get the mark to trust you so you can then manipulate that trust to your benefit. By the time the mark figures out the game, you yourself expect to be long gone and out of reach. Trust can be abruptly severed, but the results of having granted it in the first place can't be so easily undone.

Where Facebook messed up was in that last bit: it's hard for a company to leave town, opening the way for the inevitable litigation. And now there's a settlement under consideration that would require the company to pay millions to privacy advocacy organisations.

This hasn't, of course, been a good week for Facebook for other reasons: it released its first post-IPO financial statements last week. And, for the same reasons we gave when the IPO failed to impress us, those earnings were, as predicted, disappointing. At the same time, the company admitted that 83 million of its user accounts are fakes or duplicates (so the service's user base is maybe 912 million instead of 995 million). And a music company complains that it was paying for ads clicked on by bots, a claim Facebook says it can't substantiate. Small wonder the shares have halved in price since the IPO - and I'd say they're still too expensive.

The comment that individuals whose faces and names were used were being used as spokespeople without being paid, however, sparks some interesting thoughts about the democratization of celebrity endorsements and product placement. Ever since I first encountered MIT's work on wearable computing in the mid 1990s, I've wondered when we would start seeing people wearing clothing that's not just branded but displaying video ads. In the early 2000s, I recall attending an Internet Advertising Bureau event, where one of the speakers talked baldly about the desirability of getting messages into the workplace, which until then had been a no-go area. Well, I say no-go; to them I think it seemed more like a green field or an unbroken pasture of fresh snow.

Spammers were way ahead on this one, invading people's email inboxes and instant messaging and then, when filtering got good, spoofing the return addresses of people you know and trust in order to get you to click on the bad stuff. It's hard not to see Facebook's sponsored stories as the corporate version of this.

But what if they did pay, as that blog posting suggested? What if instead of casually telling your friends how great Lethal Police Hogwarts XXII is, you could get paid to do so? You wouldn't get much, true, but if sports stars can be paid millions of dollars to endorse tennis racquets (which are then customized to the point where they bear little resemblance to the mass market product sold to the rest of us) why shouldn't we be paid a few cents? Of course, after a while you wouldn't be able to trust your friends' opinions any more, but is that too high a price?

Recently, I've spent some time corresponding with a couple of people from Premiumlinkadvertising.com, who contacted me with the offer to pay me to insert a link to Musician's Friend into one of the music pages on my Web site. Once I realized that the deal was that the link could not be identified in any way as a paid link - it couldn't be put in a box, or a different font, or include the text "paid for", or anything like that - I bailed. They then offered more money. Last offer was $250 for a year, I think. I do allow ads on my site - a few pages have AdSense, and in the past a couple had paid-for text ads clearly labeled as such - but not masquerading as personal recommendations. I imagine there's some price at which I could be bought, but $250 is several orders of magnitude too low.


This week's links:

- Excellent debunking of the "cybercrime costs $1 trillion" urban legend (is that including Facebook's vanishing market cap?)

- The Federated Artists Coalition has an interesting proposal to give artists and creators some rights in the proposed Universal/EMI merger.

- Wouldn't you think people would test their software before unleashing it on an unsuspecting stock market?


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


July 27, 2012

Retcons and reversals

Reversals - in which a twist of plot or dialogue reverses what's gone before it - make for great moments in fiction, both comic and tragic. Retcons, in which the known history of a character or event is rewritten or ignored, are typically a sign of writer panic: they're out of ideas and desperate enough to betray the characters and enrage the fans.

This week real-life Internet-related news has seen so many of both that if it were a TV series the showrunner would demand that the writers slow the pace. To recap:

Reversal: Paul Chambers' acquittal on appeal in the so-called Twitter joke trial is good news for everyone: common sense has finally prevailed, albeit at great cost to Chambers, whose life was (we hope temporarily) wrecked by the original arrest and guilty verdict. The decision should go a long way toward establishing that context matters; that what is said online and in public may still be intended only for a relatively small audience who give it its correct meaning; and that when the personnel responsible for airport security, the police, and everyone else up the chain understand there was no threat intended the Crown Prosecution Service should pay attention. What we're trying to stop is people blowing up airports, not people expressing frustration on Twitter. The good news is that everyone except the CPS and the original judge could accurately tell the difference.

Retcon: The rewrite of British laws to close streets and control street signs, retailers, individual behavior, and other public displays for the next month, all to make the International Olympic Committee happy is both wrong and ironic. While the athletes are required to appear to be amateurs who participate purely for the love of sport (no matter what failed drug tests indicate), the IOC and its London delegate, LOCOG, are trying to please their corporate masters by behaving like bullies. This should not have been a surprise, given both the list of high-level corporate sponsors and the terms of the 2006 Act the British Parliament passed in their shameful eagerness to *get* the Olympics. No sporting event, no matter how prominent, no matter how much politicians hope it will bring luster to their country and keep them in office, should override national laws, norms, and standards.

In 1997 I predicted for Salon.com the top ten new jobs for 2002. Number one was copyright protection officer, which I imagined as someone who visited schools to ensure that children complied with trademark, copyright, and other intellectual property requirements. Today, according to CNN and the New York Times, 280 "brand police" are scouring London for marketers who are violating the London Olympic Games and Paralympic Games Act 2006 by using words that might conjure up an association with the Olympics in people's minds. Even Michael Payne, the marketing director who formulated the IOC's branding strategy, complains that LOCOG has gone too far. The Olympics of Discontent, indeed.

Reversal: Eleven-year-old Liam Corcoran managed to get through security and onto a plane, all without a ticket, boarding pass, or passport, apparently more or less by accident. The story probably shouldn't be the occasion for too much hand-wringing about security. The fixes are simple and cheap. And it's not as if the boy got through with a 3D printer and enough material to make a functioning gun. (Doubtless to be banned from Olympic events in 2016, alongside wireless hubs.)

Retcon: If you're going to (let's call it) reinterpret history to suit an agenda, you should probably stick with events far enough back that the people are all dead. There is by now plenty of high-quality debunking of Gordon Crovitz's claim in the Wall Street Journal that government involvement in the invention of the Internet is a "myth". Ha. Not only was the development of the Internet largely supported by the US government (and championed by Al Gore), so was that of the rest of the computer industry. That conservatives would argue this wasn't true is baffling; isn't the military supposed to be the one part of government anti-big-government people actually like? Another data point left out of the (largely American) discussion: the US government wasn't the only one involved. Much of the early work on internetworking involved international teamwork. The term "packet" in "packet switching", the fundamental way the Internet transmits data, came from the British efforts; its inventor was the Welsh computer scientist Donald Davies at the UK's National Physical Laboratory. Not that Mitt Romney will want to know this.

For good historical accounts of the building of the Internet, see Katie Hafner and Matthew Lyon's Where Wizards Stay Up Late: The Origins of the Internet (1998) and (especially for a more international view) Janet Abbate's Inventing the Internet. As for the Romney/Obama spat over who built what, I suspect that what President Obama was trying to get across was a point similar to that made by the writer Paulina Borsook in 1996: that without good roads, clean water, good schools, and all the other infrastructure First Worlders take for granted, big, new companies have a hard time emerging.

It's all part of that open, free infrastructure we so often like to talk about that's necessary for the commons to thrive. And for that, you need governments to do the right things.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

July 20, 2012

In the country of the free

About a year and a half ago, I suddenly noticed that The Atlantic was posting a steady stream of interesting articles to Twitter (@theatlantic) and realized it was time to resubscribe. In fact, I would argue that the magazine is doing a lot of what Wired used to do in its digital coverage.

I don't, overall, regret it. But this month's issue is severely marred by this gem, from Elizabeth Wurtzel (the woman who got famous for taking Prozac and writing about it):

Of the Founders' genius ideas, few trump intellectual-property rights. At a time when Barbary pirates still concerned them, the Framers penned an intellectual-property clause--the world's first constitutional protection for copyrights and patents. In so doing, they spawned Hollywood, Silicon Valley, Motown, and so on. Today, we foolishly flirt with undoing that. In a future where all art is free (the future as pined for by Internet pirates and Creative Commons zealots), books, songs, and films would still get made. But with nobody paying for them, they'd be terrible. Only people who do lousy work do it for free.

Wurtzel's piece, entitled "Charge for Your Ideas", is part of a larger section on innovative ideas; other than hers, most of them are at least reasonable suggestions. I hate to make the editors happy by giving additional attention to something that should have been scrapped, but still: that one short paragraph contains so many errors that it demands rebuttal.

Very, very few people - the filmmaker Nina Paley being the only one who springs rapidly to mind (do check out her fabulous film Sita Sings the Blues) - actually want to do away with copyright. And even most of those would like to be paid for their work. Paley turned Sita over to her audience to distribute freely because the deals she was being offered by distributors were so terrible and demanded so much lock-in that she thought she could do better. And she has, including fees for TV and theatrical showings and sales of DVDs and other items. More important from her perspective, she's built an audience for the film that it probably never would have found through traditional channels and that will support and appreciate her future work. As so many of us have said, obscurity is a bigger threat to most artists than loss of revenues.

Neither Creative Commons, nor its founder, Larry Lessig, nor the Open Rights Group, nor the Electronic Frontier Foundation, nor anyone else I can think of among digital rights campaigners has ever said that copyright should be abolished. The Pirate Party, probably the most radical among politically active groups pushing for copyright reform, wants to cut it way back, true - but not to abolish it. Even free software diehard Richard Stallman finds copyright useful as a way of blocking people from placing restrictions on free software.

Creative Commons' purpose in life is to make it easy for anyone who creates online content to attach to it a simple, easy-to-understand license that makes clear what rights to the content are reserved and which are available. One of those licenses blocks all uses without permission; others allow modification, redistribution, or commercial use, or require attribution.

Wurtzel fails to grasp that one may wish to reform something without wishing to terminate its existence. It was radical to campaign for copyright reform 20 years ago; today even the British government agrees copyright reform is needed (though we may all disagree about the extent and form that reform should take).

The Framers did not invent copyright. It was that pesky country they left, Britain, that enacted the first copyright law, the Statute of Anne, in 1710. We will, however, allow the "first constitutional" bit to stand. That still does not mean that the copyright status of Mickey Mouse should dictate national law.

As for pirates - the seafaring kind, not the evil downloader with broadband - they are far from obsolete. In fact, piracy is on the increase, and a major concern to both governments and shipping businesses. In May, the New York Times highlighted the growing problem of Somali pirates off the Horn of Africa.

Her final claim, that "Only people who do lousy work do it for free", was the one that got me enraged enough to write this. It's an insult to every volunteer, every generous podcaster, every veteran artist who blogs to teach others, every beginning artist finding their voice, every intern, and every person who has a passion for something and pursues it for love, whether they're an athlete in an unpopular sport or an amateur musician who plays only for his friends because he doesn't want his relationship with music to be damaged by making it his job. It is certainly true that much of what we imagine is "free" is paid for in other ways: bloggers whose blogs are part of the output their employer pays for, free/open source software writers who like the credit and stature their contributions give them, and so on. But imagine the miserable, miserly, misanthropic society we'd be living in if her claim were true. We'd need that Prozac.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


June 29, 2012

Artificial scarcity

A couple of weeks ago, while covering the tennis at Eastbourne for Daily Tennis, I learned that there is an ongoing battle between the International Tennis Writers Association and the sport at large over the practice of posting interview transcripts online.

What happens is this. Tournaments - the top few layers of the men's (ATP) and women's (WTA) tours - pay stenographers from ASAP Sports to attend players' press conferences and produce transcripts, which are distributed to the journalists on-site to help them produce accurate copy. It's a fast service; the PR folks come around the press room with hard copies of the transcript perhaps 10-15 minutes after the press session ends.

Who gives press conferences? At Eastbourne, like most smaller events, the top four seeds all are required to do media on the first day. After that, every day's match winners are required to oblige if the press asks for them; losers have more discretion but the top players generally understand that with their status and success level comes greater responsibility to publicize the game by showing up to answer questions. The stenographer at Eastbourne was a highly trained court reporter who travels the golf and tennis worlds taking down these questions and answers verbatim on a chord keyboard.

It turns out this particular battle over transcripts has been going on for a while; witness this unhappy blogger's comment from June 2011, after discovering that the French Open had bowed to pressure and stopped publishing interviews on its Web site. The same blogger had earlier posted ITWA's response to the complaints.

ITWA's arguments are fairly simple. It's a substantial investment to travel the tour (true; per year full-time you're talking at least $50,000). If interview transcripts are posted on the Web before journalists have had a chance to write their stories, it won't be worth spending that money because anyone can write stories based on them (true). Newspapers are in dire straits as it is (true). The questions journalists ask the players are informed by their experience and professional expertise; surely they should have the opportunity to exploit the responses they generate before everyone else does - all those pesky bloggers, for example, who read the transcripts and compare them to the journalists' reports and spot the elisions and changes of context.

Now, I don't believe for a second that there will be no coverage of tennis if the press stop traveling the tour. What there won't be is *independent* coverage. Except for the very biggest events, the players will be interviewed by the tours' PR people, and everything published about them will be as sanitized as their Wimbledon whites. Plus some local press, asking things like, "Talk about how much you like Eastbourne." The result will be like the TV stations now that provide their live match commentary by dropping a couple of people in a remote studio. No matter how knowledgeable those people are, their lack of intimate contact with the players and local conditions deadens their commentary and turns it into a recital of their pet peeves. (Note to Eurosport: any time a commentator says, "We talk so often about..." that commentator needs to shut up.)

This is the same argument they used to have about TV: if people can see the match on TV they won't bother to travel to it (and sometimes you do still find TV blackouts of local games). That hasn't really turned out to be true - TV has indeed changed this and every other sport, but by creating international stars and bringing in a lot of money in both payment for TV rights and sponsorship.

My response to the person who told me about this issue was that I didn't think basing your business model on artificial scarcity was going to work, the way the world is going. But this is not the only example of such restrictions; a number of US tournaments do not allow fans to carry professional-quality cameras onto the grounds (to protect the interests of professional photographers).

What intrigued me about the argument - which at heart is merely a variant of the copyright wars - is that it pits the interests of fans and bloggers against those of the journalists who cover them. For the tournaments and tours themselves it's an inner conflict: they want both newspaper and magazine coverage *and* fan engagement. "Personal" contact with the players is a key part of that - and it is precisely what has diminished. Veteran tennis journalists will tell you that 20 years ago they got to know the players because they'd all be traveling the circuit together and staying in the same hotels. Today, the barriers are up; the players' lounge is carefully sited well away from the media centre.

Yet this little spat reflects the reality that the difference between writing a fan blog and working for a major media outlet is access. There is only so much time the stars in any profession - TV, sports, technology, business - can give to answering outsiders' questions before it eats into their real work. So this isn't really a story of artificial scarcity, though there's no lack of people who want to write about tennis. It's a story of real scarcity - but scarcity that one day soon is going to be differently distributed.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


April 28, 2012

Interview with Lawrence Lessig

This interview was originally intended for a different publication; I only discovered recently that it hadn't run. Lessig and I spoke in late January, while the fate of the Research Works Act was still unknown (it's since been killed).

"This will be the grossest money election we've seen since Nixon," says the law professor Lawrence Lessig, looking ahead to the US Presidential election in November. "As John McCain said, this kind of spending level is certain to inspire a kind of scandal. What's needed is scandals."

It's not that Lessig wants electoral disaster; it's that scandals are what he thinks it might take to wake Americans up to the co-option of the country's political system. The key is the vast, escalating sums of money politicians need to stay in the game. In his latest book, Republic, Lost, Lessig charts this: in 1982 aggregate campaign spending for all House and Senate candidates was $343 million; in 2008 it was $1.8 billion. Another big bump upward is expected this year: the McCain quote he references was in response to the 2010 Supreme Court decision in Citizens United legalising Super-PACs. These can raise unlimited campaign funds as long as they have no official contact with the candidates. But as Lessig details in Republic, Lost, money-hungry politicians don't need things spelled out.

Anyone campaigning against the seemingly endless stream of anti-open Internet, pro-copyright-tightening policies and legislation in the US, EU, and UK - think the recent protests against the US's Stop Online Piracy (SOPA) and Protect Intellectual Property (PIPA) Acts and the controversy over the Digital Economy Act and the just-signed Anti-Counterfeiting Trade Agreement (ACTA) treaty - has experienced the blinkered conviction among many politicians that there is only one point of view on these issues. Years of trying to teach them otherwise helped convince Lessig that it was vital to get at the root cause, at least in the US: the constant, relentless need to raise escalating sums of money to fund their election campaigns.

"The anti-open access bill is such a great example of the money story," he says, referring to the Research Works Act (H.R. 3699), which would bar government agencies from mandating that the results of publicly funded research be made accessible to the public. The target is the National Institutes of Health, which adopted such a policy in 2008; the backers are journal publishers.

"It was introduced by a Democrat from New York and a Republican from California and the single most important thing explaining what they're doing is the money. Forty percent of the contributions that Elsevier and its senior executives have made have gone to this one Democrat." There is also, he adds, "a lot to be done to document the way money is blocking community broadband projects".

Lessig, a constitutional scholar, came to public attention in 1998, when he briefly served as a special master in Microsoft's antitrust case. In 1999, he wrote the frequently cited book Code and Other Laws of Cyberspace, following up by founding Creative Commons to provide a simple way to licence work on the Internet. In 2002, he argued Eldred v. Ashcroft against copyright term extension in front of the Supreme Court, a loss that still haunts him. Several books later - The Future of Ideas, Free Culture, and Remix - in 2008, at the Emerging Technology conference, he changed course into his present direction, "coding against corruption". The discovery that he was writing a book about corruption led Harvard to invite him to run the Edmond J. Safra Foundation Center for Ethics, where he fosters RootStrikers, a network of activists.

Of the Harvard centre, he says, "It's a bigger project than just being focused on Congress. It's a pretty general frame for thinking about corruption and trying to think in many different contexts." Given the amount of energy and research, "I hope we will be able to demonstrate something useful for people trying to remedy it." And yet, as he admits, although corruption - and similar copyright policies - can be found everywhere, his book and research are resolutely limited to the US: "I don't know enough about different political environments."

Lessig sees his own role as a purveyor of ideas rather than an activist.

"A division of labour is sensible," he says. "Others are better at organising and creating a movement." For similar reasons, despite a brief flirtation with the notion in early 2008, he rules out running for office.

"It's very hard to be a reformer with idealistic ideas about how the system should change while trying to be part of the system," he says. "You have to raise money to be part of the system and engage in the behaviour you're trying to attack."

Getting others - distinguished non-politicians - to run on a platform of campaign finance reform is one of four strategies he proposes for reclaiming the republic for the people.

"I've had a bunch of people contact me about becoming super-candidates, but I don't have the infrastructure to support them. We're talking about how to build that infrastructure." Lessig is about to publish a short book mapping out strategy; later this year he will update it, incorporating contributions made on a related wiki.

The failure of Obama, a colleague at the University of Illinois at Chicago in the mid-1990s, to fulfil his campaign promises in this area is a significant disappointment.

"I thought he had a chance to correct it and the fact that he seemed not to pay attention to it at all made me despair," he says.

Discussion is also growing around the most radical of the four proposals, a constitutional convention under Article V to force through an amendment; to make it happen 34 state legislatures would have to apply.

"The hard problem is how you motivate a political movement that could actually be strong enough to respond to this corruption," he says. "I'm doing everything I can to try to do that. We'll see if I can succeed. That's the objective."


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series and one of other interviews.


February 17, 2012

Foul play

You could have been excused for thinking you'd woken up in a foreign country on Wednesday, when the news broke about a new and deliberately terrifying notice replacing the front page of a previously little-known music site, RnBXclusive.

ZDNet has a nice screenshot of it; it's gone from the RnBXclusive site now, replaced by a more modest advisory.

It will be a while before the whole story is pieced together - and tested in court - but the gist so far seems to be that the takedown of this particular music site was under the fraud laws rather than the copyright laws. As far as I'm aware - and I don't say this often - this is the first time in the history of the Net that the owner of a music site has been arrested on suspicion of conspiracy to defraud (instead of copyright infringement). It seems to me this is a marked escalation of the copyright wars.

Bearing in mind that at this stage these are only allegations, it's still possible to do some thinking about the principles involved.

The site is accused of making available, without the permission of the artists or recording companies, pre-release versions of new music. I have argued for years that file-sharing is not the economic enemy of the music industry and that the proper answer to it is legal, fast, reliable download services. (And there is increasing evidence bearing this out.) But material that has not yet been officially released is a different matter.

The notion that artists and creators should control the first publication of new material is a long-held principle and intuitively correct (unlike much else in copyright law). This was the stated purpose of copyright: to grant artists and creators a period of exclusivity in which to exploit their ideas. Absolutely fundamental to that is time in which to complete those ideas and shape them into their final form. So if the site was in fact distributing unreleased music as claimed, especially if, as is also alleged, the site's copies of that music were acquired by illegally hacking into servers, no one is going to defend either the site or its owner.

That said, I still think artists are missing a good bet here. The kind of rabid fan who can't wait for the official release of new music is exactly the kind of rabid fan who would be interested in subscribing to a feed from the studio while that music is being recorded. They would also, as a friend commented a few years ago, be willing to subscribe to a live feed from the musicians' rehearsal studio. Imagine, for example, being able to listen to great guitarists practice. How do they learn to play with such confidence and authority? What do they find hard? How long does it take to work out and learn something like Dave van Ronk's rendition, on guitar, of Scott Joplin rags with the original piano scoring intact?

I know why this doesn't happen: an artist learning a piece is like a dog with a wound (or maybe a bone): you want to go off in a forest by yourself until it's fixed. (Plus, it drives everyone around you mad.) The whole point of practicing is that it isn't performance. But musicians aren't magicians, and I find it hard to believe that showing the nuts and bolts of how the trick of playing music is worked would ruin the effect. For other types of artists - well, writers with works in progress really don't do much worth watching, but sculptors and painters surely do, as do dance troupes and theatrical companies.

However, none of that excuses the site if the allegations are true: artists and creators control the first release.

But also clearly wrong was the notice SOCA placed on the site, which displayed visitors' IP address, warned that downloading music from the site was a crime bearing a maximum penalty of ten years in prison, and claimed that SOCA has the capacity to monitor and investigate you with no mention of due process or court orders. Copyright infringement is a civil offense, not a criminal one; fraud is a criminal offense, but it's hard to see how the claim that downloading music is part of a conspiracy to commit fraud could be made to stick. (A day later, SOCA replaced the notice.) Someone browsing to The Pirate Bay and clicking on a magnet link is not conspiring to steal TV shows any more than someone buying a plane ticket is conspiring to destroy the ozone layer. That millions of people do both things is a contributing factor to the existence of the site and the airline, but if you accuse millions of people the term "organized crime" loses all meaning.

This was a bad, bad blunder on the part of authorities wishing to eliminate file-sharing. Today's unworkable laws against file-sharing are bringing the law into contempt already. Trying to scare people by misrepresenting what the law actually says at the behest of a single industry simply exacerbates the effect. First they're scared, then they're mad, and then they ignore you. Not a winning strategy - for anyone.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


February 10, 2012

Media cop

The behavior of The Times in the 2009 NightJack case, in which the paper outed an anonymous policeman blogging about his job, was always baffling since one of the key freedoms of the press is protecting sources. On occasion, journalists have gone to jail rather than give up a source's name, although it happens rarely enough that when it does, as in the Judith Miller case linked above, Hollywood makes movies about it. The principle at work here, writes NPR reporter David Folkenflik, who covered that case, is that, "You have to protect all of your sources if you want any of them to speak to you again."

Briefly, the background. In 2009, the first winner of the prestigious Orwell Prize for political blogging was an unidentified policeman. Blogging under the soubriquet of "NightJack", the blogger declined all interviews ("I am not a media cop," he wrote), sent a friend to deliver his acceptance speech, and had his prize money sent directly to charity. Shortly afterwards, he took The Times to court to prevent it from publishing his real-life identity. Controversially, Justice David Eady ruled for The Times on the basis that NightJack had no expectation of privacy - and freedom of expression was important. Ironic, since the upshot was to stifle NightJack's speech: his real-life alter ego, Richard Horton, was speedily reprimanded by his supervisor and the blog was deleted.

This is the case that has been reinvestigated this week by the Leveson inquiry into phone hacking in the media. Justice Eady's decision seems to have rested on two prongs: first, that the Times had identified Horton from public sources, and second, that publication was in the public interest because Horton's blog posts disclosed confidential details about his police work. It seems clear from Times editor James Harding's testimony (PDF) that the first of these prongs was bent. The second seems to have been also: David Allen Green, who has followed this case closely, is arguing over at New Statesman (see the comments) that The Times's court testimony is the only source of the allegations that Horton's blog posts gave enough information that the real people in the cases he talked about could be identified. (In fact, I'd expect the cases are much more identifiable *after* his Times identification than before it.)

So Justice Eady's decision was not animated by research into the difficulty of real online anonymity. Instead, he was badly misled by incomplete, false evidence. Small wonder that Horton is suing.

One of the tools journalists use to get sources to disclose information they don't want tracked back to them is the concept of off-the-record background. When you are being briefed "on background", the rule is that you can't use what you're told unless you can find other sources to tell you the same thing on the record for publication. This is entirely logical because once you know what you're looking for you have a better chance of finding it. You now know where to start looking and what questions to ask.

But there should be every difference in an editor's mind between information willingly supplied under a promise not to publish and information obtained illegally. We can argue about whether NightJack's belief that he could remain anonymous was well-founded and whether he, like many people, did a poor job at securing his email account, but few would think he should have been outed as the result of a crime.

Once the Times reporter, Patrick Foster, knew Horton's name he couldn't un-know it - and, as noted, it's a lot easier to find evidence backing up things you already know. What should have happened is that Foster's managers should have barred him from pursuing or talking about the story. The paper should then either have dropped it or, if the editors really thought it sufficiently important, assigned a different, uncontaminated reporter to start over with no prior knowledge and try to find the name from legal sources. Sounds too much like hard work? Yes. That this did not happen says a lot about the newsroom's culture: a focus on cheap, easy, quick, attention-getting stories acquired by whatever means. "I now see it was wrong" suggests that Harding and his editorial colleagues had lost all perspective.

Horton was, of course, not a source giving confidential information to one or more Times reporters. But it's so easy to imagine the Times - or any other newspaper - deciding to run a column written by "D.C. Plod" to give an intimate insight into how the police work. A newspaper running such a column would boast about it, especially if it won the Orwell Prize. And likely the only reason a rival paper would expose the columnist's real identity was if the columnist was a fraud.

Imagine Watergate if it had been investigated by this newsroom instead of that of the 1972 Washington Post. Instead of the President's malfeasance in seeking re-election, the story would be the identity of Deep Throat. Mark Felt would have gone to jail and Richard Milhous Nixon would have gone down in history as an honest man.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


January 27, 2012

Principle failure

The right to access, correct, and delete personal information held about you and the right to bar data collected for one purpose from being reused for another are basic principles of the data protection laws that have been the norm in Europe since the EU adopted the Privacy Directive in 1995. This is the Privacy Directive that is currently being updated; the European Commission's proposals seem, inevitably, to please no one. Businesses are already complaining compliance will be unworkable or too expensive (hey, fines of up to 2 percent of global income!). I'm not sure consumers should be all that happy either; I'd rather have the right to be anonymous than to be forgotten (which I believe will prove technically unworkable), and the jurisdiction for legal disputes with a company to be set to my country rather than theirs. Much debate lies ahead.

In the meantime, the importance of the data protection laws has been enhanced by Google's announcement this week that it will revise and consolidate the more than 60 privacy policies covering its various services "to create one beautifully simple and intuitive experience across Google". It will, the press release continues, be "Tailored for you". Not the privacy policy, of course, which is a one-size-fits-all piece of corporate lawyer ass-covering, but the services you use, which, after the fragmented data Google holds about you has been pooled into one giant liquid metal Terminator, will be transformed into so-much-more personal helpfulness. Which would sound better if 2011 hadn't seen loud warnings about the danger that personalization will disappear stuff we really need to know: see Eli Pariser's filter bubble and Jeff Chester's worries about the future of democracy.

Google is right that streamlining and consolidating its myriad privacy policies is a user-friendly thing to do. Yes, let's have a single policy we can read once and understand. We hate reading even one privacy policy, let alone 60 of them.

But the furore isn't about that, it's about the single pool of data. People do not use Google Docs in order to improve their search results; they don't put up Google+ pages and join circles in order to improve the targeting of ads on YouTube. This is everything privacy advocates worried about when Gmail was launched.

Australian privacy campaigner Roger Clarke's discussion document sets out the principles that the decision violates: no consultation; retroactive application; no opt-out.

Are we evil yet?

In his 2011 book, In the Plex, Steven Levy traces the beginnings of a shift in Google's views on how and when it implements advertising to the company's controversial purchase of the DoubleClick advertising network, which relied on cookies and tracking to create targeted ads based on Net users' browsing history. This $3.1 billion purchase was huge enough to set off anti-trust alarms. Rightly so. Levy writes, "...sometime after the process began, people at the company realized that they were going to wind up with the Internet-tracking equivalent of the Hope Diamond: an omniscient cookie that no other company could match." Between DoubleClick's dominance in display advertising on large, commercial Web sites and Google AdSense's presence on millions of smaller sites, the company could track pretty much all Web users. "No law prevented it from combining all that information into one file," Levy writes, adding that Google imposed limits, in that it didn't use blog postings, email, or search behavior in building those cookies.

Levy notes that Google spends a lot of time thinking about privacy, but quotes founder Larry Page as saying that the particular issues the public chooses to get upset about seem randomly chosen, the reaction determined most often by the first published headline about a particular product. This could well be true - or it may also be a sign that Page and Brin, like Facebook's Mark Zuckerberg and some other Silicon Valley technology company leaders, are simply out of step with the public. Maybe the reactions only seem random because Page and Brin can't identify the underlying principles.

In blending its services, the issue isn't solely privacy, but also the long-simmering complaint that Google is increasingly favoring its own services in its search results - which would be a clear anti-trust violation. There, the traditional principle is that dominance in one market (search engines) should not be leveraged to achieve dominance in another (social networking, video watching, cloud services, email).

SearchEngineLand has a great analysis of why Google's Search Plus is such a departure for the company and what it could have done had it chosen to be consistent with its historical approach to search results. Building on the "Don't Be Evil" tool built by Twitter, Facebook, and MySpace, among others, SEL demonstrates the gaps that result from Google's choices here, and also how the company could have vastly improved its service to its search customers.

What really strikes me in all this is that the answer to both the EU issues and the Google problem may be the same: the personal data store that William Heath has been proposing for three years. Data portability and interoperability, check; user control, check. But that is as far from the Web 2.0 business model as file-sharing is from that of the entertainment industry.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


December 23, 2011

Duck amuck

Back in about 1998, a couple of guys looking for funding for their start-up were asked this: How could anyone compete with Yahoo! or AltaVista?

"Ten years ago, we thought we'd love Google forever," a friend said recently. Yes, we did, and now we don't.

It's a year and a bit since I began divorcing Google. Ducking the habit is harder than those "They have no lock-in" financial analysts thought when Google went public: as if habit and adaptation were small things. Easy to switch CTRL-K in Firefox to DuckDuckGo, significantly hard to unlearn ten years of Google's "voice".

When I tell this to Gabriel Weinberg, the guy behind DDG - his recent round of funding lets him add a few people to experiment with different user interfaces and redo DDG's mobile application - he seems to understand. He started DDG, he told The Rise to the Top last year, because of Google's increasing amount of spam. Frustration made him think: for many queries wouldn't searching just del.icio.us and Wikipedia produce better results? Since his first weekend mashing that up, DuckDuckGo has evolved to include over 50 sources.

"When you type in a query there's generally a vertical search engine or data source out there that would best serve your query," he says, "and the hard problem is matching them up based on the limited words you type in." When DDG can make a good guess at identifying such a source - such as, say, the National Institutes of Health - it puts that result at the top. This is a significant hint: now, in DDG searches, I put the site name first, where on Google I put it last. Immediate improvement.
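The matching Weinberg describes - guess the best vertical source from the few words the user types - can be pictured as a tiny routing table. The sketch below is purely illustrative, not DDG's actual system: the source names and trigger words are invented, and real query classification is far more sophisticated.

```python
# Toy query router: pick the "vertical" source whose trigger words
# overlap the query most, falling back to general web search.
# Source names and trigger words are invented for illustration.
SOURCES = {
    "health": {"symptoms", "drug", "disease", "dosage"},
    "code":   {"python", "segfault", "compiler", "regex"},
    "movies": {"cast", "director", "trailer", "premiere"},
}

def route(query: str) -> str:
    words = set(query.lower().split())
    best, overlap = "web", 0  # default: generic web results
    for name, triggers in SOURCES.items():
        hits = len(words & triggers)  # how many trigger words the query contains
        if hits > overlap:
            best, overlap = name, hits
    return best

print(route("python regex segfault"))    # -> code
print(route("weekend weather forecast")) # -> web (no vertical matches)
```

The hard part, as Weinberg says, is exactly what this toy ignores: most queries contain no unambiguous trigger words at all, which is why putting the site name first helps DDG so much.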

This approach gives Weinberg a new problem, a higher-order version of the Web's broken links: as companies reorganize, change, or go out of business, the APIs he relies on vanish.

Identifying the right source is harder than it sounds, because the long tail of queries requires DDG to make assumptions about what's wanted.

"The first 80 percent is easy to capture," Weinberg says. "But the long tail is pretty long."

As Ken Auletta tells it in Googled, the venture capitalist Ram Shriram advised Sergey Brin and Larry Page to sell their technology to Yahoo! or maybe Infoseek. But those companies were not interested: the thinking then was portals and keeping site visitors stuck as long as possible on the pages advertisers were paying for, while Brin and Page wanted to speed visitors away to their desired results. It was only when Shriram heard that, Auletta writes, that he realized that baby Google was disruptive technology. So I ask Weinberg: can he make a similar case for DDG?

"It's disruptive to take people more directly to the source that matters," he says. "We want to get rid of the traditional user interface for specific tasks, such as exploring topics. When you're just researching and wanting to find out about a topic there are some different approaches - kind of like clicking around Wikipedia."

Following one thing to another, without going back to a search engine...sounds like my first view of the Web in 1991. But it also sounds like some friends' notion of after-dinner entertainment, where they start with one word in the dictionary and let it lead them serendipitously from word to word and book to book. Can that strategy lead to new knowledge?

"In the last five to ten years," says Weinberg, "people have made these silos of really good information that didn't exist when the Web first started, so now there's an opportunity to take people through that information." If it's accessible, that is. "Getting access is a challenge," he admits.

There is also the frontier of unstructured data: Google searches the semi-structured Web by imposing a structure on it - its indexes. By contrast, Mike Lynch's Autonomy, which just sold to Hewlett-Packard for £10 billion, uses Bayesian logic to search unstructured data, which is what most companies have.

"We do both," says Weinberg. "We like to use structured data when possible, but a lot of stuff we process is unstructured."

Google is, of course, a moving target. For me, its algorithms and interface are moving in two distinct directions, both frustrating. The first is Wal-Mart: stuff most people want. The second is the personalized filter bubble. I neither want nor trust either. I am more like the scientists Linguamatics serves: its analytic software scans hundreds of journals to find hidden links suggesting new avenues of research.

Anyone entering a category that's as thoroughly dominated by a single company as search is now is constantly asked: how can you possibly compete with the market leader? Weinberg must be sick of being asked about competing with Google. And he'd be right, because it's the wrong question. The right question is, how can he build a sustainable business? He's had some sponsorship while his user numbers are relatively low (currently 7 million searches a month) and, eventually, he's talked about context-based advertising - yet he's also promising little spam and privacy - no tracking. Now, that really would be disruptive.

So here's my bet. I bet that DuckDuckGo outlasts Groupon as a going concern. Merry Christmas.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


November 18, 2011

The write stuff

The tenth anniversary of the first net.wars column slid by quietly on November 2. This column wasn't born of 9/11 - net.wars-the-book was published in 1998 - but it did grow out of anger over the way the grief and shock over 9/11 was being hijacked to justify policies that were unacceptable in calmer times. Ever since, the column has covered the various border wars between cyberspace and real life, with occasional digressions. This week's column is a digression. I feel I've earned it.

A few weeks ago I had this conversation with a friend:

wg: My friend's son is a writer on The Daily Show.
Friend, puzzled: Jon Stewart needs writers? I thought he did his own jokes.

For the record, Stewart has 12 to 14 staff writers. For a simple reason: comedy is hard, and even the vaudeville-honed joke machine that was Morey Amsterdam would struggle to devise two hours of original material every week.

Which is how we arrive at the enduring mystery of the sitcom. When the form works - though people may disagree about exactly when that is - it is, says the veteran sitcom writer and showrunner Ken Levine, TV's most profitable money machine. Sitcom writing requires not only a substantial joke machine but the ability to create an underlying storyline scaffold of recognizably human reality. And you must do all that under pressure, besieged by conflicting notes from the commissioning network and studio, and conforming to constraints as complex and specific as those of a sonnet: budgets, timing, and your actors' abilities. It takes a village. Or, since today most US sitcoms are written by a roomful of writers working together, a "gang-banging" village.

It is this experience that Levine decided, five years ago, to emulate. The ability to thrive in that environment is an essential skill, but beginning writers work alone until they are thrown in at the deep end on their first job. He calls his packed weekend event The Sitcom Room, and, having spent last weekend taking part in the fifth of the series, I can say the description is accurate. After a few hours of introduction about the inner workings of writers' rooms, scripts, and comedy in general, four teams of five people watch a group of actors perform a Levine-written scene with some obvious and some not-so-obvious things wrong with it. Each team then goes off to fix the scene in its designated room, which comes appropriately equipped with junk food, sodas, and a whiteboard. You have 12 hours (more if you're willing to make your own copies). Go.

After five seminars and 20 teams, Levine says every rewritten script has been different, a reminder that sitcom writing is a treasure hunt where the object of the search is unknown. Levine kindly describes each result as "magical"; attendees were more critical of other groups' efforts. (I liked ours best, although the ending still needed some work.)

I felt lucky: my group were all professionals used to meeting deadlines and working to specification, and all displayed a remarkable lack of ego in pitching and listening to ideas. We packed up around 1am, feeling that any changes we made after that point were unlikely to be improvements. On the other hand, if the point was to experience a writers' room, we failed utterly: both Levine and Sunday panelist Jane Espenson (see her new Web series, Husbands) talked about the brutally competitive environment of many of the real-life versions. Others were less blessed by chemistry: one team wrangled until 3am before agreeing on a strategy, then spent the rest of the night writing their script and getting their copies made. Glassy-eyed, on Sunday they disagreed when asked individually about what went wrong: publicly, their appointed "showrunner" blamed himself for not leading effectively. I imagine them indelibly bonded by their shared suffering.

What happens at this event is catalysis. "You will learn a lot about yourselves," Levine said on that first morning. How do you respond when your best ideas are not good enough to be accepted? How do you take to the discipline of delivering jokes and breaking stories on deadline? How do you function under pressure as part of a team creative effort? Less personally, can you watch a performance and see, instead of the actors' skills, the successes and flaws in your script? Can you stay calm when the "studio executive" (played by Levine's business partner, Dan O'Day) produces a laundry list of complaints and winds up with, "Except for a couple of things I wouldn't change anything"? And, not in the syllabus, can you help Dan play practical jokes on Ken? By the end of the weekend, everyone is on a giddy adrenaline high, exacerbated in our case by the gigantic anime convention happening all around us at the same hotel. (Yes. The human-sized fluffy yellow chick getting on the elevator is real. You're not hallucinating from lack of sleep. Check.)

I found Levine's blog earlier this year after he got into cross-fire with the former sitcom star Roseanne Barr over Charlie Sheen's meltdown. His blog reminds me of William Goldman's books on screenwriting: the same combination of entertainment and education. I think of Goldman's advice every day in everything I write. Now, I will think of Levine's, too.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

November 11, 2011

The sentiment of crowds

Context is king.

Say to a human, "I'll meet you at the place near the thing where we went that time," and they'll show up at the right place. That's from the 1987 movie Broadcast News: Aaron (Albert Brooks) says it; cut to Jane (Holly Hunter), awaiting him at a table.

But what if Jane were a computer and what she wanted to know from Aaron's statement was not where to meet but how Aaron felt about it? This is the challenge facing sentiment analysis.

At Wednesday's Sentiment Analysis Symposium, the key question of context came up over and over again as the biggest challenge to the industry of people who claim that they can turn Tweets, blog postings, news stories, and other mass data sources into intelligence.

So context: Jane can parse "the place", "the thing", and "that time" because she has expert knowledge of her past with Aaron. It's an extreme example, but all human writing makes assumptions about the knowledge and understanding of the reader. Humans even use those assumptions to implement privacy in a public setting: Stephen Fry could retweet Aaron's words and still only Jane would find the cafe. If Jane is a large organization seeking to understand what people are saying about it and Aaron is 6 million people posting on Twitter, Jane can use sentiment analyzer tools to give a numerical answer. And numbers always inspire confidence...

My first encounter with sentiment analysis was this summer during Young Rewired State, when a team wanted to create a mood map of the UK comparing geolocated tweets to indices of multiple deprivation. This third annual symposium shows that this is a rapidly engorging industry, part PR, part image consultancy, and part artificial intelligence research project.

I was drawn to it out of curiosity, but also because it all sounds slightly sinister. What do sentiment analyzers understand when I say an airline lounge at Heathrow Terminal 4 "brings out my inner Sheldon"? What is at stake is not precise meaning - humans argue over the exact meaning of even the greatest communicators - but extracting good-enough meaning from high-volume data streams written by millions of not-monkeys.

What could possibly go wrong? This was one of the day's most interesting questions, posed by the consultant Meta Brown to representatives of the Red Cross, the polling organization Harris Interactive, and Paypal. Failure to consider the data sources and the industry you're in, said the Red Cross's Banafsheh Ghassemi. Her example was the period just after Hurricane Irene, when analyzing social media sentiment would find it negative. "It took everyday disaster language as negative," she said. In addition, because the Red Cross's constituency is primarily older, social media are less indicative than emails and call center records. For many organizations, she added, social media tend to skew negative.

Earlier this year, Harris Interactive's Carol Haney, who has had to kill projects when they failed to produce sufficiently accurate results for the client, told a conference, "Sentiment analysis is the snake oil of 2011." Now, she said, "I believe it's still true to some extent. The customer has a commercial need for a dial pointing at a number - but that's not really what's being delivered. Over time you can see trends and significant change in sentiment, and when that happens I feel we're returning value to a customer because it's not something they received before and it's directionally accurate and giving information." But very small changes over short time scales are an unreliable basis for making decisions.

"The difficulty in social media analytics is you need a good idea of the questions you're asking to get good results," says Shlomo Argamon, whose research work seems to raise more questions than it answers. Look at companies that claim to measure influence. "What is influence? How do you know you're measuring that or to what it correlates in the real world?" he asks. Even the notion that you can classify texts into positive and negative is a "huge simplifying assumption".

Argamon has been working on technology to discern from written text the gender and age - and perhaps other characteristics - of the author, a joint effort with his former PhD student Ken Bloom. When he says this, I immediately want to test him with obscure texts.
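For flavor, here is a hypothetical sketch of the kind of surface feature such profiling work builds on - relative frequencies of function words, which Argamon's published research has used as one signal of author characteristics. The word list is invented and this is an illustration of the feature-extraction step only, not his model:

```python
# Hypothetical feature extraction for authorship profiling: relative
# frequencies of function words. The word list is invented for illustration.
FUNCTION_WORDS = ["the", "a", "of", "with", "for", "she", "he", "not"]

def function_word_profile(text: str) -> dict[str, float]:
    """Frequency of each function word per 100 tokens of the text."""
    tokens = text.lower().split()
    n = max(len(tokens), 1)  # avoid division by zero on empty input
    return {w: 100 * tokens.count(w) / n for w in FUNCTION_WORDS}
```

A classifier trained on many labeled texts would then look for statistical differences in these frequencies - which is also why obscure or deliberately mannered texts, like the ones I wanted to test him with, are the hard cases.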

Is this stuff more or less creepy than online behavioral advertising? Han-Sheong Lai explained that Paypal uses sentiment analysis to try to glean the exact level of frustration of the company's biggest clients when they threaten to close their accounts. How serious are they? How much effort should the company put into dissuading them? Meanwhile Verint's job is to analyze those "This call may be recorded" calls. Verint's tools turn speech to text, and create color voiceprint maps showing the emotional high points. Click and hear the anger.

"Technology alone is not the solution," said Philip Resnik, summing up the state of the art. But, "It supports human insight in ways that were not previously possible." His talk made me ask: if humans obfuscate their data - for example, by turning off geolocation - will this industry respond by finding ways to put it all back again so the data will be more useful?

"It will be an arms race," he agrees. "Like spam."

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

October 14, 2011

Think of the children

Give me smut and nothing but! - Tom Lehrer

Sex always sells, which is presumably why this week's British headlines have been dominated by the news that the UK's ISPs are to operate an opt-in system for porn. The imaginary sales conversations alone are worth any amount of flawed reporting:

ISP Customer service: Would you like porn with that?

Customer: Supersize me!

Sadly, the reporting was indeed flawed. Cameron, it turns out, was merely saying that new customers signing up with the four major consumer ISPs would be asked if they want parental filtering. So much less embarrassing. So much less fun.

Even so, it gave reporters such as Violet Blue, at ZDNet UK, a chance to complain about the lack of transparency and accountability of filtering systems.

Still, the fact that so many people could imagine that it's technically possible to turn "Internet porn" on and off as if by flipping a switch is alarming. If it were that easy, someone would have a nice business by now selling strap-on subscriptions the way cable operators do for "adult" TV channels. Instead, filtering is just one of several options for which ISPs, Web sites, and mobile phone operators do not charge.

One of the great myths of our time is that it's easy to stumble accidentally upon porn on the Internet. That, again, is television, where idly changing channels on a set-top box can indeed land you on the kind of smut that pleased Tom Lehrer. On the Internet, even with safe search turned off, it's relatively difficult to find porn accidentally - though very easy to find on purpose. (Especially since the advent of the .xxx top-level domain.)

It is, however, very easy for filtering systems to remove non-porn sites from view, which is why I generally turn off filters like "Safe search" or anything else that will interfere with my unfettered access to the Internet. I need to know that legitimate sources of information aren't being hidden by overactive filters. Plus, if it's easy to stumble over pornography accidentally, then as a journalist writing about the Net and in general opposing censorship I should know that. I am better than average at constraining my searches so that they will retrieve only the information I really want, which is a definite bias in this minuscule sample of one. But I can safely say that the only time I encounter unwanted anything-like-porn is in display ads on some sites that assume their primary audience is young men.

Eli Pariser, whose The Filter Bubble: What the Internet is Hiding From You I reviewed recently for ZDNet UK, does not talk in his book about filtering systems intended to block "inappropriate" material. But surely porn filtering is a broad-brush subcase of exactly what he's talking about: automated systems that personalize the Net based on your known preferences by displaying content they already "think" you like at the expense of content they think you don't want. If the technology companies were as good at this as the filtering people would like us to think, this weekend's Singularity Summit would be celebrating the success of artificial intelligence instead of still looking 20 to 40 years out.

If I had kids now, would I want "parental controls"? No, for a variety of reasons. For one thing, I don't really believe the controls keep them safe. What keeps them safe is knowing they can ask their parents about material and people's behavior that upsets them so they can learn how to deal with it. The real world they will inhabit someday will not obligingly hide everything that might disturb their equanimity.

But more important, our children's survival in the future will depend on being able to find the choices and information that are hidden from view. Just as the children of 25 years ago should have been taught touch typing, today's children should be learning the intricacies of using search to find the unknown. If today's filters have any usefulness at all, it's as a way of testing kids' ability to think ingeniously about how to bypass them.

Because: although it's very hard to filter out only *exactly* the material that matches your individual definition of "inappropriate", it's very easy to block indiscriminately according to an agenda that cares only about what doesn't appear. Pariser worries about the control that can be exercised over us as consumers, citizens, voters, and taxpayers if the Internet is the main source of news and personalization removes the less popular but more important stories of the day from view. I worry that as people read and access only the material they already agree with our societies will grow more and more polarized with little agreement even on basic facts. Northern Ireland, where for a long time children went to Catholic or Protestant-owned schools and were taught that the other group was inevitably going to Hell, is a good example of the consequences of this kind of intellectual segregation. Or, sadly, today's American political debates, where the right and left have so little common basis for reasoning that the nation seems too polarized to solve any of its very real problems.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

September 9, 2011

The final countdown

The we-thought-it-was-dead specter of copyright term extension in sound recordings has done a Diabolique maneuver and been voted alive by the European Council. In a few days, the Council of Ministers could make it EU law because, as can happen under the inscrutable government structures of the EU, opposition has melted away.

At stake is the extension of copyright in sound recordings from 50 years to 70, something the Open Rights Group has been fighting since it was born. The push to extend it above 50 years has been with us for at least five years; originally the proposal was to take it to 95 years. An extension from 50 to 70 years is modest by comparison, but given the way these things have been going over the last 50 years, that would buy the recording industry 20 years in which to lobby for the 95 years they originally wanted, and then 25 years to lobby for the line to be moved further. Why now? A great tranche of commercially popular recordings is up for entry into the public domain: Elvis Presley's earliest recordings date to 1956, and The Beatles' first album came out in 1963; their first singles are 50 years old this year. It's not long after that to all the great rock records of the 1970s.

My fellow Open Rights Group advisory council member Paul Sanders has a concise little analysis up about what's wrong here. Basically, it's never jam today for the artists, but jam yesterday, today, and tomorrow for the recording companies. I have commented frequently on the fact that the more record companies are able to make nearly pure profit on their back catalogues whose sunk costs have long ago been paid, the more new, young artists are required to compete for their attention with an ever-expanding back catalogue. I like Sanders' language on this: "redistributive, from younger artists to older and dead ones".

In recent years, we've heard a lot of the mantra "evidence-based policy" from the UK government. So, in the interests of ensuring this evidence-based policy the UK government is so keen on, here is some. The good news is they commissioned it themselves, so it ought to carry a lot of weight with them. Right? Right.

There have been two major British government reports studying the future of copyright and intellectual property law generally in the last five years: the Gowers Review, published in 2006, and the Hargreaves report, commissioned in November 2010 and released in May 2011.

From Hargreaves:

Economic evidence is clear that the likely deadweight loss to the economy exceeds any additional incentivising effect which might result from the extension of copyright term beyond its present levels. This is doubly clear for retrospective extension to copyright term, given the impossibility of incentivising the creation of already existing works, or work from artists already dead.

Despite this, there are frequent proposals to increase term, such as the current proposal to extend protection for sound recordings in Europe from 50 to 70 or even 95 years. The UK Government assessment found it to be economically detrimental. An international study found term extension to have no impact on output.

And further:

Such an extension was opposed by the Gowers Review and by published studies commissioned by the European Commission.

Ah, yes, Gowers and its 54 recommendations, many or most of which have been largely ignored. (Government policy seems to have embraced "strengthening of IP rights, whether through clamping down on piracy" to the exclusion of things like "improving the balance and flexibility of IP rights to allow individuals, businesses, and institutions to use content in ways consistent with the digital age".)

To Gowers:

Recommendation 3: The European Commission should retain the length of protection on sound recordings and performers' rights at 50 years.

And:

Recommendation 4: Policy makers should adopt the principle that the term and scope of protection for IP rights should not be altered retrospectively.

I'd use the word "retroactive", myself, but the point is the same. Copyright is a contract with society: you get the right to exploit your intellectual property for some number of years, and in return after that number of years your work belongs to the society whose culture helped produce it. Trying to change an agreed contract retroactively usually requires you to show that the contract was not concluded in good faith, or that someone is in breach. Neither of those situations applies here, and I don't think these large companies with their in-house lawyers, many of whom participated in drafting prior copyright law, can realistically argue that they didn't understand the provisions. Of course, this recommendation cuts both ways: if we can't put Elvis's earliest recordings back into copyright, thereby robbing the public domain, we also can't shorten the copyright protection that applies to recordings created with the promise of 50 years' worth of protection.

This whole mess is a fine example of policy laundering: shopping the thing around until you either wear out the opposition or find sufficient champions. The EU, with its Hampton Court maze of interrelated institutions, could have been deliberately designed to facilitate this. You can write to your MP, or even your MEP - but the sad fact is that the shiny, new EU government is doing all this in old-style backroom deals.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

July 29, 2011

Name check

How do you clean a database? The traditional way - which I still experience from time to time from journalist directories - is that some poor schnook sits in an office and calls everyone on the list, checking each detail. It's an immensely tedious job, I'm sure, but it's a living.

The new, much cheaper method is to motivate the people in the database to do it themselves. A government can pass a law and pay benefits. Amazon expects the desire to receive the goods people have paid for to be sufficient. For a social network it's a little harder, yet Facebook has managed to get 750 million users to upload varying amounts of information. Google hopes people will do the same with Google+.

The emotional connections people make on social networks obscure their basic nature as databases. When you think of them in that light, and you remember that Google's chief source of income is advertising, suddenly Google's culturally dysfunctional decision to require real names on Google+ makes some sense. For an advertising company, a fuller, cleaner database is more valuable and functional. Google's engineers most likely do not think in terms of improving the company's ability to serve tightly targeted ads - but I'd bet the company's accountants and strategists do. The justification - that online anonymity fosters bad behavior - is likely a relatively minor consideration.

Yet it's the one getting the attention, despite the fact that many people seem confused about the difference between pseudonymity, anonymity, and throwaway identity. In the reputation-based economy the Net thrives on, this difference matters.

The best-known form of pseudonymity is the stage name, essentially a form of branding for actors, musicians, writers, and artists, who may have any of a number of motives for keeping their professional lives separate from their personal lives: privacy for themselves, their work mates, or their families, or greater marketability. More subtly, if you have a part-time artistic career and a full-time day job you may not want the two to mix: will people take you seriously as an academic psychologist if they know you're also a folksinger? All of those reasons for choosing a pseudonym apply on the Net, where everything is a somewhat public performance. Given the harassment some female bloggers report, is it any wonder they might feel safer using a pseudonym?

The important characteristic of pseudonyms, which they share with "real names", is persistence. When you first encounter someone like GrrlScientist, you have no idea whether to trust her knowledge and expertise. But after more than ten years of blogging, that name is a known quantity. As GrrlScientist writes about Google's shutting down her account, it is her "real-enough" name by any reasonable standard. What's missing is the link to a portion of her identity - the name on her tax return, or the one her mother calls her. So what?

Anonymity has long been contentious on the Net; the EU has often considered whether and how to ban it. At the moment, the driving justification seems to be accountability, in the hope that we can stop people from behaving like malicious morons, the phenomenon I like to call the Benidorm syndrome.

There is no question that people write horrible things in blog and news site comments pages, conduct flame wars, and engage in cyber bullying and harassment. But that behavior is not limited to venues where they communicate solely with strangers; every mailing list, even among workmates, has flame wars. Studies have shown that the cyber versions of bullying and harassment, like their offline counterparts, are most often perpetrated by people you know.

The more important downside of anonymity is that it enables people to hide, not their identity but their interests. Behind the shield, a company can trash its competitors, and those whose work has been criticized can make their defense look more robust by pretending to be disinterested third parties.

Against that is the upside. Anonymity protects whistleblowers acting in the public interest, and protesters defying an authoritarian regime.

We have little data to balance these competing interests. One bit we do have comes from an experiment with anonymity conducted years ago on the WELL, which otherwise has insisted on verifying every subscriber throughout its history. The lesson they learned, its conferencing manager, Gail Williams, told me once, was that many people wanted anonymity for themselves - but opposed it for others. I suspect this principle has very wide applicability, and it's why the US might, say, oppose anonymity for Bradley Manning but welcome it for Egyptian protesters.

Google is already modifying the terms of what is after all still a trial service. But the underlying concern will not go away. Google has long had a way to link Gmail addresses to behavioral data collected from those using its search engine, docs, and other services. It has always had some ability to perform traffic analysis on Gmail users' communications; now it can see explicit links between those pools of data and, increasingly, tie them to offline identities. This is potentially far more powerful than anything Facebook can currently offer. And unlike government databases, it's nice and clean, and cheap to maintain.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

July 15, 2011

Dirty digging

The late, great Molly Ivins warns (in Molly Ivins Can't Say That, Can She?) about the risk to journalists of becoming "power groupies" who identify more with the people they cover than with their readers. In the culture being exposed by the escalating phone hacking scandals the opposite happened: politicians and police became "publicity groupies" who feared tabloid wrath to such an extent that they identified with the interests of press barons more than those of the constituents they are sworn to protect. I put the apparent inconsistency between politicians' former acquiescence and their current baying for blood down to Stockholm syndrome: this is what happens when you hold people hostage through fear and intimidation for a few decades. When they can break free, oh, do they want revenge.

The consequences are many and varied, and won't be entirely clear for a decade or two. But surely one casualty must have been the balanced view of copyright frequently argued for in this column. Murdoch's media interests are broad-ranging. What kind of copyright regime do you suppose he'd like?

But the desire for revenge is a really bad way to plan the future, as I said (briefly) on Monday at the Westminster Skeptics.

For one thing, it's clearly wrong to focus on News International as if Rupert Murdoch and his hired help were the only bad apples. In the 2006 report What price privacy now? the Information Commissioner listed 30 publications caught in the illegal trade in confidential information. News of the World was only fifth; number one, by a considerable way, was the Daily Mail (the Observer was number nine). The ICO wanted jail sentences for those convicted of trading in data illegally, and called on private investigators' professional bodies to revoke or refuse licenses to PIs who breach the rules. Five years later, these are still good proposals.

Changing the culture of the press is another matter.

When I first began visiting Britain in the late 1970s, I found the tabloid press absolutely staggering. I began asking the people I met how the papers could do it.

"That's because *we* have a free press," I was told in multiple locations around the country. "Unlike the US." This was only a few years after The Washington Post backed Bob Woodward and Carl Bernstein's investigation of Watergate, so it was doubly baffling.

Tom Stoppard's 1978 play Night and Day explained a lot. It dropped competing British journalists into an escalating conflict in a fictitious African country. Over the course of the play, Stoppard's characters both attack and defend the tabloid culture.

"Junk journalism is the evidence of a society that has got at least one thing right, that there should be nobody with power to dictate where responsible journalism begins," says the naïve and idealistic new journalist on the block.

"The populace and the popular press. What a grubby symbiosis it is," complains the play's only female character, whose second marriage - "sex, money, and a title, and the parrots didn't harm it, either" - had been tabloid fodder.

The standards of that time now seem almost quaint. In the movie Starsuckers, filmmaker Chris Atkins fed fabricated celebrity stories to a range of tabloids. All were published. That documentary also showed illegal methods of obtaining information in action - in 2009, right around the time the Press Complaints Commission was publishing a report concluding, "there is no evidence that the practice of phone message tapping is ongoing".

Someone on Monday asked why US newspapers are better behaved despite First Amendment protection and less constraint by onerous libel laws. My best guess is fear of lawsuits. Conversely, Time magazine argues that Britain's libel laws have encouraged illegal information gathering: publication requires indisputable evidence. I'm not completely convinced: the libel laws are not new, and economics and new media are forcing change on press culture.

A lot of dangers lurk in the calls for greater press regulation. Phone hacking is illegal. Breaking into other people's computers is illegal. Enforce those laws. Send those responsible to jail. That is likely to be a better deterrent than any regulator could manage.

It is extremely hard to devise press regulations that don't enable cover-ups. For example, on Wednesday's Newsnight, the MP Louise Mensch, a member of the DCMS committee conducting the hearings, called for a requirement that politicians disclose all meetings with the press. I get it: expose too-cosy relationships. But whistleblowers depend on confidentiality, and the last thing we want is for politicians to become as difficult to access as tennis stars and have their contact with the press limited to formal press conferences.

Two other lessons can be derived from the last couple of weeks. The first is that you cannot assume that confidential data can be protected simply by access rules. The second is the importance of alternatives to commercial, corporate journalism. Tom Watson has criticized the BBC for not taking the phone hacking allegations seriously. But it's no accident that the trust-owned Guardian was the organization willing to take on the tabloids. There's a lesson there for the US, as the FBI and others prepare to investigate Murdoch and News Corp: keep funding PBS.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

June 10, 2011

The creepiness factor

"Facebook is creepy," said the person next to me in the pub on Tuesday night.

The woman across from us nodded in agreement and launched into an account of her latest foray onto the service. She had, she said, uploaded a batch of 15 photographs of herself and a friend. The system immediately tagged all of the photographs of the friend correctly. It then grouped the images of her and demanded to know, "Who is this?"

What was interesting about this particular conversation was that these people were not privacy advocates or techies; they were ordinary people just discovering their discomfort level. The sad thing is that Facebook will likely continue to get away with this sort of thing: it will say it's sorry, modify some privacy settings, and people will gradually get used to the convenience of having the system save them the work of tagging photographs.

In launching its facial recognition system, Facebook has done what many would have thought impossible: it has rolled out technology that just a few weeks ago *Google* thought was too creepy for prime time.

Wired UK has a set of instructions for turning tagging off. But underneath, the system will, I imagine, still recognize you. What records are kept of this underlying data and what mining the company may be able to do on them is, of course, not something we're told about.

Facebook has had to rein in new elements of its service so many times now - the Beacon advertising platform, the many revamps to its privacy settings - that the company's behavior is beginning to seem like a marketing strategy rather than a series of bungling missteps. The company can't be entirely privacy-deaf; it numbers among its staff the open rights advocate and former MP Richard Allan. Is it listening to its own people?

If it's a strategy it's not without antecedents. Google, for example, built its entire business without TV or print ads. Instead, every so often it would launch something so cool everyone wanted to use it that would get it more free coverage than it could ever have afforded to pay for. Is Facebook inverting this strategy by releasing projects it knows will cause widely covered controversy and then reining them back in only as far as the boundary of user complaints? Because these are smart people, and normally smart people learn from their own mistakes. But Zuckerberg, whose comments on online privacy have approached arrogance, is apparently justified, in that no matter what mistakes the company has made, its user base continues to grow. As long as business success is your metric, until masses of people resign in protest, he's golden. Especially when the IPO moment arrives, expected to be before April 2012.

The creepiness factor has so far done nothing to hurt its IPO prospects - which, in the absence of an actual IPO, seem to be rubbing off on the other social media companies going public. Pandora (net loss last quarter: $6.8 million) has even increased the number of shares on offer.

One thing that seems to be getting lost in the rush to buy shares - LinkedIn popped to over $100 on its first day, and has now settled back to $72 and change (for a price/earnings ratio of 1076) - is that buying first-day shares isn't what it used to be. Even during the millennial technology bubble, buying shares at the launch of an IPO was approximately like joining a queue at midnight to buy the new Apple whizmo on the first day, even though you know you'll be able to get it cheaper and debugged in a couple of months. Anyone could have gotten much better prices on Amazon shares for some months after that first-day bonanza, for example (and either way, in the long term, you'd have profited handsomely).

Since then, however, a new game has arrived in town: private exchanges, where people who meet a few basic criteria for being able to afford to take risks, trade pre-IPO shares. The upshot is that even more of the best deals have already gone by the time a company goes public.

In no case is this clearer than the Groupon IPO, about which hardly anyone has anything good to say. Investors buying in would be the greater fools; a co-founder's past raises questions, and its business model is not sustainable.

Years ago, Roger Clarke predicted that the then brand-new concept of social networks would inevitably become data abusers simply because they had no other viable business model. As powerful as the temptation to do this has been while these companies have been growing, it seems clear the temptation can only become greater when they have public markets and shareholders to answer to. New technologies are going to exacerbate this: performing accurate facial recognition on user-uploaded photographs wasn't possible when the first pictures were being uploaded. What capabilities will these networks be able to deploy in the future to mine and match our data? And how much will they need to do it to keep their profits coming?


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


May 20, 2011

The world we thought we lived in

If one thing is more annoying than another, it's the fantasy technology on display in so many TV shows. "Enhance that for me!" barks an investigator. And, obediently, his subordinate geek/squint/nerd pushes a button or few, a line washes over the blurry image on screen, and now he can read the maker's mark on a pill in the hand of the target subject that was captured by a distant CCTV camera. The show 24 ended for me 15 minutes into season one, episode one, when Kiefer Sutherland's Jack Bauer, trying to find his missing daughter, thrust a piece of paper at an underling and shouted, "Get me all the Internet passwords associated with that telephone number!" Um...

But time has moved on, and screenwriters are more likely to have spent their formative years online and playing computer games, and so we have arrived at The Good Wife, which gloriously wrapped up its second season on Tuesday night (in the US; in the UK the season is still winding to a close on Channel 4). The show is a lot of things: a character study of an archetypal humiliated politician's wife (Alicia Florrick, played by Julianna Margulies) who rebuilds her life after her husband's betrayal and corruption scandal; a legal drama full of moral murk and quirky judges (Carob chip?); a political drama; and, not least, a romantic comedy. The show is full of interesting, layered men and great, great women - some of them mature, powerful, sexy, brilliant women. It is also the smartest show on television when it comes to life in the time of rapid technological change.

When it was good, in its first season, Gossip Girl cleverly combined high school mean girls with the citizen reportage of TMZ to produce a world in which everyone spied on everyone else by sending tips, photos, and rumors to a Web site, which picked the most damaging moments to publish them and blast them to everyone's mobile phones.

The Good Wife goes further to exploit the fact that most of us, especially those old enough to remember life before CCTV, go about our lives forgetting that we leave a trail everywhere. Some are, of course, old staples of investigative dramas: phone records, voice messages, ballistics, and the results of a good, old-fashioned break-in-and-search. But some are myth-busting.

One case (S2e15, "Silver Bullet") hinges on the difference between the compressed, digitized video copy and the original analog video footage: dropped frames change everything. A much earlier case (S1e06, "Conjugal") hinges on eyewitness testimony; despite a slightly too-pat resolution (I suspect now, with more confidence, it might have been handled differently), the show does a textbook job of demonstrating the flaws in human memory and their application to police line-ups. In a third case (S1e17, "Heart"), a man faces the loss of his medical insurance because of a single photograph posted to Facebook showing him smoking a cigarette. And the disgraced husband's (Peter Florrick, played by Chris Noth) attempt to clear his own name comes down to a fancy bit of investigative work capped by camera footage from an ATM in the Cayman Islands that the litigator is barely technically able to display in court. As entertaining demonstrations and dramatizations of the stuff net.wars talks about every week and the way technology can be both good and bad - Alicia finds romance in a phone tap! - these could hardly be better. The stuffed lion speaker phone (S2e19, "Wrongful Termination") is just a very satisfying cherry topping of technically clever hilarity.

But there's yet another layer, surrounding the season two campaign mounted to get Florrick elected back into office as State's Attorney: the ways that technology undermines as well as assists today's candidates.

"Do you know what a tracker is?" Peter's campaign manager (Eli Gold, played by Alan Cumming) asks Alicia (S2e01, "Taking Control"). Answer: in this time of cellphones and YouTube, unpaid political operatives follow opposing candidates' family and friends to provoke and then publish anything that might hurt or embarrass the opponent. So now: Peter's daughter (Makenzie Vega) is captured praising his opponent and ham-fistedly trying to defend her father's transgressions ("One prostitute!"). His professor brother-in-law's (Dallas Roberts) in-class joke that the candidate hates gays is live-streamed over the Internet. Peter's son (Graham Phillips) and a manipulative girlfriend (Dreama Walker), unknown to Eli, create embarrassing, fake Facebook pages in the name of the opponent's son. Peter's biggest fan decides to (he thinks) help by posting lame YouTube videos apparently designed to alienate the very voters Eli's polls tell him to attract. (He's going to post one a week; isn't Eli lucky?) Polling is old hat, as are rumors leaked to newspaper reporters; but today's news cycle is 20 minutes and can we have a quote from the candidate? No wonder Eli spends so much time choking and throwing stuff.

All of this fits together because the underlying theme of all parts of the show is control: control of the campaign, the message, the case, the technology, the image, your life. At the beginning of season one, Alicia has lost all control over the life she had; by the end of season two, she's in charge of her new one. Was a camera watching in that elevator? I guess we'll find out next year.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

May 13, 2011

Lay down the cookie

British Web developers will be spending the next couple of weeks scrambling to meet the May 26 deadline, after which new legislation requires users to consent before a cookie can be placed on their computers. The Information Commissioner's guidelines allow a narrow exception for cookies that are "strictly necessary for a service requested by the user"; the example given is a cookie used to remember an item the user has chosen to buy so it's there when they go to check out. Won't this be fun?

Normally, net.wars comes down on the side of privacy even when it's inconvenient for companies, but in this case we're prepared to make at least a partial exception. It's always been a little difficult to understand the hatred and fear with which some people regard the cookie. Not the chocolate chip cookie, which of course we know is everything that is good, but the bits of data that reside on your computer to give Web pages the equivalent of memory. Cookies allow a server to assemble a page that remembers what you've looked at, where you've been, and which gewgaw you've put into your shopping basket. At least some of this can be done in other ways such as using a registration scheme. But it's arguably a greater invasion of privacy to require users to form a relationship with a Web site they may only use once.

The single-site use of cookies is, or ought to be, largely uncontroversial. The more contentious usage is third-party cookies, used by advertising agencies to track users from site to site with the goal of serving up targeted, rather than generic, ads. It's this aspect of cookies that has most exercised privacy advocates, and most browsers provide the ability to block cookies - all, third-party, or none, with a provision to make exceptions.
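For the technically curious, the distinction is easy to see at the HTTP level: a cookie is just a name-value pair sent in a Set-Cookie response header, with a few attributes controlling where and how long it lives. Here is a minimal sketch using only Python's standard library; the cookie names and the ad-server domain are invented for illustration.

```python
# Sketch of first-party vs. third-party cookies at the HTTP level.
# All names ("session_id", "tracker_id", ".ads.example") are illustrative.
from http.cookies import SimpleCookie

# A first-party cookie: the shop's own server remembers your basket.
# This is the kind the ICO's "strictly necessary" exception covers.
first_party = SimpleCookie()
first_party["session_id"] = "abc123"
first_party["session_id"]["path"] = "/"

# A third-party cookie: set by an ad server whose content is embedded
# in many sites, letting it recognize the same browser across all of them.
third_party = SimpleCookie()
third_party["tracker_id"] = "user-98765"
third_party["tracker_id"]["domain"] = ".ads.example"
third_party["tracker_id"]["expires"] = "Fri, 01 Jan 2038 00:00:00 GMT"

# What actually travels in the HTTP response headers:
print(first_party.output())  # Set-Cookie: session_id=abc123; Path=/
print(third_party.output())
```

The Domain and expires attributes in the second example are what make the tracking variety contentious: they let one server recognize the same browser across every site that embeds its ads, potentially for years.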

The new rules, however, seem overly broad.

In the EU, the anti-cookie effort began in 2001 (the second-ever net.wars), seemed to go quiet, and then revived in 2009, when I called the legislation "masterfully stupid". That piece goes into some detail about the objections to the anti-cookie legislation, so we won't review that here. At the time, reader email suggested that perhaps making life unpleasant for advertisers would force browser manufacturers to design better privacy controls. 'Tis a consummation devoutly to be wished, but so far it hasn't happened, and in the meantime that legislation has become an EU directive and now UK law.

The chief difference is moving from opt-out to opt-in: users must give consent for cookies to be placed on their machines; the chief flaw is banning a technology instead of regulating undesirable actions and effects. Besides the guidelines above, the ICO refers people to All About Cookies for further information.

Pete Jordan, a Hull-based Web developer, notes that when you focus legislation on a particular technology, "People will find ways around it if they're ingenious enough, and if you ban cookies or make it awkward to use them, then other mechanisms will arise." Besides, he says, "A lot of day-to-day usage is to make users' experience of Web sites easier, more friendly, and more seamless. It's not life-threatening or vital, but from the user's perception it makes a difference if it disappears." Cookies, for example, are what provide the trail of "breadcrumbs" at the top of a Web page to show you the path by which you arrived at that page so you can easily go back to where you were.

"In theory, it should affect everything we do," he says of the legislation. A possible workaround may be to embed tokens in URLs, a strategy he says is difficult to manage and raises the technical barrier for Web developers.

The US, where competing anti-tracking bills are under consideration in both houses of Congress, seems to be taking a somewhat different tack in requiring Web sites to honor the choice if consumers set a "Do Not Track" flag. Expect much more public debate about the US bills than there has been in the EU or UK. See, for example, the strong insistence by What Would Google Do? author Jeff Jarvis that media sites in particular have a right to impose any terms they want in the interests of their own survival. He predicts paywalls everywhere and the collapse of media economics. I think he's wrong.

The thing is, it's not a fair contest between users and Web site owners. It's more or less impossible to browse the Web with all cookies turned off: the complaining pop-ups are just too frequent. But targeting the cookie is not the right approach. There are many other tracking technologies, invisible to consumers, that may have both good and bad effects - even Web bugs are used helpfully some of the time. (The irony is, of course, regulating the cookie while allowing increases in both offline and online surveillance by police and government agencies.)

Requiring companies to behave honestly and transparently toward their customers would have been a better approach for the EU; one hopes it will work better in the US.



May 6, 2011

Double exposure

So finally we know. Ever since Wikileaks began releasing diplomatic cables, copyright activists have been waiting to see if the trove would expose undue influence on national laws. And this week there it was: a 2005 cable from the US Embassy in New Zealand requesting $386,158 to fund start-up costs and the first year of an industry-backed intellectual property enforcement unit and a 2009 cable offering "help" when New Zealand was considering a "three-strikes" law. Much, much more on this story has been presented and analyzed by the excellent Michael Geist, who also notes similar US lobbying pressure on Canada to "improve" its "lax" copyright laws.

My favorite is this bit, excerpted from the cable recounting an April 2007 meeting between Embassy officials and Geist himself:

His acknowledgement that Canada is a net importer of copyrighted materials helps explain the advantage he would like to hold on to with a weaker Canadian IPR protection regime. His unvoiced bias against the (primarily U.S. based) entertainment industry also reflects deeply ingrained Canadian preferences to protect and nurture homegrown artists.

In other words, Geist's disagreement with US copyright laws is due to nationalist bias, rather than deeply held principles. I wonder how they explain to themselves the very similar views of such diverse Americans as MacArthur award winner Pamela Samuelson, John Perry Barlow, and Lawrence Lessig. Lessig, in fact, got so angry over the US's legislative expansion of copyright that he founded a movement for Congressional reform, later expanding into a Harvard Law School center to research broader questions of ethics.

It's often said that a significant flaw in the US Constitution is that it didn't - couldn't, because they didn't exist yet - take account of the development of multinational corporations. They have, of course, to answer to financial regulations, legal obligations covering health and safety, and public opinion, but in many areas concerning the practice of democracy there is very little to rein them in. They can limit their employees' freedom of speech, for example, without ever falling afoul of the First Amendment, which, contrary to often-expressed popular belief, limits only the power of Congress in this area.

There is also, as Lessig pointed out in his first book, Code: and Other Laws of Cyberspace, no way to stop private companies from making and implementing technological decisions that may have anti-democratic effects. Lessig's example at the time was AOL, which hard-coded a limit of 23 participants per chat channel; try staging a mass protest under those limits. Today's better example might be Facebook, which last week was accused of unfairly deleting the profiles of 51 anti-cuts groups and activists. (My personal guess is that Facebook's claim to have simply followed its own rules is legitimate; the better question might be who supplied Facebook with the list of profiles and why.) Whether or not Facebook is blameless on this occasion, there remains a legitimate question: at what point does a social network become so vital a part of public life that the rules it implements and the technological decisions it makes become matters of public policy rather than questions for it to consider on its own? Facebook, like almost all of the biggest Internet companies, is a US corporation, with its mores and internal culture largely shaped by its home country.

We have often accused large corporate rights holders of being the reason why we see the same proposals for tightening and extending copyright popping up all over the world in countries whose values differ greatly and whose own national interests are not necessarily best served by passing such laws. More recently written constitutions could consider such influences. To the best of my knowledge they haven't, although arguably this is less of an issue in places that aren't headquarters to so many of them and where they are therefore less likely to spend large amounts backing governments likely to be sympathetic to their interests.

What Wikileaks has exposed instead is the unpleasant specter of the US, which likes to think of itself as spreading democracy around the world, behaving internationally in a profoundly anti-democratic way. I suppose we can only be grateful they haven't sent Geist and other non-US copyright reform campaigners exploding cigars. Change Congress, indeed: what about changing the State Department?

It's my personal belief that the US is being short-sighted in pursuing these copyright policies. Yes, the US is currently the world's biggest exporter of intellectual property, especially in, but not limited to, the area of entertainment. But that doesn't mean it always will be. It is foolish to think that down the echoing corridors of time (to borrow a phrase from Jean Kerr) the US will never become a net importer of intellectual property. It is sheer fantasy - even racism - to imagine that other countries cannot write innovative software that Americans want to use or produce entertainment that Americans want to enjoy. Even if you dispute the arguments made by campaigning organizations such as the Electronic Frontier Foundation and the Open Rights Group that laws like "three strikes" unfairly damage the general public, it seems profoundly stupid to assume that the US will always enjoy the intellectual property hegemony it has now.

One of these days, the US policies exposed in these cables are going to bite it in the ass.



April 22, 2011

Applesauce

Modern life is full of so many moments when you see an apparently perfectly normal person doing something that not so long ago was the clear sign of a crazy person. They're walking down the street talking to themselves? They're *on the phone*. They think the inanimate objects in their lives are spying on them? They may be *right*.

Last week's net.wars ("The open zone") talked about the difficulty of finding the balance between usability, on the one hand, and giving users choice, flexibility, and control, on the other. And then, as if to prove this point, along comes Apple and the news that the iPhone has been storing users' location data, perhaps permanently.

The story emerged this week when two researchers at O'Reilly's Where 2.0 conference presented an open-source utility they'd written to allow users to get a look at the data the iPhone was saving. But it really began last year, when Alex Levinson discovered the stored location data as part of his research on Apple forensics. Based on his months of studying the matter, Levinson contends that it's incorrect to say that Apple is gathering this data: rather, the device is gathering the data, storing it, and backing it up when you sync your phone. Of course, if you sync your phone to Apple's servers, then the data is transferred to your account - and it is also migrated when you purchase a new iPhone or iPad.

So the news is not quite as bad as it first sounded: your device is spying on you, but it's not telling anybody. However: the data is held in unencrypted form and appears never to expire, and this raises a whole new set of risks about the devices that no one had really focused on until now.

A few minutes after the story broke, someone posted on Twitter that they wondered how many lawyers handling divorce cases were suddenly drafting subpoenas for copies of this file from their soon-to-be-exes' iPhones. Good question (although I'd have phrased it instead as how many script ideas the wonderful, tech-savvy writers of The Good Wife are pitching involving forensically recovered location data). That is definitely one sort of risk; another, ZDNet's Adrian Kingsley-Hughes points out, is that the geolocation may be wildly inaccurate, creating a false picture that may still be very difficult to explain, either to a spouse or to law enforcement, who, as Declan McCullagh writes, know about and are increasingly interested in accessing this data.
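Part of what makes the file so attractive to those lawyers is that, by the researchers' account, it is an ordinary unencrypted SQLite database, readable with a few lines of standard-library code. The sketch below builds a toy stand-in to show the idea; the table and column names are assumptions modelled on what was reported, not a verified copy of Apple's schema, and the coordinates are fabricated.

```python
# Sketch: why an unencrypted SQLite location log is trivially mined.
# Table/column names are assumed from press reports; data is fabricated.
import sqlite3

db = sqlite3.connect(":memory:")  # stand-in for the real on-device file
db.execute(
    "CREATE TABLE CellLocation (Timestamp REAL, Latitude REAL, Longitude REAL)"
)
db.executemany(
    "INSERT INTO CellLocation VALUES (?, ?, ?)",
    [
        (309300000.0, 41.8781, -87.6298),  # fabricated sample points
        (309386400.0, 41.8827, -87.6233),
    ],
)

# No password, no decryption step: anyone holding a copy of the file -
# a spouse, a lawyer with a subpoena, a thief - can replay the owner's
# movements in timestamp order.
for ts, lat, lon in db.execute(
    "SELECT Timestamp, Latitude, Longitude FROM CellLocation ORDER BY Timestamp"
):
    print(ts, lat, lon)
```

Encrypting the file, or simply expiring old rows, would close off most of these risks; that neither happens is the design choice at issue.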

There are a bunch of other obvious privacy things to say about this, and Privacy International has helpfully said them in an open letter to Steve Jobs.

"Companies need openness and procedures," PI's executive director, Simon Davies, said yesterday, comparing Apple's position today to Google's a couple of months before the WiFi data-sniffing scandal.

The reason, I suspect, that so many iPhone users feel so shocked and betrayed is that Apple's attention to the details of glossy industrial design and easy-to-understand user interfaces leads consumers to cuddle up to Apple in a way they don't to Microsoft or Google. I doubt Google will get nearly as much anger directed at it for the news that Android phones also collect location data (an Android phone saves only the last 50 mobile masts and 200 WiFi networks). In either event, the key is transparency: when you post information on Twitter or Facebook about your location or turn on geo-tagging you know you're doing it. In this case, the choice is not clear enough for users to understand what they've agreed to.

The question is: how best can consumers be enabled to make informed decisions? Apple's current method - putting a note saying "Beware of the leopard" at the end of a 15,200-word set of terms and conditions (which are in any case drafted by the company's lawyer to protect the company, not to serve consumers) that users agree to when they sign up for iTunes - is clearly inadequate. It's been shown over and over again that consumers hate reading privacy policies, and you have only to look at Facebook's fumbling attempts to embed these choices in a comprehensible interface to realize that the task is genuinely difficult. This is especially true because, unlike the issue of user-unfriendly systems in the early 1990s, it's not particularly in any of these companies' interests to solve this intransigent and therefore expensive problem. Make it easy for consumers to opt out and they will, hardly an appetizing proposition for companies supported in whole or in part by advertising.

The answer to the question, therefore, is going to involve a number of prongs: user interface design, regulation, contract law, and industry standards, both technical and practical. The key notion, however, is that it should be feasible - even easy - for consumers to tell what information gathering they're consenting to. The most transparent way of handling that is to make opting out the default, so that consumers must take a positive action to turn these things on.

You can say - as many have - that this particular scandal is overblown. But we're going to keep seeing dust-ups like this until industry practice changes to reflect our expectations. Apple, so sensitive to the details of industrial design that will compel people to yearn to buy its products, will have to develop equal sensitivity for privacy by design.



April 8, 2011

Brought to book

JK Rowling is seriously considering releasing the Harry Potter novels as ebooks, while Amanda Hocking, who's sold a million or so ebooks, has signed a $2 million contract with St. Martin's Press. In the same week. It's hard not to conclude that ebooks are finally coming of age.

And in many ways this is a good thing. The economy surrounding the Kindle, Barnes and Noble's Nook, and other such devices is allowing more than one writer to find an audience for works that mainstream publishers might have ignored. I do think hard work and talent will usually out, and it's hard to believe that Hocking would not have found herself a good career as a writer via the usual routine of looking for agents and publishers. She would very likely have many fewer books published at this point, and probably wouldn't be in possession of the $2 million it's estimated she's made from ebook sales.

On the other hand, assuming she had made at least a couple of book sales by now, she might be much more famous: her blog posting explaining her decision notes that a key factor is that she gets a steady stream of complaints from would-be readers that they can't buy her books in stores. She expects to lose money on the St. Martin's deal compared to what she'd make from self-publishing the same titles. To fans of disintermediation, of doing away with gatekeepers and middle men and allowing artists to control their own fates and interact directly with their audiences, Hocking is a self-made hero.

And yet...the future of ebooks may not be so simply rosy.

This might be the moment to stop and suggest reading a little background on book publishing from the smartest author I know on the topic, science fiction writer Charlie Stross. In a series of blog postings he's covered common misconceptions about publishing, why the Kindle's 2009 UK launch was bad news for writers, and misconceptions about ebooks. One of Stross's central points: epublishing platforms are not owned by publishers but by consumer electronics companies - Apple, Sony, Amazon.

If there's one thing we know about the Net and electronic media generally it's that when the audience for any particular new medium - Usenet, email, blogs, social networks - gets to be a certain size it attracts abuse. It's for this reason that every so often I argue that the Internet does not scale well.

In a fascinating posting on Patrick and Theresa Nielsen-Hayden's blog Making Light, Jim Macdonald notes the case of Canadian author S K S Perry, who has been blogging on LiveJournal about his travails with a thief. Perry, having had no luck finding a publisher for his novel Darkside, had posted it for free on his Web site, where a thief copied it and issued a Kindle edition. Macdonald links this sorry tale (which seems now to have reached a happy-enough ending) with postings from Laura Hazard Owen and Mike Essex that predict a near future in which we are awash in recycled ebook...spam. As all three of these writers point out, there is no system in place to do the kind of copyright/plagiarism checking that many schools have implemented. The costs are low; the potential for recycling content vast; and the ease of gaming the ratings system extraordinary. And either way, the ebook retailer makes money.

Macdonald's posting primarily considers this future with respect to the challenge for authors to be successful*: how will good books find audiences if they're tiny islands adrift in a sea of similar-sounding knock-offs and crap? A situation like that could send us all scurrying back into the arms of people who publish on paper. That wouldn't bother Amazon-the-bookseller; Apple and others without a stake in paper publishing are likely to care more (and promising authors and readers due care and diligence might help them build a better, differentiated ebook business).

There is a mythology that those who - like the Electronic Frontier Foundation or the Open Rights Group - oppose the extension and tightening of copyright are against copyright. This is not the case: very few people want to do away with copyright altogether. What most campaigners in this area want is a fairer deal for all concerned.

This week the issue of term extension for sound recordings in the EU revived when Denmark changed tack and announced it would support the proposals. It's long been my contention that musicians would be better served by changes in the law that would eliminate some of the less fair terms of typical contracts, that would provide for the reversion of rights to musicians when their music goes out of commercial availability, and that would alter the balance of power, even if only slightly, in favor of the musicians.

This dystopian projected future for ebooks is a similar case. It is possible to be for paying artists and even publishers and still be against the imposition of DRM and the demonization of new technologies. This moment, where ebooks are starting to kick into high gear, is the time to find better ways to help authors.

*Successful: an author who makes enough money from writing books to continue writing books.


April 1, 2011

Equal access

It is very, very difficult to understand the reasoning behind the not-so-secret plan to institute Web blocking. In a letter to the Open Rights Group (http://www.openrightsgroup.org/blog/2011/minister-confirms-voluntary-site-blocking-discussions), Ed Vaizey, the minister for culture, communications, and creative industries, confirmed that such a proposal emerged from a workshop to discuss "developing new ways for people to access content online". (Orwell would be so proud.)

We fire up Yes, Minister once again to remind everyone of the four characteristics of proposals ministers like: quick, simple, popular, cheap. Providing the underpinnings of Web site blocking is not likely to be very quick, and it's debatable whether it will be cheap. But it certainly sounds simple, and although it's almost certainly not going to be popular among the 7 million people the government claims engage in illegal file-sharing - a number PC Pro has done a nice job of dissecting - it's likely to be popular with the people Vaizey seems to care most about, rights holders.

The four opposing kiss-of-death words are: lengthy, complicated, expensive, and either courageous or controversial, depending how soon the election is. How to convince Vaizey that it's these four words that apply and not the other four?

Well, for one thing, it's not going to be simple, it's going to be complicated. Web site blocking is essentially a security measure. You have decided that you don't want people to have access to a particular source of data, and so you block their access. Security is, as we know, not easy to implement and not easy to maintain. Security, as Bruce Schneier keeps saying, is a process, not a product. It takes a whole organization to implement the much more narrowly defined IWF system. What kind of infrastructure will be required to support the maintenance and implementation of a block list to cover copyright infringement? Self-regulatory, you say? Where will the block list, currently thought to be about 100 sites, come from? Who will maintain it? Who will oversee it to ensure that it doesn't include "innocent" sites? ISPs have other things to do, and other than limiting or charging for the bandwidth consumption of their heaviest users (who are not all file sharers by any stretch) they don't have a dog in this race. Who bears the legal liability for mistakes?

The list is most likely to originate with rights holders, who, because they have shown over most of the last 20 years that they care relatively little if they scoop innocent users and sites into the net alongside infringing ones, no one trusts to be accurate. Don't the courts have better things to do than adjudicate what percentage of a given site's traffic is copyright-infringing and whether it should be on a block list? Is this what we should be spending money on in a time of austerity? Mightn't it be...expensive?

Making the whole thing even more complicated is the obvious (to anyone who knows the Internet) fact that such a block list will start a new arms race - according to Torrentfreak, it already has.

And yet another wrinkle: among the blocking targets are cyberlockers. Yet this is a service that, like search, is going mainstream: Amazon.com has just launched such a service, which it calls Cloud Drive and for which it retains the right to police rather thoroughly. Encrypted files, here we come.

At least one ISP has already called the whole idea expensive, ineffective, and rife with unintended consequences.

There are other obvious arguments, of course. It opens the way to censorship. It penalizes innocent uses of technology as well as infringing ones; torrent search sites typically have a mass of varied material and there are legitimate reasons to use torrenting technology to distribute large files. It will tend to add to calls to spy on Internet users in more intrusive ways (as Web blocking fails to stop the next generation of file-sharing technologies). It will tend to favor large (often American) services and companies over smaller ones. Google, as IsoHunt told the US Court of Appeals two weeks ago, is the largest torrent search engine. (And, of course, Google has other copyright troubles of its own; last week the court rejected the Google Books settlement.)

But the sad fact is that although these arguments are important they're not a good fit if the main push behind Web blocking is an entrenched belief that the only way to secure economic growth is to extend and tighten copyright while restricting access to technologies and sites that might be used for infringement. Instead, we need to show that this entrenched belief is wrong.

We do not block the roads leading to car boot sales just because sometimes people sell things at them whose provenance is cloudy (at best). We do not place levies on the purchase of musical instruments because someone might play copyrighted music on them. We should not remake the Internet - a medium to benefit all of society - to serve the interests of one industrial group. It would make more sense to put the same energy and financial resources into supporting the games industry, which, as Tom Watson (Lab - West Bromwich East) has pointed out, has great potential to lift the British economy.


March 25, 2011

Return to the red page district

This week's agreement to create a .xxx generic top-level domain (generic in the sense of not being identified with a particular country) seems like a quaint throwback. Ten or 15 years ago it might have mattered. Now, for all the stories rehashing the old controversies, it seems to be largely irrelevant to anyone except those who think they can make some money out of it. How can it be a vector for censorship if there is no prohibition on registering pornography sites elsewhere? How can it "validate" the porn industry any more than printers and film producers did? Honestly, if it didn't have sex in the title, who would care?

I think it was about 1995 when a geekish friend said, probably at the Computers, Freedom, and Privacy conference, "I think I have the solution. Just create a top-level domain just for porn."

It sounded like a good idea at the time. Many of the best ideas are simple - with a kind of simplicity mathematicians like to praise with the term "elegant". Unfortunately, many of the worst ideas are also simple - with a kind of simplicity we all like to diss with the term "simplistic". Which this is depends to some extent on when you're making the judgement.

In 1995, the sense was that creating a separate pornography domain would provide an effective alternative to broad-brush filtering. It was the era of Time magazine's Cyberporn cover story, which Netheads thoroughly debunked, and of the run-up to the passage of the Communications Decency Act in 1996. The idea that children would innocently stumble upon pornography was entrenched and not wholly wrong. At that time, as PC Magazine points out while outlining the adult entertainment industry's objections to the new domain, a lot of Web surfing was done by guesswork, which is how the domain whitehouse.com became famous.

A year or two later, I heard that one of the problems was that no one wanted to police domain registrations. Sure. Who could afford the legal liability? Besides, limiting who could register what in which domain was not going well: .com, which was intended to be for international commercial organizations, had become the home for all sorts of things that didn't fit under that description, while the .us country code domain had fallen into disuse. Even today, with organizations controlling every top-level domain, the rules keep having to adapt to user behavior. Basically, the fewer people interested in registering under your domain the more likely it is that your rules will continue to work.

No one has ever managed to settle - again - the question of what the domain name system is for, a debate that's as old as the system itself: its inventor, Paul Mockapetris, still carries the scars of the battles over whether to create .com. (If I remember correctly, he was against it, but finally gave in on the basis of: "What harm can it do?") Is the domain name system a directory, a set of mnemonics, a set of brands/labels, a zoning mechanism, or a free-for-all? ICANN began its life, in part, to manage the answers to this particular controversy; many long-time watchers don't understand why it's taken so long to expand the list of generic top-level domains. Fifteen years ago, finding a consensus and expanding the list would have made a difference to the development of the Net. Now it simply does not matter.

I've written before now that the domain name system has faded somewhat in importance as newer technologies - instant messaging, social networks, iPhone/iPad apps - bypass it altogether. And that is true. When the DNS was young, it was a perfect fit for the Internet applications of the day for which it was devised: Usenet, Web, email, FTP, and so on. But the domain name system enables email and the Web, which are typically the gateways through which people make first contact with those services (you download the client via the Web, email your friend for his ID, use email to verify your account).
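The resolution step the DNS performs - turning a memorable name into the numbered address machines actually use - can be sketched in a few lines of Python. This is a minimal illustration, not tied to any particular service; "localhost" is used so the example resolves locally without a network connection:

```python
import socket

# Resolve a hostname to its IP addresses - the job the DNS does
# behind the scenes every time you send an email or load a Web page.
def resolve(hostname):
    # getaddrinfo returns (family, type, proto, canonname, sockaddr)
    # tuples; we keep only the distinct IP addresses.
    infos = socket.getaddrinfo(hostname, 80, proto=socket.IPPROTO_TCP)
    return sorted({sockaddr[0] for *_, sockaddr in infos})

print(resolve("localhost"))  # e.g. ['127.0.0.1'], possibly with '::1'
```

Every gateway service mentioned above - email client, Web browser, app store - quietly calls something like this before it can connect to anything.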

The rise of search engines - first Altavista, then primarily Google - did away with much of consumers' need for a directory. Also a factor was branding: businesses wanted memorable domain names they could advertise to their customers. By now, probably most people don't bother to remember more than a tiny handful of domain names - Google, Facebook, perhaps one or two more. Anything else they either put into a search engine or get from either a bookmark or, more likely, their browser history.

Then came sites like Facebook, which take an approach akin to CompuServe in the old days or mobile networks now: they want to be your gateway to everything online. (Facebook is going to stream movies now, in competition with Netflix!) If they succeed, would it matter if you had - once - to teach your browser a user-unfriendly long, numbered address?

It is in this sense that the domain name system competes with Google and Facebook as the gateway to the Net. Of all the potential gateways, it is the only one that is intended as a public resource rather than a commercial company. That has to matter, and we should take seriously the threat that all the Net's entrances could become owned by giant commercial interests. But .xxx missed its moment to make history.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

January 28, 2011

Stuffed

"You don't need this old math work," said my eighth grade geography teacher, paging through my loose-leaf notebook while I watched resentfully. It was 1967, the math work was no more than a couple of months old, and she was ahead of her time. She was an early prototype of that strange, new species littering the media these days: the declutterer.

People like her - they say "professional organizer", I say bully - seem to be everywhere. Their sudden visibility is probably due, at least in part, to the success of the US TV series Hoarders, in which mentally disordered people are forced to confront their pathological addiction to keeping and/or acquiring so much stuff that their houses are impassable, often hazardous. Of course, one person's pathological hoarder is another's more-or-less normal slob, packrat, serious collector, or disorganized procrastinator. Still, Newsweek's study of kids who are stuck with the clean-up after their hoarder parents die is decidedly sad.

But much of what I'm reading seems aimed at perfectly normal people who are being targeted with all the zealotry of an early riser insisting that late sleepers and insomniacs are lazy, immoral slugs who need to be reformed.

Some samples. LifeHacker profiles a book to help you estimate how much your clutter is costing you. The latest middle-class fear is that schools' obsession with art work will turn children into hoarders. The New York Times profiles a professional declutterer who has so little sympathy for attachment to stuff that she tosses out her children's party favors after 24 hours. At least she admits she's neurotic, and is just happy she's made it profitable to the tune of $150 an hour (well, Manhattan prices).

But take this comment from LifeHacker:

For example, look in your bedroom and consider the cost of unworn clothes and shoes, unread books, unworn jewelry, or unused makeup.

And this, from the Newsweek piece:

While he's thrown out, recycled, and donated years' worth of clothing, costume jewelry, and obvious trash, he's also kept a lot--including an envelope of clothing tags from items [his mother] bought him in 1972, hundreds of vinyl records, and an outdated tape recorder with corroded batteries leaking out the back.

OK, with her on the corroded batteries. (What does she mean, outdated? If it still functions for its intended purpose it's just old.) Little less sure about the clothing tags, which might evoke memories. But unread books? Unless you're talking 436 copies of The Da Vinci Code, unread books aren't clutter. Unread books are mental food. They are promises of unknown worlds on a rainy day when the electricity goes bang. They are cultural heritage. Ditto vinyl records. Not all books and LPs are equally valuable, of course, but they should be presumed innocent until proven to be copies of Jeffrey Archer novels. Books are not shoeboxes marked "Pieces of string - too small to save".

Leaving aside my natural defensiveness at the suggestion that thousands of books, CDs, DVDs, and vinyl LPs are "clutter", it strikes me that one reason for this trend is that there is a generational shift taking place. Anyone born before about 1970 grew up knowing that the things they liked might become unavailable at any time. TV shows were broadcast once, books and records went out of print, and the sweater that sold out while you were saving up for it didn't reappear later on eBay. If you had any intellectual or artistic aspirations, building your own library was practically a necessity.

My generation also grew up making and fixing things: we have tools. (A couple of years ago I asked a pair of 20-somethings for a soldering iron; they stared as if I'd asked for a manual typewriter.) Plus, in the process of rebelling against our parents' largely cautious and thrifty lifestyles, Baby Boomers were the first to really exploit consumer credit. Put it together: endemic belief that the availability of any particular item was only temporary, unprecedented array of goods to choose from, extraordinary access to funding. The result: stuff.

To today's economically stressed-out younger generation, raised on reruns and computer storage, the physical manifestations of intellectual property must seem peculiarly unnecessary. Why bother when you can just go online and click a button? One of my 50-something writer friends loves this new world; he gives away or sells books as soon as he's read them, and buys them back used from Amazon or Alibris if he needs to consult them again. Except for the "buying it used" part, this is a business model the copyright industries ought to love, because you can keep selling the same thing over and over again to the same people. Essentially, it's rental, which means it may eventually be an even better business than changing the media format every decade or two so that people have to buy new copies. When 3D printers really get going, I imagine there will be people arguing that you really don't need to keep furniture around - just print it when you need it. Then the truly modern home environment will be just a bare floor and walls. If you want to live like that, fine, but on behalf of my home libraries, I say: ick.


January 14, 2011

Face time

The history of the Net has featured many absurd moments, but this week was some sort of peak of the art. In the same week I read that a) based on a $450 million round of investment from Goldman Sachs, Facebook is now valued at $50 billion, higher than Boeing's market capitalization, and b) Facebook's founder, Mark Zuckerberg, is so tired of the stress of running the service that he plans to shut it down on March 15. As I seem to recall a CS Lewis character remarking irritably, "Why don't they teach logic in these schools?" If you have a company worth $50 billion and you don't much like running it any more, you sell the damn thing and retire. It's not like Zuckerberg even needs to wait to be Time's Man of the Year.

While it's safe to say that Facebook isn't going anywhere soon, it's less clear what its long-term future might be, and the users who panicked at the thought of the service's disappearance would do well to plan ahead. Because: if there's one thing we know about the history of the Net's social media it's that the party keeps moving. Facebook's half-a-billion-strong user base is, to be sure, bigger than anything else assembled in the history of the Net. But I think the future as seen by Douglas Rushkoff, writing for CNN last week, is more likely: Facebook, he argued based on its arguably inflated valuation, is at the beginning of its end, as MySpace was when Rupert Murdoch bought it in 2005 for $580 million. (Though this says as much about Murdoch's Net track record as it does about MySpace: Murdoch bought the text-based Delphi at its peak moment, in late 1993.)

Back in 1999, at the height of the dot-com boom, the New Yorker published an article (abstract; full text requires subscription) comparing the then-spiking stock price of AOL with that of the Radio Corporation of America back in the 1920s, when radio was the hot, new democratic medium. RCA was selling radios that gave people unprecedented access to news and entertainment (including stock quotes); AOL was selling online accounts that gave people unprecedented access to news, entertainment, and their friends. The comparison, as the article noted, wasn't perfect, but the comparison chart the article was written around was, as the author put it, "jolly". It still looks jolly now, recreated some months later for this analysis of the comparison.

There is more to every company than just its stock price, and there is more to AOL than its subscriber numbers. But the interesting chart to study - if I had the ability to create such a chart - would be the successive waves of rising, peaking, and falling numbers of subscribers of the various forms of social media. In more or less chronological order: bulletin boards, Usenet, Prodigy, GEnie, Delphi, CompuServe, AOL...and now MySpace, which this week announced extensive job cuts.

At its peak, AOL had 30 million subscribers; at the end of September 2010 it had 4.1 million in the US. As subscriber revenues continue to shrink, the company is changing its emphasis to producing content that will draw in readers from all over the Web - that is, it's increasingly dependent on advertising, like many companies. But the broader point is that at its peak a lot of people couldn't conceive that it would shrink to this extent, because of the basic principle of human congregation: people go where their friends are. When the friends gradually start to migrate to better interfaces, more convenient services, or simply sites their more annoying acquaintances haven't discovered yet, others follow. That doesn't necessarily mean death for the service they're leaving: AOL, like CIX, The WELL, and LiveJournal before it, may well find a stable size at which it remains sufficiently profitable to stay alive, perhaps even comfortably so. But it does mean it stops being the growth story of the day.

As several financial commentators have pointed out, the Goldman investment is good for Goldman no matter what happens to Facebook, and may not be ring-fenced enough to keep Facebook private. My guess is that even if Facebook has reached its peak it will be a long, slow ride down the mountain and between then and now at least the early investors will make a lot of money.

But long-term? Facebook is barely seven years old. According to figures leaked by one of the private investors, its price-earnings ratio is 141. The good news is that if you're rich enough to buy shares in it you can probably afford to lose the money.
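For scale, a back-of-the-envelope calculation using the column's own figures - the $50 billion valuation and the leaked price-earnings ratio of 141 - shows what annual earnings that ratio implies:

```python
# Back-of-the-envelope: what earnings does a $50 billion valuation
# at a price-earnings ratio of 141 imply? (Figures from the column.)
valuation = 50e9   # Goldman-round valuation, in dollars
pe_ratio = 141     # leaked price-earnings ratio
implied_annual_earnings = valuation / pe_ratio
print(f"${implied_annual_earnings / 1e6:.0f} million")  # prints "$355 million"
```

In other words, investors at that price were paying for roughly 141 years of current earnings - which is why the ratio is the punchline.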

As far as I'm aware, little research has been done studying the Net's migration patterns. From my own experience, I can say that my friends lists on today's social media include many people I've known on other services (and not necessarily in real life) as the old groups reform in a new setting. Facebook may believe that because the profiles on its service are so complex, including everything from status updates and comments to photographs and games, users will stay locked in. Maybe. But my guess is that the next online party location will look very different. If email is for old people, it won't be long before Facebook is, too.


December 31, 2010

Good, bad, ugly...the 2010 that was

Every year deserves its look back, and 2010 is no exception. On the good side, the younger generation beginning to enter politics is bringing with it a little more technical sense than we've had in government before. On the bad side, the year's many privacy scandals reminded us all how big a risk we take in posting as much information online as we do. The ugly...we'd have to say the scary new trends in malware. Happy New Year.

By the numbers:

$5.3 billion: the Google purchase offer that Groupon turned down. Smart? Stupid? Shopping and social networks ought to mix combustibly (and could hit local newspapers and their deal flyers), but it's a labor-intensive business. The publicity didn't hurt: Groupon has now managed to raise half a billion dollars on its own. They aren't selling anything we want to buy, but that doesn't seem to hurt Wal-Mart or McDonald's.

$497 million: the amount Harvard scientists Tyler Moore and Benjamin Edelman estimate that Google is earning from "typosquatting". Pocket change, really: Google's 2009 revenues were $23 billion. But still.

15 million (estimated): number of iPads sold since its launch in May. It took three decades of commercial failures for someone to finally launch a successful tablet computer. In its short life the iPad has been hailed and failed as the savior of print publications, and halved Best Buy's laptop sales. We still don't want one - but we're keyboard addicts, hardly its target market.

250,000: diplomatic cables channeled to Wikileaks. We mention this solely to enter The Economist's take on Bruce Sterling's take into the discussion. Wikileaks isn't at all the crypto-anarchy that physicist Timothy C. May wrote about in 1992. May's essay imagined the dark uses of encrypted secrecy; Wikileaks is, if anything, the opposite of it.

500: airport scanners deployed so far in the US, at an estimated cost of $80 million. For 2011, Obama has asked for another $88 million for the next round of installations. We'd like fewer scanners and the money instead spent on...well, almost anything else, really. Intelligence, perhaps?

65: Percentage of Americans that Pew Internet says have paid for Internet content. Yeah, yeah, including porn. We think it's at least partly good news.

58: Number of investigations (countries and US states) launched into Google's having sniffed approximately 600Gb of data from open WiFi connections, which the company admitted in May. The progress of each investigation is helpfully tallied by SearchEngineLand. Note that the UK's ICO's reaction was sufficiently weak that MPs are complaining.

24: Hours of Skype outage. Why are people writing about this as though it were the end of Skype? It was a lot more shocking when it happened to AT&T in 1990 - in those days, people only had one phone number!

5: number of years I've wished Google would eliminate useless shopping aggregator sites from its search results listings. Or at least label them and kick them to the curb.

2: Facebook privacy scandals that seem to have ebbed leaving less behavioral change than we'd like in their wake. In January, Facebook founder and CEO Mark Zuckerberg opined that privacy is no longer a social norm; in May it revamped its privacy settings, to an uproar in response (and not for the first time). Still, the service had 400 million users at the beginning of 2010 and has more than 500 million now. Resistance requires considerable anti-social effort, though the cool people have, of course, long fled.

1: Stuxnet worm. The first serious infrastructure worm. You knew it had to happen.

In memoriam:

- Kodachrome. The Atlantic reports that December 30, 2010 saw the last-ever delivery of Kodak's famous photographic film. As they note, the specific hues and light-handling of Kodachrome defined the look of many decades of the 20th century. Pause to admire The Atlantic's selection of the 75 best pictures they could find: digital has many wonderful qualities, but these seem to have a three-dimensional roundness you don't see much any more. Or maybe we just forget to look.

- The 3.5in floppy disk. In April, Sony announced it would stop making the 1.44MB floppy disk that defined the childhoods of today's 20-somethings. The first video clip I ever downloaded, of the exploding whale in Oregon (made famous by a Web site and a Dave Barry column), required 11 floppy disks to hold it. You can see why it's gone.

- Altavista: A leaked internal memo puts Altavista on Yahoo!'s list of services due for closure. Before Google, Altavista was the best search engine by a long way, and if it had focused on continuing to improve its search algorithms instead of cluttering up its front page in line with the 1995 fad for portals, it might still be. Google's overwhelming success had as much to do with its clean, fast-loading design as it did with its superior ability to find stuff. Altavista also pioneered online translation with its Babelfish (and don't you have to love a search engine that quotes Douglas Adams?).


December 10, 2010

Payback

A new word came my way while I was reviewing the many complaints about the Transportation Security Administration and its new scanner toys and pat-down procedures: "Chertoffed". It's how "security theater" (Bruce Schneier's term) has transformed the US since 2001.

The description isn't entirely fair to Chertoff, who was only the *second* head of the Bush II-created Department of Homeland Security and has now been replaced: he served from 2005-2009. But since he's the guy who began the scanner push and also numbers scanner manufacturers among the clients of his consultancy company, The Chertoff Group - it's not really unfair either.

What do you do after defining the travel experience of a generation? A little over a month ago, Chertoff showed up at London's RSA Data Security conference to talk about what he thought needed to happen in order to secure cyberspace. We need, he said, a doctrine to lay out the rules of the road for dealing with cyber attacks and espionage - the sort of thing that only governments can negotiate. The analogy he chose was to the doctrine that governed nuclear armament, which he said (at the press Q&A) "gave us a very stable, secure environment over the next several decades."

In cyberspace, he argued, such a thing would be valuable because it makes clear to a prospective attacker what the consequences will be. "The greatest stress on security is when you have uncertainty - the attacker doesn't know what the consequences will be and misjudges the risk." The kinds of things he wants a doctrine to include are therefore things like defining what is a proportionate response: if your country is on the receiving end of an attack from another country that's taking out the electrical power to hospitals and air traffic control systems with lives at risk, do you have the right to launch a response to take out the platform they're operating from? Is there a right of self-defence of networks?

"I generally take the view that there ought to be a strong obligation on countries, subject to limitations of practicality and legal restrictions, to police the platforms in their own domains," he said.

Now, there are all sorts of reasons many techies are against government involvement - or interference - in the Internet. First and foremost is time: the World Summit on the Information Society and its successor, the Internet Governance Forum, have taken years to do...no one's quite sure what, while the Internet's technology has gone on racing ahead creating new challenges. But second is a general distrust, especially among activists and civil libertarians. Chertoff even admitted that.

"There's a capability issue," he said, "and a question about whether governments put in that position will move from protecting us from worms and viruses to protecting us from dangerous ideas."

This was, of course, somewhat before everyone suddenly had an opinion about Wikileaks. But what has occurred since makes that distrust entirely reasonable: give powerful people a way to control the Net and they will attempt to use it. And the Net, as in John Gilmore's famous aphorism, "perceives censorship as damage and routes around it". Or, more correctly, the people do.

What is incredibly depressing about all this is watching the situation escalate into the kind of behavior that governments have quite reasonably wanted to outlaw and that will give ammunition to those who oppose allowing the Net to remain an open medium in which anyone can publish. The more Wikileaks defenders organize efforts like this week's distributed denial-of-service attacks, the more Wikileaks and its aftermath will become the justification for passing all kinds of restrictive laws that groups like the Electronic Frontier Foundation and the Open Rights Group have been fighting against all along.

Wikileaks itself is staying neutral on the subject, according to the statement on its (Swiss) Web site: Wikileaks spokesman Kristinn Hrafnsson said: "We neither condemn nor applaud these attacks. We believe they are a reflection of public opinion on the actions of the targets."

Well, that's true up to a point. It would be more correct to say that public opinion is highly polarized, and that the attacks are a reflection of the opinion of a relatively small section of the public: people who are at the angriest end of the spectrum and have enough technical expertise to download and install software to make their machines part of a botnet - and not enough sense to realize that this is a risky, even dangerous, thing to do. Boycotting Amazon.com during its busiest time of year to express your disapproval of its having booted Wikileaks off its servers would be an entirely reasonable protest. Vandalism is not. (In fact the announced attack on Amazon's servers seems not to have succeeded, though others have.)

I have written about the Net and what I like to call the border wars between cyberspace and real life for nearly 20 years. Partly because it's fascinating, partly because when something is new you have a real chance to influence its development, and partly because I love the Net and want it to fulfill its promise as a democratic medium. I do not want to have to look back in another 20 years and say it's been "Chertoffed". Governments are already mad about the utterly defensible publication of the cables; do we have to give them the bullets to shoot us with, too?


December 3, 2010

Open diplomacy

Probably most people have by now lived through the embarrassment of having a supposedly private communication made public. The email your fingers oopsishly sent to the entire office instead of your inamorata; the drunken Usenet postings scooped into Google's archive; the direct Tweet that wound up in the public timeline; the close friend your cellphone pocket-dialed while you were trashing them.

Most of these embarrassments are relatively short-lived. The personal relationships that weren't already too badly damaged recover, if slowly. Most of the people who get the misdirected email are kind enough to delete it and never mention it again. Even the stock market learns to forgive those drunken Usenet postings; you may be a CEO now but you were only a frat boy back then.

But the art of government-level diplomacy is creating understanding, tolerance, and some degree of cooperation among people who fundamentally distrust each other and whose countries may have substantial, centuries-old reasons why that is utterly rational. (Sometimes these internecine feuds are carried to extremes: would you buy from a store that filed Greek and Turkish DVDs in the same bin?) It's hardly surprising if diplomats' private conversations resemble those of Hollywood agents, telling each person what they want to hear about the others and maneuvering them carefully to get the desired result. And a large part of that desired result is avoiding mass destruction through warfare.

For that reason, it's hard to simply judge Wikileaks' behavior by the standard of our often-expressed goal of open data, transparency, accountability, and net.freedoms. Is there a line? And where do you draw it?

In the past, it was well-established news organizations who had to make this kind of decision - the New York Times and the Washington Post regarding the Pentagon Papers, for example. Those organizations, rooted in a known city in a single country, knew that mistakes would see them in court; they had reputations, businesses, and personal liberty to lose. Wikileaks, as Jay Rosen has put it, is the world's first stateless news organization: it belongs to no single country's culture, laws, or norms; it contracts with those who have information to submit it; it encrypts submissions to disguise the source even from itself; and, being stateless, it cannot be subpoenaed into silence. Rosen traces its rise to the failure of the watchdog press under George Bush, and to an anxiety on the part of the press that derives from denial of its own death.

Wikileaks wasn't *exactly* predicted by Internet pioneers, but it does have its antecedents and precursors. Before collaborative efforts - wikis - became commonplace on the Web there was already the notion of bypassing the nation-state to create stores of data that could not be subjected to subpoenas and other government demands. There was the Sealand data bunker. There was physicist Timothy May's Crypto Anarchist Manifesto, which posited that, "Crypto anarchy will allow national secrets to be traded freely and will allow illicit and stolen materials to be traded."

Note, however, that a key element of these ideas was anonymity. Julian Assange has told Guardian readers that in fact he originally envisioned Wikileaks as an anonymous service, but eventually concluded that someone must be responsible to the public.

Curiously, the strand of Internet history that is the closest to the current Wikileaks situation is the 1993-1997 wrangle between the Net and Scientology, which I wrote about for Wired in 1995. This particular net.war did a lot to establish the legal practices still in force with respect to user-generated content: notice and takedown, in particular. Like Wikileaks today, those posting the most closely guarded secrets of Scientology found their servers under attack and their material being taken down and, in response, replicated internationally on mirror sites to keep it available. Eventually, sophisticated systems were developed for locating the secret documents wherever they were hosted on a given day as they bounced from server to server (and they had to do all that without the help of Twitter). Today, much of the gist is on Wikipedia. At the time, however, calling it a "flame war with real bullets" wasn't far wrong: some of Scientology's fiercest online critics had their servers and/or homes raided. When Amazon removed Wikileaks from its servers because of "copyright", it operated according to practices defined in response to those Scientology actions.

The arguments over Wikileaks push at many other boundaries that have been hotly disputed over the last 20 years. Are they journalists, hackers, criminals, or heroes? Is Wikileaks important because, as NYU professor Jay Rosen points out, journalism has surrendered its watchdog role? Or because it is posing, as Techdirt says, the kind of challenge to governments that the music and film industries have already been facing? On a technical level, Wikileaks is showing us the extent to which the Internet can still resist centralised control.

A couple of years ago, Stefan Magdalinski noted the "horse-trading in a fairly raw form" his group of civic hackers discovered when they set out to open up the United Nations proceedings - another example of how people behave when they think no one is watching. Ultimately, governments will learn to function in a world in which they cannot trust that anything is secret, just as they had to learn to cope with CNN (PDF).


November 12, 2010

Just between ourselves

It is, I'm sure, pure coincidence that a New York revival of Vaclav Havel's wonderfully funny and sad 1965 play The Memorandum was launched while the judge was considering the Paul Chambers "Twitter joke trial" case. "Bureaucracy gone mad," they're billing the play, and they're right, but what that slogan omits is that the bureaucracy in question has gone mad because most of its members don't care and the one who does has been shut out of understanding what's going on. A new language, Ptydepe, has been secretly invented and introduced as a power grab by an underling claiming it will improve the efficiency of intra-office communications. The hero only discovers the shift when he receives a memorandum written in the new language and can't get it translated due to carefully designed circular rules. When these are abruptly changed the translated memorandum restores him to his original position.

It is one of the salient characteristics of Ptydepe that it has a different word for every nuance of the characters' natural language - Czech in the original, but of course English in the translation I read. Ptydepe didn't work for the organization in the play because it was too complicated for anyone to learn, but perhaps something like it that removes all doubt about nuance and context would assist older judges in making sense of modern social interactions over services such as Twitter. Clearly any understanding of how people talk and make casual jokes was completely lacking yesterday when Judge Jacqueline Davies upheld the conviction of Paul Chambers in a Doncaster court.

Chambers' crime, if you blinked and missed those 140 characters, was to post a frustrated message about snowbound Doncaster airport: "Crap! Robin Hood airport is closed. You've got a week and a bit to get your shit together otherwise I'm blowing the airport sky high!" Everyone along the chain of accountability up to the Crown Prosecution Service - the airport duty manager, the airport's security personnel, the Doncaster police - seems to have understood he was venting harmlessly. And yet prosecution proceeded and led, in May, to a conviction that was widely criticized both for its lack of understanding of new media and for its failure to take Chambers' lack of malicious intent into account.

By now, everyone has been thoroughly schooled in the notion that it is unwise to make jokes about bombs, plane crashes, knives, terrorists, or security theater - when you're in an airport hoping to get on a plane. No one thinks any such wartime restraint need apply in a pub or its modern equivalent, the Twitter/Facebook/online forum circle of friends. I particularly like Heresy Corner's complaint that the judgement makes it illegal to be English.

Anyone familiar with online writing style immediately and correctly reads Chambers' Tweet for what it was: a perhaps ill-conceived expression of frustration among friends that happens to also be readable (and searchable) by the rest of the world. By all accounts, the judge seems to have read it as if it were a deliberately written personal telegram sent to the head of airport security. The kind of expert explanation on offer in this open letter apparently failed to reach her.

The whole thing is a perfect example of the growing danger of our data-mining era: that casual remarks are indelibly stored and can be taken out of context to give an utterly false picture. One of the consequences of the Internet's fundamental characteristic of allowing the like-minded and like-behaved to find each other is that tiny subcultures form all over the place, each with its own set of social norms and community standards. Of course, niche subcultures have always existed - probably every local pub had its own set of tropes that were well-known to and well-understood by the regulars. But here's the one thing they weren't: permanently visible to outsiders. A regular who, for example, chose to routinely indicate his departure for the Gents with the statement, "I'm going out to piss on the church next door" could be well-known in context never to do any such thing. But if all outsiders saw was a ten-second clip of that statement and the others' relaxed reaction that had been posted to YouTube, they might legitimately assume that pub was a shocking hotbed of anti-religious slobs. Context is everything.

The good news is that the people on the ground whose job it was to protect the airport read the message, understood it correctly, and did not overreact. The bad news is that when the CPS and courts did not follow their lead it opened up a number of possibilities for the future, all bad. One, as so many have said, is that anyone who now posts anything online while drunk, angry, stupid, or sloppy-fingered is at risk of prosecution - with the consequence of wasting huge amounts of police and judicial time that would be better spent spotting and stopping actual terrorists. The other is that everyone up the chain felt required to cover their ass in case they were wrong.

Chambers still may appeal to the High Court; Stephen Fry is offering to pay his fine (the Yorkshire Post puts his legal bill at £3,000), and there's a fund accepting donations.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

November 5, 2010

Suicidal economics

Toxic sludge is GOOD for you, observed John Stauber and Sheldon Rampton in their 1995 book by the same name (or, more completely, Toxic Sludge is Good For You!: Lies, Damn Lies, and the Public Relations Industry). In that brilliantly researched, carefully reasoned, and humorous tome they laid out for inspection the inner workings of the PR industry. After reading it, you never look at the news the same way again.

Including, as we are not the first to say, this week's news that Rupert Murdoch's News International sees extracting subscription money from 105,000 readers of the online versions of the Times and Sunday Times as a success. Nieman Labs' round-up shows how widely this particular characterization was met with skepticism elsewhere in the media. (My personal favorite is the analogy to Spinal Tap's manager's defense of the band when it's suggested that its popularity is waning: "I just think...their appeal is becoming more selective.") If any of a few million blogs had 105,000 paying readers they'd be in fabulous shape; but given the uncertainty surrounding the numbers, for an organization the size of the Times it seems like pocket change.

I'm not sure that the huge drop in readership online is the worst news. Everyone predicted that, even Murdoch's own people (although it is interesting that the guy who is thought to have launched this scheme has left before the long-term results are in). The really bad news is that the paper's print circulation has declined in line with everyone else's since the paywall went up. It might have turned out, for example, that faced with paying £1 for a day's access a number of people might decide they'd just as soon have the nicely printed version that is, after all, still easier to read. Instead, what seems likely from these (unclear and incomplete) numbers is that online readers don't care nearly as much as offline ones about news sources. And in many cases they're right not to: it hardly matters which news site or RSS feed supplies you with the day's Reuters stories or which journalist dutifully copies down the quotes at the press briefing.

Today's younger generation also has - again, rightfully - a much deeper cynicism about "MSM" (mainstream media) than previous ones, who had less choice. They trust Jon Stewart and Stephen Colbert far more than CNN (or the Onion more than the Times). They don't have to have read Stauber's and Rampton's detailed analysis to have absorbed the message: PR distortion is everywhere. If that's the case, why bother with the middleman? Why not just read the transparently biased source - a company's own spin - rather than the obscurely biased one? Or pick the opinion-former whose take on things is the most fun?

As Michael Wolff (who himself famously burned through many of someone else's millions in the dot-com boom) correctly points out, Murdoch's history online has been a persistent effort to recreate the traditional one-to-many publishing model. He likes satellite television and print newspapers - things where you control what's published and have to deal only with a handful of competitors and a back channel composed only of the great and the good. That desire is, I think, a fundamental mismatch with the Internet as we currently know it - not because of free! information but because of the two-way, many-to-many nature of the medium.

Not so long ago - 2002 - Murdoch's then COO insisted that you can't make money from content on the Internet; more recently, Times editor James Harding called giving away journalism for free "a quite suicidal form of economics". In a similar vein, this week Bruce Eisen, vice-president of online content development and strategy at the US's Dish Network, complained that the online streaming service Hulu is killing the TV industry.

Back in 2002, I argued that you can make money from online content but it needs to be some combination of a) low overheads, b) necessary, c) unusual if not unique, d) timely, and e) correctly priced. From what Slate is saying, it appears that Netflix is getting c, d, and e right and that the mix is giving the company enough of an advantage to let it compete successfully with free-as-in-file-sharing. But is the Times getting enough of those things right? And does it need to?

As Emily Bell points out, Murdoch's interest in the newspapers was more for their influence than their profitability, and this influence - and therefore their importance - has largely waned. "Internationally, it has no voice," she writes. But therein lies a key difference between the Times and, say, the Guardian or the BBC: enlarging the international audience for and importance of the Times means competing with his own overseas titles. The Guardian has no such internal conflict of interest, and is therefore free to pursue its mission to become the world's leading liberal voice.

Of course, who knows? In a year's time maybe we'll all be writing the astonishing story of rising paid subscriber numbers and lauding Murdoch's prescience. But if we are, I'll bet that the big winner won't be the Times but Apple.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

October 1, 2010

Duty of care

"Anyone who realizes how important the Web is," Tim Berners-Lee said on Tuesday, "has a duty of care." He was wrapping up a two-day discussion meeting at the Royal Society. The subject: Web science.

What is Web science? Even after two days, it's difficult to grasp, in part because defining it is a work in progress. Here are some of the disciplines that contributed: mathematics, philosophy, sociology, network science, and law, plus a bunch of much more directly Webby things that don't fit easily into categories. Which of course is the point: Web science has to cover much more than just the physical underpinnings of computers and network wires. Computer science or network science can use the principles of mathematics and physics to develop better and faster machines and study architectures and connections. But the Web doesn't exist without the people putting content and applications on it, and so Web science must be as much about human behaviour as about physics.

"If we are to anticipate how the Web will develop, we will require insight into our own nature," Nigel Shadbolt, one of the event's convenors, said on Monday. Co-convenor Wendy Hall has said, similarly, "What creates the Web is us who put things on it, and that's not natural or engineered." Neither natural (like biological systems) nor engineered (a planned build-out like the telecommunications networks), but something new. If we can understand it better, we can not only protect it better, but guide it better toward the most productive outcomes, just as farmers don't haphazardly interbreed species of corn but use their understanding to select for desirable traits.

The simplest parts of the discussions to understand, therefore, came (ironically) from the mathematicians. Particularly intriguing was the UK's former chief scientist Robert May, whose work on how removing nodes from a network can render it non-functional applied equally to the Web, epidemiology, and banking risk.

This is all happening despite the recent Wired cover claiming the "Web is dead". Dead? Facebook is a Web site; Skype, the app store, IM clients, Twitter, and the New York Times all reach users first via the Web even if they use their iPhones for subsequent visits (and how exactly did they buy those iPhones, hey?). Saying it's dead is almost exactly like the old joke about how no one goes to a particular restaurant any more because it's too crowded.

People who think the Web is dead have stopped seeing it. But the point of Web science is that for 20 years we've been turning what started as an academic playground into a critical infrastructure, and for government, finance, education, and social interaction to all depend on the Web it must have solid underpinnings. And it has to keep scaling - in a presentation on the state of deployment of IPv6 in China, Jianping Wu noted that Internet penetration in China is expected to jump from 30 percent to 70 percent in the next ten to 20 years. That means adding 400-900 million users. The Chinese will have to design, manage, and operate the largest infrastructure in the world - and finance it.

But that's the straightforward kind of scaling. IBMer Philip Tetlow, author of The Web's Awake (a kind of Web version of the Gaia hypothesis), pointed out that all the links in the world are a finite set; all the eyeballs in the world looking at them are a finite set...but all the contexts surrounding them...well, it's probably finite but it's not calculable (despite Pierre Levy's rather fanciful construct that seemed to suggest it might be possible to assign a URI to every human thought). At that level, Tetlow believes some of the neat mathematical tools, like Jennifer Chayes' graph theory, will break down.

"We're the equivalent of precision engineers," he said, when what's needed are the equivalent of town planners and urban developers. "And we can't build these things out of watches."

We may not be able to build them at all, at least not immediately. Helen Margetts outlined the constraints on the development of egovernment in times of austerity. "Web science needs to map, understand, and develop government just as for other social phenomena, and export back to mainstream," she said.

Other speakers highlighted gaps between popular mythology and reality. MIT's David Carter noted that, "The Web is often associated with the national and international but not the local - but the Web is really good at fostering local initiatives - that's something for Web science to ponder." Noshir Contractor, similarly, called out The Economist over the "death of distance": "More and more research shows we use the Web to have connections with proximate people."

Other topics will be far more familiar to net.wars readers: Jonathan Zittrain explored the ways the Web can be broken by copyright law, increasing corporate control (there was a lovely moment when he morphed the iPhone's screen into the old CompuServe main menu), the loss of uniformity so that the content a URL points to changes by geographic location. These and others are emerging points of failure.

We'll leave it to an unidentified audience question to sum up the state of Web science: "Nobody knows what it is. But we are doing it."

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

September 24, 2010

Lost in a Haystack

In the late 1990s you could always tell when a newspaper had just gotten online because it would run a story about the Good Times virus.

Pause for historical detail: the Good Times virus (and its many variants) was an email hoax. An email message with the subject heading "Good Times" or, later, "Join the Crew", or "Penpal Greetings", warned recipients that opening email messages with that header would damage their computers or delete the contents of their hard drives. Some versions cited Microsoft, the FCC, or some other authority. The messages also advised recipients to forward the message to all their friends. The mass forwarding and subsequent complaints were the payload.

The point, in any case, is that the Good Times virus was the first example of mass social engineering that spread by exploiting not particularly clever psychology and a specific kind of technical ignorance. The newspaper staffers of the day were very much ordinary new users in this regard, and they would run the story thinking they were serving their readers. To their own embarrassment, of course. You'd usually see a retraction a week or two later.

Austin Heap, the progenitor of Haystack, software he claimed was devised to protect the online civil liberties of Iranian dissidents, seems less likely to have been conducting an elaborate hoax than merely to have failed to understand what he was doing. Either way, Haystack represents a significant leap upward in successfully taking mainstream, highly respected publications for a technical ride. Evgeny Morozov's detailed media critique underestimates the impact of the recession and staff cuts on an already endangered industry. We will likely see many more mess-equals-technology-plus-journalism stories because so few technology specialists remain in the post-recession mainstream media.

I first heard Danny O'Brien's doubts about Haystack in June, and his chief concern was simple and easily understood: no one was able to get a copy of the software to test it for flaws. For anyone who knows anything about cryptography or security, that ought to have been damning right out of the gate. The lack of such detail is why experienced technology journalists, including Bruce Schneier, generally avoided commenting on it. There is a simple principle at work here: the *only* reason to trust technology that claims to protect its users' privacy and/or security is that it has been thoroughly peer-reviewed - banged on relentlessly by the brightest and best and they have failed to find holes.

As a counter-example, let's take Phil Zimmermann's PGP, email encryption software that really has protected the lives and identities of far-flung dissidents. In 1991, when PGP first escaped onto the Net, interest in cryptography was still limited to a relatively small, though very passionate, group of people. The very first thing Zimmermann wrote in the documentation was this: why should you trust this product? Just in case readers didn't understand the importance of that question, Zimmermann elaborated, explaining how fiendishly difficult it is to write encryption software that can withstand prolonged and deliberate attacks. He was very careful not to claim that his software offered perfect security, saying only that he had chosen the best algorithms he could from the open literature. He also distributed the source code freely for review by all and sundry (who have to this day failed to find substantive weaknesses). He concludes: "Anyone who thinks they have devised an unbreakable encryption scheme either is an incredibly rare genius or is naive and inexperienced." Even the software's name played down its capabilities: Pretty Good Privacy.

When I wrote about PGP in 1993, PGP was already changing the world by up-ending international cryptography regulations, blocking mooted US legislation that would have banned the domestic use of strong cryptography, and defying patent claims. But no one, not even the most passionate cypherpunks, claimed the two-year-old software was the perfect, the only, or even the best answer to the problem of protecting privacy in the digital world. Instead, PGP was part of a wider argument taking shape in many countries over the risks and rewards of allowing civilians to have secure communications.

Now to the claims made for Haystack in its FAQ:

However, even if our methods were compromised, our users' communications would be secure. We use state-of-the-art elliptic curve cryptography to ensure that these communications cannot be read. This cryptography is strong enough that the NSA trusts it to secure top-secret data, and we consider our users' privacy to be just as important. Cryptographers refer to this property as perfect forward secrecy.

Without proper and open testing of the entire system - peer review - they could not possibly know this. The strongest cryptographic algorithm is only as good as its implementation. And even then, as Clive Robertson writes in Financial Cryptography, technology is unlikely to be a complete solution.

What a difference a sexy news hook makes. In 1993, the Clinton Administration's response to PGP was an FBI investigation that dogged Zimmermann for two years; in 2010, Hillary Clinton's State Department fast-tracked Haystack through the licensing requirements. Why such a happy embrace of Haystack rather than existing privacy technologies such as Freenet, Tor, or other anonymous remailers and proxies remains a question for the reader.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

September 10, 2010

Google, I want a divorce


Jamie: You're dating your mailman?
Lisa: Why not? He comes to see me every day. He's always bringing me things.
Jamie: Mail. He brings you mail.
Lisa: Don't judge him!

- from Mad About You, Season 3, Episode 1, "Escape From New York".

Two years ago, when Google turned ten years old I was called into a BBC studio to talk about the company. Why, I was asked, did people hate Microsoft so much? Would people ever hate Google, too? I said, I think, that because we're only aware of Microsoft when its software fails, our primary impression of the company is frustration: why does this software hate me?

Whereas, I went on to say, to most people Google is like the mailman: it's a nice Web site that keeps bringing you things you really want. Yes, Street View (privacy), Google Books (copyright), and other controversies, but search results! Right out of the oven!

This week I can actually say it: I hate Google. There was the annoying animated Buckyball. There was the enraging exploding animation. And now there's Google Instant - which I can turn off, to be sure, but I can't turn off Google's suggestions. Pause to scream.

I know life is different for normal people, and that people who can't touch type maybe actually like Google's behaving like a long-time spouse who finishes all their sentences, especially if they cannot spell correctly. But neither Instant nor suggestions is a help when your typical search is a weird mix of constraints intended to prod Google into tossing out hits on obscure topics. And you know what else isn't a help? Having stuff change before your eyes and disrupt the brain-fingers continuum. Changing displays, animations, word suggestions all distract you from what you're typing and make it hard to concentrate.

A different problem is the one posed by personalized results: journalists need to find the stuff they - and lots of other people - don't know about. Predictive and personalized results typically will show you the stuff you already do know about, which is fine if you're trying to find that guy who fixed your garage door that time but terrible if what you're trying to do is put together new information in new ways (like focus groups, as Don Draper said in the recent Mad Men episode "The Rejected").

There are a lot of things Google could do that would save me - and millions of other people - more time than Instant. The company could expunge more of the link farms and useless aggregator shopping sites from its results. Intelligence could be better deployed for disambiguation - this Wendy Grossman or that one? I'd benefit from having the fade-in go away; it always costs me a few seconds.

There are some other small nuisances that also waste my time. On the News and some other pages, for example, you can't right-click on a URL and copy/paste it into a story because a few years ago doing that started returning an enormously long Google-adulterated URL. Simply highlighting and copying the URL into Word puts it in weird fonts you have to change. So the least slow way is to go to the page - which is very nice for the page but you're on deadline. And why can't Google read the page's date of last alteration (at least on static pages) and include that in the search listing? The biggest time-waster for me is having to plough through acres of old stuff because there's no way to differentiate it from the recent material. I also don't like the way the new Images search pages load. You would be this fussy, too, if you spent an hour or two a day on the site.

Lauren Weinstein has turned up some other, more serious, problems with Google Instant and the way it "thinks". Of course, it's still in beta, we all know this. Even though Yahoo! says hey, we had that back in 2005. (And does anyone else think the mention of "intellectual property" in that blog post sounds ominous?) Search Engine Watch has more detail (and a step-by-step critique); it's SEW's commentators' opinion that Yahoo! did not go ahead with its live offering because it had insufficient appetite for product risk - and insufficient infrastructure to support it.

So, for me personally the upshot is that I'm finally, after 11 years, in the market for a replacement search engine. Yahoo! is too cluttered. Ask.com's "question of the day" annoys me because, again, it's distracting. Altavista I abandoned gratefully (clutter!) in 1998 even though it invented the Babelfish. Dogpile has a stupid name, is hideous, and has a horoscope button on the front page. Webcrawler doesn't quick-glance differentiate its sponsored links. Cuil has too few results on a page and no option to increase them. Of course, mostly I want not to have to change.

Perhaps the most likely option is the one I saw recommended on Slashdot: Google near-clone DuckDuckGo, which seems to have a good attitude toward privacy and a lot of nifty shortcuts. I don't really love the shading in and out as you mouse over results, but I love that you can click anywhere in the shading to go to the page. I don't like having to wait for most of the listings to load; I like to skim all 100 listings on a page quickly before choosing anything. But I have to use something. I search to live.
So many options, yet none is really right. It may just be that as the main search engines increasingly compete for the mass market they will become increasingly less fit for real research. There's an important niche here, folks.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

September 3, 2010

Beyond the zipline

When Aaron Sorkin (The West Wing, Sports Night) was signed to write the screenplay for a movie about Facebook, I think the general reaction was one of more or less bafflement. Sorkin has a great track record, sure, but how do you make a movie about a Web site, even if it's a social network? What are you going to show? People typing to each other?

Now that the movie is closer to coming out (October 1 in the US), we're beginning to see sneak peek trailers, and we can tell a lot more from the draft screenplay that's been floating around the Net. The copy I found is dated March 2009, and you can immediately tell it's the real thing: quality dialogue and construction, and the feel of real screenwriting expertise. Turns out, the way you write a screenplay about Facebook is to read the books, primarily the novelistic, not-so-admired Accidental Billionaires by Ben Mezrich, along with other published material, and look for the most dramatic bit of the story: the lawsuits eventually launched by the characters you're portraying. Through which, as a framing device, you can tell the story of the little social network that exploded. Or rather, Sorkin can. The script is a compelling read. (It's actually not clear to me that it can be improved by filming it.)

Judging from other commentaries, everyone seems to agree it's genuine, though there's no telling where in the production process that script was, how many later drafts there were, or how much it changed in filming and post-production. There's also no telling who leaked it or why: if it was intentional it was a brilliant marketing move, since you could hardly ask for more word-of-mouth buzz.

If anyone wanted to design a moral lesson for the guy who keeps saying privacy is dead, it might be this: having your deepest secrets turned out to portray you as a jerk who steals other people's ideas and codes them into the basis for a billion-dollar company, all because you want to stand out at Harvard and, most important, win the admiration of the girl who dumped you. Think the lonely pathos of the socially ostracized, often overlooked Jenny Humphrey in Gossip Girl crossed with the arrogant, obsessive intelligence of Sheldon Cooper in The Big Bang Theory. (Two characters I actually like, but they shouldn't breed.)

Neither the book nor the script is that: they're about as factual as 1978's The Buddy Holly Story or any other Hollywood biopic. Mezrich, who likes to write books about young guys who get rich fast (you can see why; he's gotten several bestsellers out of this approach), had no help from Facebook founder and CEO Mark Zuckerberg. What dialogue there is has been "re-created", and sources other than disaffected co-founder Eduardo Saverin are anonymous. Lacking sourcing (although of course the court testimony is public information), it's unclear how fictional the dramatization is. I'd have no problem with that if the characters weren't real people identified by their real names.

Places, too. Probably the real-life person/place/thing that comes off worst is Harvard, which in the book especially is practically a caricature of the way popular culture likes to depict it: filled with the rich, the dysfunctional, and the terminally arrogant who vie to join secretive, elite clubs that force them to take part in unsavoury hazing rituals. So much so that it was almost a surprise to read in Wikipedia that Mezrich actually went to Harvard.

Journalists and privacy advocates have written extensively about the consequences for today's teens of having their adolescent stupidities recorded permanently on Facebook or elsewhere, but Zuckerberg is already living with having his frat-boy early days of 2004 documented and endlessly repeated. Of course one way to avoid having stupid teenaged shenanigans reported is not to engage in them, but let's face it: how many of us don't have something in our pasts we'd just as soon keep out of the public eye? And if you're that rich that young, you have more opportunities than most people to be a jerk.

But if the only stories people can come up with about Zuckerberg date from before he turned 21, two thoughts occur. First, that Zuckerberg has as much right as anybody to grow up into a mature human being whose early bad judgement should be forgiven. To cite two examples: the tennis player Andre Agassi was an obnoxious little snert at 18 and a statesman of the game at 30; at 30 Bill Gates was criticized for not doing enough for charity but now at 54 is one of the world's most generous philanthropists. It is, therefore, somewhat hypocritical to demand that Zuckerberg protect today's teens from their own online idiocy while constantly republishing his follies.

Second, that outsized, hyperspeed business success might actually have forced him to grow up rather quickly. Let's face it, it's hard to make an interesting movie out of the hard work of coding and building a company.

And a third: by joining the 500 million and counting who are using Facebook we are collectively giving Zuckerberg enough money not to care either way.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

August 13, 2010

Pirate flags

Wednesday's Future Human - The Piracy Panacea event missed out on a few topics, among them network neutrality, an issue I think underlies many net.wars debates: content control, privacy, security. The Google-Verizon proposals sparked much online discussion this week. I can only reiterate my belief that net neutrality should be seen as an anti-trust issue. A basic principle of anti-trust law (Standard Oil, the movie studios) is that content owners should not be allowed to own the means of distribution, and I think this readily applies to cable companies that own TV stations and telephone companies that are carriers for other people's voice services.

But the Future Human event was extraordinary enough without that. Imagine: more than 150 people squished into a hot, noisy pub, all passionately interested in...copyright! It's only a few years ago that entire intellectual property law school classes would fit inside a broom cupboard. The event's key question: does today's "piracy" point the way to future innovation?

The basis of that notion seemed to be that historically pirates have forced large imperial powers to change and weren't just criminals. The event's light-speed introduction whizzed through functionally democratic pirate communities and pirate radio, and a potted history of authorship from Shakespeare and Newton to Lady Gaga. There followed mock trials of a series of escalating copyright infringements in which it became clear that the audience was polarized and more or less evenly divided.

There followed our panel: me, theoretically representing the Open Rights Group; Graham Linehan, creator of Father Ted and The IT Crowd; Jamie King, writer and director of Steal This Film; and economist Thierry Rayna. Challenged, of course, by arguers from the audience, one of whom declined to give her affiliation on the grounds that she'd get lynched (I doubt this). Partway through the panel someone complained on Twitter that we weren't answering the question the event had promised to tackle: how can the creative industries build on file-sharing and social networks to create the business models of the future?

It seems worth trying to answer that now.

First, though, I think it's important to point out that I don't think there's much that's innovative about downloading a TV show or MP3. The people engaged in downloading unauthorized copies of mainstream video/audio, I think, are not doing anything particularly brave. The people on the front lines are the ones running search engines and services. These people are indeed innovators, and some of them are doing it at substantial personal risk. And they cannot, in general, get legal licenses from rights holders, a situation that could be easily changed by the rights holders. Napster, which kicked the copyright wars into high gear and made digital downloads a mainstream distribution method, is now ten years in the past. Yet rights holders are still trying to implement artificial scarcity (to replace real scarcity) and artificial geography (to replace real geography). The death of distance, as Economist writer Frances Cairncross called it in 1997, changes everything, and trying to pretend it doesn't is absurd. The download market has been created by everyone *but* the record companies, who should have benefited most.

Social networks - including the much-demonized P2P networks - provide the greatest mechanism for word of mouth in the history of human culture. And, as we all know, word of mouth is the most successful marketing available, at least for entertainment.

It also seems obvious that P2P and social networks are a way for companies to gauge the audience better before investing huge sums. It was obvious from day one, for example, that despite early low official ratings and mixed reviews, Gossip Girl was a hit. Why? Because tens of thousands of people were downloading it the instant it came online after broadcast. Shouldn't production company accountants be all over this? Use these things as a testbed instead of leaving the fall pilots to be guessed at by a handful of the geniuses who commissioned Cavemen and the US version of Coupling and cancelled Better Off Ted. They could have a much clearer picture of what kind of audience a show might find and how quickly.

Trying to kill P2P and other technologies just makes them respawn like the Hydra. The death of Napster (central server) begat Gnutella and eDonkey (central indexes); lawsuits against their software developers begat the even more decentralized BitTorrent. When millions and tens of millions of people are flocking to a new technology, rights holders should be there, too.

The real threat is always going to be artists taking their business into their own hands. For every Lady Gaga there are thousands of artists who, given some basic help, can turn their work into the kind of living wage that allows them to pursue their art full-time and professionally. I would think there is a real business in providing these artists with services - folksingers, who've never had this kind of help, have produced their own recordings for decades, and having done it myself I can tell you it's not easy. This was the impulse behind the foundation of CDBaby, and now of Jamie King's VoDo. In the long run, things like this are the real game-changers.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

June 4, 2010

Return to the hacker crackdown

Probably many people had forgotten about the Gary McKinnon case until the new government reversed their decision to intervene in his extradition. Legal analysis is beyond our expertise, but we can outline some of the historical factors at work.

By 2001, when McKinnon did his breaking and entering into US military computers, hacking had been illegal in the UK for just over ten years - the Computer Misuse Act was passed in 1990 after the overturned conviction of Robert Schifreen and Steve Gold for accessing Prince Philip's Prestel mailbox.

Early 1990s hacking (earlier, the word meant technological cleverness) was far more benign than today's flat-out crimes of identity fraud, money laundering, and raiding bank accounts. The hackers of the era - most famously Kevin Mitnick - were more the cyberspace equivalent of teenaged joyriders: they wandered around the Net rattling doorknobs and playing tricks to get passwords, and occasionally copied some bit of trophy software for bragging rights. Mitnick, despite spending four and a half years in jail awaiting trial, was not known to profit from his forays.

McKinnon's claim that he was looking for evidence that the US government was covering up information about alternative energy and alien visitations seems to me wholly credible. There was and is a definite streak of conspiracy theorists - particularly about UFOs - among the hacker community.

People seemed more alarmed by those early-stage hackers than they are by today's cybercriminals: the fear of new technology was projected onto those who seemed to be its masters. The series of 1990 "Operation Sundevil" raids in the US, documented in Bruce Sterling's book The Hacker Crackdown, inspired the creation of the Electronic Frontier Foundation. Among other egregious confusions, law enforcement seized game manuals from Steve Jackson Games in Austin, Texas, calling them hacking instruction books.

The raids came alongside a controversial push to make hacking illegal around the world. It didn't help when police burst in at the crack of dawn to arrest bright teenagers and hold them and their families (including younger children) at gunpoint while their computers and notebooks were seized and their homes ransacked for evidence.

"I think that in the years to come this will be recognized as the time of a witch hunt approximately equivalent to McCarthyism - that some of our best and brightest were made to suffer this kind of persecution for the fact that they dared to be creative in a way that society didn't understand," 21-year-old convicted hacker Mark Abene ("Phiber Optik") told filmmaker Annaliza Savage for her 1994 documentary, Unauthorized Access (YouTube).

Phiber Optik was an early 1990s cause célèbre. A member of the hacker groups Legion of Doom and Masters of Deception, he had an exceptionally high media profile. In January 1990, he and other MoD members were raided on suspicion of having caused the AT&T crash of January 15, 1990, when more than half of the telephone network ceased functioning for nine hours. Abene and others were eventually charged in 1991, with law enforcement demanding $2.5 million in fines and 59 years in jail. Plea agreements reduced that to a year in prison and 600 hours of community service. The company eventually admitted the crash was due to its own flawed software upgrade.

There are many parallels between these early days of hacking and today's copyright wars. Entrenched large businesses (then AT&T; now the RIAA, MPAA, BPI, et al) perceive mostly young, smart Net users as dangerous enemies and pursue them with the full force of the law, claiming exaggeratedly large sums in damages. Isolated, often young, targets are threatened with jail and/or huge damages to make examples of them and deter others. The upshot in the 1990s was an entrenched distrust of and contempt for law enforcement on the part of the hacker community, exacerbated by the fact that back then so few law enforcement officers understood anything about the technology they were dealing with. The equivalent now may be a permanent contempt for copyright law.

In his 1990 essay Crime and Puzzlement examining the issues raised by hacking, EFF co-founder John Perry Barlow wrote of Phiber Optik, whom he met on the WELL: "His cracking impulses seemed purely exploratory, and I've begun to wonder if we wouldn't also regard spelunkers as desperate criminals if AT&T owned all the caves."

When McKinnon was first arrested in March 2002 and then indicted in a Virginia court in October 2002 for cracking into various US military computers - with damage estimated at $800,000 - all this history was still fresh. Meanwhile, the sympathy and good will toward the US engendered by the 9/11 attacks had been dissipated by the Bush administration's reaction: the PATRIOT Act (passed October 2001) expanded US government powers to detain and deport foreign citizens, and the first prisoners arrived at Guantanamo in January 2002. Since then, the US has begun fingerprinting all foreign visitors and has seen many erosions of civil liberties. The 2005 changes to British law that made hacking an extraditable offense were controversial for precisely these reasons.

As McKinnon's case has dragged on through extradition appeals this emotional background has not changed. McKinnon's diagnosis with Asperger's Syndrome in 2008 made him into a more fragile and sympathetic figure. Meanwhile, the really dangerous cybercriminals continue committing fraud, theft, and real damage, apparently safe from prosecution.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

May 21, 2010

Trial by innocence

I don't think I ever chose a side on the subject of whether Floyd Landis was guilty or innocent. His case raised some legitimate issues about the anti-doping industry (as it's becoming). Given the considerable evidence that doping is endemic in cycling, it's hard to believe any winner in that sport is drug-free, whether he's ever failed an anti-doping control or not. On the other hand, I really do believe in the presumption of innocence, and one must always allow for the possibility of technical, logistical, and personal errors. It would have been churlish to proclaim Landis's guilt before the tribunal hearing his case did. The blog Steroid Nation was always skeptical, but not condemning, of Landis's cries of innocence.

But I know how I'd feel if I'd believed in his innocence and contributed to the Floyd Fairness Fund that was set up to accept donations from fans to pay his legal fees: hella angry and betrayed. Of all the athletes who have protested their innocence down the years of anti-doping, Landis was the most vocal, the most insistent, and the most public. Landis even published a book, 2007's Positively False: The Real Story of How I Won the Tour de France, that loudly proclaimed his innocence ("My case should never have happened"), laying out much of the arguments and evidence (some of which he is accused of having obtained by hacking the lab's computer system) he made on the Floyd Fairness Web site. It seems all but certain he'll "write" another, this one telling the blockbuster story of how he fooled family, fans, drug testers, and media for all those years.

I'll make sure to buy it used, so I don't help him profit from his crime.

By "crime" I don't mean his doping - although under the law it is in fact a crime, and it's an example of our cultural double-think on this issue that athletes are not prosecuted for doping the way crack, heroin, or even marijuana users are in most countries. I mean effectively defrauding his fans out of their hard-earned money to help him defend against charges that he now admits were true. If that's not a con trick, what is?

I also know how I'd feel if I were a non-doping athlete wrongfully accused - and however few of these there may be on the planet, the law of truly large numbers says there must be some somewhere. I would be absolutely enraged. High-profile cases like this - see also Marion Jones, Mark McGwire - make it impossible for any athlete to be believed. And, as Agatha Christie wrote long ago in Ordeal by Innocence, "It's not the guilty who matter, it's the innocent." In her example, the innocent servant suffered the most when an expensive bit of jewelry was stolen from her employer's home. In sports, even if there are no false positives (which seems impossible), athletes suffer when they must regard all foods, supplements, and medical treatment with fear.

You may remember that late last year the tennis player Andre Agassi published Open, in which among other revelations (he wore a wig in the early 1990s, he hated tennis) he revealed that the Association of Tennis Professionals had accepted his utterly meretricious explanation of how he came to test positive for crystal meth and let him off any punishment. This humane behavior, although utterly against the rules and deplored by Agassi's competitors, most notably Marat Safin, arguably saved Agassi's career. Frightened out of his wits by his close brush with suspension and endorsement death, Agassi cleaned up his act, got to work, and over the next year or two raised his ranking from the depths of 140 to 1. Had the ATP followed the rules and suspended him, Agassi might now be in the record books as a huge but flaky talent that flamed out after three Slam wins and a gold medal. Instead, he's arguably the most versatile player in tennis history and a member of a tiny, elite handful of players who won everything of significance in the game on every surface at least once.

Crystal meth, of course, was not a performance-enhancing drug; it was a performance-destroying drug. Agassi's ranking plummeted under its influence, and it's arguable that they had no business testing for it. But Safin's key point was that having successfully lied to the ATP, Agassi should now reward the ATP's confidence by keeping his mouth shut.

I'm not entirely sure I agree with that in Agassi's case; at least he produced a rare example of an athlete taking drugs and losing because of them. Also, the ATP is no longer in charge of the tennis tour's doping controls and the people who dealt with Agassi's positive test in 1997 have likely moved on.

But most of these cases, including Landis's, just keep repeating the same old lesson, and it's not the one the anti-doping authorities would like: winners dope. Then they lie about it for fame and glory. If and when they're caught, they lie some more. And then, when people are beginning to forget about them, they 'fess up and justify themselves by accusing their rivals and beginning the cycle anew. Something is badly broken here. Bring on undetectable gene doping.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

April 2, 2010

Not bogus!


"If I lose £1 million it's worth it for libel law reform," the science writer Simon Singh was widely reported as saying this week. That was even before yesterday's ruling in the libel case brought against him by the British Chiropractic Association.

Going through litigation, I was told once, is like having cancer. It is a grim, grueling, rollercoaster process that takes over your life and may leave you permanently damaged. In the first gleeful WE-WON! moments following yesterday's ruling it's easy to forget that. It's also easy to forget that this is only one stage in a complex series.

Yesterday's judgment was the ruling in Singh's appeal (heard on February 22) against the ruling of Justice David Eady last May, which itself was only a preliminary ruling on the meaning of the passage in dispute, with the dispute itself to be resolved in a later trial. In October Singh won leave to appeal Eady's ruling; February's hearing and today's judgment constituted that appeal and its results. It is now two years since the original article appeared, and the real case is yet to be tried. Are we at the beginning of Jarndyce and Jarndyce or SCO versus Everyone?

The time and costs of all this are why we need libel law reform. English libel cases, as Singh frequently reminds us, cost 144 times as much as similar cases in the rest of the EU.

But the most likely scenario is that Singh will lose more than that million pounds. It's not just that he will have to pay the costs of both sides if he loses whatever the final round of this case eventually turns out to be (even if he wins the costs awarded will not cover all his expenses). We must also count what businesses call "opportunity costs".

A couple of weeks ago, Singh resigned from his Guardian column because the libel case is consuming all his time. And, he says, he should have started writing his next book a year ago but can't develop a proposal and make commitments to publishers because of the uncertainty. These withdrawals are not just his loss; we all lose by not getting to read what he'd write next. At a time when politicians can be confused enough to worry that an island can tip over and capsize, we need our best popular science educators to be working. Today's adults can wait, perhaps; but I did some of my best science reading as a teenager: The Microbe Hunters; The Double Helix (despite its treatment of Rosalind Franklin); Isaac Asimov's The Human Body: Its Structure and Operation; and the pre-House true medical detection stories of Berton Roueché. If Singh v BCA takes five years that's an entire generation of teenagers.

Still, yesterday's ruling, in which three of the most powerful judicial figures in the land agreed - eloquently! - with what we all thought from the beginning, deserves to be celebrated, not least for its respect for scientific evidence.

Some favorite quotes from the judgment, which makes fine reading:

Accordingly this litigation has almost certainly had a chilling effect on public debate which might otherwise have assisted potential patients to make informed choices about the possible use of chiropractic.

A similar situation, of course, applies to two other recent cases that pitted libel law against the public interest in scientific criticism. First, Swedish academic Francisco Lacerda, who criticized the voice risk analysis principles embedded in lie detector systems (including one bought by the Department of Work and Pensions at a cost of £2.4 million). Second, British cardiologist Peter Wilmshurst is defending charges of libel and slander over comments he made regarding a clinical trial in which he served as a principal investigator. In all three cases, the public interest is suffering. Ensuring that there is a public interest defense is accordingly a key element of the libel law reform campaign's platform.

The opinion may be mistaken, but to allow the party which has been denounced on the basis of it to compel its author to prove in court what he has asserted by way of argument is to invite the court to become an Orwellian ministry of truth.

This was in fact the gist of Eady's ruling: he categorized Singh's words as fact rather than comment and would have required Singh to defend a meaning his article went on to say explicitly was not what he was saying. We must leave it for someone more English than I am to say whether that is a judicial rebuke.

We would respectfully adopt what Judge Easterbrook, now Chief Judge of the US Seventh Circuit Court of Appeals, said in a libel action over a scientific controversy, Underwager v Salter: "[Plaintiffs] cannot, by simply filing suit and crying 'character assassination!', silence those who hold divergent views, no matter how adverse those views may be to plaintiffs' interests. Scientific controversies must be settled by the methods of science rather than by the methods of litigation."

What they said.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

February 26, 2010

The community delusion

The court clerk - if that's the right term - seemed slightly baffled by the number of people who showed up for Tuesday's hearing in Simon Singh v. British Chiropractic Association. There was much rearrangement, as the principals asked permission to move forward a row to make an extra row of public seating and then someone magically produced eight or ten folding chairs to line up along the side. Standing was not allowed. (I'm not sure why, but I guess something to do with keeping order and control.)

It was impossible to listen to the arguments without feeling a part of history. Someday - ten, 50, 150 years from now - a different group of litigants will be sitting in that same court room or one very like it in the same building and will cite "our" case, just as counsel cited precedents such as Reynolds and Branson v Bower. If Singh's books don't survive, his legal case will, as may the effects of the campaign to reform libel law (sign the petition!) it has inspired and the Culture, Media, and Sport report (Scribd) that was published on Wednesday. And the sheer stature of the three judges listening to the appeal - Lord Chief Justice Lord Judge (to Americans: I am not making this up!), Master of the Rolls Lord Neuberger, and Lord Justice Sedley - ensures it will be taken seriously.

There are plenty of write-ups of what happened in court and better-informed analyses than I can muster to explain what it means. The gist, however: it's too soon to tell which pieces of law will be the crucial bits on which the judges make their decision. They certainly seemed to me to be sympathetic to the arguments Singh's counsel, Adrienne Page QC, made and much less so to those of the BCA's counsel, Heather Rogers QC. But the case will not be decided on the basis of sympathy; it will be decided on the basis of legal analysis. "You can't read judges," David Allen Green (aka jackofkent) said to me over lunch. So we wait.
But the interesting thing about the case is that this may be the first important British legal case to be socially networked: here is a libel case featuring no pop stars or movie idols, and yet they had to turn some 20 or 30 people away from the courtroom. Do judges read Twitter?

Beginning with Howard Rheingold's 1993 book The Virtual Community, it was clear that the Net's defining characteristic as a medium is its enablement of many-to-many communication. Television, publishing, and radio are all one-to-many (if you can consider a broadcaster/publisher a single gatekeeper voice). Telephones and letters are one-to-one, by and large. By 1997, business minds, most notably John Hagel III and Arthur Armstrong in net.gain, had begun saying that the networked future of businesses would require them to build communities around themselves. I doubt that Singh thinks of his libel case in that light, but today's social networks (which are a reworking of earlier systems such as Usenet and online conferencing systems) are enabling him to do just that. The leverage he's gained from that support is what is really behind both the challenge to English libel law and the increasing demand for chiropractors generally to provide better evidence or shut up.

Given the value everyone else, from businesses to cause organizations to individual writers and artists, places on building an energetic, dedicated, and active fan base, it's surprising to see Richard Dawkins, whose supporters have apparently spent thousands of unpaid hours curating his forums for him, toss away what by all accounts was an extraordinarily successful community supporting his ideas and his work. The more so because apparently Dawkins has managed to attract that community without ever noticing what it meant to the participants. He also apparently has failed to notice that some people on the Net, some of the time, are just the teeniest bit rude and abusive to each other. He must lead a very sheltered life, and, of course, never have moderated his own forums.

What anyone who builds, attracts, or aspires to such a community has to understand from the outset is that if you are successful your users will believe they own it. In some cases, they will be right. It sounds - without having spent a lot of time poring over Dawkins' forums myself - as though in this case in fact the users, or at least the moderators, had every right to feel they owned the place because they did all the (unpaid) work. This situation is as old as the Net - in the days of per-minute connection charges CompuServe's most successful (and economically rewarding to their owners) forums were built on the backs of volunteers who traded their time for free access. And it's always tough when users rediscover the fact that in each individual virtual community, unlike real-world ones, there is always a god who can pull the plug without notice.

Fortunately for the causes of libel law reform and requiring better evidence, Singh's support base is not a single community; instead, it's a group of communities who share the same goals. And, thankfully, those goals are bigger than all of us.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. I would love to hear (net.wars@skeptic.demon.co.uk) from someone who could help me figure out why this blog vapes all non-spam comments without posting them.

February 12, 2010

Light year

This year is going to be the first British general election in which blogging is going to be a factor, someone said on Monday night at the event organized by the Westminster Skeptics on the subject of political blogging: does it make any difference? I had to stop and think: really? Things like the Daily Kos have been part of the American political scene for so long now - Kos was founded in 2002 - that they've been through two national elections already.

But there it was: "2005 was my big break," said Paul Staines, who blogs as Guido Fawkes. "I was the only one covering it. 2010 is going to be much tougher." To stand out, he went on to say, you're going to need a good story. That's what they used to tell journalists.

Due to the wonders of the Net, you can experience the debate for yourself. The other participants were Sunny Hundal (Liberal Conspiracy), Mick Fealty (Slugger O'Toole), Jonathan Isaby (Conservative Home), and the Observer journalist Nick Cohen, there to act as the token nay-sayer. (I won't use skeptic, because although the popular press like to see a "skeptic" as someone who's just there to throw brickbats, I use the term rather differently: skepticism is inquiry and skeptics ask questions and examine evidence.)

All four political bloggers have a precise idea of what they're trying to do and who they're writing for. Jonathan Isaby, who claims he's the first British journalist to leave a full-time newspaper job (at the Telegraph) for new media, said he's read almost universally among Conservative candidates. Paul Staines aims Guido Fawkes at "the Westminster bubble". Mick Fealty uses Slugger O'Toole to address a "differentiated audience" that is too small for TV, radio, and newspapers. Finally, Sunny Hundal uses Liberal Conspiracy to try to "get the left wing to become a more coherent force".

Despite the bloggers' various successes, Cohen's basic platform was a defense of newspapers. Blogging, he said, is not replacing the essential core of journalism: investigation and reporting. He's right up to a point. But some bloggers do exactly that. Westminster Skeptics convenor David Allen Green, then standing approximately eight inches away, is one example. But it's probably true that for every blogger with sufficient curiosity and commitment to pick up a phone or bang on someone's door there are a couple of hundred more who write blog postings by draping a couple of hundred words of opinion around a link to a story that appeared in the mainstream media.

Of course, as Cohen didn't say, plenty of journalists, through lack of funding, lack of time, or lack of training, find themselves writing news stories by draping a couple of hundred words of rewritten press release around the PR-provided quotes - and soul-destroying work it is, too. My answer to Cohen, therefore, is to say that commercial publishers have contributed to their own problems, and that one reason blogs have become such an entrenched medium is that they cover things that no newspaper will allow you to write about in any detail. And it's hard to argue with Cohen's claim that almost any blogger finding a really big story will do the sensible thing and sell it to a newspaper.

If you can. Arguably the biggest political story of 2009 was MPs' expenses. That material was released because of the relentless efforts of Heather Brooke, who took up the 2005 entry into force of the UK's Freedom of Information Act as a golden opportunity. It took her nearly five years to force the disclosure of MPs' expenses - and when she finally succeeded the Telegraph wrote its own stories after poring over the details that were disclosed.

The fact is that political blogging has been with us for far longer than one five-year general election cycle. It's just that most of it does not take the same form as the "inside politics" blogs of the US or the traditional Parliamentary sketches in the British newspapers. The push for libel reform began with Jack of Kent (David Allen Green); the push to get the public more engaged with their MPs began with MySociety's Fax Your MP. It was clear as long ago as 2006 that MPs were expert users of They Work For You: it's how they keep tabs on each other. MySociety's sites are not blogs - but they are the source material without which political blogging would be much harder work.

I don't find it encouraging to hear Isaby predict that in the upcoming election (expected in May) blogging "will keep candidates on their toes" because "gaffes will be more quickly reported". Isn't this the problem with US elections? That everyone gets hung up on calumnies such as that Al Gore claimed to have invented the Internet. Serious issues fall by the wayside, and good candidates can be severely damaged by biased reporting that happens to feed an eminently quotable sarcastic joke. Still: anything for a little light into the smoke-filled back rooms where British politics is still made. Even with smoking now banned, it's murky back there.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

February 5, 2010

Getting run down on the infobahn

It's not going out on much of a limb to predict that 2010 is, finally, the year of the ebook. A lot of electrons are going to be spilled trying to predict the winners on this frontier; the most likely, I think, are Apple (iPhone, iPad), Amazon (Kindle), Google (Books), and Ray Kurzweil (Blio). Note something about all those guys? Yes: none of them are publishers. Just like the music industry, publishers have left it to technology companies to invent their new medium for them.

Note something else about what those guys are not? Authors. Almost everything that's created in this world - books, newspapers, magazines, movies, games, advertising, music, even some industrially designed products - eventually goes back to one person sitting in a room with a blank sheet of paper trying to think up a compelling story.

Authors - and writers generally - used to have a hard but simple job: deliver a steady stream of publishable work, and remuneration will probably happen. Publishers sold books; authors just wrote them. One of my friends, a science fiction writer contractually bound to HarperCollins, used to refer to Rupert Murdoch as "the little man who publishes my books for me". That happy division of labor did not, of course, provide all, or even most, writers with a full-time living. But the most important thing authors want is for their work to be noticed; publishers could make that happen.

Things have been changing for some time. It's fifteen years since authors of my acquaintance began talking about the need to hire your own publicist, because unless you had a very large (six figures and up) advance most mainstream publishers would not consider your book worth the money and effort to market much beyond sending out a press release. Even copy-editing is falling by the wayside, as a manuscript submitted electronically can now feed straight into a typesetting system without the human intervention that gave pause for second thoughts.

"Everyone's been seeing their royalty statements shrink," a friend observed gloomily last week. He made, 20 years ago, what then seemed an intelligent career decision: to focus on writing reference books because they had a consistent market among people who really needed them, and they would have a continuing market in regular updates. And that worked great until along came Wikipedia and online dictionaries and translation engines and government agency Web sites and blogs and picture galleries, and now, he says, "People don't buy reference books any more." I am no exception: all the reference books on the shelves behind my desk are at least 15 years old. About 10 percent are books I'd buy today if I didn't already have them.

So this is also the year in which the more far-seeing authors get to figure out what their future business models are going to be. An author with a business plan? Who ever heard of such a thing? The nearest thing to that in my acquaintance is the science fiction writer Charles Stross; he is smarter about the economic and legal workings of publishing than anyone I've ever met or heard speak at a conference. And even he is asking for suggestions.

First of all, there's the Google Books settlement, which is so complicated that I imagine hardly any of the authors whose works the settlement is a settlement of can stand to read the whole thing. The legal scholar and MacArthur award winner Pamela Samuelson has written a fine explanation of the problems; authors had until January 28 to opt out or object. This isn't over yet: the US Justice Department still doesn't like the terms.

We can also expect more demarcation disputes like this week's spat between Amazon and Macmillan, discussed intelligently by Stross here, here, and here, with an analysis of the scary economics of the Kindle here. The short version: Macmillan wants Amazon to pay more for the Kindle versions of its books, and Amazon threw Macmillan's books out of its .com pram. Caught in the middle are a bunch of very pissed-off authors, who are exercising their rights in the only way they can: by removing links to Amazon and substituting links to the competition: Barnes and Noble and independent booksellers including the wonderful Portland, Oregon stalwart, Powell's.

To be fair, removing the "buy new" button from all of the Macmillan listings on Amazon.com (Amazon.co.uk seems to be unaffected) doesn't mean you can't buy the books. In general, you simply click on a different link and buy the book from a marketplace seller rather than Amazon itself. Amazon doesn't care: according to its SEC filings, the company makes roughly the same profit whoever sells the book via its site.

It's times like these when you want to remember the Nobel Laureate author Doris Lessing's advice to all writers: "And it does no harm to repeat, as often as you can, 'Without me, the literary industry would not exist: the publishers, the agents, the sub-agents, the sub-sub agents, the accountants, the libel lawyers, the departments of literature, the professors, the theses, the books of criticism, the reviewers, the book pages - all this vast and proliferating edifice is because of this small, patronized, put-down, and underpaid person.'"

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of the earlier columns in this series.

January 22, 2010

Music night

Most corporate annual reports seek to paint a glowing picture of the business's doings for the previous year. By law they have to disclose anything really unfortunate - financial losses, management malfeasance, a change in the regulatory landscape. The International Federation of the Phonographic Industry was caught in a bind writing its Digital Music Report 2010 (PDF) (or see the press release). Paint too glowing a picture of the music business, and politicians might conclude no further legislation is needed to bolster the sector. Paint too gloomy a picture, and ministers might conclude theirs is a lost cause, and better to let dying business models die.

So IFPI's annual report veers between complaining about "competing in a rigged market" (by which they mean a market in which file-sharing exists) and stressing the popularity of music and the burgeoning success of legally sanctioned services. Yay, Spotify! Yay, Sky Songs! Yay, iTunes! You would have to be the most curmudgeonly of commentators to point out that none of these are services begun by music companies; they are services begun by others that music companies have been grudgingly persuaded to make deals with. (I say grudgingly; naturally, I was not present at contract negotiations. Perhaps the music companies were hopping up and down like Easter bunnies in their eagerness to have their product included. If they were, I'd argue that the existence of free file-sharing drove them to it. Without file-sharing there would very likely be no paid subscription services now; the music industry would still be selling everyone CDs and insisting that this was the consumer's choice.)

The basic numbers showed that song downloads increased by 10 percent - but total revenue including CDs fell by 12 percent in the first half of 2009. The top song download: Lady Gaga's "Poker Face".

All this is fair enough - an industry's gotta eat! - and it's just possible to read it without becoming unreasonable. And then you hit this gem:

Illegal file-sharing has also had a very significant, and sometimes disastrous, impact on investment in artists and local repertoire. With their revenues eroded by piracy, music companies have far less to plough back into local artist development. Much has been made of the idea that growing live music revenues can compensate for the fall-off in recorded music sales, but this is, in reality, a myth. Live performance earnings are generally more to the benefit of veteran, established acts, while it is the younger developing acts, without lucrative careers, who do not have the chance to develop their reputation through recorded music sales.
So: digital music is ramping up (mostly through the efforts of non-music industry companies and investors). Investment in local acts and new musicians is down. And overall sales are down. And we're blaming file-sharing? How about blaming at least the last year or so of declining revenues on the recession? How about blaming bean counters at record companies who see a higher profit margin in selling yet more copies of back catalogue tried-and-tested, pure-profit standards like Frank Sinatra and Elvis Presley than in taking risks on new music? At some point, won't everyone have all the copies of the Beatles albums they can possibly use? Er, excuse me, "consume". (The report has a disturbing tendency to talk about "consuming" music; I don't think people have the same relationship with music that they do with food. I'd also question IFPI's whine about live music revenues: all young artists start by playing live gigs, that's how they learn; *radio play* gets audiences in; live gigs *and radio play* sell albums, which help sell live gigs in a virtuous circle, but that's a topic for another day.)

It is a truth rarely acknowledged that all new artists - and all old artists producing new work - are competing with the accumulated back catalogue of the past decades and centuries.

IFPI of course also warns that TV, book publishing, and all other media are about to suffer the same fate as music. The not-so-subtle underlying message: this is why we must implement ferocious anti-file-sharing measures in the Digital Economy Bill, amendments to which, I'm sure coincidentally, were discussed in committee this week, with more to come next Tuesday, January 26.

But this isn't true, or not exactly. As a Dutch report on file-sharing (original in Dutch) pointed out last year, file-sharing, which it noted goes hand-in-hand with buying, does not have the same impact on all sectors. People listen to music over and over again; they watch TV shows fewer but still multiple times; if they don't reread books they do at least often refer back to them; they see most movies only once. If you want to say that file-sharing displaces sales, which is debatable, then clearly music is the least under threat. If you want to say that file-sharing displaces traditional radio listening, well, I'm with you there. But IFPI does not make that argument.

Still, some progress has been made. Look what IFPI says here, on page 4 in the executive summary right up front: "Recent innovations in the à-la-carte sector include...the rollout of DRM-free downloads internationally." Wha-hey! That's what we told them people wanted five years ago. Maybe five years from now they'll be writing how file-sharing helps promote artists who, otherwise, would never find an audience because no one would ever hear their work.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

January 15, 2010

The once and future late-night king

On the face of it, the unexpected renewal of the late-night TV wars is a pretty trivial matter. As The Tonight Show with Conan O'Brien itself points out, there is a lot of real news that's a lot more important - health care, Haiti, Google versus China, network neutrality, and discussions of the Digital Economy bill (my list, not theirs). O'Brien wrote in an open letter a couple of days ago that he has been "absurdly lucky". Even so.

But Conan-versus-Leno is personalization; at heart this story is about the future of broadcasting and its money. Given today's time-shifting choices, few things lure viewers to a particular TV channel at a precise time. Two are live sports and breaking news. A third is the run of talk-variety shows that start in most parts of the US at 11:35pm (10:35 Central) and run until around 2am.

The kingpin of all of these is The Tonight Show, broadcast on NBC every night following the 11 o'clock news for nearly 60 years. For 30 of those years it was presented by a single host, Johnny Carson, probably the biggest star television has ever had - and quite possibly the biggest television ever will have. They make talent like Carson's very infrequently; they don't make broadcasting like that any more. According to Bill Carter in his book The Late Shift: Letterman, Leno, and the Network Battle for the Night, in many years Carson's apparently effortless comedy and guest interviews generated 15 to 20 percent of the network's profits.

Every one of today's late-night hosts grew up watching Carson, and probably all of them dreamed of one day having his job. Carson's job, on The Tonight Show on NBC, not a similar job on a similar show at the same time on another network.

The roots of today's mess go back to 1991, when Carson announced he would retire in May 1992. At the time, David Letterman was hosting NBC's 12:30 show, while Jay Leno was Carson's regular substitute host. In a move that seemed to surprise everyone, NBC appointed Leno Carson's successor, fatally assuming that Letterman wouldn't mind. He did mind. The net result was months of uncertainty, politics, and legal wrangling, not least because Leno's early months in the job were unpromising. By 1993, Letterman had begun a competing show at CBS and every other network had tried putting up an 11:30 talk-variety show, most of them dreadful and quickly canned. Since then, Leno has usually won the ratings - but Letterman the awards. Arguably the biggest beneficiary was O'Brien, who landed Letterman's old 12:30 job with barely any performing experience. After following Leno for 16 years, late last year, as per an agreement announced in 2005 and intended to avoid a repeat of 1992, O'Brien got The Tonight Show.

Now, NBC is doing to O'Brien almost exactly what it did to Letterman, apparently filled with panic over declining revenues and shrinking ratings and completely self-destructing (just as Comcast is trying to buy it from GE). As Kansas City critic Aaron Barnhart writes, late-night is about the long haul. In restoring Leno, NBC is hanging onto its past and at best a couple of years of present at the expense of its future. All hosts - almost all entertainers - eventually find their audience is aging along with them. Even Carson seemed old-fashioned to younger viewers by the time he retired at 66: my parents watched Carson; I watch Letterman and Conan; my 20-something friends watch Conan and Jon Stewart.

In his letter, O'Brien says keeping The Tonight Show at 11:35 is vital. He is almost certainly right: people go to bed, watch the news and the opening monologue, and progressively drift off to sleep during the guests. By midnight, half of the Tonight Show's viewers are gone; the latest shows are seen by insomniacs and people without kids and early-morning commutes.

Most likely NBC will shortly find out there is no way back to Leno's ratings of 2008. Diehard Leno fans will stick with him but Conan fans will tune out in protest; if they watch anyone it will be Letterman or Stewart. The younger people the network needs for the future watch online.

You may think none of this matters very much outside the US. The shows themselves have never traveled very well, though the format has been widely copied throughout the world. But of all the businesses having to cope with the digital revolution, in television it may be the broadcast networks who are most under threat. Those who copy and share TV shows buy DVDs; they do not return to watch the broadcast versions or consume advertising. Shows have fans; networks don't. The focus on file-sharing ignores the wide variety of streams copied live from broadcasters all over the world that are readily accessible if you know where to look. It is far cheaper to subscribe directly to the tennis tours than to pay Sky Sports or Eurosport, for example - and often free to pick up a stream.

When the history of the digital revolution is written, historians may pinpoint the day Carson announced his retirement as the broadcasting equivalent of Peak Oil.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

November 13, 2009

Cookie cutters

Sometimes laws sneak up on you while you're looking the other way. One of the best examples was the American Telecommunications Act of 1996: we were so busy obsessing about the freedom of speech-suppressing Communications Decency Act amendment that we failed to pay attention to the implications of the bill itself, which allowed the regional Baby Bells to enter the long distance market and changed a number of other rules regarding competition.

We now have a shiny, new example: we have spent so much time and electrons over the nasty three-strikes-and-you're-offline provisions that we, along with almost everyone else, utterly failed to notice that the package contains a cookie-killing provision last seen menacing online advertisers in 2001 (our very second net.wars).

The gist: Web sites cannot place cookies on users' computers unless said users have agreed to receive them - except when the cookies are strictly necessary, as, for example, when you select something to buy and then head for the shopping cart to check out.
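To make the distinction concrete, here is a minimal sketch of the rule as described above: a cookie is exempt only when it is strictly necessary for something the user has asked for (a shopping-cart session, say); everything else needs prior opt-in. The cookie names and categories are my own illustrative assumptions, not anything taken from the directive's text.

```python
# Hypothetical sketch of the consent rule: "strictly necessary" cookies
# (e.g. a shopping-cart session) are exempt; all others need prior opt-in.
# The names below are illustrative examples, not an official list.

STRICTLY_NECESSARY = {"session_id", "cart", "csrf_token"}  # assumed examples

def needs_consent(cookie_name: str) -> bool:
    """Under the rule sketched above, must the site ask before setting this?"""
    return cookie_name not in STRICTLY_NECESSARY

print(needs_consent("cart"))        # shopping-cart cookie: exempt -> False
print(needs_consent("ad_tracker"))  # advertising cookie: needs opt-in -> True
```

The trouble, of course, is that everything after that one exemption falls on the "ask first" side of the line, which is where the pop-up nightmare below comes from.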

As the Out-Law blog points out, this proposal - now to become law unless the whole package is thrown out - is absurd. We said it was in 2001 - and made the stupid assumption that because nothing more had been heard about it the idea had been nixed by an outbreak of sanity at the EU level.

Apparently not. Apparently MEPs and others at EU level spend no more time on the Web than they did eight years ago. Apparently none of them have any idea what such a proposal would mean. Well, I've turned off cookies in my browser, and I know: without cookies, browsing the Web is as non-functional as a psychic being tested by James Randi.

But it's worse than that. Imagine browsing with every site asking you to opt in every - pop-up - time - pop-up - it - pop-up - wants - pop-up - to - pop-up - send - pop-up - you - a - cookie - pop-up. Now imagine the same thing, only you're blind and using the screen reader JAWS.

This soon-to-be-law is not just absurd, it's evil.

Here are some of the likely consequences.

As already noted, it will make Web use nearly impossible for the blind and visually impaired.

It will, because such is the human response to barriers, direct ever more traffic toward those sites - aggregators, ecommerce, Web bulletin boards, and social networks - that, like Facebook, can write a single privacy policy for the entire service, one that includes consent to accepting cookies and to which users consent when they join (and again at scattered intervals when the policy changes).

According to Out-Law, the law will trap everyone who uses Google Analytics, visitor counters, and the like. I assume it will also kill AdSense at a stroke: how many small DIY Web site owners would have any idea how to implement an opt-in form? Both econsultancy.com and BigMouthMedia think affiliate networks generally will bear the brunt of this legislation. BigMouthMedia goes on to note a couple of efforts - HTTP ETags and Flash cookies - intended to give affiliate networks more reliable tracking that may also fall afoul of the legislation. These, as those sources note, are difficult or impossible for users to delete.

It will presumably also disproportionately catch EU businesses compared to non-EU sites. Most users probably won't understand why particular sites are so annoying; they will simply shift to sites that aren't annoying. The net effect will be to divert Web browsing to sites outside the EU - surely the exact opposite of what MEPs would like to see happen.

And, I suppose, inevitably, someone will write plug-ins for the popular browsers that can be set to respond automatically to cookie opt-in requests and that include provisions for users to include or exclude specific sites. Whether that will offer sites a safe harbour remains to be seen.

The people it will hurt most, of course, are the sites - like newspapers and other publications - that depend on online advertising to stay afloat. It's hard to understand how the publishers missed it; but one presumes they, too, were distracted by the need to defend music and video from evil pirates.

The sad thing is that the goal behind this masterfully stupid piece of legislation is a reasonably noble one: to protect Internet users from monitoring and behavioural targeting to which they have not consented. But regulating cookies is precisely the wrong way to go about achieving this goal, not just because it disables Web browsing but because technology is continuing to evolve. The EU would do better to regulate by specifying allowable actions and consequences rather than specifying technology. Cookies are not in and of themselves inherently evil; what matters is how they're used.

Eight years ago, when the cookie proposals first surfaced, they, logically enough, formed part of a consumer privacy bill. That they're now part of the telecoms package suggests they've been banging around inside Parliament looking for something to attach themselves to ever since.

I probably exaggerate slightly, since Out-Law also notes that in fact the EU did pass a law regarding cookies that required sites to offer visitors a way to opt out. This law is little-known, largely ignored, and unenforced. At this point the Net's best hope looks to be that the new version is treated the same way.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

October 23, 2009

The power of Twitter

It was the best of mobs, it was the worst of mobs.

The last couple of weeks have really seen the British side of Twitter flex its 140-character muscles. First, there was the next chapter of the British Chiropractic Association's ongoing legal action against science writer Simon Singh. Then there was the case of Jan Moir, who wrote a more than ordinarily Daily Mailish piece for the Daily Mail about the death of Boyzone's Stephen Gately. And finally, the shocking court injunction that briefly prevented the Guardian from reporting on a Parliamentary question for the first time in British history.

I am on record as supporting Singh, and I, too, cheered when, ten days ago, Singh was granted leave to appeal Justice Eady's ruling on the meaning of Singh's use of the word "bogus". Like everyone, I was agog when the BCA's press release called Singh "malicious". I can see the point in filing complaints with the Advertising Standards Authority over chiropractors' persistent claims, unsupported by the evidence, to be able to treat childhood illnesses like colic and ear infections.

What seemed to edge closer to a witch hunt was the gleeful take-up of George Monbiot's piece attacking the "hanging judge", Justice Eady. Disagree with Eady's ruling all you want, but it isn't hard to find libel lawyers who think his ruling was correct under the law. If you don't like his ruling, your correct target is the law. Attacking the judge won't help Singh.

The same is not true of Twitter's take-up of the available clues in the Guardian's original story about the gag to identify the Parliamentary Question concerned and unmask Carter-Ruck, the lawyers who served it, and their client, Trafigura. Fueled by righteous and legitimate anger at the abrogation of a thousand years of democracy, Twitterers had the PQ found and published thousands of times within practically seconds. Yeah!

Of course, this phenomenon (as I'm so fond of saying) is not new. Every online social medium, going all the way back to early text-based conferencing systems like CIX, the WELL, and, of course, Usenet, when it was the Internet's town square (the function in fact that Twitter now occupies) has been able to mount this kind of challenge. Scientology versus the Net was probably the best and earliest example; for me it was the original net.war. The story was at heart pretty simple (and the skirmishes continue, in various translations into newer media, to this day). Scientology has a bunch of super-secrets that only the initiate, who have spent many hours in expensive Scientology training, are allowed to see. Scientology's attempts to keep those secrets off the Net resulted in their being published everywhere. The dust has never completely settled.

Three people can keep a secret if two of them are dead, said Benjamin Franklin. That was before the Internet. Scientology was the first to learn - nearly 15 years ago - that the best way to ensure the maximum publicity for something is to try to suppress it. It should not have been any surprise to the BCA, Trafigura, or Trafigura's lawyers. Had the BCA ignored Singh's article, far fewer people would know now about science's dim view of chiropractic. Trafigura might have hoped that a written PQ would get lost in the vastness that is Hansard; but they probably wouldn't have succeeded in any case.

The Jan Moir case and the demonstration outside Carter-Ruck's offices are, however, rather different. These are simply not the right targets. As David Allen Green (Jack of Kent) explains, there's no point in blaming the lawyers; show your anger to the client (Trafigura) or to Parliament.

The enraged tweets and Facebook postings about Moir's article helped send a record number of over 25,000 complaints to the Press Complaints Commission, whose Web site melted down under the strain. Yes, the piece was badly reasoned and loathsome, but isn't that what the Daily Mail lives for? Tweets and links create hits and discussion. The paper can only benefit. In fact, it's reasonable to suppose that in the Trafigura and Moir cases both the Guardian and the Daily Mail manipulated the Net perfectly to get what they wanted.

But the stupid part about let's-get-Moir is that she does not *matter*. Leave aside emotional reactions, and what you're left with is someone's opinion, however distasteful.

This concerted force would be more usefully turned to opposing the truly dangerous. See for example, the AIDS denialism on parade by Fraser Nelson at The Spectator. The "come-get-us" tone suggests that they saw the attention New Humanist got for Caspar Melville's mistaken - and quickly corrected - endorsement of the film House of Numbers and said, "Let's get us some of that." There is no more scientific dispute about whether HIV causes AIDS than there is about climate change or evolutionary theory.

If we're going to behave like a mob, let's stick to targets that matter. Jan Moir's column isn't going to kill anybody. AIDS denialism will. So: we'll call Trafigura a win, chiropractic a half-win, and Moir a loser.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

October 16, 2009

Unsocial media

"No one under 30 will use email," the convenor objected.

There was a bunch of us, a pre-planning committee for an event, and we were talking about which technology we should have the soon-to-be appointed program committee use for discussions. Email! Convenient. Accessible by computer or phone. Easily archived, forwarded, quoted, or copied into any other online medium. Why are we even talking about this?

And that's when he said it.

Not so long ago, if you had email you were one of the cool kids, the avant-garde who saw the future and said it was electronic. Most of us spent years convincing our far-flung friends and relatives to get email so we didn't have to phone or - gasp - write a letter that required an envelope and a stamp. Being told that "email is for old people" is a lot like a 1960s "Never trust anyone over 30" hippie finding out that the psychedelic school bus he bought to live in to support the original 1970 Earth Day is a gas-guzzling danger to the climate and ought to be scrapped.

Well, what, then? (Aside: we used to have tons of magazines called things like Which PC? and What Micro? to help people navigate the complex maze of computer choices. Why is there no magazine called Which Social Medium??)

Facebook? Clunky interface. Not everyone wants to join. Poor threading. No easy way to export, search, or archive discussions. IRC or other live chat? No way to read discussion that took place before you joined the chat. Private blog with comments and RSS? Someone has to set the agenda. Twitter? Everything is public, and if you're not following all the right people the conversation is disjointed and missing links you can't retrieve. IM? Skype? Or a wiki? You get the picture.

This week, the Wall Street Journal claimed that "the reign of email is over" while saying only a couple of sentences later, "We all still use email, of course." Now that the Journal belongs to Rupert Murdoch, does no one check articles for sense?

Yes, we all still use email. It can be archived, searched, stored locally, read on any device, accessed from any location, replied to offline if necessary, and read and written thoughtfully. Reading that email is dead is like reading, in 2000, that because a bunch of companies went bust the Internet "fad" was over. No one then who had anything to do with the Internet believed that in ten years the Internet would be anything but vastly bigger than it was then. So: no one with any sense is going to believe that ten years from now we'll be sending and receiving less email than we are now. What very likely will be smaller, especially if industrial action continues, is the incumbent postal services.

What "No one under 30 uses email" really means is that it's not their medium of first choice. If you're including college students, the reason is obvious: email is the official stuff they get from their parents and universities. Facebook, MySpace, Twitter, and texting are how they talk to their friends. Come the day they join the workforce, they'll be using email every day just like the rest of us - and checking the post and their voicemail every morning, too.

But that still leaves the question: how do you organize anything if no one can agree on what communications technology to use? It's that question that the new Google Wave is trying to answer. It's too soon, really, to tell whether it can succeed. But at a guess, it lacks one of the fundamental things that makes email such a lowest common denominator: offline storage. Yes, I know everything is supposed to be in "the cloud" and even airplanes have wifi. But for anything that's business-critical you want your own archive where you can access it when the network fails; it's the same principle as backing up your data.

Reviews vary in their take on Wave. LifeHacker sees it as a collaborative tool. ZDNet UK editor Rupert Goodwins briefly called it Usenet 2.0 and then retracted and explained using the phrase "unified comms".

That, really, is the key. Ideally, I shouldn't have to care whether you - or my fellow committee members - prefer to read email, participate in phone calls (via speech-to-text, text-to-speech synthesizers), discuss via Usenet, Skype, IRC, IM, Twitter, Web forums, blogs, or Facebook pages. Ideally, the medium you choose should be automatically translated into the medium I choose. A Babel medium. The odds that this will happen in an age when what companies most want is to glue you to their sites permanently so they can serve you advertising are very small.

Which brings us back to email. Invented in an era when the Internet was commercial-free. Built on open standards, so that anyone can send and receive it using any reader they like. Used, in fact, to alert users to updates they want to know about to their accounts on Facebook/IRC/Skype/Twitter/Web forums. Yes, it's overrun with corporate CYA memos and spam. But it's still the medium of record - and it isn't going anywhere. Whereas: those 20-somethings will turn 30 one day soon.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk (but please turn off HTML).

October 9, 2009

Phantom tollbooths

This was supposed to be the week that the future of Google Books became clear or at least started to; instead, the court ordered everyone to go away and come up with a new settlement (registration required). The revised settlement is due by November 9; the judge will hear objections probably around the turn of the year.

Instead this turned into the Week of the Postcode, after the Royal Mail issued cease-and-desist letters to the postcode API service Ernest Marples (built by Richard Pope and Open Rights Group advisory council member Harry Metcalfe). Marples' sin: giving away postcode data without a license (PDF).

At heart, the Postcode spat and the Google Books suit are the same issue: information that used to be expensive can now be made available on the Internet for free, and people who make money from the data object.

We all expect books to be copyrighted; but postcodes? When I wrote about it, astonished, in 1993 for Personal Computer World, the spokesperson explained that as an invention of the Royal Mail of course they were the Royal Mail's property (they've now just turned 50). There are two licensed services, the Postcode Address File (automates filling in addresses) and PostZon, the geolocator database useful for Web mashups. The Royal Mail says it's currently reviewing its terms and licensing conditions for PostZon; based on the recent similar exercise for PAF (PDF) we'll guess that the biggest objections to giving it away will come from people who are already paying for it and want to lock out competitors.
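The reason PostZon matters so much to Web mashups is that it reduces a postcode to a pair of coordinates that everything else - maps, job searches, property listings - can hang off. A minimal sketch of that kind of lookup, with made-up sample entries standing in for the licensed database (the real data comes from the Royal Mail and none of these coordinates should be relied on):

```python
# Illustrative postcode-to-coordinates lookup of the kind PostZon enables.
# The sample entries are invented for this sketch; the real geolocation
# data is licensed from the Royal Mail.

POSTZON_SAMPLE = {
    "SW1A 1AA": (51.501, -0.142),  # hypothetical latitude/longitude
    "EC1A 1BB": (51.520, -0.098),
}

def locate(postcode: str):
    """Normalise the postcode and return (latitude, longitude), or None."""
    return POSTZON_SAMPLE.get(postcode.strip().upper())

print(locate("sw1a 1aa"))  # normalised lookup succeeds
print(locate("ZZ9 9ZZ"))   # unknown postcode -> None
```

A service like Ernest Marples simply wrapped this sort of lookup in a public API - which is exactly what the licensing terms forbid.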

There's just a faint hint that postcodes could become a separate business; the Royal Mail does not allow the postcode database and mail delivery to cross-subsidize (to mollify competitors who use the database). Still, Charles Arthur, in the Guardian, estimates that licensing the postcode database costs us more than it makes.

This is the other sense in which postcodes are like Google Books: it costs money to create and maintain the database. But where postcodes are an operational database for the Royal Mail, books may not be for Google. Wired UK has shown what happens when Google loses economic interest in a database, in this case Google Groups (aka, the Usenet archive).

But in the analogy Google plays the parts of both the Royal Mail (investing in creating a database from which it hopes to profit) and the geeks seeking to liberate the data (locked-up, out-of-print books, now on the Web! Yeah!). The publishers are merely an intervening toll booth. This is one reason reactions to Google Books have been so mixed and so confusing: everyone's inner author says, "Google will make money. I want some," while their inner geek says, "Wow! That is so *cool*! I want that!".

The second reason everyone's so confused, of course, is that the settlement is 141 pages of dense legalese with 15 appendices, and nobody can stand to read it. (I'm reliably told that the entire basis for handling non-US authors' works is one single word: "If".) This situation is crying out for a wiki where intellectual property lawyers, when they have a moment, can annotate and explain. The American Library Association has bravely managed a two-page summary (PDF).

What's really at stake, as digital library expert Karen Coyle explained to me this week, is orphan works, which could have been handled long ago by legislation if everyone hadn't gotten all wrapped up in the Google Books settlement. Public domain works are public domain (and you will find many of those Google has scanned quietly available at the Internet Archive, where someone has been diligently uploading them). Works whose authorship is known have authors and publishers to take charge. But orphan works...the settlement would give a Book Rights Registry two-thirds of the money Google pays out to distribute to authors of orphan works. The registry would be run by the publishers, who I'm sure would put as much effort into finding authors to pay as...the MPAA. It was on this basis that the Department of Justice objected to the settlement.

The current situation with postcodes shows us something very important: when the Royal Mail invented them, 50 years ago, no one had any idea what use they might have outside of more efficiently delivering the mail. In the intervening time, postcodes have enabled the Royal Mail to automate sorting and slim down its work force (while mysteriously always raising postage); but they have also become key data points on which to hang services that have nothing to do with mail but everything to do with location: job seeking, political protest, property search, and quick access to local maps.

Similarly: we do not know what the future might hold for a giant database of books. But the postcode situation reminds us what happens when one or two stakeholders are allowed to own something that has broader uses than they ever imagined. Meanwhile, if you'd like to demand a change in the postcode situation, this petition is going like gangbusters.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

October 2, 2009

Free thought

Well, this makes you blink and check the date: the Evening Standard is proposing to drop its cover price to zero on October 12. The paper's owner, Alexander Lebedev, expects the move to more than double the paper's circulation, from 250,000 to 600,000. And, one supposes not incidentally, to kick the stuffing out of the free sheets hawkers have been harassing Londoners to take for the last few years. That's how to compete with free: throw away a couple of million pounds of revenue in favor of increased distribution. I particularly like this quote: In the same statement, Geordie Greig, editor of the Standard, called it "an historic moment and great opportunity".

It wasn't so long ago - say, the turn of the century, nine years ago - that the critics used to lambaste Amazon.com and other dot-com upstarts for taking the view that Getting Big Fast was a good strategy, even if it meant you lost money at a rate that would scare a banker. It was even more recently - August - that Rupert Murdoch decided that news was not meant to be free, first closing his three-year-old free London title and then announcing News International would begin charging for online news.

Murdoch's notion was easily dismissed: to date, he has been consistently and persistently wrong in every online venture he's tried. For the history-challenged: in late 1993, when graphical interfaces were taking over and the Web was about to explode, he bought the 100,000-subscriber king of text-based online services, Delphi. The relatively modest purchase price, estimated at $3-5 million, wound up costing Murdoch hundreds of millions more in trying to adapt to the pace of technological change.

That money went on this plan: to reinvent Delphi as part of Springboard, the long-forgotten 1996-1997 attempt to fashion a mass-market news service in collaboration with first MCI and then BT. And who could forget - well, probably everyone - Currant Bun, the news service for readers of the Sun?

Murdoch's goal is at least clear and consistent: he wants to turn the Internet into a traditional medium that, like television and newspapers, offers mass-market access but a walled garden of content he can charge for. One day, if we pay insufficient attention to network neutrality and system design, he may succeed.

But if there's one thing everyone has agreed on over the last year it's that newspapers can't survive on Web revenues - that is, advertising - alone. Can a print version succeed on that same business model with far higher distribution costs? And still do quality journalism?

Based on history, you would think not. In 1993, the Times - Murdoch, again - kicked off a price war among Britain's quality dailies by dropping the cover price to 20p. The Independent and the Telegraph were forced to follow. The net result: the Times increased its readership by a lot, while the Telegraph and the Independent struggled. Fifteen years later, with everyone losing readers, the relative positions haven't changed much.

But cheap is not free; it's far easier to slowly raise the price back up again (as in fact has happened) than it is to cross the gap between free and not-free. People get in the habit of thinking that things they don't have to pay for aren't *worth* paying for, whereas they're more likely to think that something that's cheap now will cost more later. Lebedev is going through a one-way door.

There is also the question of whether the readers you get from distributing 600,000 free copies of a newspaper are the same value to advertisers as the readers you get from selling the same newspaper to 250,000.

It's hard to see how this change will be sustainable in the long run and maybe even in the short run. The newspaper business, however much it needs to be reinvented, is an established one. Dumping an entire revenue stream in an established industry is not the same as being willing to lose money as an investment in the future in a new medium that's growing like crazy. More-than-doubling distribution might slow but won't fundamentally alter the shift of classified ads (on which the Standard, unlike the Guardian, depends) from print to online. That shift is fuelled by the ads being (mostly) free to post and instantly updated, not just by their being free for readers to see; the Internet is simply a better medium for most small ads.

The immediate reaction on the part of many commentators is to assume that the Standard's move will put pressure on the national former broadsheets. This seems less likely: local newspapers have been the hardest hit (so far) in the move to the Web. Instead, the first to get hurt, as the ABSW pointed out in a Twitter comment, are the newsagents.

Jettisoning a significant source of revenue seems like divorce: you only do it if you're desperate. Maybe Lebedev will prove to be a genius, but it seems doubtful. As Clay Shirky has written: "There is no general model for newspapers to replace the one the Internet just broke."

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk (but please turn off HTML).

September 11, 2009

Public broadcasting

It's not so long ago - 2004, 2005 - that the BBC seemed set to be the shining champion of the Free World of Content, functioning in opposition to *AA (MPAA, RIAA) and general entertainment industry desire for total content lockdown. It proposed the Creative Archive; it set up BBC Backstage; and it released free recordings of the classics for download.

But the Creative Archive released some stuff and then ended the pilot in 2006, apparently because much of the BBC's content doesn't really belong to it. And then came the iPlayer. The embedded DRM, along with its initial Windows-only specification (though the latter has since changed), made the BBC look like less of a Free Culture hero.

Now, via the consultative offices of Ofcom, we learn that the BBC wants to pacify third-party content owners by configuring its high-definition digital terrestrial services - known to consumers as Freeview HD - to implement copy protection. This request is, of course, part of the digital switchover taking place across the country over the next four years.

The thing is, the conditions under which the BBC was granted the relevant broadcasting licenses require that content be broadcast free-to-air. That is, unencrypted, which of course means no copy protection. So the BBC's request is to be allowed instead to make the stream unusable to outsiders by compressing the service information data using in-house-developed lookup tables. Under the proposal, the BBC will make those tables available free of charge to manufacturers who agree to its terms. Or, pretty clearly, the third party rights holders' terms.
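The mechanism at issue can be illustrated with a toy sketch (the table entries and function names here are entirely hypothetical, not the BBC's actual scheme): substituting short codes for strings via a private lookup table makes the stream useless to any receiver that lacks the table, without technically encrypting anything.

```python
# Toy illustration of obscurity-by-compression: a private lookup table
# substitutes numeric codes for common tokens. A receiver without the
# (licensed) table sees only opaque numbers, yet nothing is "encrypted".
TABLE = {"BBC One HD": 0, "BBC Two HD": 1, "programme": 2}  # hypothetical
REVERSE = {v: k for k, v in TABLE.items()}

def compress(tokens):
    """Replace each known token with its table index."""
    return [TABLE[t] for t in tokens]

def decompress(codes):
    """Recover the tokens; impossible without the table."""
    return [REVERSE[c] for c in codes]

codes = compress(["BBC One HD", "programme"])
assert decompress(codes) == ["BBC One HD", "programme"]
```

The legal hair-splitting follows directly: the broadcast is still "unencrypted", but only manufacturers who sign the licence get the table.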

This is the kind of hair-splitting the American humorist Jean Kerr used to write about when she detailed conversations with her children. She didn't think, for example, to include in the long list of things they weren't supposed to do when they got up first on a Sunday morning, the instruction not to make flour paste and glue together all the pages of the Sunday New York Times. "Now, of course, I tell them."

When the BBC does it, it's not so funny. Nor is it encouraging in the light of the broader trend toward claiming intellectual property protection in metadata when the data itself is difficult to restrict. Take, for example, the MTA's Metro-North Railroad, which runs commuter trains (on which Meryl Streep and Robert de Niro so often met in the 1984 movie Falling in Love) from New York City up both sides of the Hudson River to Connecticut. MTA has been issuing cease-and-desist orders to the owner of StationStops, a Web site and iPhone schedule app dedicated to the Metro-North trains, claiming that it owns the intellectual property rights in its scheduling data. If it were in the UK, the Guardian's Free Our Data campaign would be all over it.

In both cases - and many others - it's hard to understand the originating organisation's complaint. Metro-North is in the business of selling train tickets; the BBC is supposed to measure its success in 1) the number of people who consume its output; 2) the educational value of its output to the license fee-paying public. Promulgating schedule data can only help Metro-North, which is not a commercial company but a public benefit corporation owned by the State of New York. It's not going to make much from selling data licenses.

The BBC's stated intention is to prevent perfect, high-definition copies of broadcast material from escaping into the hands of (evil) file-sharers. The alternative, it says, would be to amend its multiplex license to allow it to encrypt the data streams. Which, they hasten to add, would require manufacturers to amend their equipment, which they certainly would not be able to do in time for the World Cup next June. Oh, the horror!

Fair enough, the consumer revolt if people couldn't watch the World Cup in HD because their equipment didn't support the new encryption standard would indeed be quite frightening to behold. But the BBC has a third alternative: tell rights holders that the BBC is a public service broadcaster, not a policeman for hire.

Manufacturers will still have to modify equipment under the more "modest" system information compression scheme: they will have to have a license. And it seems remarkably unlikely that licenses would be granted to the developers of open source drivers or home-brew devices such as MythTV, and of course it couldn't be implemented retroactively in equipment that's already on the market. How many televisions and other devices will it break in your home?

Up until now, in contrast to the US situation, the UK's digital switchover has been pretty gentle and painless for a lot of people. If you get cable or satellite, at some point you got a new set-top box (mine keep self-destructing anyway); if you receive all your TV and radio over the air you attached a Freeview box. But this is the broadcast flag and the content management agenda all over again.

We know why rights holders want this. But why should the BBC adopt their agenda? The BBC is the best-placed broadcasting and content provider organisation in the world to create a parallel, alternative universe to the strictly controlled one the commercial entertainment industry wants. It is the broadcaster that commissioned a computer to educate the British public. It is the broadcaster that belongs to the people. Reclaim your heritage, guys.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

August 28, 2009

Develop in haste, lose the election at leisure

Well, this is a first: returning to last week's topic because events have already overtaken it.

Last week, the UK government was conducting a consultation on how to reduce illegal file-sharing by 70 percent within a year. We didn't exactly love the proposals, but we did at least respect the absence of what's known as "three strikes" - as in, your ISP gets three complaints about your file-sharing habit and kicks you offline. The government's oh-so-English euphemism for this is "technical measures". Activists opposed to "technical measures" often call them HADOPI, after the similar French law that was passed in May (and whose three strikes portions were struck down in June); HADOPI is the digital rights agency that law created.

This week, the government - or more precisely, the Department for Business, Innovation, and Skills - suddenly changed its collective mind and issued an addendum to the consultation (PDF) that - wha-hey! - brings back three strikes. Its thinking has "developed", BIS says. Is it so cynical to presume that what has "developed" in the last couple of months is pressure from rights holders? Three strikes is a policy the entertainment industry has been shopping around from country to country like an unwanted refugee. Get it passed in one place and use that country as a lever to make all the others harmonize.

What the UK government has done here is entirely inappropriate. At the behest of one business sector, much of it headquartered outside Britain, it has hijacked its own consultation halfway through. It has issued its new-old proposals a few days before the last holiday weekend of the summer. The only justification it's offered: that its "new ideas" (they aren't new; they were considered and rejected earlier this year, in the Digital Britain report (PDF)) couldn't be implemented fast enough to meet its target of reducing illicit file-sharing by 70 percent by 2012 if they aren't included in this consultation. There's plenty of protest about the proposals, but even more about the government's violating its own rules for fair consultations.

Why does time matter? No one believes that the Labour government will survive the next election, due by 2010. The entertainment industries don't want to have to start the dance all over again, fine: but why should the rest of us care?

As for "three strikes" itself, let's try some equivalents.

Someone is caught speeding three times in the effort to get away from crimes they've committed, perhaps a robbery. That person gets points on their license and, if they're going fast enough, might be prohibited from driving for a length of time. That system is administered by on-the-road police but the punishment is determined by the courts. Separately, they are prosecuted for the robberies, and may serve jail time - again, with guilt and punishment determined by the courts.

Someone is caught three times using their home telephone to commit fraud. They would be prosecuted for the fraud, but they would not be banned from using the telephone. Again, the punishment would be determined by the courts after a prosecution requiring the police to produce corroborating evidence.

Someone is caught three times gaming their home electrical meter so that they are able to defraud the electrical company and get free electricity. (It's not so long since in parts of the UK you could achieve this fairly simply just by breaking into the electrical meter and stealing back the coins you fed it with. You would, of course, be caught at the next reading.) I'm not exactly sure what happens in these cases, but if Wikipedia is to be believed, when caught such a customer would be switched to a higher tariff.

It seems unlikely that any court would sentence such a fraudster to live without an electricity supply, especially if they shared their home, as most people do, with other family members. The same goes for the telephone example. And in the first case, such a person might be banned from driving - but not from riding in a car, even the getaway car, while someone else drove it, or from living in a house where a car was present.

Final analogy: millions of people smoke marijuana, which remains illegal. Marijuana has beneficial uses (relieving the nausea from chemotherapy, remediating glaucoma) as well as recreational ones. We prosecute the drug dealers, not the users.

So let's look again at these recycled-reused proposals. Kicking someone offline after three (or however many) complaints from rights holders:

1- Affects everyone in their household. Kids have to go to the library to do homework, spouses/parents can't work at home or socialize online. An entire household is dropped down the wrong side of the Digital Divide. As government functions such as filing taxes, providing information about public services, and accepting responses to consultations all move online, this household is now also effectively disenfranchised.

2- May in fact make both the alleged infringer and their spouse unemployable.

3- Puts this profound control over people's lives, private and public, personal and financial into the hands of ISPs, rights holders, and Ofcom, with no information about how or whether the judicial process would be involved. Not that Britain's court system really has the capacity to try the 10 percent of the population that's estimated to engage in file-sharing. (Licit, illicit, who can tell?)

All of these effects are profoundly anti-democratic. Whose government is it, anyway?


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

August 21, 2009

This means law

You probably aren't aware of this, but there's a consultation going on right now about what to do about illegal peer-to-peer file-sharing; send in comments by September 15. Tom Watson, the former minister for digital engagement, has made some sensible suggestions for how to respond in print and blog.

This topic has been covered pretty regularly in net.wars, but this is different and urgent: this means law.

Among the helpful background material provided with the consultation document are an impact assessment and a financial summary. The first of these explains that there were two policy options under consideration: 1) Do nothing. 2) (Preferred) legislate to reduce illegal downloading "by making it easier and cheaper for rightsholders to bring civil actions against suspected illegal file-sharers". Implementing that requires ISPs to cooperate by notifying their subscribers. There will be a code of practice (less harsh than this one, we trust) including options such as bandwidth capping and traffic shaping, which Ofcom will supervise, at least for now (there may yet be a digital rights agency).

The document is remarkably open about who it's meant to benefit - and it's not artists.

Government intervention is being proposed to address the rise in unlawful P2P file-sharing which can reduce the incentive for the creative industries to invest in the development, production and distribution of new content. Implementation of the proposed policy will allow right [sic] holders to better appropriate returns on their investment.

The included financial assessment, which in this case is the justification for the entire exercise (p 40), lays out the expected benefits: BERR expects rightsholders to pick up £1,700 million by "recovering displaced sales", at a cost to ISPs and mobile network operators of £250 to £500 million over ten years. Net benefit: £1.2 billion. Wha-hey!
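BERR's headline number is simple subtraction using the top of the stated cost range, which a few lines make explicit (figures are the consultation's claims, not mine):

```python
# BERR's claimed ten-year figures from the consultation, in £ millions.
recovered_sales = 1700          # rightsholders "recovering displaced sales"
cost_low, cost_high = 250, 500  # cost to ISPs and mobile network operators

# Even taking the worst-case cost, the claimed net benefit is:
net_benefit = recovered_sales - cost_high
print(net_benefit)  # 1200, i.e. the £1.2 billion headline
```

Note what the subtraction leaves out: consumer welfare losses, which the consultation itself later concedes it cannot estimate.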

My favorite justification for all this is the note that because there are an estimated 6.5 million file-sharers in the UK there are *too many* of us to take us all to court, rightsholders' preferred deterrence method up until now. Rightsholders have marketing experts working for them; shouldn't they be getting some message from these numbers?

There are some things that are legitimately classed as piracy and that definitely cost sales. Printing and selling counterfeit CDs and DVDs is one such. Another is posting unreleased material online without the artist's or rightsholder's permission; that is pre-empting their product launch, and whether you wind up having done them a favor or not, there's no question that it's simply wrong. The answer to the first of these is to shut down pirate pressing operations; the answer to the second is to get the industry to police its own personnel and raise the penalties for insider leaks. Neither can be solved by harassing file-sharers.

It's highly questionable whether file-sharing costs sales; the experience of most of us who have put our work online for free is that sales increase. However, there is no doubt in my mind that there are industries file-sharing hurts. Two good examples in film are the movie rental business and the pay TV broadcasters, especially the premium TV movie channels.

As against that, however, the consultation notes but dismisses the cost to consumers: it estimates that ISPs' costs, when passed on to consumers, will reduce the demand for broadband by 10,000 to 40,000 subscribers, representing lost revenue to ISPs of between £2 and £9 million a year (p50). The consultation goes on to note that some consumers will cease consuming content altogether and that therefore the policy will exacerbate existing inequality, since those on the lowest incomes will likely lose the most.

It is not possible to estimate such welfare loss with current data availability, but estimates for the US show that this welfare loss could be twice as large as the benefit derived from reducing the displacement effect to industry revenues.

Shouldn't this be incorporated into the financial analysis?

We must pause to admire the way the questions are phrased. Sir Bonar would be proud: ask if your proposals are implementing what you want to do in the right way. In other words, ask if three is the right number of warning letters to send infringers before taking stronger action (question 9), or whether it's a good idea to leave exactly how costs are to be shared between rightsholders and ISPs flexible rather than specifying (question 6). The question I'd ask, which has not figured in any of the consultations I've seen, would be: is this the best way to help artists navigate the new business models of the digital age?

Like Watson, my answer would be no.

Worse, the figures do not take into account the cost to the public, analyzed last year in the Netherlands.

And the assumptions seem wrong. The consultation document claims that research shows that approximately 70 percent of infringers stop when they receive a warning letter, at least in the short term. But do they actually stop? Or do they move their file-sharing to different technologies? Does it just become invisible to their ISP?

So far, file-sharers have responded to threats by developing new technologies better at obfuscating users' activities. Napster...Gnutella...eDonkey...BitTorrent. Next: encrypted traffic that looks just like a VPN connection.

I remain convinced that if the industry really wants to deter file-sharing it should spend its time and effort on creating legal, reliable alternatives. Nothing less will save it. Oh, yeah, and it would be a really good idea for them to be nice to artists, too. Without artists, rightsholders are nothing.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on , or send email to netwars@skeptic.demon.co.uk.

July 10, 2009

The public interest

It's not new for journalists to behave badly. Go back to 1930s plays-turned-movies like The Front Page (1931) or Mr Smith Goes to Washington (1939), and you'll find behavior (thankfully, fictional) as bad as this week's Guardian story that the News of the World paid out £1 million to settle legal cases that would have revealed that its staff journalists were in the habit of hiring private investigators to hack into people's phone records and voice mailboxes.

The story's roots go back to 2006, when the paper's Royal editor, Clive Goodman, was jailed for illegally intercepting phone calls. The paper's then editor, Andy Coulson, resigned and the Press Complaints Commission concluded the paper's executives did not know what Goodman was doing. Five months later, Coulson became the chief of communications for the Tory party.

There are so many cultural failures here that you almost don't know where to start counting. The first and most obvious is the failure of a newsroom to obey the dictates of common sense, decency, and the law. That particular failure is the one garnering the most criticism, and yet it seems to me the least surprising, especially for one of Britain's most notorious tabloids. Journalists have competed for stories big enough to sell papers since the newspaper business was founded; the biggest rewards generally go to the ones who expose the stories their subjects least wanted exposed. It's pretty sad if any newspaper's journalists think the public interest argument is as strong for listening to Gwyneth Paltrow's voice mail as it was to exposing MPs' expenses, but that leads to the second failure: celebrity culture.

This one is more general: none of this would happen if people didn't flock to buy stories about intimate celebrity details. And newspapers are desperate for sales.

The third failure is specific to politicians: under the rubric of "giving people a second chance" Tory leader David Cameron continues to defend Coulson, who continues to claim he didn't know what was going on. Either Coulson did know, in which case he was condoning it, or he didn't, in which case he had only the shakiest grasp of his newsroom. The latter is the same kind of failure that at other papers and magazines has bred journalistic fraud: surely any editor now ought to be paying attention to sourcing. Either way, Coulson does not come off well and neither does Cameron. It would be more tolerable if Cameron would simply say outright that he doesn't care whether Coulson is honorable or not because he's effective at the job Cameron is paying him for.

The fourth failure is of course the police, the Press Complaints Commission, and the Information Commissioner, all of whom seem to have given up rather easily in 2007.

The final failure is also general: the problem that more and more intimate information about each of us is held in databases whose owners may have incentives (legal, regulatory, commercial) for keeping them secured but which are of necessity accessible by minions whose risks and rewards are different. The weakest link in security is always the human factor, and the problem of insiders who can be bribed or conned into giving up confidential information they shouldn't is as old as the hills, whether it's a telephone company employee, a hotel chambermaid, or a former Royal nanny. Seemingly we have learned little or nothing since Kevin Mitnick pioneered the term "social engineering" some 20 years ago or since Squidgygate, when various Royals' private phone conversations were published. At least some ire should be directed at the phone companies involved, whose staff apparently find it easy to refuse to help legitimate account holders by citing the Data Protection Act but difficult to resist illegitimate blandishments.

This problem is exacerbated by what University College London security researcher Angela Sasse calls "security fatigue". Gaining access to targets' voice mail was probably easier than you think if you figure that many people never change the default PIN on their phones. Either your private investigator turned phone hacker tries the default PIN or, as Sophos senior fellow Graham Cluley suggests, convinces the phone company to reset the PIN to the default. Yes, it's stupid not to change the default password on your phone. But with so many passwords and PINs to manage and only so much tolerance for dealing with security, it's an easy oversight. Sasse's paper (PDF) fleshing out this idea proposes that companies should think in terms of a "compliance budget" for employees. But this will be difficult to apply to consumers, since no one company we interact with will know the size of the compliance burden each of us is carrying.
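The scale of the exposure is easy to quantify: a four-digit PIN space is tiny to begin with, and a known default collapses it to a handful of guesses (the four-digit length and the example defaults below are assumptions for illustration, not any carrier's actual values):

```python
# A 4-digit voicemail PIN permits only 10**4 combinations; a short list
# of well-known defaults shrinks the practical attack to a few tries.
total_pins = 10 ** 4
assumed_defaults = ["0000", "1234", "1111"]  # hypothetical examples

print(total_pins)             # 10000 possible PINs in total
print(len(assumed_defaults))  # the attacker's likely first guesses
```

Against that arithmetic, "security fatigue" does the rest: the default often simply never gets changed.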

Get the Press Complaints Commission to do its job properly by all means. And stop defending the guy who was in charge of the newsroom while all this snooping was going on. Change a culture that thinks that "the public interest" somehow expands to include illegal snooping just because someone is famous.

But bear in mind that, as Privacy International has warned all along, this kind of thing is going to become endemic as Britain's surveillance state continues to develop. The more our personal information is concentrated into large targets guarded by low-paid staff, the more openings there will be for those trying to perpetrate identity fraud or blackmail, snoop on commercial competitors, sell stories about celebrities and politicians, and pry into the lives of political activists.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or email netwars@skeptic.demon.co.uk.

June 19, 2009

Star system

In all the discussions I've seen about the mass extinction of newspapers and worries about where people, particularly elderly people, will get their news, I've seen little about the impact of the death of newspapers on the ecology of industries that have traditionally depended on them. At Roger Ebert's film festival there was some discussion about this with regard to movies. Reading critics is an important way people decide whether they can afford two hours of scarce leisure time and $20 to $50 of hard-earned money (tickets, babysitters, travel costs) to see a particular movie. As newspapers shrink, die, and fire their movie critics, the result, a panel concluded, is death to the chances of arthouse and independent movies.

Away from the glamor event that is Wimbledon, which starts Monday, the same concerns can be applied to the future of the two professional tennis tours, run by the WTA (women) and the ATP (men). This week's Eastbourne tournament - this year known as the AEGON International - began the week with seven of the world's top ten female players, plus the 2006 Wimbledon champion (Amelie Mauresmo) and the 2007 Wimbledon finalist (Marion Bartoli). By the semifinals, all of those but Bartoli were gone (and she retired, limping, from her semi against Virginie Razzano), and the survivors, while fine and accomplished players and diligent hard workers, are not the kinds of names whose exploits can be easily sold to editors. The national interest is in British players, who had all lost by the second round; the international interest is limited to Wimbledon contenders. You know it's a bad situation when journalists start going home before the quarterfinals.

To some extent, it's arguable that professional tennis writers are not as essential as they were. In 1989, say, if you wanted to follow the tour year-round you had to scour the sports pages for box scores and terse match write-ups. Today the Net is awash in tennis reporting: player sites, fan sites, official and unofficial blogs, Facebook pages and groups, Twitter, news wires, and official releases from the tours, the national federations, individual tournaments, and the overall governing body, the International Tennis Federation. It's a rare match whose report you can't find online within half an hour, and even if you don't sleep you probably couldn't read all of it.

In addition, the matches themselves are far more accessible than ever before: Europe has Eurosport; the US has The Tennis Channel. And if you can wait a day, more and more tennis matches are being posted online for download, legally or otherwise.

A couple of decades ago, the famed American sportscaster Howard Cosell wrote a book complaining that sports journalism was failing the public, that to cover sports properly journalists should have a working knowledge of economics, labor law, business, and medical science. You could see his point, especially over the last decade in baseball, where a bitter players' strike was followed by steroid scandals. Go back to the beginning of the Open Era of tennis, which began in 1968, and you'll find long-serving commentators like Richard Evans writing books about the considerable complexities of tennis politics. But that kind of coverage has largely shrunk: this week what you can sell a newspaper is either 1) local players or 2) Wimbledon contenders - that is, the stars. You hear many complaints among the tennis press about how little access they now have to the players, but they have even less access to the game's controllers.

Tennis is not alone in this: stars in every area from technology to movies would rather sequester themselves than answer too many unpleasant questions. And I can't always blame them. Explaining a bad loss to the media while the disappointment is still raw must be one of the most unpleasant moments for a player, almost up there with having your physique closely inspected and criticized. That sort of thing was something stars put up with when their industry was young and struggling to establish itself; the early pioneers of the women's tour did 5am talk radio, appeared in shopping malls - whatever it took.

We are not in those times any more. But as newspapers fail and lay off staff and reduce their expenditure on coverage of minority interests - which include tennis - both tours, and the movie industry, and many other industries that rely on sponsorship for fuel should be asking themselves how they're going to keep their public profile high enough to stay funded. The Slams - Wimbledon, the US Open, the Australian Open, and the French Open - will most likely survive (although the Australian has already announced the loss of several important sponsors). But creating the field of high-quality players for these events requires a healthy ecosystem of feed-up events that keep coaches, juniors, and amateurs engaged and involved. New media may sometime fill the gap, but not yet; no single outlet has a big enough megaphone. (And Wimbledon, apparently living in the past, does not accredit online-only writers.)

You may not feel that losing tennis as a spectacle would be much of a loss, and I'm sure you're right that the world would continue to turn. But the principle that the loss of traditional media disrupts many more industries than just its own applies well beyond the one that will dominate the BBC for the coming fortnight.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

May 29, 2009

Three blind governments

I spent my formative adult years as a musician. And even so, if I were forced to sacrifice one of my senses I would, as a practical matter, choose to keep sight over hearing: as awful and isolating as it would be to be deaf, it would be far, far worse to be blind.

Lack of access to information - and therefore to both employment and entertainment - is the key reason. How can anyone participate in the "knowledge economy" if they can't read?

Years ago, when I was writing a piece about disabled access to the Net, the Royal National Institute for the Blind put me in touch with Peter Brasher, a consultant who was particularly articulate on the subject of disabled access to computing.

People tend to make the assumption - as I did - that the existence of Braille editions and talking books means that blind and partially sighted people are catered for reasonably well. In fact, he said, only 8 percent of the blind population can read Braille; its use is generally confined to those who are blind from childhood (although see here for a counterexample). But far and away the majority of vision loss comes later in life. It's entirely possible that the percentage of Braille readers is now considerably less; today's kids are more likely to be taught to rely on technology - text-to-speech readers, audio books, and so on. From 50 percent in the 1950s, the percentage of blind American children learning Braille has dropped to 10 percent.

There's a lot of concern about this which can be summed up by this question: if text-to-speech technology and audio books are so great, why aren't sighted kids told to use them instead of bothering to learn to read?

But the bigger issue Brasher raised was one of independence. Typically, he said, the availability of books in Braille depends on someone with an agenda, often a church. The result for an inquisitive reader is a constant sense of limits. Then computers arrived, and it became possible to read anything you wanted of your own choice. And then graphical interfaces arrived and threatened to take it all away again; I wrote here about what it's like to surf the Web using the leading text-to-speech reader, JAWS. It's deeply unpleasant, difficult, tiring, and time-consuming.

When we talk about people with limited ability to access books - blind, partially sighted; in other cases fully sighted but physically disabled - we are talking about an already deeply marginalized and underserved population. Some of the links above cite studies that show that unemployment among the Braille-reading blind population is 44 percent - and 77 percent among blind non-Braille readers. Others make the point that inability to access printed information interferes with every aspect of education and employment.

And this is the group whose needs this week's meeting of the Standing Committee on Copyright and Related Rights at the World Intellectual Property Organization has convened to consider. Should there be a blanket exception to allow the production of alternative formats of books for the visually impaired and disabled?

The proposal, introduced by Brazil, Paraguay, and Ecuador, seems simple enough, and the cause unarguable. The World Blind Union estimates that 95 percent of books never become available in alternative formats, and when they do it's after some delay. As Brasher said nearly 15 years ago, such arrangements depend on the agendas of charitable organizations.

The culprit, as in so many net.wars, is copyright law. The WBU published arguments for copyright reform (DOC) in 2004. Amazon's Kindle is a perfect example of the problem: bowing to the demands of publishers, text-to-speech can be - and is being - turned off in the Kindle. The Kindle - any ebook reader with speech capabilities - ought to have been a huge step forward for disabled access to books.

And now, according to Twitterers present at WIPO, the US, Canada, and the EU are arguing against the idea of this exemption. (They're not the only ones; elsewhere, the Authors Guild has argued that exemptions should be granted by special license and registration, something I'd certainly be unhappy about if I were blind.)

Governments, particularly democratic ones, are supposed to be about ensuring equal opportunities for all. They are supposed to be about ensuring fair play. What about the Americans with Disabilities Act, the EU's charter of fundamental human rights, and Canada's human rights act? Can any of these countries seriously argue that the rights of publishers and copyright holders trump the needs of a seriously disadvantaged group of people that every single one of us is at risk of joining?

While it's clear that text-to-speech and audio books don't solve every problem, and while the US is correct to argue that copyright is only one of a number of problems confronting the blind, when the WBU argues that copyright poses a significant barrier to access shouldn't everyone listen? Or are publishers confused by the stereotypical image of the pirate with the patch over one eye?

If governments and rightsholders want us to listen to them about other aspects of copyright law, they need to be on the right side of this issue. Maybe they should listen to their own marketing departments about the way it looks when rich folks kick people who are already disadvantaged - and then charge for the privilege.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or email netwars@skeptic.demon.co.uk (but please turn off HTML).

May 15, 2009

"Bogus"

There is a basic principle that ought to go like this: if someone is making a claim that a treatment has an impact on someone's health it should be possible to critique the treatment and the claim without being sued for libel. The efficacy of treatments that can cost people their lives - even if only by omission rather than commission - should be a case where the only thing that matters is the scientific evidence.

I refer, of course, to the terrible, terrible judgement in the case of British Chiropractic Association v. Simon Singh. In brief: the judge ruled that Singh's use of the word "bogus" in commentary that appeared in the Guardian (on its comments pages) and which he went on to explain in the following paragraph 1) was a statement of fact rather than opinion and 2) meant that the BCA's members engaged in deliberately deceiving their patients. The excellent legal blogger Jack of Kent (in real life, the London solicitor specialising in technology, communications, and media law David Allen Green) wrote up the day in court and also an assessment of the judgement and Singh's options for discussion.

None of it is good news for anyone who works in this area. Singh could settle; he could proceed to trial to prove something he didn't say and for which under the English system his lawyers may not be allowed to make a case for anyway; or he could appeal this ruling on meaning, with very little likelihood of success. Singh will announce his decision on Monday evening at a public support meeting (Facebook link).

A little about the judge, David Eady (b. 1943). Wikipedia has him called to the bar in 1966 and specializing in media law until 1997, when he was appointed a High Court judge. Eady has presided over a number of libel cases and also high-profile media privacy cases.

Speaking as a foreigner, this whole case has seemed to me bizarre. For one thing, there's the instinctive American reaction: English libel law reverses the burden of proof so that it rests on the defendant. Surely this is wrong. But more than that, I don't understand how it is possible to libel an organisation. The BCA isn't a person, even if its members supply personal services, and Singh named no specific members or officers. I note that it's sufficiently bizarre to British commenters that publications that normally would never reprint the text of a libel - like The Economist - are doing so in this case and analysing every word. Particularly, of course, the word "bogus", on which so much of the judgement depends. The fact that Singh explained what he meant by bogus in the paragraph after the one in dispute apparently did not matter in court.

We talk about the chilling effects of the Digital Millennium Copyright Act, but the chilling effects of English libel law are far older and much more deeply entrenched. Discussions about changing it are as perennial and unproductive as the annual discussions about how it would be a really good idea to add another week between the French Open and Wimbledon. And this should be of concern throughout the English-publishing world: in the age of the Internet English courts seem to recognise no geographical boundaries. The New York author Rachel Ehrenfeld was successfully sued in Britain over allegations made in her book on funding terrorism despite the fact that neither she, the person who sued her, nor the publisher was based in the UK. The judge was...David Eady.

Ehrenfeld asked the New York courts to promise not to enforce the judgement against her. When they couldn't (because no suit had been filed in New York), the state passed a law barring courts from enforcing foreign libel judgements if the speech in question would not be libellous under US law. Other states and the federal government are following to stop "libel tourism".

None of that, however, will help Simon Singh or anyone else who wants to critically examine the claims of pseudoscientists. The Skeptic, which I founded and edited for some years (look for our Best Of book, soon), routinely censors itself, as does every other publication in this country. There are certain individuals and organisations who are known to be extremely litigious, and they get discussed as little as possible. Libel law is supposed to encourage responsible reporting and provide redress to wronged individuals, but at this virulent a level libel law is actually preventing responsible reporting of contentious matters of science, and the individuals who are wronged are the public, who are at risk of being deprived of the knowledge they need to make informed decisions. David Allen Green, writing in New Scientist, provides an excellent summary of cases in point.

It will be understandable if Singh decides to settle. I've seen an estimate that doing so now could cost him £100,000 - and continuing will be vastly more expensive. Lawsuits are, I'm told, like having cancer: miserable, roller-coaster affairs that consume your waking life and that of everyone around you. I have no idea what decision he will or should make. But he has my sympathy and my support.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to follow on Twitter, post here, or reply by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

April 24, 2009

The way we were

Two people in the audience said they were actually at Woodstock.

The math: Champaign-Urbana's Virginia Theater seats 1,600 ("I saw all the Star Wars movies in this theater," said the guy behind me). Audience skews somewhat to Baby Boom and older. Mostly white. Half a million people at Woodstock. Hard to know, but the guy sitting next to me and I agreed: two *feels* right.

This week is Roger Ebert's Film Festival, a small, personal event likely to remain so because of its location: his Illinois home town. A nice, Midwestern town, chiefly known for the university whence came Mosaic. People outside the US may not know Ebert's work as well as those inside it: a Pulitzer Prize-winning print critic, he and fellow Chicago newspaper critic Gene Siskel invented TV movie criticism. The festival is a personal love letter to movie fans, to his home town, and to the movies he picks because he feels they deserve to be more widely known and/or appreciated.

This is what it's like: the second day the parents of one of the featured directors casually pull me to lunch in the student union cafeteria. "I used to sit at this table when I was a student here," said the wife. She pointed across the cafeteria. "Roger Ebert used to sit at that table over there." Her husband pointed in a third direction and added, "And that table over there is where we met."

People come because they love movies - and also love seeing them in a fine theater with perfect sound and projection filled with the ultimate in appreciative audiences. Watching Woodstock last night, people so much forgot that they weren't at a live concert that they applauded each act in turn. And when Country Joe yelled, "What does it spell?" they yelled back "FUCK" at increasingly high volume. (I will remind you that this is America's heartland; these are supposed to be the people whose sensibilities are too delicate for Janet Jackson's nipple. Hah.)

The next morning, at a panel about the tribulations of movie distribution in these troubled times, I found I was back at work. Woodstock director Michael Wadleigh - who's heavy into saving the planet now - told a quaint story about the film's release. His contract gave him final cut. Warner Brothers saw his finished length - four hours - and was ready to ignore it and cut it down to one hour 50 minutes. Received wisdom: successful movies aren't longer than that. Received wisdom: rock and roll documentaries are not successful movies anyway. Received wisdom: we have more lawyers than you. Nyaaah. Come and sue us. This attitude toward artists seems familiar, somehow.

So Wadleigh and his producers stole back his film, just like in S.O.B.. The producer then called the studios and convinced them that Wadleigh was deranged enough to actually set fire to himself and all the footage if the studio didn't release the film exactly as he'd cut it. Studio relents (that probably wouldn't happen now either). Film is released at nearly four hours. Still the biggest-grossing documentary in history. Now remastered, cleaned up, sound digitized, etc. for a new DVD. That was, like flower power, then.

Cut to Nina Paley, sitting a few directors down the panel from Wadleigh. Paley, like most of the others here - Guy Maddin (My Winnipeg), Karen Gehres (Begging Naked), Carl Deal and Tia Lessin (Trouble the Water) - can't find distribution. Unlike Lessin, who reacted with some umbrage to the notion of giving stuff away, Paley decided that rather than sign away effectively all rights to her movie for five or ten years, she would turn it over to her audience to distribute for her. Yes, she put all the movie's files on the Internet for free under a share-alike Creative Commons license. Go ye and download. I'll wait.

And what happened? People downloaded! People shared! People started inviting her to speak! People started demanding to buy DVDs. She started making money.

Wait. What?

Boggle, MPAA, boggle.

That doesn't mean to say that movie distribution isn't in trouble: it is. Wadleigh and the Warner Brothers publicity person, Ronnee Sass, next to him, may have a mutual admiration society, but even films that have won top prizes at Cannes and Sundance are having trouble getting seen. Art theaters are shutting down and the small distributors that service them are going out of business.

"Why?" I was asked over lunch. A dozen reasons. People have more entertainment options. Corporate-owned studios would rather gamble on blockbusters. Theaters got unpleasant - carved-up, badly angled, out-of-focus screening rooms with sticky floors and too-loud, distorted sound. To people who were watching movies on small TV screens with commercial disruptions, home theaters look like an improvement - you can talk to your friends, eat what you want, pick your own movies, and pause whenever you like. More, in fact, like reading a novel or listening to music than going to a movie in the old sense, when you didn't - couldn't - yawn halfway through the magic and say, "I'll finish it tomorrow."

What people have forgotten is the way a theater filled with audience response changes the experience. Would Woodstock have been the same if everyone had stayed home and watched it on TV?


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to follow on Twitter, post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

April 17, 2009

I think we're all pirates on this bus

So the Pirate Bay Four have been found guilty, sentenced to a year in jail, and ordered to pay 30 million kronor (lotta money) in damages to Big Media (hella big). How to make martyrs, guys.

Except: from the entertainment industry's point of view the best thing to come out of the trial shouldn't be either the verdict or the damages. It should be the news of the site's profitability and ownership, exposed to the non-Swedish-speaking world by Andrew Brown, first in a blog posting and then in a Guardian article. Both sets of revelations came from the native Swedish newspapers which, of course, few outside Sweden can actually read.

Shouldn't the thought of possibly further enriching the heir to a fortune who is a supporter of extreme right-wing groups give Pirate Bay users pause? You'd think the entertainment industry would take advantage of this to play, as Sir Humphrey Appleby is advised in "Man Overboard", the man instead of the ball.

In The Register, Andrew Orlowski has speculated that the English-language media have failed to pick up on Brown's revelations because...I don't know, everyone is too pro-"freetard" or something. It's more likely that, lacking familiarity with the language, culture, and politics of Sweden, they aren't comfortable reporting them.

As much as The Pirate Bay is a useful site if you're looking for stuff to download for free, the site can't really make the same arguments many others can: that they don't really know what they're hosting (YouTube, torrent search sites). The site is much too neatly organized and catalogued. Not that it's clear the site's owners have any interest in making such an argument: they've been arrogantly defiant with respect to the trial and earlier threats. It's one thing to sit down and argue principles and try to change laws you disagree with; it's another to openly jeer at the law, effectively behaving like a cartoon character dancing on the edge of a cliff yelling, "Come get me!"

I've argued all along that there ought to be a distinction between personal, non-profit copying and commercial copying. The Pirate Bay falls in the middle. The site's users certainly are engaging in non-profit, personal copying. And the site isn't dealing in commercial copying in the sense that I meant originally, in that it's not selling copies (which would be an absolutely clear diversion of the market from legitimate sources). But if you believe the Swedish press it is making real money from advertising. Unless it opens its books for inspection by the public, we have no way of telling how much of that is actually profit, how much goes to pay the site's no doubt substantial server and bandwidth costs, and how much, if any, is used to support Piratbyrån, the Swedish pro-filesharing group that campaigns to change copyright law.

It ought to be clear by now - though apparently it's not - to entertainment companies that attacking file-sharing sites isn't getting them anywhere. Yes, they can point to having closed down a number of sites, but that's like boasting that you've cut 1,000 heads off the Lernaean Hydra. What a boast like that really says is how much bigger the monster is now than when you started: you still can't say you killed it, or even that you've scared it a little bit. Year on year, remorselessly, no matter how many people they've threatened or sites they've prosecuted, file-sharing has grown both in usage and in breadth. Plus, the publicity that attends every case is serving excellently to spread the word to people who might otherwise have never heard of file-sharing. Wired News reports that since the case started The Pirate Bay's user base has grown to 22 million and the site is profiting from its new anonymization VPN service.

In terms of breadth, there are still plenty of gaps in what you can find online, but over the years those have continued to narrow as niche interest groups start up their own sites to share old, obscure, and commercially unavailable material. What porn fanciers can do, tennis nuts can do better.

More to the point, entertainment industry attacks on file-sharing are doing for file-sharing sites what Prohibition did for the Mafia: turning them into sympathetic heroes who are just nobly trying to help their fellow citizens. The Pirate Bay may not look like a speakeasy, but what else is it, really?

The problem for the entertainment industry is that decades of television and radio broadcasts have trained users that viewing and listening without payment at the point of consumption is a normal state of affairs. In that sense, downloading torrents is far more like the way television and radio have presented themselves than paid downloads or buying CDs and DVDs. Ironically, US commercial television is now so heavily ad-laden that watching it now makes the trade-off of providing content in return for viewers' attention to advertising much more explicit - and viewers don't like it one bit.

In the end, The Pirate Bay guys may sound like posturing jerks, but they're right: they may go to jail but file-sharing will live on even if they turn out to be wrong about The Pirate Bay's own invulnerability. The entertainment industry might just as well adopt the slogan, "We won't stop until everyone's a pirate."

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

March 20, 2009

The untweetable Xeroxness of being

So the other week I was chatting on Twitter across time and space with a Xerox machine from 1961... (As you do.)

At least, it said it was a Xerox machine. On the Internet, no one knows you're a coffee pot.

In fact, the conversation is less rational than that: it's a fictional Xerox machine.

The story begins with the brilliantly conceived and executed TV show Mad Men. Set in the 1960s in a Madison Avenue advertising agency of the kind my father worked with at that time (he was a Manhattan printer from the 1920s to the 1980s), the show's first season featured secretaries and a typing pool. At the beginning of the second season, which starts in February 1962, the office has a new arrival: a Xerox 914. The secretaries gape at it and admire its workings without quite realizing that the machine heralds the decline of secretarial careers, a process that will become complete when PCs arrive on every desktop. Like, now.

"There will be a little 914 in everything," the machine tweeted at one point, a nod to the fact that today's graphical interfaces were first dreamed up at Xerox's PARC research lab, doubtless funded by some of the 914's revenues.

Pause to look up the Xerox 914. It was, I read on Wikipedia, the first commercially successful plain-paper copier. Plain paper! In 1961 the only copy machines I ever got near used nasty thermal paper that got easily scuffed. In fact, I was still being rude about the local library's thermal paper fax in 1971. Sterling Cooper was an early adopter and a big spender on this one. Its number derived from the size of things it could copy: anything up to 9in by 14in.

Aaannyway, someone on the WELL noted that the show's 914 had a Twitter account. I thought it was just amusing enough to follow. For months, it burped out a tweet at irregular intervals, a few weeks or a month apart. It hinted at irregularities in the expense accounts filed by Pete Campbell (a character on the show who also has a Twitter feed), and admired Joan Holloway's figure (ditto). I don't follow the human characters. Human characters are a dime a dozen. It takes real talent to be a machine.

The other night, the machine went berserk and started pumping out URLs. No explanation of what they were, just shortened URLs. Ten or 20 at least, in the space of an hour or two. Finally, maddened, I sent the machine a message.

"Did squirrels get into the nuts in the writers' room, or what?" I demanded intemperately. I didn't expect an answer any more than I did on the day in 1979 at the Winnipeg Folk Festival, when I passed a guy pouring beer on his head and - well, I guess he thought it was - dancing, and muttered, just to vent, "First time on the planet, sir?" (Stan Rogers, who happened to be watching, reminded me of this incident several years later; apparently he liked the line so much he grabbed it and used it on hecklers throughout the rest of his career.)

The next morning, however, I found a message waiting: "My nuts are perfectly tight, thank you."

I posted this little exchange back onto the WELL, where someone less suspicious than I pointed out that the URLs the machine had been posting were links to pictures of other old Xerox machines and very early computers, plus one to a secret Fortran manual. The machine, in other words, was behaving exactly in character, excited because it had come across a treasure trove of pictures of friends, family, and...would that machine look sexy if you were a machine? Oh. It was surfing for *porn*.

It wasn't unreasonable to be suspicious. Spam has come to Twitter, as will become increasingly obvious over the next few months. I used "credit card" in a message this week, and almost instantly got a reply directing me to a site selling money management tools to help me pay off my credit cards. (My credit cards are perfectly tight, thank you.) And of course, someone could have hacked the machine's account, or the studio advertising department could have decided restraint was stupid. You just never know. But...I was wrong.

And so I told it, with an apology for not trusting it. It replied with nothing but a shortened URL that, when I clicked, displayed an empty page with a message in the title bar: "No apology needed @wendyg, I am only offended by shameless low voltage and the occasional body fluids on my glass." Hm.

But I'm still making this conversation sound more sensible than it was, because it's actually not clear which, if any, of the characters' Twitter feeds actually emanate from the show's broadcast channel, AMC, or from the show's production team. There was, some months back, a mini-war between the Twitterers and AMC, which issued DMCA notices to shut them down and then recanted. Xerox914's profile links to the real 914's Wikipedia entry; others link to fan blogs; a few go to AMC's site.

So start over.

The other week I was chatting on Twitter with a fake fictional Xerox machine from 1961. On the Internet, no one knows you're a piece of carbon paper...

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

March 13, 2009

Threat model

It's not about Phorm, it's about snooping. At Wednesday morning's Parliamentary roundtable, "The Internet Threat", the four unhappy Phorm representatives I counted had a hard time with this. Weren't we there to trash them and not let them reply? What do you mean the conversation isn't all about them?

We were in a committee room many medieval steps up inside the House of Lords. The gathering was convened by Baroness Miller of Chilthorne Domer with the idea of helping Parliamentarians understand the issues raised not only by Phorm but also by the Interception Modernisation Programme, Google, Microsoft, and in fact any outfit that wants to collect huge amounts of our data for purposes that won't be entirely clear until later.

Most of the coverage of this event has focused on the comments of Sir Tim Berners-Lee, the indefatigable creator of the 20-year-old Web (not the Internet, folks!), who said categorically, "I came here to defend the integrity of the Internet as a medium." Using the Internet, he said, "is a fundamental human act, like the act of writing. You have to be able to do it without interference and/or snooping." People use the Internet when they're in crisis; even just a list of URLs you've visited is very revealing of sensitive information.

Other distinguished speakers included Professor Wendy Hall, Nicholas Bohm representing the Foundation for Information Policy Research, the Cambridge security research group's Richard Clayton, the Open Rights Group's new executive director, Jim Killock, and the vastly experienced networking and protocol consultant Robb Topolski.

The key moment, for me, was when one of the MPs the event was intended to educate asked this: "Why now?" Why, in other words, is deep packet inspection suddenly a problem?

The quick answer, as Topolski and Clayton explained, is "Moore's Law." It was not, until a couple-three years ago, possible to make a computer fast enough to sit in the middle of an Internet connection and not only sniff the packets but examine their contents before passing them on. Now it is. Plus, said Clayton, "Storage."

But for Kent Ertugrul, Phorm's managing director, it was all about Phorm. The company had tried to get on the panel and been rejected. His company's technology was being misrepresented. Its system makes it impossible for browsing habits to be tracked back to people. Tim Berners-Lee, of all people, if he understood their system, would appreciate the elegance of what they've actually done.

Berners-Lee was calm, but firm. "I have not at all criticized behavioral advertising," he pointed out. "What I'm saying is a mistake is snooping on the Internet."

Right on.

The Internet, Berners-Lee and Topolski explained, was built according to the single concept that all the processing happens at the ends, and that the middle is just a carrier medium. That design decision has had a number of consequences, most of them good. For example, it's why someone can create the new application of the week and deploy it without getting permission. It's why VOIP traffic flows across the lines of the telephone companies whose revenues it's eating. It is what network neutrality is all about.

Susan Kramer, saying she was "the most untechie person" (and who happens to be my MP), asked if anyone could provide some idea of what lawmakers can actually do. The public, she said, is "frightened about the ability to lose privacy through these mechanisms they don't understand".

Bohm offered the analogy of water fluoridation: it's controversial because we don't expect water flowing into our house to have been tampered with. In any event, he suggested that if the law needs to be made clearer it is in the area of laying down the purposes for which filtering, management, and interference can be done. It should, he said, be "strictly limited to what amounts to matters of the electronic equivalent of public health, and nothing else."

Fluoridation of water is a good analogy for another reason: authorities are transparent about it. You can, if you take the trouble, find out what is in your local water supply. But one of the difficulties about a black-box-in-the-middle is that while we may think we know what it does today - even if you trust, say, Richard Clayton's report on how Phorm works (PDF) - there's no guarantee of how the system will change in the future. Just as, although today's government may have only good intentions in installing a black box in every ISP that collects all traffic data, the government of ten years hence may use the system in entirely different ways for which today's trusting administration never planned. Which is why it's not about Phorm and isn't even about behavioural advertising; Phorm was only a single messenger in a bigger problem.

So the point is this: do we want black boxes whose settings we don't know and whose workings we don't understand sitting at the heart of our ISPs' networks examining our traffic? This was the threat Baroness Miller had in mind - a threat *to* the Internet, not the threat *of* the Internet beloved of the more scaremongering members of the press. Answers on a postcard...


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

March 6, 2009

The camcorder conundrum

So yesterday on BBC Radio Scotland's The Movie Café, Eddie Leverton, on behalf of the Federation Against Copyright Theft, directed what I thought was going to be a general discussion of file-sharing and the role of ISPs into the specific case of movies being uploaded within weeks, perhaps hours, of their first release.

This is a different problem than the one we usually talk about. While it's legitimate to argue that people who sample music and TV shows online may become paying customers, it's harder to argue the same about movies, still less about movies in first-run, when they pick up most of their ticket sales. A Dutch study of file-sharing, published on February 18 (there's an English version here (PDF)), makes precisely this point: that file-sharing does not have the same impact on music, TV shows, and films.

Music, the authors argue, is the most likely to be replayed frequently. TV shows, less so, but still: you replay early episodes when later ones cast a new light on them, or, with shows like The Sopranos or Damages, you rewatch the last season to gear up for the new one. Movies, however... There are of course some movies - the Marx Brothers in A Night at the Opera, François Truffaut's Day for Night - that you revisit periodically throughout your lifetime. But let's face it, there are a lot of movies that you're only going to see once, and that only to stay in touch with popular culture. One must therefore calculate the ratio of files shared to sales lost differently in each of these cases. It is reasonable to suppose that file-sharing has a bigger impact on the film industry.

Nonetheless, the Dutch report calculates that overall file-sharing is a benefit to society at large. Certainly, a lot of Dutch people are doing it: 4.7 million Dutch Internet users (out of a total population of 16.6 million as of last July) aged 15 or older have downloaded files without paying on one or more occasions in the last year. As of now, the film industry's revenues are still growing in the Netherlands in terms of cinema visits and DVD sales.

But DVD rentals are slumping - and that, in my own experience, is exactly where you'd expect file-sharing to have its first effect. For me, DVD rental replaced premium TV channels: for the same money, I could see at least as many new movies in a month, and they'd be more interesting. Since most movie DVDs get ripped and uploaded with celerity, if you're willing to forego some quality in favor of convenience, file-sharing is an easy replacement for DVD rentals. "File-sharing and buying go hand in hand," says the Dutch report; the same need not apply to rentals.

But Leverton was talking about movies recorded in the cinema on a camcorder and then uploaded. Industry paranoia about this has reached a high level. Also on the show was a film critic enraged at having his mobile phone uplifted during critics' previews. Impounding critics' mobile phones makes sense, I suppose, if you think alienating the critics before the movie even starts is a good idea. Making them line up at the end to get their phones back is a really excellent way of putting them in a foul mood to write their reviews, too.

The film critic and I pointed out that a lot of early torrents come from screeners and other insider leaks. Leverton denied this, saying screeners haven't been an issue for three years. I have news for him: a quick search finds (unchecked for validity) torrents of screeners of films opening in the US this week and even a few that haven't opened yet. Surely these pose a bigger threat than camcorders: there must be some limit to how much quality people are willing to give up just to get something for free. The camcorder rips I've seen are ghastly; you'd have to be either desperate to see that particular film or the kind of person who'll watch anything as long as it's free. The former probably have no other choice; the latter are interested in free stuff, not movies. Neither category is likely to represent lost sales.

More generally, if people are watching downloaded copies of movies rather than going to a theater, then there's something wrong with the theater experience. And there is: it's expensive, it's technically inferior, the sound is usually too loud, and the traveling takes time, which is in increasingly short supply. Cinema showings now have to compete with home theater, especially as many DVDs now cost less to buy than a single ticket. They also have to compete with other entertainments: when the cost of movies in London's West End reached the price of a ticket for live theater, suddenly live theater seemed like the far better deal.

So is file-sharing really the film industry's biggest problem? The Dutch report recommends redefining its business models. Creating legitimate download services is a start. But do stop blaming ISPs: licit downloads cost them just as much in bandwidth as illicit ones.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

February 20, 2009

Control freaks

It seems like every year or two some currently popular company revises its Terms of Service in some stupid way that gets all its users mad and then either 1) backs down or 2) watches a stampede for the exits. This year it's Facebook.

In announcing the reversal, founder Mark Zuckerberg writes that given its 175 million users, if Facebook were a country it would be the sixth most populous country in the world, and called the TOS a "governing document". While those numbers must sound nice on the business plan - wow! Facebook has more people than Pakistan! - in reality Facebook doesn't have 175 million users in the sense that Pakistan has 172 million inhabitants. I'm sure that Facebook, like every other Internet site or service, has a large percentage of accounts that are opened, used once or twice, and left for dead. Countries must plan governance and health care for all their residents; no one's a lapsed user of the country they live in.

Actually, the really interesting thing about 175 million people: that's how many live outside the countries they were born in. Facebook more closely matches the 3 percent of the world's population who are migrants.

It is nice that Zuckerberg is now trying to think of the TOS as collaborative, but the other significant difference is of course that Facebook is owned by a private company that is straining to find a business model before it stops being flavor of the month. (Which, given Twitter's explosive growth, could be any time now.) The Bill of Rights in progress has some good points (that sound very like the WELL's "You own your own words", written back in the 1980s). The WELL has stuck to its guns for 25 years, and any user can delete ("scribble") any posting at any time, but the WELL has something Facebook doesn't: subscription income. Until we know what Facebook's business model is - until *Facebook* knows what Facebook's business model is - it's impossible to put much faith in the durability of any TOS the company creates.

At the Guardian, Charles Arthur argues that Facebook should just offer a loyalty card, because no one reads the fine print on those. That's social media for you: grocery shopping isn't designed for sharing information. The reason Facebook and other Net companies get into this kind of trouble is that they *are* social media, and it only takes a few obsessives to spread the word. If you do read the fine print of TOSs on other sites, you'll be even more suspicious.

But it isn't safe to assume - as many people seem to have - that Facebook is just making a land grab; its missing-or-unknown business model is what makes us so suspicious. The problem Zuckerberg is grappling with is a real one: when someone wants to delete their account and leave a social network, where is the boundary of their online self?

The WELL's history, however, does suggest that the issues Zuckerberg raises are real. The WELL's interface always allowed hosts and users to scribble postings; the function, according to Howard Rheingold in The Virtual Community and in my own experience, was and is very rarely used. But scribble only deletes one posting at a time. In 1990, a departing staffer wrote and deployed a mass-scribble tool to seek out and destroy every posting he had ever made. Some weeks later, more famously, a long-time, prolific WELL user named Blair Newman turned it loose on his own work and then, shortly afterwards, committed suicide.

Any suicide leaves a hole in the lives of the people he knows, but on the WELL the holes are literal. A scribbled posting doesn't just disappear. Instead, the shell of the posting remains, with a placeholder message in place of the former content. Also, scribbling makes even long-dead topics pop up when you read a conference, so a mass scribble hits you in the face repeatedly. It doesn't happen often; the last I remember was about ten years ago, when a newly appointed CEO of a public company decided to ensure that no trace remained of anything inappropriate he might ever have posted.

Of course, scribbling your own message doesn't edit other people's. While direct quoting is not common on the WELL - after all, the original posting is (usually) still right there, unlike email or Usenet - people refer to and comment on each other's postings all the time. So what's left is a weird echo, as if all copies of the Bible suddenly winked out of existence leaving only the concordances behind.

It is this problem that Zuckerberg is finding difficult. The broad outline so far posted seems right: you can delete the material you've posted, but messages you've sent to others remain in their inboxes. There are still details to settle: what about comments you post to others' status updates or on their Walls? What about tags identifying you that other people have put in their photographs?

Of course, Zuckerberg's real problem is getting people to want to stay. Companies like to achieve this by locking them in, but ironically, just like in real life, reassuring people that they can leave is the better way.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

February 14, 2009

The Gattaca in Gossip Girl

Spotted: net.wars obsessing over Gossip Girl instead of diligently reading up on the state of the data retention directive's UK implementation.

It's the cell phones. The central conceit of the show and the books that inspired it is this: an unseen, one-person Greek chorus of unknown identity (voiced by Kristen Bell in a sort of cross between her character on Veronica Mars and Christina Ricci's cynical, manipulative trouble-maker in The Opposite of Sex) publishes - to the Web and by blast to subscribers' cell phones - tips and rumors about "the scandalous lives of Manhattan's elite".

The Upper East Siders she(?) reports on are, of course, the private high school teens whose centrally planned destiny is to inherit their parents' wealth, power, social circles, and Ivy League educations. These are teens under acute pressure to perform as expected, and in between obsessing about whether they can get into Yale (played on-screen by Columbia), they blow off steam by throwing insanely expensive parties, drinking, sexing, and scheming. All, of course, in expensive designer clothes and bearing the most character- and product-placement-driven selection of phones ever seen on screen.

Most of the plots are, of course, nonsense. The New Yorker more or less hated it on sight. Also my first reaction: I went, not to the school the books' author, Cecily von Ziegesar, attended, but to one in the same class 25 years earlier, and then to an Ivy League school. One of my closest high school friends grew up in - and his parents still live at - the building inhabited in the series by teen queen Blair Waldorf. So I can assess the show's unreality firsthand. So can lots of other New Yorkers who are equally obsessed with the show: New York magazine runs a hysterically funny reality-index recap of each episode of "the Greatest Show of Our Time", followed by a recap of the many comments.

But we never had the phones! Pink and flip, slider and black, Blackberries, red, gold, and silver phones! Behind the trashy drama portraying the ultra rich as self-important, stressed-out, miserable, self-absorbed, and mean is a fictional exploration of what life is like under constant surveillance by your peers.

Over the year and a half of the show's run - SPOILER ALERT - all sorts of private secrets have been outed on Gossip Girl via importunate camera phone and text message. Serena is spotted buying a pregnancy test (causing panic in at least two households); four characters are revealed at a party full of agog subscribers to be linked by a half-sibling they didn't know they had until the blast went out; and of course everyone is photographed kissing (or worse) the wrong person at some point. Exposure via Gossip Girl is also handy for blackmail (Blair), pre-emption (Chuck), lovesick yearning (Dan), and outing his sister's gay boyfriend (Dan).

"If you're sending tips to Gossip Girl, you're in the game with the rest of us," Jenny tells Dan, who had assumed his own moral superiority.

A lot of privacy advocates express concern that today's "digital natives" don't care about privacy, or at least don't understand the potential consequences to their future job and education prospects of the decisions they make when they post the intimate details of their lives online. In fact, when this generation grows up they'll all be in the same boat, exposure-wise. Both in reality and in this fiction, the case is as it's usually been: teens don't fear each other; they collude as allies to exclude their parents. That trope, too, is perfectly played on the show when Blair (again!) gets rid of a sociopathic interloper by going over the garden wall and calling her parents. This is not the world of David Brin's The Transparent Society, after all; the teens surveil each other but catch adults only by accident, though they take full advantage when they do.

"Gossip Girl...is how we communicate," Blair says, trying to make one of her many vendettas seem normal.

Privacy advocates also often stress that surveillance chills spontaneous behaviour. Not here, or at least not yet. Instead, the characters manipulate and expose, then anguish when it happens to them. A few become inured.

Says Serena, trying to comfort Rachel Carr, the first teacher to be so exposed: "I've been on Gossip Girl plenty of times and for the worst things...eventually everyone forgets. The best thing to do with these things is nothing at all."

Phones and Gossip Girl are not the only mechanisms by which the show's characters spy on and out each other. They use all the more traditional media, too - in-person interaction, mistaken identity (a masked ball!), rifling through each other's belongings, stolen phones, eavesdropping, accident, and, of course, the gossip pages of the New York press.

"It's anonymous, so no one really knows," Serena says, when asked who is behind the site. But she and all the others do know: the tips come from each other and from the nameless other students they ignore in the background. Gossip Girl merely forwards them, with commentary in her own style:

You know you love me.

XOXO,
Net.wars

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

February 6, 2009

Forty-five years

This week the EU's legal affairs committee, JURI, may vote - again - on term extension in sound recordings. As of today, copyright is still listed on the agenda.

Opposing term extension was a lot simpler at the national level in the UK; the path from proposal to legislation is well-known, well-trodden, and well-watched by the national media. At the EU level, JURI is only one of four committees involved in proposing and amending term extension on behalf of the European Parliament - and then even after the Parliament votes it's the Commission that makes the final decision. The whole thing drags on for something close to forever, which pretty much guarantees that only the most obsessed stay in touch through the whole process. If you had designed a system to ensure apathy except among lobbyists who like good food, you'd have done exactly this.

There are many reasons to oppose term extension, most of which we've covered before. Unfortunately, these seem invisible to some politicians. As William Patry blogs, the harm done by term extension is diffuse and hard to quantify while easily calculable benefits accrue to a small but wealthy and vocal set of players.

What's noticeable is how many independent economic reviews agree with what NGOs like the Electronic Frontier Foundation and the Open Rights Group have said all along.

According to a joint report from several European intellectual property law centers (PDF), the Commission itself estimates that 45 extra years of copyright protection will hand the European music industry between €44 million and €843 million - uncertain by a factor of 20! The same report also notes that term extension will not net performers additional broadcast revenue; rather, the same pot will be spread among a larger pool of musicians, benefiting older musicians at the expense of young incomers. The report also notes that performers don't lose control over their music when the term of copyright ends; they lose it when they sign recording contracts (so true).

Other reports are even less favorable. In 2005, for example, the Dutch Institute for Information Law concluded that copyright in sound recordings has more in common with design rights and patents than with other areas of copyright, and it would be more consistent to reduce the term rather than extend it. More recently, an open letter from Bournemouth University's Centre for Intellectual Property Policy Management questioned exactly where those estimated revenues were going to come from, and pointed out the absurdity of the claim that extension would help performers.

And therein is the nub. Estimates are that the average session musician will benefit from term extension in the amount of €4 to €58 (there's that guess-the-number-within-a-factor-of-20 trick again). JURI's draft opinion puts the number of affected musicians at 7,000 per large EU member state, fewer in the rest. Call it 7,000 in all 27 and give each musician €20; that's €3.78 million, hardly enough for a banker's bonus. We could easily hand that out in cash, if handouts to aging performers are the purpose of the exercise.
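The back-of-the-envelope sum is easy to check (a quick sketch using only the figures quoted in this column; the €4-€58 range and the 7,000-per-state count are the reports' estimates, and applying the large-state figure to all 27 member states is, as noted, generous):

```python
# Figures quoted above: JURI's draft puts affected session musicians at
# 7,000 per large EU member state; estimated per-musician benefit is EUR 4-58.
musicians_per_state = 7_000
member_states = 27                 # generously counting every state as "large"
total_musicians = musicians_per_state * member_states

low = total_musicians * 4          # bottom of the estimated range
high = total_musicians * 58        # top of the estimated range
at_twenty = total_musicians * 20   # the EUR 20 used in the text

print(total_musicians)  # 189000 musicians
print(low, high)        # 756000 10962000
print(at_twenty)        # 3780000 - i.e. the EUR 3.78 million above
```

Even at the very top of that range, the total is a rounding error next to the hundreds of millions the impact assessment routes to record producers.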

Benefiting performers is a lobbyists' red herring that cynically plays on our affection for our favorite music and musicians; what term extension will do, as the Bournemouth letter points out, is benefit recording companies. Of that wackily wide range of estimated revenues in the last paragraph, 90 percent - between €39 million and €758 million - will go to record producers, even according to the EU's own impact assessment (PDF), based on a study carried out by PricewaterhouseCoopers.

If you want to help musicians, the first and most important thing you should do is improve the industry's standard contracts and employment practices. We protect workers in other industries from exploitation; why should we make an exception for musicians? No one is saying - not even Courtney Love - that musicians deserve charity. But we could reform UK bankruptcy law so that companies acquiring defunct labels are required to shoulder ongoing royalty payment obligations as well as the exploitable assets of the back catalogue. We could put limits on what kind of clauses a recording company is allowed to impose on first-time recording artists. We could set minimums for what is owed to session musicians. And we could require the return of rights to the performers in the event of a recording's going out of print. Any or all of those things would make far more difference to the average musician's lifetime income than an extra 45 years of copyright.

Current proposals seem to focus on this last idea as a "use it or lose it" clause that somehow makes the rest of term extension all right. Don Foster, the Liberal Democrat MP who is shadow minister for Culture, Media, and Sport, for example, has argued for it repeatedly. But by itself it's not enough of a concession to balance the effect of term extension and the freezing of the public domain.

If you want to try to stop term extension, this is a key moment. Lobby your MEP and the members of the relevant committees. Remind them of the evidence. And remind them that it's not just the record companies and the world's musicians who have an interest in copyright; it's the rest of us, too.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

January 30, 2009

Looking backward

Governments move slowly; technology moves fast. That's not a universal truth - witness Obama's first whirlwind week in office - but in the early days of the Net it was the kind of thing people said smugly when they wanted to claim that cyberspace was impervious to regulation. It worked well enough for, say, setting free strong cryptography over the objections of the State Department and ITAR.

This week had two perfect examples. First: Microsoft noted in its 10-Q that the EU may force it to do something about tying Internet Explorer to Windows - remove it, make it one of only several browsers consumers can choose from at setup, or randomly provide different browsers. Still fighting the browser wars? How 1995.

Second: the release of the interim Digital Britain report by the Department for Culture, Media, and Sport. Still proposing Digital Rights Management as a way of protecting rightsholders' interest in content? How 2005.

It probably says something about technology cycles that the DRM of 2005 is currently more quaint and dated than the browser wars of 1995-1998. The advent of cloud computing and Google's release of Chrome last year have reinvigorated the browser "market". After years of apparent stagnation it suddenly matters again that we should have choices and standards to keep the Internet from turning into a series of walled gardens (instead of a series of tubes).

DRM, of course, turns content into a series of walled gardens and causes a load of other problems we've all written about extensively. But the most alarming problem about its inclusion in the government's list of action items is that even the music industry that most wanted it is abandoning it. What year was this written in? Why is a report that isn't even finished proposing to adopt a technological approach that's already a market failure? What's next, a set of taxation rules designed for CompuServe?

The one bit of good, forward-thinking news - which came as a separate announcement from Intellectual Property Minister David Lammy - is that apparently the UK government is ready to abandon the "three strikes" idea for punishing file-sharers: it's too complicated (Yes, Minister rules!) to legislate. And it's sort of icky arresting teenagers in their bedrooms, even if the EU doesn't see anything wrong with that and the Irish have decided to go ahead with it.

The interim report bundles together issues concerning digital networks (broadband, wireless, infrastructure), digital television and radio, and digital content. It's the latter that's most contentious: the report proposes creating a Rights Agency intended to encourage good use (buying content) and discourage bad use (whatever infringes copyright law). The report seems to turn a blind eye to the many discussions of how copyright law should change. And then there's a bunch of stuff about whether Britain should have a second public service broadcaster to compete "for quality" with the BBC. How all these things cohere is muddy.

For a really scathing review of the interim report, see The Guardian, where Charles Arthur attacks not only the report's inclusion of DRM and a "rights agency" to collaborate on developing it, but its dirt-path approach to broadband speed and its proposed approach to network neutrality (which it calls "net neutrality", should you want to search the report to find out what it says).

The interim report favors allowing the kind of thing Virgin has talked about: making deals with content providers in which they're paid for guaranteed service levels. That turns the problem of who will pay for high-speed fiber into a game of pass-the-parcel. Most likely, consumers will end up paying, whether that money goes to content providers or ISPs. If the BBC pays for the iPlayer, so do we, through the TV license. If ISPs pay, we pay in higher bandwidth charges. If we're going to pay for it anyway, why shouldn't we have the freedom of the Internet in return?

This is especially true because we do not know what's going to come next or how people will use it. When YouTube became the Next Big Thing, oh, say, three or four years ago, it was logical to assume that all subsequent Next Big Things were going to be bandwidth hogs. The next NBT turned out to be Twitter, which is pretty much your diametrical opposite. Now, everything is social media - but if there's one thing we know about the party on the Internet it's that it keeps on moving on.

There's plenty that's left out of this interim report. There's a discussion of spectrum licensing that doesn't encompass newer ideas about spectrum allocation. It talks about finding new business models for rightsholders without supporting obsolete ones, about the "sea of unlawful activity in which they have to swim", and about ISPs - but it leaves out consumers except as "customers" or illegal copiers. It nods at the notion that almost anyone can be a creator and find distribution, but still persists in talking of customers and rightsholders as if they were never the same people.

No one ever said predicting the future was easy, least of all Niels Bohr, but it does help if you start by noticing the present.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

January 16, 2009

Health watch

We'll have to wait some months to find out what Steve Jobs' health situation really is, just as investors will have to wait to find out how well Apple is prepared to handle his absence. But that doesn't stop rampant speculation about both things, or discussion about whether Jobs owes it to the public to disclose his health problems.

As an individual, of course not. We write - probably too often for some people's tastes - about privacy with respect to health matters. But Jobs isn't just a private individual, and he isn't an average CEO. Like Warren Buffett, whose company's share price declined noticeably some years back during a scare over his health, Jobs accounts, as CEO, for a noticeable percentage of Apple's share price. That means that shareholders - and therefore by extension the Securities and Exchange Commission - have some legitimate public interest in his state of health.

That doesn't mean that all the speculation going on is a good thing. If Jobs is smart, he doesn't read news stories about himself; in normal times no one needs their sense of self-importance inflated that much, and in a health crisis the last thing you need is to read dozens of people speculating that you're on the way out. The pruriently curious may like to know that there is some speculation that the weight loss is the result of the Whipple procedure Jobs reportedly had in 2004 to treat his islet cell neuroendocrine tumor (a less aggressive type of pancreatic cancer); or that it's a thyroid disorder. No one wants to just write a post that says simply, "I don't know."

It would not matter if Jobs and Apple did not so conspicuously embrace the cult of personality. The downside of having a celebrity CEO is that when that CEO is put out of action the company struggles to keep its market credibility. The more the CEO takes credit - and Jobs is indelibly associated with each of Apple's current products - the less confidence people have in the company he runs.

To a large extent, it's absurd. No one - not even Jobs - can run a tech company the size of Apple by himself. Jobs may insist on signing off on every design detail, but let's face it, he's not the one working evenings and weekends to write the software code and run bug testing and run a final polishing cloth over the shinies before they hit the stores. Apple definitely lost its way during the period he wasn't at the helm - that much is history. But Jobs helped recruit John Sculley, the CEO who ran Apple during those lost years. And Jobs's next company, NeXT, was a glossy, well-designed, technically sophisticated market failure whose biggest success came when Apple bought it (and Jobs) and incorporated some of the company's technology into its products. Jobs had far more success with Pixar, now part of Disney; but accounts of the company's early history suggest it was the company's founders who did the heavy lifting.

Unfortunately, if you're a public company you don't get to create public confidence by pointing out the obvious: that even with Jobs out of action there's a lot of company left for the managers he picked to run in the directions he's chosen. Apple, whose relations with the press seem to be a dictionary definition of "arrogant", has apparently never cared to create a public image for itself that suggests it's a strong company with or without Jobs.

Compare and contrast to Buffett, who has been a rock star CEO for far longer than Jobs has. Buffett is 78, and Berkshire Hathaway's success is universally associated almost solely with him; yet every year he reminds shareholders that he has three or four candidates to succeed him who are chosen and primed and known to his board of directors. His annual letters to shareholders, too, are filled with praise for the managers and directors of the many subsidiaries Berkshire owns. Based on all that, it is clear that Buffett has an eye to ensuring that his company will retain its value and culture with or without him. That so many Berkshire Hathaway millionaires are his personal friends and neighbors, who staked money in the company decades ago at some personal risk, may have something to do with it.

Apple has not done anything like the same, which may have something to do with the personality of its CEO. Jobs's health troubles of 2004 should have been a wakeup call; if Buffett can understand that his age is a concern for shareholders, why can't Jobs understand that his health is, too? If he doesn't want people prying into his medical condition, that's understandable. But then the answer is to loosen his public identification with the company. As long as the perception is that Jobs is Apple and Apple is Jobs, the company's fortunes and share price will be inextricably linked to the fragility of his aging human body. Show that the company has a plan for succession, give its managers and product developers public credit, and identify others with its most visible products, and Jobs can go back to having some semblance of a private medical record.



December 19, 2008

Backbone

There's a sense in which you haven't really arrived as a skeptic until someone's sued you. I've never had more than a threat, so as founder of The Skeptic, I'm almost a nobody. But by that standard Simon Singh, author with alternative medicine professor Edzard Ernst of the really excellent Trick or Treatment: The Undeniable Facts about Alternative Medicine, has arrived.

I think of Singh as one of the smarter, cooler generation of skeptics, who combine science backgrounds, good writing, and the ability to make their case in the mass media. Along with Ben Goldacre, Singh has proved that I was wrong when I thought, ten years ago, that getting skepticism into the national press on a regular basis was just too unlikely.

It's probably no coincidence that both cover complementary and alternative medicine, one of the biggest consumer issues of our time. We have a government that wants to save money on the health service. We have consumers who believe, after a decade or more of media insistence, that medicine is bad (BSE, childhood vaccinations, mercury fillings) and alternative treatments that defy science (homeopathy, faith healing) are good. We have overworked doctors who barely know their patients and whose understanding of the scientific process is limited. We have patients who expect miraculous cures like the ones they see on the increasingly absurd House. Doctors recommend acupuncture and Prince Charles, possessed of the finest living standards and medical treatment money can buy, promotes everything *else*. And we have medical treatments whose costs spiral ever upwards, and constant reports of new medicines that fail their promise in one way or another.

But the trouble with writing for major media in this area is that you run across the litigious, and so has Singh: as Private Eye has apparently reported, he is being sued for libel by the British Chiropractic Association. The original article was published by the Guardian in April; it's been pulled from the site, but the BCA's suit has made reposting it a cause celebre. (Have they learned *nothing* about the Net?) This annotated version details the evidence to back Singh's rather critical assessment of chiropractic. And there are other such cases, most recently in New Zealand. And people complain about Big Pharma - the people alternative-medicine folks are supposed to be saving us from.

I'm not even sure how much sense it makes as a legal strategy. As the "gimpy" blog's comments point out, most of Singh's criticisms were based on evidence; a few were personal opinion. He mentioned no specific practitioners. Where exactly is the libel? (Non-UK readers may like to glance at the trouble with UK libel laws, recently criticized by the UN as operating against the public interest.)

All science requires a certain openness to criticism. The whole basis of the scientific method is that independent researchers should be able to replicate each other's results. You accept a claim on that basis and only that basis - not because someone says it on their Web site and then sues anyone who calls it lacking in evidence. If the BCA has evidence that Singh is wrong, why not publish it? The answer to bad speech, as Mike Godwin, now working at Wikimedia, is so fond of saying, is more speech. Better speech. Or (for people less fond of talking) a dignified silence in the confidence that the evidence you have to offer is beyond argument. But suing people - especially individual authors rather than major media such as national newspapers - smacks of attempted intimidation. Though I couldn't possibly comment.

Ever since science became a big prestige, big money game we've seen angry fights and accusations - consider, for example, the ungracious and inelegant race to the Nobel prize on the part of some early HIV researchers. Scientists are humans, too, with all the ignoble motives that implies.

But many alternative remedies are not backed by scientific evidence, partly because often they are not studied by scientists in any great depth. The question of whether to allocate precious research money and resources to these treatments is controversial. Large pharmaceutical companies are unlikely to do it, for similar reasons to those that led them to research pills to reverse male impotence instead of new antibiotics. Scientists in research areas may prefer to study bigger problems. Medical organizations are cautious. The British Medical Association has long called for complementary therapies to be regulated to the same standards as orthodox medicine or denied NHS funding. As the General Chiropractic Council notes, NHS funding for chiropractic is so far not widespread.

If chiropractors want to play with the big boys - the funded treatments, the important cures - they're going to have to take their lumps with the rest of them. And that means subluxing a little backbone and stumping up the evidence, not filing suit.


December 12, 2008

Watching the Internet

It is more than ten years since it was possible to express dissent about the rights and wrongs of controlling the material available on the Net without being identified as either protecting child abusers or being one. Even the most radical of civil liberties organisations flinch at the thought of raising a challenge to the Internet Watch Foundation. Last weekend's discovery that the IWF had added a page from Wikipedia to its filtering list was accordingly the best possible thing that could have happened. It is our first chance since 1995 to have a rational debate about whether the IWF is successfully fulfilling the purpose for which it was set up, and about the near-nationwide coverage of BT's Cleanfeed, which persists despite the problems Cambridge researcher Richard Clayton has highlighted (PDF).

The background: the early 1990s were full of media scare stories about the Internet. In 1996, the police circulated a list of 133 Usenet newsgroups they claimed hosted child pornography, and threatened seizures of equipment. The government threatened regulation. And in that very tense climate, Peter Dawe, the founder of Pipex, called a meeting to announce an initiative, sketched out on the back of an envelope and called SafetyNet, aimed at hindering the spread of child pornography over the Internet. He was willing to stump up £500,000 to get it off the ground.

Renamed the IWF, the system still operates largely as he envisioned it: it runs a hotline to which the public can report the objectionable material they find. If the IWF believes the material is illegal under UK law and it's hosted in the UK, the ISP is advised to remove it and the police are notified. If it's hosted elsewhere, the IWF adds it to the list of addresses it recommends for blocking. ISPs pay to join the IWF and subscribe to the list, and the six biggest ISPs, which have 90 to 95 percent of the UK's consumer accounts, are all members. Cleanfeed is BT's implementation of the list. Of course, despite its availability via Google Groups, Usenet hardly matters any more, and ISPs are quietly beginning to drop it from their offerings as a cost with little return.
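The triage flow described above amounts to a simple decision rule. A minimal sketch, with hypothetical function and action names (the real decisions are, of course, editorial judgments made by the IWF's staff, not software):

```python
# Sketch of the IWF hotline triage described above. Names are
# hypothetical; the actual process is a human editorial judgment.

def triage(is_potentially_illegal: bool, hosted_in_uk: bool) -> list[str]:
    """Return the actions taken for one hotline report."""
    if not is_potentially_illegal:
        return []  # no action: material isn't judged illegal under UK law
    if hosted_in_uk:
        # UK-hosted: advise the hosting ISP to remove it, notify the police
        return ["advise_isp_removal", "notify_police"]
    # Hosted abroad: recommend the URL for member ISPs' blocklists
    return ["add_to_blocklist"]
```

The asymmetry this makes visible is the crux of the Wikipedia incident: material hosted abroad can only ever be blocked, never removed, so a false positive stays on the list until someone notices.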

The IWF's statement when it eventually removed the block is rather entertaining: it says, essentially, "We were right, but we'll remove the block anyway." In other words, the IWF still believes the image is "potentially illegal" - which provides a helpful, previously unavailable, window into their thinking - but it recognises the foolishness of banning a page on the world's fourth biggest Web site, especially given that the same image can be purchased in large British record shops in situ on the cover of the 32-year-old album for which it was commissioned.

We've also learned that the most thoughtful debate on these issues is actually available on Wikipedia itself, where the presence of the image had been discussed at length from a variety of angles.

At the free speech end of the spectrum, the IWF is an unconscionable form of censorship. It operates a secret blocklist, it does not notify non-UK sites that they are being blocked, and it operates an equally secret appeals process. Some of this is silly. If it's going to exist, the blocklist has to be confidential: a list of Internet links is actions, not words; the links can be emailed across the world in seconds, and their targets downloaded in minutes. Plus, publishing the list might itself be a crime: under UK law, it is illegal to take, make, distribute, show, or possess indecent images of children, and that includes accessing such images.

At the control end of the spectrum, the IWF is probably too limited. There have been calls for it to add hate speech and racial abuse to its mandate, calls that as far as we know it has so far largely resisted. Pornography involving children - or, in the IWF's preferred terminology, "child sexual abuse images" - is the one thing that most people can agree on.

When the furor dies down and people can consider the matter rationally, I think there's no chance that the IWF will be disbanded. The compromise is too convenient for politicians, ISPs, and law enforcement. But some things could usefully change. Here's my laundry list.

First, this is the only mistake that's come to light in the 12 years of the IWF's existence, and the way it was caught should concern us: it surfaced only because of Wikipedia's popularity and because of technical incompatibilities between the way Wikipedia protects itself from spam edits and the way UK ISPs have implemented the block list. Other false positives may not be so lucky. The IWF has been audited twice in 12 years; audits should be more frequent, and the results should be published.

The IWF board should be rebalanced to include at least one more free speech advocate and a representative of consumer interests. Currently, it is heavily overbalanced in the direction of law enforcement and child protection representatives.

There should be judicial review and/or oversight of the IWF. In other areas of censorship, it's judges who make the call.

The IWF's personnel should have an infusion of common sense.

November 28, 2008

Mother love

It will be very easy for people to take away the wrong lessons from the story of Lori Drew, who this week was found guilty of several counts of computer fraud in a case of cyberbullying that drove 13-year-old Missouri native Megan Meier to suicide.

The gist: in 2006, 49-year-old Lori Drew, a neighbor of Meier's, came to believe that Meier had spread gossip about her own 13-year-old daughter, a former friend of Meier's. With help from her daughter and her 18-year-old assistant, Drew created a MySpace page belonging to a fictitious 16-year-old boy named Josh Evans. For some weeks Evans sent Meier flirtatious messages, then abruptly dumped her with a stream of messages and bulletins, ending with: "The world would be a better place without you." Meier, who had for five years been taking prescription medication for attention deficit disorder and depression, and who was overweight and lacked self-esteem, hanged herself.

The story is a horror movie for parents. This is a teen who was, her mother said in court, almost always supervised in her Internet use. In fact, Meier and Drew's daughter had, some months earlier, created a fake MySpace page to talk to boys online, an escapade that caused Meier's mother to close down her MySpace access for some months. On the day of Meier's suicide, her mother was on her way to the orthodontist with her younger daughter when Meier, distraught, reported the stream of unpleasant messages. Her mother told her to sign off. She didn't; when her mother came home there was a brief altercation; they found her 20 minutes later.

The basic elements of the story are not, of course, new. Identity deception is as old as online services; the best-known early case was that of Joan, a CompuServe forum regular who for more than two years in the mid-1980s claimed to be a badly disabled former neuropsychologist whose condition made her reluctant to meet people, especially her many online friends. Joan was in fact a fictional character, the increasingly elaborate creation of a male New York psychiatrist named Alex.

Cyberbullying is, of course, also not new. You can go back to the war between alt.tasteless and rec.pets.cats in 1992, if you like, but organized playground behavior seems to flourish in every online medium. Gail Williams, the conference manager at the WELL, said about ten years ago that a lot of online behavior seems to be people working out their high school angst, and nothing has changed in the interim except that a lot of people online now are actually still in high school. And unfortunately for them, the people they're working out their high school angst with are bigger, older, more experienced, and a lot savvier about where to stick in the virtual knife. People can be damned unpleasant sometimes.

But let's look at the morals people are finding. EfluxMedia:
The case of Megan Meier calls for boundaries when it comes to cyberbullying and the use of social networking sites in general, but also calls for reason. Social networking sites and the Internet in general have become more than just virtual realities, they are now part of our everyday lives, and they influence us in ways that we cannot ignore. What we must learn from this is that our actions may have unimaginable consequences on other people, even when it comes to the Internet, so think twice before you act.

Boundaries? Meier was far more rigorously supervised online than the average teen. Who's going to supervise the behavior of a 49-year-old woman to make sure she doesn't cross the line?

More to the point, the court's verdict found that Drew had broken federal laws concerning computer fraud. Is it hacking to set up a pseudonymous MySpace page and send fraudulent postings? MySpace's 2006 terms and conditions required registration information to be truthful and banned harassment and sexual exploitation. Have MySpace's terms become federal law?

The answer is probably that there was no properly applicable law. We've seen that situation before, too - Robert Schifreen and Steve Gold were prosecuted under the laws against forgery. The eventual failure of the case on appeal proved the need for the Computer Misuse Act and comparable laws against hacking elsewhere in the world. Ironically, these laws are now showing their limits, too, as the Drew case proves. We can now, I suppose, expect to see a lot of proposals for laws banning cyberbullying under which people like Drew could be more correctly prosecuted.

But the horror movie is only partly about online; online, in this case MySpace, allowed the hoaxers to post "Josh Evans'" bare-chested photo. The same kind of hoax, with hardly less impact, could have been carried out by letter and poster. Wanda Holloway didn't need online to contract to murder her daughter's more successful cheerleading rival.

Ultimately, the lesson we should be learning is the same one we heard at this year's Computers, Freedom, and Privacy conference: as with rape and incest, you are more at risk of harassment and cyberbullying from people you know. Unfortunately, most such law seems to be written with the idea that it's strangers who are dangerous.


November 14, 2008

The USB stick in the men's room

How can we compete with free?

This is the question the entertainment industry has been asking ever since the first MP3 was uploaded. We are supposed to feel sorry for them, pass laws to protect their business model, and arrest the wicked "pirates" who "steal" their work and...well, I suppose "fence" would be the right word for getting it out to others.

Many of us have argued many times that the numbers rightsholders - the software industry, the entertainment industry - come up with to estimate the direct cost of piracy to their bottom lines are questionable, if not greatly exaggerated. Not all free downloads would have been sales; some customers would not have paid for the work if they couldn't first sample it for free. Agonizingly slowly, the entertainment industry is beginning to behave in the ways we've argued for all along. Digital rights management is vanishing from downloaded music; MGM is putting its movies on YouTube; and TV networks are posting their shows online. Legal streaming and downloading is coming along, and while the torrenting population keeps growing, the legal population will grow faster and eventually outstrip it.

But all these pieces of the acrimonious copyright wars are merely about distribution. The more profound copyright wars are just starting, and these are between free content and paid content.

In the free content category: Blogs. Advertorial, including infomercials. Services - Web, print, or otherwise - that are automatically generated from existing content such as news wires and other sites. User-generated sites like Flickr and YouTube.

In the paid content category: all the traditional media.

Clearly some people do manage to compete with free: bottled water, Windows, and iTunes all are successful despite the existence of tap water, Linux, and BitTorrent. Others are struggling: Craigslist is killing the classified advertising in many US newspapers, including the New York Times and its subsidiary, the Boston Globe; Flickr is making life hard for photographers; copy-and-paste blogs are hammering newspapers (again).

Free by itself isn't exactly the problem. Take, for example, Flickr and photographers. No matter how good their best photos are, few Flickr posters have what professionals have: the ability to produce, to order, without fail exactly the photographs required by the client. For a live event where time and reliability are of the essence, you need a professional.

But the rest of the time... Flickr would be no threat if it hosted only a few hundred images. What's killing photographers is the law of truly large numbers: given hundreds of millions of images the chances that someone will be able to find a free one that is good enough go up. Volume is the killer.

Similarly, the problem for newspapers isn't that any of the millions of blogs out there can do what they do. It's the aggregate impact of all those expert blogs on single topics, coupled with the loss of advertising revenues from copy-and-pasters mashed up with the quaintly long lead times necessary for print.

Still, there were hints at last week's American Film Institute Digifest that music and film companies might be beginning to find an answer. If the first day was all about cross-media promotion, the second was all about using multiple media to make movies and music into the kernel of a broader experience - the kind you can't copy by downloading for free.

Christopher Sandberg, for example, talked about the "participation drama" The Company P built around The Truth About Marika, the story of a young woman searching for a missing friend. Based on a true story, the TV drama formed merely the center of a five-week reality role-playing game that included conspiracy Web sites, staged TV "debates", real-world and in-game clues.

"It's not about new media. It's the level of engagement," he said. "The audience can get as close as they want to the core story."

In a second example, Trent Reznor of Nine Inch Nails kicked off the launch of the band's Year Zero CD by planting a USB stick bearing the first release of one of the CD's tracks on top of a urinal in a men's room at one of their concerts. A complex alternative reality game later, the most active fans in the community were taken on a bus to a secret show. Three million fans played the game. Plus, the CD itself was cool: heated up, the top changed color and displayed a secret message.

The key question, asked by someone in the audience: did the effort mean the band sold more CDs?

"All projects have specific goals and objectives," said Susan Bonds, head of 42 Entertainment, which ran the project, "and sometimes they're tied to sales." In this case, because the music industry's album sales are dropping and Nine Inch Nails has a particularly technology-savvy fan base, the goal was more "building the people who will show up at your shows and consume your albums and be your audience on the Web and figuring out how to connect to them."

The tiny folk scene has long known that audiences like the perceived added value of buying CDs direct from the musicians. That doesn't scale to millions, though, because there's only so much artist to go around. But the arts have always been about selling special experiences first and foremost. Participatory media will reach their own scaling problems - how many alternative reality games does anyone have time for? - but at last they've made a start on finding a positive response to the ease with which digital media can be copied.


November 7, 2008

Reality TV

The Xerox machine in the second season of Mad Men has its own Twitter account, as do many of the show's human characters. Other TV characters have MySpace pages and Facebook groups, and of course they're all, legally or illegally, on YouTube.

Here at the American Film Institute's Digifest in Hollywood - really Hollywood, with the stars on the sidewalks and movie theatres everywhere - the talk is all of "cross-platform". This event allows the AFI's Digital Content Lab to show off some of the projects it's fostered over the last year, and the audience is full of filmmakers, writers, executives, and owners of technology companies, all trying to figure out digital television.

One of the more timely projects is a remix of the venerable PBS Newshour with Jim Lehrer. A sort of combination of Snopes, Wikipedia, and any of a number of online comment sites, the goal of The Fact Project is to enable collaboration between the show's journalists and the public. Anyone can post a claim or a bit of rhetoric and bring in supporting or refuting evidence; the show's journalistic staff weigh in at the end with a Truthometer rating and the discussion is closed. Part of the point, said the project's head, Lee Banville, is to expose to the public the many small but nasty claims that are made in obscure but strategic places - flyers left on cars in supermarket parking lots, or radio spots that air maybe twice on a tiny local station.

The DCL's counterpart in Australia showed off some other examples. Areo, for example, takes TV sets and footage and turns them into game settings. More interesting is the First Australians project, which in the six-year process of filming a TV documentary series created more than 200 edited mini-documentaries telling each interviewee's story. Or the TV movie Scorched, which even before release created a prequel and sequel by giving a fictional character her own Web site and YouTube channel. The premise of the film itself was simple but arresting. It was based on one fact, that at one point Sydney had no more than 50 weeks of water left, and one what-if - what if there were bush fires? The project eventually included a number of other sites, including a fake government department.

"We go to islands that are already populated," said the director, "and pull them into our world."

HBO's Digital Lab group, on the other hand, has a simpler goal: to find an audience in the digital world it can experiment on. Last month, it launched a Web-only series called Hooking Up. Made for almost no money (and it looks it), the show is a comedy series about the relationship attempts of college kids. To help draw larger audiences, the show cast existing Web and YouTube celebrities such as LonelyGirl15, KevJumba, and sxePhil. The show has pulled in 46,000 subscribers on YouTube.

Finally, a group from ABC is experimenting with ways to draw people to the network's site via what it calls "viewing parties" so people can chat with each other while watching, "live" (so to speak), hit shows like Grey's Anatomy. The interface the ABC party group showed off was interesting. They wanted, they said, to come up with something "as slick as the iPhone and as easy to use as AIM". They eventually came up with a three-dimensional spatial concept in which messages appear in bubbles that age by shrinking in size. Net old-timers might ask churlishly what's so inadequate about the interface of IRC or other types of chat rooms where messages appear as scrolling text, but from ABC's point of view the show is the centrepiece.

At least it will give people watching shows online something to do during the ads. If you're coming from a US connection, the ABC site lets you watch full episodes of many current shows; the site incorporates limited advertising. Perhaps in recognition that people will simply vanish into another browser window, the ads end with a button to click to continue watching the show and the video remains on pause until you click it.

The point of all these initiatives is simple and the same: to return TV to something people must watch in real-time as it's broadcast. Or, if you like, to figure out how to lure today's 20- and 30-somethings into watching television; Newshour's TV audience is predominantly 50- and 60-somethings.

ABC's viewing party idea is an attempt - as the team openly said - to recreate what the network calls "appointment TV". I've argued here before that as people have more and more choices about when and where to watch their favourite scripted show, sports and breaking news will increasingly rule television because they are the only two things that people overwhelmingly want to see in real time. If you're supported by advertising, that matters, but success will depend on people's willingness to stick with their efforts once the novelty is gone. The question to answer isn't so much whether you can compete with free (cue picture of a bottle of water) but whether you can compete with freedom (cue picture of evil file-sharer watching with his friends whenever he wants).



October 3, 2008

Deprave and corrupt

It's one of the curiosities of being a free speech advocate that you find yourself defending people for saying things you'd never say yourself.

I noticed this last week when a friend, after delivering an impassioned defense of the right of bloggers to blog about the world around them - say, recounting the Nazi costumes people wore to the across-the-street neighbor's party last weekend, or detailing the purchases your friend made in the drugstore - turned around and said she didn't know why she was defending it, because she wouldn't actually put things like that in her blog. (Unless, I suppose, her neighbor was John McCain.)

Probably most bloggers have struggled at one point or another with the collision these tell-the-world-your-private-thoughts technologies create between freedom of speech and privacy. Usually, though, invading your own privacy is reasonably safe, even if that invasion takes the form of revealing your innermost fantasies. Yes, there's a lot of personal information in them thar hills, and the enterprising data miner could certainly find out a lot about me by going through my 17-year online history via Google searches and intelligent matching. But that's nothing to the situation Newcastle civil servant Darryn Walker finds himself in after allegedly posting a 12-page kidnap, torture, and murder fantasy about the pop group Girls Aloud.

As unwise postings go, this one sounds like a real winner. It was (reports say) on a porn site; it named a real pop group (making it likely to pop up in searches by the group's fans); and identified as the author was a real, findable person - a civil servant, no less. A member of the public reported the story to the Internet Watch Foundation, who reported it to the police, who arrested Walker under the Obscene Publications Act.

The IWF's mission in life is to get illegal content off the Net. To this end, it operates a public hotline to which anyone can report any material they think might be illegal. The IWF's staff sift through the reports - 31,776 in 2006, the last year their Web site shows statistics for - and determine whether the material is "potentially illegal". If it is, the IWF reports it to the police and also recommends to the many ISPs who subscribe to its service that the material be removed from their servers. The IWF so far has focused on clearly illegal material, largely pornographic images, both photographic and composited, of children. Since 2003, less than 1 percent of illegal images involving children has been hosted in the UK.

As a cloistered folksinger I had never heard of the very successful group Girls Aloud; apparently they were created like synthetic gemstones in 2002 by the TV show Popstars: the Rivals. According to their Wikipedia entries, they're aged 22 to 26 - hardly children, no matter how unpleasant it is to be the heroines of such a violent fantasy.

So the case poses the question: is posting such a story illegal? That is, in the words of the Obscene Publications Act, is it likely to "deprave and corrupt"? And does it matter that the site to which it was posted is not based in the UK?

It is now several decades since any text work was prosecuted under the Obscene Publications Act, and much longer since any such prosecution succeeded. The last such court case, the 1976 prosecution against the publishers of Inside Linda Lovelace, apparently left the Metropolitan Police believing they couldn't win. In 1977, a committee recommended excluding novels from the Act. Novels, not blog postings.

Succeeding in this case would therefore potentially extend the IWF's - and the Obscene Publications Unit's - remit by creating a new and extremely large class of illegal material. The IWF prefers to use the term "child abuse images" rather than "child pornography"; in the case of actual photographs of real incidents this is clearly correct. The argument for outlawing composited or wholly created images as well as photographs of actual children is that pedophiles can use them to "groom" their targets - that is, to encourage their participation in child abuse by convincing them that these are activities that other children have engaged in and showing them how. Outlawing text descriptions of real events could block child abuse victims from publishing their own personal stories; outlawing fiction, however disgusting, seems a wholly ineffectual way of preventing child abuse. Bad things happen to good fictional characters all the time.

So, as a human being I have to say that I not only wouldn't write this piece, I don't even want to have to read it. But as a free speech advocate I also have to say that the money spent tracking down and prosecuting its writer would have been more effectively spent on...well, almost anything. The one thing the situation has done is widely publicize a story that otherwise hardly anyone knew existed. Suppressing material just isn't as easy as it used to be when all you had to do was tell the publisher to get it off the shelves.

Of course, for Walker none of this matters. The most likely outcome for him in today's environment is a ruined life.



September 26, 2008

Wimsey's whimsy

One of the things about living in a foreign country is this: every so often the actual England I live in collides unexpectedly with the fictional England I grew up with. Fictional England had small, friendly villages with murders in them. It had lowering, thick fogs and grim, fantastical crimes solvable by observation and thought. It had mathematical puzzles before breakfast in a chess game. The England I live in has Sir Arthur Conan Doyle's vehement support for spiritualism, traffic jams, overcrowding, and four million people who read The Sun.

This week, at the GikIII Workshop, in a break between Internet futures, I wandered out onto a quadrangle of grass so brilliantly and perfectly green that it could have been an animated background in a virtual world. Overlooking it were beautiful, stolid, very old buildings. It had a sign: Balliol College. I was standing on the quad where, "One never failed to find Wimsey of Balliol planted in the center of the quad and laying down the law with exquisite insolence to somebody." I know now that many real people came out of Balliol (three kings, three British prime ministers, Aldous Huxley, Robertson Davies, Richard Dawkins, and Graham Greene) and that those old buildings date to 1263. Impressive. But much more startling to be standing in a place I first read about at 12 in a Dorothy Sayers novel. It's as if I spent my teenaged years fighting alongside Angel avatars and then met David Boreanaz.

Organised jointly by Ian Brown at the Oxford Internet Institute and the University of Edinburgh's Script-ed folks, GikIII (pronounced "geeky") is a small, quirky gathering that studies serious issues by approaching them with a screw loose. For example: could we control intelligent agents with the legal structure the Ancient Romans used for slaves (Andrew Katz)? How sentient is a robot sex toy? Should it be legal to marry one? And if my sexbot rapes someone, are we talking lawsuit, deactivation, or prison sentence (Fernando Barrio)? Are RoadRunner cartoons all patent applications for devices thought up by Wile E. Coyote (Caroline Wilson)? Why is The Hound of the Baskervilles a metaphor for cloud computing (Miranda Mowbray)?

It's one of the characteristics of modern life that although questions like these sound as practically irrelevant as "how many angels, infinitely large, can fit on the head of a pin, infinitely small?", which may (or may not) have been debated here seven and a half centuries ago, they matter. Understanding the issues they raise matters in trying to prepare for the net.wars of the future.

In fact, Sherlock Holmes's pursuit of the beast is metaphorical; Mowbray was pointing out the miasma of legal issues for cloud computing. So far, two very different legal directions seem likely as models: the increasingly restrictive EULAs common to the software industry, and the service-level agreements common to network outsourcing. What happens if the cloud computing company you buy from doesn't pay its subcontractors and your data gets locked up in a legal battle between them? The terms and conditions in effect for Salesforce.com warn that the service has 30 days to hand back your data if you terminate, a long time in business. Mowbray suggests that the most likely outcome is EULAs for the masses and SLAs at greater expense for those willing to pay for them.

On social networks, of course, there are only EULAs, and the question is whether interoperability is a good thing or not. If the data people put on social networks ("shouldn't there be a separate disability category for stupid people?" someone asked) can be easily transferred from service to service, won't that make malicious gossip even more global and permanent? A lot of the issues Judith Rauhofer raised in discussing the impact of global gossip are not new to Facebook: we have a generation of 35-year-olds coping with the globally searchable history of their youthful indiscretions on Usenet. (And WELL users saw the newly appointed CEO of a large tech company delete every posting he made in his younger, more drug-addled 1980s.) The most likely solution to that particular problem is time. People arrested as protesters and marijuana smokers in the 1960s can be bank presidents now; in a few years the work force will be full of people with Facebook/MySpace/Bebo misdeeds and no one will care except as something to laugh at drunkenly late out in the pub.

But what Lilian Edwards wants to know is this: if we have or can gradually create the technology to make "every ad a wanted ad" - well, why not? Should we stop it? Online marketing is at £2.5 billion a year according to Ofcom, and a quarter of the UK's children spend 22 hours a week playing computer games, where there is no regulation of industry ads and where Web 2.0 is funded entirely by advertising. When TV and the Internet roll together, when in-game is in-TV and your social network merges with megamedia, and MTV is fully immersive, every detail can be personalized product placement. If I grew up five years from now, my fictional Balliol might feature Angel driving across the quad in a Nissan Prairie past a billboard advertising airline tickets.


September 12, 2008

Slow news

It took a confluence of several different factors for a six-year-old news story to knock 75 percent off the price of United Airlines shares in under an hour earlier this week. The story said that United Airlines was filing for bankruptcy, and of course was true - in 2002. Several media owners are still squabbling about whose fault it was. Trading was halted after that first hour by the systems put in place after the 1987 crash, but even so the company's shares closed 10 percent down on the day. Long-term it shouldn't matter in this case, but given a little more organization and professionalism that sort of drop provides plenty of opportunities for securities fraud.

The factor the companies involved can't sue: human psychology. Any time you encounter a story online you make a quick assessment of its credibility by considering: 1) the source; 2) its likelihood; 3) how many other outlets are saying the same thing. The paranormal investigator and magician James Randi likes to sum this up by saying that if you claimed you had a horse in your back yard he might want a neighbor's confirmation for proof, but if you said you had a unicorn in your back yard he'd also want video footage, samples of the horn, close-up photographs, and so on. The more extraordinary the claim, the more extraordinary the necessary proof. The converse is also true: the less extraordinary the claim and the better the source, the more likely we are to take the story on faith and not bother to check.

Like a lot of other people, I saw the United story on Google News on Monday. There's nothing particularly shocking these days about an airline filing for bankruptcy protection, so the reaction was limited to "What? Again? I thought they were doing better now" and a glance underneath the headline to check the source. Bloomberg. Must be true. Back to reading about the final in prospect between Andy Murray and Roger Federer at the US Open.

That was a perfectly fine approach in the days when all content was screened by humans and media were slow to publish. Even then there were mistakes, like the famous 1993 incident when a shift worker at Sky News saw an internal rehearsal for the Queen Mother's death on a monitor and mentioned it on the phone to his mother in Australia, who in turn passed it on to the media, which took it up and ran with it.

But now, in the time that thought process takes, daytraders have clicked in and out of positions and automated media systems have begun republishing the story. It was the interaction of several independently owned automated systems that made what ought to have been a small mistake into one that hit a real company's real financial standing - with that effect, too, compounded by automated systems. Logically, we should expect to see many more such incidents, because all over the Web 2.0 we are building systems that talk to each other without human intervention or oversight.

A lot of the Net's display choices are based on automated popularity contests: on-the-fly generated lists of the current top ten most viewed stories, Amazon book rankings, Google's page rank algorithm that bumps to the top sites with the most inbound links for a given set of search terms. That's no different from other media: Jacqueline Kennedy and Princess Diana were beloved of magazine covers for the most obvious sale-boosting reasons. What's different is that on the Net these measurements are made and acted upon instantaneously, and sometimes from very small samples, which is why in a very slow news hour on a small site a single click on a 2002 story seems to have bumped it up to the top, where Google spotted it and automatically inserted it into its feed.
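The small-sample failure mode described above is easy to reproduce. A toy sketch (hypothetical data; the function name is mine, not any site's actual code) of a naive "most viewed" widget:

```python
from collections import Counter

def top_stories(clicks, n=5):
    """Rank stories by raw click count - recomputed instantly, with no
    smoothing and no minimum-sample threshold, exactly like a naive
    'most viewed' list."""
    return [story for story, _ in Counter(clicks).most_common(n)]

# A slow news hour on a small site: one stray click on a 2002 archive story...
quiet_hour = ["UAL files for bankruptcy (2002)"]
print(top_stories(quiet_hour))  # ...and the six-year-old story is the #1 item
```

With no threshold on sample size, a single click during a quiet hour is enough to promote a stale story to the top, where a crawler can pick it up as "news".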

The big issue, really - leaving aside the squabble between the Tribune and Google over whether Google should have been crawling its site at all - is the lack of reliable dates. It's always a wonder to me how many Web sites fail to anchor their information in time: the date a story is posted or a page is last updated should always be present. (I long, in fact, for a browser feature that would display at the top of a page the last date a page's main content was modified.)
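Some of that anchoring information already exists at the protocol level: HTTP's Last-Modified header, when servers bother to send it, dates a page's content. A minimal sketch in Python (the header value and dates are invented for illustration):

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def page_age_days(last_modified_header, now):
    """Turn an HTTP Last-Modified header into an age in days."""
    modified = parsedate_to_datetime(last_modified_header)
    return (now - modified).days

# Hypothetical header for a resurfaced 2002 story, checked in September 2008:
header = "Mon, 09 Dec 2002 14:30:00 GMT"
now = datetime(2008, 9, 8, 15, 0, tzinfo=timezone.utc)
print(page_age_days(header, now))  # 2100 days old - an archive item, not news
```

A browser, or an aggregator like Google News, could surface exactly this number next to the headline; the catch, as ever, is that many sites send no such header, or send a meaningless one.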

Because there's another phenomenon that's insufficiently remarked upon: on the Internet, nothing ever fully dies. Every hour someone discovers an old piece of information for the first time and thinks it's new. Most of the time, it doesn't matter: Dave Barry's exploding whale is hilariously entertaining no matter how many times you've read it or seen the TV clip. But Web 2.0 will make endless recycling - and the new money to be made from it - part of our infrastructure rather than a rare occurrence.

In 1998 I wrote that crude hacker defacement of Web sites was nothing to worry about compared to the prospect of the subtle poisoning of the world's information supply that might become possible as hackers became more sophisticated. This danger is still with us, and the only remedy is to do what journalists used to be paid to do: check your facts. Twice. How do we automate that?



September 5, 2008

Return of the browser wars

It was quiet, too quiet. For so long it's just been Firefox/Mozilla/Netscape, Internet Explorer, and sometimes Opera that it seemed like that was how it was always going to be. In fact, things were so quiet that it seemed vaguely surprising that Firefox had released a major update and even long-stagnant Internet Explorer has version 8 out in beta. So along comes Chrome to shake things up.

The last time there were as many as four browsers to choose among, road-testing a Web browser didn't require much technical knowledge. You loaded the thing up, pointed it at some pages, and if you liked the interface and nothing seemed hideously broken, that was it.

This time round, things are rather different. To really review Chrome you need to know your AJAX from your JavaScript. You need to be able to test for security holes, and then discover more security vulnerabilities. And the consequences when these things are wrong are so much greater now.

For various reasons, Chrome probably isn't for me, quite aside from its copy-and-paste EULA oops. Yes, it's blazingly fast and I appreciate that because it separates each tab or window into its own process it crashes more gracefully than its competitors. But the switching cost lies less in those characteristics than in the amount of mental retraining it takes to adapt your way of working to new quirks. And, admittedly based on very short acquaintance, Chrome isn't worth it now that I've reformatted Firefox 3's address bar into a semblance of the one in Firefox 2. Perhaps when Chrome is a little older and has replaced a few more of Firefox's most useful add-ons (or when I eventually discover that Chrome's design means it doesn't need them).

Chrome does not do for browsers what Google did for search engines. In 1998, Google's ultra-clean, quick-loading front page and search results quickly saw off competing, ultra-cluttered, wait-for-it portals like Altavista because it was such a vast improvement. (Ironically, Google now has all those features and more, but it's smart enough to keep them off the front page.)

Chrome does some cool things, of course, as anything coming out of Google always has. But its biggest innovation seems to be more completely merging local and global search, a direction in which Firefox 3 is also moving, although with fewer unfortunate consequences. And, as against that, despite the "incognito" mode (similar to IE8) there is the issue of what data goes back to Google for its coffers.

It would be nice to think that Chrome might herald a new round of browser innovation and that we might start seeing browsers that answer different needs than are currently catered for. For example: as a researcher I'd like a browser to pay better attention to archiving issues: a button to push to store pages with meaningful metadata as well as date and time, the URL the material was retrieved from, whether it's been updated since and if so how, and so on. There are a few offline browsers that sort of do this kind of thing, but patchily.
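The "save with metadata" button wished for above is simple to sketch. A hypothetical snapshot routine (Python, standard library only; the function and file names are mine) that stores a page beside a JSON sidecar recording the retrieval URL, timestamp, and a content hash for detecting later changes:

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def save_snapshot(url, html, directory):
    """Save a page alongside a JSON sidecar recording where and when it was
    retrieved, plus a content hash so later copies can be compared."""
    directory = Path(directory)
    directory.mkdir(parents=True, exist_ok=True)
    (directory / "page.html").write_text(html, encoding="utf-8")
    meta = {
        "url": url,
        "retrieved": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(html.encode("utf-8")).hexdigest(),
    }
    (directory / "page.json").write_text(json.dumps(meta, indent=2), encoding="utf-8")
    return meta
```

Re-fetching the URL later and comparing hashes answers "has it been updated since?" - the question a researcher's browser ought to be asking automatically.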

The other big question hovering over Chrome is standards: Chrome is possible because the World Wide Web Consortium has done its work well. Standards and the existence of several competing browsers with significant market share have prevented any one company from seizing control and turning the Web into the kind of proprietary system Tim Berners-Lee resisted from the beginning. Chrome will be judged on how well it renders third-party Web pages, but Google can certainly tailor its many free services to work best with Chrome - not so different a proposition from the way Microsoft has controlled the desktop.

Because: the big thing Chrome does is bring Google out of the shadows as a competitor to Microsoft. In 1995, Business Week ran a cover story predicting that Java (write once, run on anything) and the Web (a unified interface) could "rewrite the rules of the software industry". Most of the predictions in that article have not really come true - yet - in the 13 years since it was published; or if they have it's only in modest ways. Windows is still the dominant operating system, and Larry Ellison's thin clients never made a dent in the market. The other big half of the challenge to Microsoft, GNU/Linux and the open-source movement, was still too small and unfinished.

Google is now in a position to deliver on those ideas. Not only are the enabling technologies in place but it's now a big enough company with reliable enough servers to make software as a Net service dependable. You can collaboratively process your words using Google Docs, coordinate your schedules with Google Calendar, and phone across the Net with Google Talk. I don't for one minute think this is the death of Microsoft or that desktop computing is going to vanish from the Earth. For one thing, despite the best-laid cables and best-deployed radios of telcos and men, we are still a long way off of continuous online connectivity. But the battle between the two different paradigms of computing - desktop and cloud - is now very clearly ready for prime time.


August 15, 2008

License to kill


Yesterday, a US federal appeals court reversed a lower court ruling that might have invalidated open-source licenses. The case, Jacobsen v. Katzer, began more than two years ago with a patent claim.

Open-source software developer Robert Jacobsen manages the collective effort that produced Java Model Railroad Interface, which allows enthusiasts to reprogram the controller chips in their trains. JMRI is distributed under the Artistic License, an older and less well-known free license (it isn't one of the Free Software Foundation's approved licenses, though its successor, Artistic License 2.0, is). Matthew Katzer and Kamind (aka KAM Industries) sell a functionally similar commercial product that, crucially, Jacobsen claims is based on downloaded portions of JMRI. The Artistic License requires attribution, copyright notices, references to the file listing copyright terms, identification of the source of the downloaded files, and a description of the changes made by the new distributor. None of these conditions were met, and accordingly Jacobsen moved for a preliminary injunction on the basis of copyright infringement. The District Court denied the motion on the grounds that the license is "intentionally broad", and argued that violating the conditions "does not create liability for copyright infringement where it would not otherwise exist". It is this decision that has been reversed.

This win for Jacobsen doesn't get him anything much yet: the case is simply remanded back to the California District Court for further consideration. But it gets the rest of the open-source movement quite a lot. The judgement affirms Richard Stallman's original insight that created the General Public License in the first place, that copyright could be used to set works free as well as to close them down.

The decision hinges on the question of whether the licensing terms are conditions or covenants, a distinction that's clear as glass to a copyright lawyer and clear as mud to everyone else. According to the Electronic Frontier Foundation's helpful explanation (and they have lots of copyright lawyers to explain this sort of thing), it's the difference between contract law and copyright law. Violating conditions means you don't have a copyright license; violating covenants means you've broken the contract but you still have a license. In the US, it's also the difference between federal and state law. When you violate the license's conditions, therefore, as Lawrence Lessig explains, what you have is a copyright infringement.

It's hard to understand how the district court could have taken the view it did. It is very clear from both the licenses themselves and from the copious documentation of the thinking that went into their creation that their very purpose was to ensure that work created collectively and intended to be free for use, modification, and redistribution could not be turned into a closed commercial product that benefited only the company or individual that sells it. To be sure, it's not what the creators of copyright - intended as a way to give authors control over publishers - originally had in mind.

But once you grant the idea of a limited monopoly and say that creators should have the right to control how their work is used, it makes no sense to honor that right only if it's used restrictively. Either creators have the legal right to determine licensing conditions or they have not. (The practical right is of course a different story; economics and the size of publishing businesses give them sufficient clout to impose terms on creators that those creators wouldn't choose.) Seems to me that a creator could specify as a licensing condition that the work could only be published on the side of a cow, and any publisher fool enough to agree to that would be bound by it or be guilty of infringement.

But therein lies the dark side of copyright licensing conditions. The Jacobsen decision might also give commercial software publishers Ideas about the breadth of conditions they can attach to their end-user license agreements. As if these weren't already filled with screeds of impenetrable legalese, much of which could be charitably described as unreasonable. EFF points this out and provides a prime example: the licensing terms imposed by World of Warcraft owner Blizzard Entertainment have been upheld in court.

Blizzard's terms ban automated playing software such as Glider, whose developer, Michael Donnelly, was the target of the suit. EFF isn't arguing that Blizzard doesn't have the right to ban bots from its servers; EFF just doesn't think accusing Glider users of copyright infringement for doing so is a good legal precedent. Public Knowledge has a fuller explanation of the implications of this case, which it filed as an amicus brief. Briefly, PK argues that upholding these terms as copyright conditions could open the way for software publishers to block software that interoperates with theirs. (Interestingly, Blizzard's argument seems to rely on the notion that software copied into RAM is a copyright infringement, an approach I recall Europe rejecting a few years ago.)

You'd think no company would want to sue its own customers. But keeping the traditional balance copyright law was created to achieve between providing incentives for artists and creators and public access to ideas continues to require more than relying on common sense.


August 8, 2008

Broadcast of the Rings

There's a certain irony in the International Olympic Committee's choice of YouTube as its broadcast platform for the Beijing Olympics, which started last night or this morning depending on your time zone. The plan is that the IOC's official channel will bring clips of Olympic coverage to the 77 countries in Africa, Asia, and the Middle East where it hasn't sold TV rights. This is the first time the Olympics will have official Internet coverage.

The IOC said eight years ago that it would not allow Internet broadcasting until technology was in place to control geographical distribution reliably. Four years ago, major broadcasters like the BBC did their first Webcasts of the Games to subscribers in the right geographical areas who had broadband. And now YouTube: the Olympics are starting to do their own TV production.

The irony lies in a couple of things. First of all, of course, are all those suits YouTube is currently experiencing. There's the Viacom suit, the one in which the judge has ordered YouTube to turn over "anonymized" user data. There's the €500 million suit brought by Mediaset, Italy's largest commercial broadcaster, owned by prime minister Silvio Berlusconi, which has said it will also claim compensation for lost advertising revenues. Music publishers. Football leagues. And so on. It's a surprise that the IOC is partnering with YouTube rather than suing Google.

Second of all is that even though YouTube (which, as it was only founded in February 2005, didn't actually exist at the time of the last summer Olympics) seems to be capable of blocking viewers from the wrong sort of IP address from the official channel, the odds are pretty good that in a very short time the amount of unrestricted "unofficial" Olympic coverage on the site will dwarf the official stuff. It remains to be seen what kind of policing effort the IOC mounts to prevent that.

But the third irony is of course that there are plenty of ways to see the Olympics that bypass local broadcasters. And plenty of motives for doing so: US viewers, for example, have for years been frustrated by NBC's insistence on saving the biggest events for prime-time evening viewing, even if that means showing them on tape delay many hours after they actually took place. Got a friend with broadband and a VPN in another country that shows events live? VPN into friend's network and access their local broadcaster's stream via their network. British friends ought to be especially in demand for this kind of thing, since the BBC's coverage is...actually, comprehensive isn't really a big enough word for it.

If you're friendless and don't care about real-time viewing, you'll probably find the sport of your choice popping up pretty quickly via the usual torrent sites. True, that, too, will be time-delayed, but you will still get it sooner than those poor NBC-afflicted saps.

If you're friendless and do care about real-time viewing, your best bet is to download one of the many Chinese P2P TV players such as TVU Player (desktop and mobile phone versions), Sopcast (desktop and Web versions), or PPLive, or head over to Channelsurfing.net. These things tap into the open streams from broadcasters all over the world. Not ideal: the output is in a small, low-res screen on your computer, but as against that there's the benefit of having the commentary in a (usually) incomprehensible language. It's hard to get so annoyed with commentators you don't understand. (TVU Player showed the Olympic opening ceremony over what seemed to be an Italian channel.) Channelsurfing.net publishes a schedule you can click on. With the other players the schedule is always a little bit of a mystery, although AsiaPlate seems to be helpful with respect to the Olympic streaming schedule. (Its tennis page, however, hasn't been updated since February.)

By 2012, it would be a logical progression for the IOC to offer streaming video from its own site, particularly for the smaller niche sports that don't get much coverage even in the best-endowed countries. NBC is boasting as much as 3,600 hours of coverage if you include TV and broadband services, standard and high-def; NBC has said 2,900 hours of it will be live. The difficulty for the IOC is that according to its own figures (PDF) 50 percent of its revenues - $2.57 billion - come from broadcast rights (and much of that from NBC). Sponsorship is 40 percent, ticketing 8 percent, and licensing and other sources only 2 percent. It's hard to imagine the Net being able to replace that kind of revenue any time soon. What's more likely is pressure on broadcasters to encrypt those open streams.

Sports, particularly the biggest events, seem likely to continue to increase in value to broadcasters: they are one of the few things that a mass of people really care about seeing live. Which is the fourth irony: the IOC's own official YouTube channel and an important portion (a little over 20 percent) of the official channels of its biggest broadcaster, NBC, are both tape-delayed.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

June 6, 2008

The Digital Revolution turns 15

"CIX will change your life," someone said to me in 1991 when I got a commission to review a bunch of online systems and got my first modem. At the time, I was spending most or all of every day sitting alone in my house putting words in a row for money.

The Net, Louis Rossetto predicted in 1993, when he founded Wired, would change everybody's lives. He compared it to a Bengali typhoon. And that was modest compared to others of the day, who compared it favorably to the discovery of fire.

Today, I spend most or all of every day sitting alone in my house putting words in a row for money.

But yes: my profession is under threat, on the one hand from shrinkage of the revenues necessary to support newspapers and magazines - which is indeed partly fuelled by competition from the Internet - and on the other hand from megacorporate publishers who routinely demand ownership of the copyrights freelances used to resell for additional income - a practice that the Internet was likely to largely kill off anyway. Few have ever gotten rich from journalism, but freelance rates haven't budged in years; staff journalists get very modest raises, and in return are required to work more hours a week and produce more words.

That embarrassingly solipsistic view aside, more broadly, we're seeing the Internet begin to reshape the entertainment, telecommunications, retail, and software industries. We're seeing it provide new ways for people to organize politically and challenge the control of information. And we're seeing it and natural laziness kill off our history: writers and students alike rely on online resources at the expense of offline archives.

Wired was, of course, founded to chronicle the grandly capitalized Digital Revolution, and this month, 15 years on, Rossetto looked back to assess the magazine's successes and failures.

Rossetto listed three failures and three successes. The three failures: history has not ended; Old Media are not dead (yet); and governments and politics still thrive. The three successful predictions: the long boom; the One Machine, a man/machine planetary consciousness; that technology would change the way we relate to each other and cause us to reinvent social institutions.

I had expected to see the long boom in the list of failures, and not just because it was so widely laughed at when it was published. It's fair of Rossetto to say that the original 1997 feature was not invalidated by the 2000 stock market bust. It wasn't about that (although one couldn't resist snickering about it as the NASDAQ tanked). Instead, what the piece predicted was a global economic boom covering the period 1980 to 2020.

Wrote Peter Schwartz and Peter Leyden, "We are riding the early waves of a 25-year run of a greatly expanding economy that will do much to solve seemingly intractable problems like poverty and to ease tensions throughout the world. And we'll do it without blowing the lid off the environment."

Rossetto, assessing it now, says, "There's a lot of noise in the media about how the world is going to hell. Remember, the truth is out there, and it's not necessarily what the politicians, priests, or pundits are telling you."

I think: 1) the time to assess the accuracy of an article outlining the future to 2020 is probably around 2050; 2) the writers themselves called it a scenario that might guide people through traumatic upheavals to a genuinely better world rather than a prediction; 3) that nonetheless, it's clear that the US economy, which they saw as leading the way, has suffered badly in the 2000s with the spiralling deficit and rising consumer debt; 4) that media alarm about the environment, consumer debt, government deficits, and poverty is hardly a conspiracy to tell us lies; and 5) that they signally underestimated the extent to which existing institutions would adapt to cyberspace (the underlying flaw in Rossetto's assumption that governments would be disbanding by now).

For example, while timing technologies is about as futile as timing the stock market, it's worth noting that they expected electronic cash to gain acceptance in 1998 and to be the key technology to enable electronic commerce, which they guessed would hit $10 billion by 2000. Last year it was close to $200 billion. Writing around the same time, I predicted (here) that ecommerce would plateau at about 10 percent of retail; I assumed this was wrong, but it seems that it hasn't even reached 4 percent yet, though it's obvious that, particularly in the copyright industries, the influence of online commerce is punching well above its statistical weight.

No one ever writes modestly about the future. What sells - and gets people talking - are extravagant predictions, whether optimistic or pessimistic. Fifteen years is a tiny portion even of human history, itself a blip on the planet. Tom Standage, writing in his 1998 book The Victorian Internet, noted that the telegraph was a far more radically profound change for the society of its day than the Internet is for ours. A century from now, the Internet may be just as obsolete. Rossetto, like the rest of us, will have to wait until he's dead to find out if his ideas have lasting value.


April 18, 2008

Like a Virgin

Back in November 2005 the CEO of AT&T, Ed Whitacre, told Business Week that he was tired of big Internet sites like Google and Yahoo! using "my pipes" "for free". With those words he launched the issue of network neutrality onto the front pages and into the public consciousness. At the time, it seemed like what one of my editors used to grandly dismiss as an "American issue". (One such issue, it's entertaining to remember now, was spam. That was in 1997.) The only company dominant enough and possessed of sufficient infrastructure to impose carriage charges on content providers in the UK was BT - and if BT had tried anything like that Ofcom would - probably - have stomped all over it.

But what starts in America usually winds up here a few years later, and this week, the CEO of Virgin Media, Neil Berkett, threatened that video providers who don't pay for faster service may find their traffic being delivered in slow "bus lanes". Network neutrality, he said, was "a load of bollocks".

His PR people recanted - er, clarified a day or two later. We find it hard to see how a comment as direct as "a load of bollocks" could be taken out of context. However. Let's say he was briefly possessed by the spirit of Whitacre, who most certainly meant what he said.

The recharacterization of Berkett's comments: the company isn't really going to deliberately slow down YouTube and the BBC's iPlayer. Instead, it "could offer content providers deals to upgrade their provisioning." I thought this sounded like the wheeze where you're not charged more for using a credit card, you're given a discount for paying cash. But no: what they say they have in mind is direct peering, in which no money changes hands, which they admit could be viewed as a "non-neutral" solution.

But, says Keith Mitchell, a fellow member of the Open Rights Group advisory board, "They are in for a swift education in the way the global transit/peering market works if they try this." Virgin seems huge in the context of the UK, where its ownership of the former ntl/Telewest combine gives it a lock on the consumer cable market - but in the overall scheme of things it's "a very small fish in the pond compared to the Tier 1 transit providers, and the idea that they can buck this model single-handedly is laughable."

Worse, he says, "If Virgin attempts to cost recover for interconnects off content providers on anything other than a sender-keeps-all/non-settlement basis, they'll quickly find themselves in competition with the transit providers, whose significantly larger economies of scale put them in a position to provide a rather cheaper path from the content providers."

What fun. In other words, if you're, say, the BBC, and you're faced with paying extra in some form to get your content out to the Net you'd choose to pay the big trucking company with access to all the best and fastest roads and the international infrastructure rather than the man-with-a-van who roams your local neighborhood.

ISPs versus the iPlayer seems likely to run and run. It's clear, for example, that streaming is growing at a hefty clip. Obviously, within the UK the iPlayer is the biggest single contributor to this; viewers are watching a million programs a week online, sopping up 3 to 5 percent of all Internet traffic in Britain.

We've seen exactly this sort of argument before: file-sharing (music, not video!), online gaming, binary Usenet newsgroups. Why (ancient creaking voice) I remember when the big threat was the advent of the graphical Web, which nearly did kill the Net (/ancient creaking voice). The difference this time is that there is a single organization with nice, deep, taxpayer-funded pockets to dig into. Unlike the voracious spider that was Usenet, the centipede that is file-sharing, or the millipedes who were putting up Web sites, YouTube and the BBC make up an easily manageable number of easily distinguished targets for a protection racket. At the same time, the consolidation of the consumer broadband market from hundreds of dial-up providers into a few very large broadband providers means competition is increasingly mythical.

But the iPlayer is only one small piece of the puzzle. Over the next few years we're going to see many more organizations offering streaming video across the Net. For example, a few weeks ago I signed up for an annual pass for the streaming TV service for the nine biggest men's tennis tournaments of the year. The economics make sense: $70 a year versus £20 a month for Sky Sports - and I have no interest in any of Sky's other offerings - or pay nothing and "watch" really terrible low-resolution video over a free Chinese player offering rebroadcasts of uncertain legality.

The real problem, as several industry insiders have said to me lately, is pricing. "You have a product," said one incredulously, "that people want more and more of, and you can't make any money selling it?" When companies like O2 are offering broadband for £7.50 a month as a loss-leading add-on to mobile phone connections, consumers don't see why they should pay any more than that. Jerky streaming might be just the motivator to fix that.


April 11, 2008

My IP address, my self

Some years back when I was writing about the data protection directive, Simon Davies, director of Privacy International, predicted a trade war between the US and Europe over privacy laws. It didn't happen, or at least it hasn't happened yet.

The key element to this prediction was the rule in the EU's data protection laws that prohibited sending data on for processing to countries whose legal regimes aren't as protective as those of the EU. Of course, since then we've seen the EU sell out on supplying airline passenger data to the US. Even so, this week the Article 29 Data Protection Working Party made recommendations about how search engines save and process personal data that could drive another wedge between the US and Europe.

The Article 29 group is one of those arcane EU phenomena that you probably don't know much about unless you're a privacy advocate or paid to find out. The short version: it's a sort of think tank of data protection commissioners from all over Europe. The UK's Information Commissioner, Richard Thomas, is a member, as are his equivalents in countries from France to Lithuania.

The Working Party (as it calls itself) advises and recommends policies based on the data protection principles enshrined in the EU Data Protection Directive. It cannot make law, but both its advice to the European Commission and the Commission's action (or lack thereof) are publicly reported. It's arguable that in a country like the UK, where the Information Commissioner operates with few legal teeth to bite with, the existence of such a group may help strengthen the Commissioner's hand.

(Few legal teeth, at least in respect of government activities: the Information Commissioner has issued an opinion about Phorm indicating that the service must be opt-in only. As Phorm and the ISPs involved are private companies, if they persisted with a service that contravened data protection law, the Information Commissioner could issue legal sanctions. But while the Information Commissioner can, for example, rule that for an ISP to retain users' traffic data for seven years is disproportionate, if the government passes a law saying the ISP must do so then within the UK's legal system the Information Commissioner can do nothing about it. Similarly, the Information Commissioner can say, as he has, that he is "concerned" about the extent of the information the government proposes to collect and keep on every British resident, but he can't actually stop the system from being built.)

The group's key recommendation: search engines should not keep personally identifiable search histories for longer than six months, and it specifically includes search engines whose headquarters are based outside the EU. The group does not say which search engines it studied, but it was reported to be studying Google as long ago as last May. The report doesn't look at requirements to keep traffic data under the Data Retention Directive, as it does not apply to search engines.

Google's shortening the life of its cookies and anonymizing its search history logs after 18 months turns out to have a significance I didn't appreciate when, at the time, I dismissed it as insultingly trivial (which it was): it showed the Article 29 working group that the company doesn't really need to keep all that data for so long.

One of the key items the Article 29 group had to decide in writing its report on data protection issues related to search engines (PDF) is this: are IP addresses personal information? It sounds like one of those bits of medieval sophistry, like asking how many angels can dance on the head of a pin. In the dial-up days, it might not have mattered, at least in Britain, where local phone charges forced limited usage, so users were assigned a different IP address every time they logged in. But in the world of broadband, even the supposedly dynamic IP addresses issued by cable suppliers may remain with a single subscriber for years on end. Being able to track your IP address's activities is increasingly like being able to track your library card, your credit card, and your mobile phone all at the same time. Fortunately, the average ISP doesn't have the time to be that interested in most of its users.

The fact is that any single piece of information that identifies your activities over a long period and can be mapped to your real-life identity has to be considered personal information or the data protection laws make no sense. The libertarian view, of course, would be that there are other search engines. You do not actually have to use Google, Gmail, or even YouTube. But if all search engines adopted Google's habits the choice would be more apparent than real. Time was when the US was the world's policeman. With respect to data, it seems that the EU has taken on this role. It will be interesting to see whether this decision has any impact on Google's business model and practices. If it does, that trade war could finally be upon us. If not, then Google was building up a vast data store just because it can.


February 29, 2008

Phormal ware

In the last ten days or so a stormlet has broken out about the announcement that BT, Carphone Warehouse, and TalkTalk, who jointly cover about 70 percent of British Internet subscribers, have signed up for a new advertising service. The supplier, Phorm (previously, 121Media), has developed Open Internet Exchange (OIX), a platform to serve up "relevant" ads to ISPs' customers. Ad agencies and Web sites also sign up to the service which, according to Phorm's FAQ, can serve up ads to any Web site "in the regular places the website shows ads". Partners include most British national newspapers, iVillage, and MGM OMD.

A brief chat with BT revealed that the service, known to consumers as Webwise, will apply only to BT's retail customers, not its wholesale division. Consumers will be able to opt out, and BT is planning an educational exercise to explain the service.

Obviously all concerned hope Webwise will be acceptable to consumers, but to make it a little more palatable, not opting out of it gets you warnings if you land on suspected phishing sites. I don't think improved security should, ethically, be tied to a person's ad-friendliness, but this is the world we live in.

"We've done extensive research with our customer base," says BT's spokesman, "and it's very clear that when customers know what is happening they're overwhelmingly in favor of it, particularly in terms of added security."

But the Net folk are suspicious folk, and words like "spyware" and "adware" are circling, partly because Phorm's precursor, 121Media, was blocked by Symantec and F-Secure as spyware. Plus, The Register discovered that BT had been sharing data with Phorm as long ago as last summer, and, apparently, lying about it.

Phorm's PR did not reply to a request for an interview, but a spokeswoman contacted briefly last week defended the company. "We are absolutely not and in no way an adware product at all."

The overlooked aspect: Phorm called in Privacy International's new commercial arm, 80/20, to examine its system.

PI's executive director, Simon Davies, one of the examiners, says, "Phorm has done its very best to eliminate and minimise the use of personal information and build privacy into the core of the technology. In that sense, it's a privacy-friendly technology, but that does not get us away from the intrusion aspect." In general, the principle is that ads shouldn't be served on an opt-out basis; users should have to opt in to receive them.

Tailoring advertising to the clickstream of user interests is of course endemic online now; it's how Google does AdSense, and it's why that company bought DoubleClick, which more or less invented the business of building up user profiles to create personalized ads. Phorm's service, however, does not build user profiles.

A cookie with a unique ID is stored on the user's system - but does not associate that ID with an individual or the computer it's stored on. Say you're browsing car sites like Ford and Nissan. The ISP does not give Phorm personally identifiable information like IP addresses, but does share the information that the computer this cookie is on is looking at car sites right now. OIX serves up car ads. The service ignores niche sites, secure sites (HTTPS), and low-traffic sites. Firewalling between Phorm and the ISP means that the ISP doesn't know and can't deduce the information that the OIX platform knows about what ads are being served. Nothing is stored to create a profile. What Phorm offers advertisers instead is the knowledge that they are serving ads that reflect users' interests in real time.

The difference to Davies is that Google, which came last in Privacy International's privacy rankings, stores search histories and browsing data and ties them to personal identifiers, primarily login IDs and IP addresses. (Next month, the Article 29 Group will report its opinion as to whether IP addresses are personal information, so we will know better then which way the cookie crumbles.)

"The potential to develop a profile covertly is extremely limited, if not eliminated," says Davies.

Phorm itself says, "We really think what our stuff does dispels the myth that in order to provide relevance you have to store data."

I hate advertising as much as the next six people. But most ISPs are operating on razor-thin margins if they make money at all, and they're looking at continuously increasing demand for bandwidth. That demand can only get worse as consumers flock to the iPlayer and other sources of streaming video. The pressure on pricing is steadily downward with people like TalkTalk and O2 offering free or extremely cheap broadband as an add-on to mobile phone accounts. Meanwhile, the advertising revenues go to everyone but them. Is it surprising that they'd leap at this? Analysts estimate that BT will pick up £85 million in the first year. Nice if you can get it.

We all want low-cost broadband and free content. None of us wants ads. How exactly do we propose all this free stuff is going to be paid for?

As for Phorm, it's going to take a lot to make some users trust them. I'd say, though, that the jury is still out. Sometimes people do learn from past mistakes.


February 22, 2008

Strikeout

There is a certain kind of mentality that is actually proud of not understanding computers, as if there were something honorable about saying grandly, "Oh, I leave all that to my children."

Outside of computing, only television gets so many people boasting of their ignorance. Do we boast how few books we read? Do we trumpet our ignorance of other practical skills, like balancing a cheque book, cooking, or choosing wine? When someone suggests we get dressed in the morning do we say proudly, "I don't know how"?

There is so much insanity coming out of the British government on the Internet/computing front at the moment that the only possible conclusion is that the government is made up entirely of people who are engaged in a sort of reverse pissing contest with each other: I can compute less than you can, and see? here's a really dumb proposal to prove it.

How else can we explain yesterday's news that the government is determined to proceed with Contactpoint even though the report it commissioned and paid for from Deloitte warns that the risk of storing the personal details of every British child under 16 can only be managed, not eliminated? Lately, it seems that there's news of a major data breach every week. But the present government is like a batch of 20-year-olds who think that mortality can't happen to them.

Or today's news that the Department of Culture, Media, and Sport has launched its proposals for "Creative Britain", and among them is a very clear diktat to ISPs: deal with file-sharing voluntarily or we'll make you do it. By April 2009. This bit of extortion nestles in the middle of a bunch of other stuff about educating schoolchildren about the value of intellectual property. Dare we say: if there were one thing you could possibly do to ensure that kids sneer at IP, it would be to teach them about it in school.

The proposals are vague in the extreme about what kind of regulation the DCMS would accept as sufficient. Despite the leaks of last week, culture secretary Andy Burnham has told the Financial Times that the "three strikes" idea was never in the paper. As outlined by Open Rights Group executive director Becky Hogge in New Statesman, "three strikes" would mean that all Internet users would be tracked by IP address and warned by letter if they are caught uploading copyrighted content. After three letters, they would be disconnected. As Hogge says (disclosure: I am on the ORG advisory board), the punishment will fall equally on innocent bystanders who happen to share the same house. Worse, it turns ISPs into a squad of private police for a historically rapacious industry.

Charles Arthur, writing in yesterday's Guardian, presented the British Phonographic Institute's case about why the three strikes idea isn't necessarily completely awful: it's better than being sued. (These are our choices?) ISPs, of course, hate the idea: this is an industry with nanoscale margins. Who bears the liability if someone is disconnected and starts to complain? What if they sue?

We'll say it again: if the entertainment industries really want to stop file-sharing, they need to negotiate changed business models and create a legitimate market. Many people would be willing to pay a reasonable price to download TV shows and music if they could get in return reliable, fast, advertising-free, DRM-free downloads at or soon after the time of the initial release. The longer the present situation continues the more entrenched the habit of unauthorized file-sharing will become and the harder it will be to divert people to the legitimate market that eventually must be established.

But the key damning bit in Arthur's article (disclosure: he is my editor at the paper) is the BPI's admission that they cannot actually say that ending file-sharing would make sales grow. The best the BPI spokesman could come up with is, "It would send out the message that copyright is to be respected, that creative industries are to be respected and paid for."

Actually, what would really do that is a more balanced copyright law. Right now, the law is so far from what most people expect it to be - or rationally think it should be - that it is breeding contempt for itself. And it is about to get worse: term extension is back on the agenda. The 2006 Gowers Review recommended against it, but on February 14, Irish EU Commissioner Charlie McCreevy (previously: champion of software patents) announced his intention to propose extending performers' copyright in sound recordings from the current 50-year term to 95 years. The plan seems to go something like this: whisk it past the Commission in the next two months. Then the French presidency starts and whee! new law! The UK can then say its hands are tied.

That change makes no difference to British ISPs, however, who are now under the gun to come up with some scheme to keep the government from clomping all over them. Or to the kids who are going to be tracked from cradle to alcopop by unique identity number. Maybe the first target of the government computing literacy programs should be...the government.



January 18, 2008

Harmony, where is thy sting?

On the Net, John Perry Barlow observed long ago, everything is local and everything is global, but nothing is national. It's one of those pat summations that sometimes is actually right. The EU, in the interests of competing successfully with the very large market that is the US, wants to harmonize the national laws that apply to content online.

They have a point. Today's market practices were created while the intangible products of human ingenuity still had to be fixed in a physical medium. It was logical for the publishers and distributors of said media to carve up the world into national territories. But today anyone trying to, say, put a song in an online store, or create a legal TV download service has to deal with a thicket of national collection societies and licensing authorities.

Where there's a problem there's a consultation document, and so there is in this case: the EU is giving us until February 29 (leap year!) to tell them what we think (PDF).

The biggest flaw in the consultation document is that the authors (who needed a good copy editor) seem to have bought wholesale the 2005 thinking of rightsholders (whom they call "right holders"). Fully a third of the consultation is on digital rights management: should it be interoperable, should there be a dispute resolution process, should SMEs have non-discriminatory access to these systems, should EULAs be easier to read?

Well, sure. But the consultation seems to assume that DRM is a) desirable and b) an endemic practice. We have long argued that it's not desirable; DRM is profoundly anti-consumer. Meanwhile, the industry is clearly fulfilling Naxos founder Klaus Heymann's April 2007 prophecy that DRM would be gone from online music within two years. DRM is far less of an issue now than it was in 2006, when the original consultation was launched. In fact, though, these questions seem to have been written less to aid consumers than to limit the monopoly power of iTunes.

That said, DRM will continue to be embedded in some hardware devices, most especially in the form of HDCP, a form of copy protection being built, invisibly to consumers until it gets in their way, into TV sets and other home video equipment. Unfortunately, because the consultation is focused on "Creative Content Online", such broader uses of DRM aren't included.

However, because of this and because some live streaming services similarly use DRM to prevent consumers from keeping copies of their broadcasts (and probably more will in future as Internet broadcasting becomes more widespread), public interest limitations on how DRM can be used seem like a wise idea. The problem with both DRM and EULAs is that the user has no ability to negotiate terms. The consultation leaves out an important consumer consideration: what should happen to content a consumer pays for and downloads that's protected with DRM if the service that sold it closes down? So far, subscribers lose it all.

The questions regarding multi-territory licensing are far more complicated, and I suspect answers to those depend largely on whether you're someone trying to clear rights for reuse, someone trying to protect your control over your latest blockbuster's markets, or someone trying to make a living as a creative person. The first of those clearly wants to buy one license rather than dozens. The second wants to sell dozens of licenses rather than one (unless it's for a really BIG sum of money). The third, who is probably part of the "Long Tail" mentioned in the question, may be very suspicious of any regime that turns everything he created before 2005 into "back catalogue works" that are subject to a single multi-territory license. Science fiction authors, for example, have long made significant parts of their income by selling their out-of-print back titles for reprint. An old shot in a photographer's long tail may be of no value for 30 years – until suddenly the subject emerges as a Presidential candidate. Any regime that is adopted must be flexible enough to recognize that copyrighted works have values that fluctuate unpredictably over time.

The final set of questions has to do with the law and piracy. Should we all follow France's lead and require ISPs to throw users offline if they're caught file-sharing more than three times? We have said all along that the best antidote to unauthorized copying is to make it easy for people to engage in authorized copying. If you knew, for example, that you could reliably watch the latest episode of The Big Bang Theory (if there ever is one) 24 hours after the US broadcast, would you bother chasing around torrent sites looking for a download that might or might not be complete? Technically, it's nonsense to think that ISPs can reliably distinguish an unauthorized download of copyrighted material from an authorized one; filtering cannot be the answer, no matter how much AT&T wants to kill itself trying. We would also remind the EU of the famed comment of another Old Netizen, John Gilmore: "The Internet perceives censorship as damage, and routes around it."

But of course no consultation can address the real problem, which isn't how to protect copyright online: it's how to encourage creators.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

December 14, 2007

Nativity plays

Last night I was involved in recording a segment of an edition of the regional ITV show London Calling that I'm told will be broadcast next week (by which time I will have avoided embarrassment by leaving the country). I was there as a skeptic, not an Internet commentator. But it was annoying enough that I'm going to pretend the experience is a suitable subject for net.wars.

I've said before now that in general the skeptics do not take a position on matters of faith: we think about things that can be tested and how to test them. If you want to tell me that you believe that a little pink cloud is up there guiding your way through life, there really isn't much I can say. If, however, you tell me that every year that little pink cloud impregnates a virgin, we might start talking about how to test this phenomenon under proper observing conditions. The rise of the religious right in the US, the increasing fight over teaching creationism in the schools, and Bush's disregard for science mean that many American skeptics are being forced to modify this long-held policy.

I was told the show would be a lively debate; it was more of a free-for-all, in which I, along with three humanists and the atheist stand-up comedian Robin Ince, found ourselves arguing about the threat to Christianity posed by the disappearance of school nativity plays. The show was fronted by a quintet of I guess bigger-league journalists and TV people: Vanessa Feltz, Eve Pollard, Nick Ferrari, a guy from the Evening Standard whose name I didn't catch. (They were all far too grand to consort in the green room with us lower-level invited guests, who were in turn kept away from the hoi polloi of the nondescript audience. Such is the role of hierarchy in television. I would point out that I, too, have a Wikipedia entry; so there.)

The bottom line of the discussion: almost everyone, be they Indian, Muslim, Christian, or Jew, loves Christmas. But – said Keith Porteous Wood, head of the National Secular Society – only 30 percent of the population celebrate it as a religious festival. For most of us, religious or agnostic, atheist or Jedi Knight, Christmas is about decorating trees, giving and receiving presents, organising travel schedules and accommodation for family members, and enjoying a lot of good food. The people who aren't doing the cooking and the airport runs may even have a pretty good time.

Of course, last night was primarily about whipping people into a frenzy. Ferrari, who does a show on LBC radio that I was previously unaware of, in particular fulminated at the moral injustice of "taking the Christ out of Christmas". Well, folks, this is the price you pay for success. Your holiday – which of course you largely stole from the pagans – has been adopted by a lot of people who do not care about your reasons for celebrating it. I'm sorry you don't get royalties for this the way Microsoft does on copies of Windows, but there it is.

One of the main guests' most important contentions: Christianity is under attack. Please. This is an idea you've imported from the US. You have not only a dominant religion but an established one. Granted, the planned reforms to the makeup of the House of Lords will remove some of the bishops. Granted, church attendance has been dropping for decades now. But a few schools deciding they live in a multicultural society is small beer. British Christians still have the Queen, the Parliament, and the country's entire structure of holidays on their side.

The claim that Christianity is the subject of attack isn't even all that sound in the US, where Christians are much shakier in their claim that "This is a Christian country". They may feel this way, sure – but so does every religious or non-religious group at one time or another. It's a good tactic, though, for fostering group bonding, a nice thing to have in an election year.

A lot of last night's complaints played on nostalgia for the way things were when they were children. Vanessa Feltz in particular hammered on this one: according to her the country is now awash in such ghastly characters as Christmas lobsters, apple pies, and so on. We're supposed to be horrified. (Apparently the apple pie character was to promote healthy eating, which sounds dire even for a school play.)

I'd bet that today's children themselves do not share their parents' horror at playing a lobster instead of a virgin miraculously impregnated by an invisible spirit. Probably Feltz was right that whatever that lobster is up to isn't as good a story or told in as attractive language as the story of the shepherds. My school didn't have nativity plays that I remember, but the language of that story is engraved in my brain, too. Tempora mutantur, et nos mutamur in illis.

OK, OK, I know the show was trash. The next segment (in which I was mercifully not involved) is "Golddiggers: is marrying for money wrong – or just practical?" Faugh. I feel better now.

November 30, 2007

Spam today and spam tomorrow

Admittedly you have to be not really paying enough attention to do this, but in the last couple of weeks I've discovered torrent spam. Here's how it works: you download a file you think is something you want, and discover it's been RAR-compressed. When you uncompress the file, you get a second RAR file that requires a password and a Readme file. The Readme advises you that to get the password you need to go to a Web site and enter an email address – any email address. I'm not quite demented enough to do this, even with the venerable black-hole address nobody@nowhere.com. Who knows what evils might be lurking on that Web site?

This is the more or less harmless kind. Other stories say that there are more dangerous types of torrent spam, where to play the file you are required to download a new video player that is typically infected with malware.

For once, this seems not to be an RIAA/MPAA initiative. It's just spam, reflecting the reality that any time anything on the Net gets sufficiently popular someone tries to turn it into a vehicle for unwanted crap. And you know they know it's unwanted, because otherwise they wouldn't be trying so hard to trick you into reading it. At one time – oh, say, a year ago – a lawyers' mailing list agreed that at the threshold of around 10,000 readers you have to turn off or moderate comments because the comment spam gets too heavy. Page rank can do it, too: the pelicancrossing.net site that hosts one version of this column gets something like 1,000 comment spams a week – and hardly any real ones. (Movable Type, which powers that blog, does have anti-spam settings, which trap most, but not all, of the junk. Unfortunately, the price is that for some reason it rejects all comments I make myself, which means that people who do comment don't get responses from me. Despite a lot of trawling through settings, I have yet to find a solution to this.)

Appropriately diligent research shows that torrent spam isn't new; it was first reported in 2004, and by 2005 there were efforts to create a reporting service. That service now has very little traffic in its forums, and that makes it hard to tell from its stats whether this is a growing problem. Despite the egocentric desire to see it as one – hey, I noticed it! It must be big! – it's probably just a footnote to the great tide of spam that washes over us in so many other ways. A modest amount of attention paid to checking the torrent you're downloading defeats it.

Still, it's arguably yet another reason why the *AAs should have fought back by creating their own cheap, reliable, widely available services. They may pick up some short-term advantage by being able to campaign semi-truthfully on the idea that using P2P to download copyrighted material is risky. But long-term the educational task they'll face in trying to explain to ordinary consumers why we should trust that their systems are safer will be a bigger disadvantage.

On the wider Internet, of course, spam continues to be a relentless flood. Google broke ranks this week to claim that the amount of spam reaching its network is declining. I find it hard to believe this. It's certainly true that spam does move on if a particular technology goes out of favor – the areas of Usenet I frequent are now almost completely spam-free though not, unfortunately, devoid of single-idea-obsessed idiots with a trigger-finger on the abusive adjectives.

But if email spam does start to die because too many people have moved their real communications to IM, Skype, Facebook, and other newer, more carefully gated media, it seems unlikely that any one service provider will be singled out. Given that the single biggest reason email spam is popular is that it costs next to nothing to send, I really can't see botnet designers sitting around their labs going, "Oh, listen, this time let's not bother sending anything to gmail addresses; they just bounce it." If there's one thing we know about spammers it's that they don't care about targeting. I find Facebook, LinkedIn, and the other social network platforms painfully irritating to use for communications compared to email; but for a lot of people they work as an elaborate form of white-listing.

But others do not find them irritating at all. "I'm more likely to have Facebook open these days than Outlook," one such correspondent wrote just this morning when I suggested taking it to email.

The longer-term prospects, though, are for much more "legitimate" marketing email. Spamhaus has a really interesting article up about a recent flood of sales messages it's received from one of the lifetime menaces on its ROKSO list advertising cheap home delivery of the New York Times. That same article talks about the many ways email addresses find their way onto marketing lists: sharing with third-party companies and database-matching being the most significant. Then, also this week, Adobe and Yahoo! announced that we can have – oh, joy! – ads in PDFs downloaded dynamically while we try to read.

Doesn't anyone get it? The difference between marketing and spam is user choice. Take that away, and it's all just spam.

November 16, 2007

Strike

The newly minted Nobel Laureate Doris Lessing has advised writers to remind themselves: "Without me the literary industry would not exist: the publishers, the agents, the sub-agents, the sub-sub-agents, the accountants, the libel lawyers, the departments of literature, the professors, the theses, the books of criticism, the reviewers, the book pages – all this vast and proliferating edifice is because of this small, patronized, put-down and underpaid person."

TV and movie scriptwriters are usually better paid than novelists, but if you read William Goldman's several books about screenwriting the general position of the writer in Hollywood is somewhere beneath contempt. ("Did you hear the one about the Polish starlet who was so dumb she slept with the writer?") Bad casting can break the finest scripts (think Ronald Reagan and Ann Sheridan in Casablanca). But casting can't make a dud script shine. Without writers, nothing.

There's no doubt that the TV studios are in a stronger position than they used to be. Current trends like reality TV, talk shows, game shows, and sports (televised poker, anyone?), plus the ever-increasing back catalogue of movies and shows, mean that the seemingly infinite number of TV hours can be filled somehow. The audience, long-term, seems secure: broadcast TV has ease of use.

But the studios are also in a weaker position. The mass audiences once commanded by the Big Three US networks are splintering into myriad smaller channels. Two decades of home video sales and rental have also demonstrated media companies' ability to turn apparently threatening technology into large, new revenue streams. And the writers' position is simple: if you're going to go on making money off my work for a century (as the term is under current copyright law), I want some of it.

The Internet is also catching the studios in a new kind of bind previously experienced primarily by politicians. In 1988, the last time writers went on strike, it was still possible to say different things to different audiences and not get found out. It was before a lot of media concentration, there were more companies involved, and fewer of those companies were public. Today, we find it easy to follow the difference between what big media companies are telling the courts (file-sharing is bankrupting us), what they're telling Wall Street (digital media are growing like crazy and creating new revenue opportunities, if not streams), and what they're telling the writers (no money, sorry). Fan support for the strike is also much easier to organise and much more visible.

The late British journalist John Diamond once set off a small firestorm in the Fleet Street Forum by arguing that writers shouldn't be paid royalties – after all, he said, you don't pay your plumber every time you use the bathtub he's put in – but should be well-paid up front. I understand that this is a variant of an analogy made famous by Lew Wasserman, who originally framed it in terms of toilets. Diamond held that this remained true even if your plumber installed a bathtub so fantastic and elegant that you were able to charge money for tours through your home for people to look at it. My own belief is if the plumber were that good he'd be mounting his own exhibitions and pocketing the ticket revenue.

But writing isn't like plumbing, in that if you know how to install a functioning toilet the chances are very good that you can keep installing them, year after year, in a reliable fashion, for enough money to make a living. Writing, by contrast, can be a completely freakish business, subject to luck, timing, and accident: you can write a billion-dollar hit one year, and then spend the rest of your life unable to write anything else that anyone wants to read or see. Participating in the profits of your work, therefore, is compensation for the high-risk nature of being a creator of any kind. It's the same trade-off as putting your money in a savings account earning a modest 4 percent per year versus buying tech stocks.

That said, Diamond was primarily talking about journalism. It's not so long since journalists by default retained the right to resell and exploit their work. Periodical publishers began to shift in the 1990s to all-rights contracts that included electronic media. Young freelances often don't know any better than to sign these contracts; older ones trying to argue can find themselves out of work. It's been bad enough in journalism, where freelance incomes haven't budged in 20 years, but at least journalists can keep working, like plumbers. A Hollywood writer's employment is far more fragile.

In an honest world, I think publishers in the 1990s and studios now should be able to say something like: "We know these new media are going to be big winners for us. But we don't understand the business model yet, and we don't know where the revenues are going to come from. Give us a five-year moratorium while we figure things out, and then we'll negotiate in good faith to ensure you get a fair share." That no one can say this and be believed is Hollywood's own damned fault after decades of "creative accounting" to ensure that big hits are never profitable enough to owe creative artists their cut. Time to pay up.


November 3, 2007

Amateur hour

If you really want to date yourself, admit that you remember Ted Mack's Amateur Hour. Running from 1949 to 1970, it was the first televised amateur talent competition, the granddaddy of today's reality TV. What's new about the Internet isn't that amateurs can create content people will look at but the ability to access an audience without going through an older-media gatekeeper.

But even on the Internet, user-generated content (as the kids are calling it these days) is not new: user-uploaded messages and files are how companies like CompuServe made money. But that was user-originated content. Today's user-generated content on sites like YouTube includes a mass of uploaded video, audio, and text that in fact do not belong to the users but to third parties. These issues are contentious; so much so that Ian Fletcher, the CEO of the UK's Intellectual Property Office, bailed at the thought of appearing before an audience that might publish his remarks out of context on the Net.

To hear media representatives tell it at today's Amateur Hour conference, they regarded it with a pretty benign eye for quite a while.

It wasn't, said Lisa Stancati, assistant general counsel for ESPN, until Google bought YouTube that everyone got mad. "If Google is going to be making money from my content I have a serious problem with that."

Well, fair enough. But how did it get to be your content? Media companies love the idea of paying artists when they want to expand copyright. Come contract time it's a different story, as the tableful from Actors Equity knew all too well. And what about the content of the future?

Marni Pedorella, vice president of NBC Universal, notes that the site the company runs for Battlestar Galactica fans provides raw materials for users to play with. If they upload the mashed-up results, however, NBC takes a royalty-free license in perpetuity. Are older media hoping new media will become a source of what Brian Murphy is calling CGC – "cheaply generated content"? Like reality TV?

Heather Moosnick, vice president of business development for CBS Interactive, recounted CBS's moves to share its content more widely around the Net: you can watch current shows on its Web site, for example (unless you live outside the US). But, she said sadly, if people don't care about copyright – well, there might be fewer CSIs. (Threat or promise? There are three CSI shows. At least she didn't say that less "expert content" will deprive us of Cavemen.)

Because the conference was sponsored by a law school, a lot of the moderators' questions centered on things like: How do you see your risks developing? What is your liability? What about international laws?

And: what is the difference between a professional and an amateur? You might argue that it doesn't matter as long as the content is interesting, but when it comes to the shield laws that allow journalists to protect their sources the difference is important. Should every blogger – hundreds of millions of them – have the right? Just the ones with mass audiences who make a living from running AdSense alongside their postings? None? Is a blogger with an audience of 100,000 of the most important people in American politics more or less worthy of protection than a guy writing for a local paper with a circulation of 10,000? Is a fan taking pictures of Lindsay Lohan with a cell phone subject to California's new law limiting paparazzi?

To me, the key difference between an amateur and a professional is that the professional does the job even when he doesn't feel like it.

The source of this idea is Agatha Christie, who defined the moment she became a professional writer, some ten or fifteen books into her career. She was mid-divorce, and she liked neither the book nor her work on it – but she had a contract. The amateur can say, Screw the contract, I don't feel like getting up this morning. The professional makes the work arrive, even if it stinks. Unfortunately, that practical distinction is not easily describable in law.

You could define it a different way: a professional is the guy you'll miss if he goes on strike, as TV writers are about to do over residual payments for digital reuse.

Another line: a lot of large companies operate their message boards on the basis of the safe harbor protections in the DMCA, under which you're not liable as long as you take down material when notified of infringement or other legal problems. What about mixed content? There's a case pending between the Fair Housing Council and Roommates.com because the latter site gave users a questionnaire asking such roommate-compatibility questions as age, race, gender, sexual orientation… All these are questions that landlords are not allowed to ask under the Fair Housing Act. At what point is someone looking for a roommate subject to that act? Are we really going to deny people any control over who they live with?

These aren't problems that have solutions, at least yet. They're the user-generated lawsuits of the future.

July 27, 2007

There ain't no such thing as a free Benidorm

This has been the week for reminders that the border between real life and cyberspace is a permeable blood-brain barrier.

On Wednesday, Linden Lab announced that it was banning gambling in Second Life. The resentment expressed by some SL residents is understandable but naive. We're not at the beginning of the online world any more; Second Life is going through the same reformation to take account of national laws as Usenet and the Web did before it.

Second, this week MySpace deleted the profiles of 29,000 American users identified as sex offenders. That sounds like a lot, but it's a tiny percentage of MySpace's 180 million profiles. None of them, be it noted, are Canadian.

There's no question that gambling in Second Life spills over into the real world. Linden dollars, the currency used in-world, have active exchange rates, like any other currency, currently running about L$270 to the US dollar. (When I was writing about a virtual technology show, one of my interviewees was horrified that my avatar didn't have any distinctive clothing; she was and is dressed in the free outfit you are issued when you join. He insisted on giving me L$1,000 to take her shopping. I solemnly reported the incident to my commissioning editor, who felt this wasn't sufficiently corrupt to worry about: US$3.75! In-world, however, that could buy her several cars.) Therefore: the fact that the wagering takes place online in a simulated casino with pretty animated decorations changes nothing. There is no meaningful difference between craps on an island in Second Life and poker on an official Web-based betting site. If both sites offer betting on real-life sporting events, there's even less difference.

But the Web site will, these days, have gone through considerable time and money to set up its business. Gaming, even outside the US, is quite difficult to get into: licenses are hard to get, and without one banks won't touch you. Compared to that, the $3,800 and 12 to 14 hours a day Brighton's Anthony Smith told Information Week he'd invested in building his SL Casino World is risibly small. You have to conclude that there are only two possibilities. Either Smith knew nothing about the gaming business – if he had, he'd have known that the US has repeatedly cracked down on online gambling over the last ten years and that ultimately US companies will be forced to decide to live within US law. He'd also have known how hard and how expensive it is to set up an online gambling operation even in Europe. Or he did know all those things and thought he'd found a loophole he could exploit to avoid all the red tape and regulation and build a gaming business on the cheap.

I have no personal interest in gaming; risking real money on the chance draw of a card or throw of dice seems to me a ridiculous waste of the time it took to earn it. But any time you run a business that involves real money – whether it sells an experience (gaming), a service, or a retail product – governments are going to be interested once the money you handle reaches a certain amount. Not only that, but people want them involved; people want protection from rip-off artists.

The MySpace decision, however, is completely different. Child abuse is, rightly, illegal everywhere. Child pornography is, more controversially, illegal just about everywhere. But I am not aware of any laws that ban sex offenders from using Web sites, even if those Web sites are social networks. Of course, in the moral panic following the MySpace announcement, someone is proposing such a law. The MySpace announcement sounds more like corporate fear (since the site is now owned by News Corporation) than rational response. There is a legitimate subject for public and legislative debate here: how much do we want to cut convicted sex offenders out of normal social interaction? And a question for scientists: will greater isolation and alienation be effective strategies to keep them from reoffending? And, I suppose, a question for database experts: how likely is it that those 29,000 profiles all belonged to correctly identified, previously convicted sex offenders? But those questions have not been discussed. Still, this problem, at least in regard to MySpace, may solve itself: if parents become better able to track their kids' MySpace activities, all but the youngest kids will surely abandon it in favour of sites that afford them greater latitude and privacy.

A dozen years ago, John Perry Barlow (in)famously argued that national governments had no place in cyberspace. It was the most hyperbolic demonstration of what I call the "Benidorm syndrome": every summer thousands of holidaymakers descend on Benidorm, in Spain, and behave in outrageous and sometimes lawless ways that they would never dare indulge in at home in the belief that since they are far away from their normal lives there are no consequences. (Rinse and repeat for many other tourist locations worldwide, I'm sure.) It seems to me only logical that existing laws apply to behaviour in cyberspace. What we have to guard against is deforming cyberspace to conform to laws that don't exist.


July 6, 2007

Born digital

Under one of my bookcases there is a box containing 40 or 50 5.25-inch floppy disks next to an old floppy drive of the same size. The disks were created in SuperScripsit in the early 1980s, and require an emulator that pretends my Core 2 Duo is a TRS-80 Model III.

If, like me, you have had a computer for any length of time you, too, have stowed somewhere a batch of old files that you save because they are or were important to you but that you're not sure you could actually read, though you keep meaning to plug that old drive in and find out. But the Domesday Book, drafted in 1085, is still perfectly readable. In fact, it's more readable than a 1980s digital Domesday Book that was unreadable only 15 years after its creation because the technology it was stored on was outmoded.

The average life of an electronic document before it becomes obsolete is seven years. And that's if it survives that long. Paper can last centuries – and the National Archives, which holds 900 years of Britain's records, has to think in centuries.

This week, the National Archives announced it was teaming up with Microsoft to ensure that the last decade or two of government archives do not become a black hole in history.

The problem of preserving access to today's digital documents is not newly discovered. Digital preservation and archiving were on the list of topics of interest in 1997, when the Foundation for Information Policy Research was founded. Even before that, NASA had discovered the problem, in connection with the vast amounts of data collected at taxpayer expense by the various space missions. Librarians have known all along that the many format changes of the digital age posed far greater problems than deciphering an unfamiliar language chiseled into a chunk of stone.

But it takes a while for non-technical people to understand how complex a problem it really is. Most people, Natalie Ceeney, chief executive of the National Archives, said on Tuesday, think all you have to do is make back-ups. But for an archivist this isn't true, even for the simple case of, say, a departmental letter written in the early 1980s in WordStar. The National Archives wants not only to preserve the actual text of the letter but its look, feel, and functionality. To do that, you need to be able to open the document in the software in which it was originally created – which means having a machine you can run that software on. Lather, rinse, and repeat for any number of formerly common but now obsolete systems. The National Archives estimates it has 580TB of data in obsolete formats. And more new formats are being invented every day: email, Web, instant messages, telephone text messages, databases, ministers' blogs, internal wikis…and as they begin to interact without human intervention that will be a whole new level of complication.

"We knew in the paper world what to keep," Ceeney said. "In the digital world, it's harder to know. But if we tried to keep everything we'd be spending the entire government budget on servers."

So for once Microsoft is looking like a good guy in providing the National Archives with Virtual PC 2007, which (it says here) combines earlier versions of Windows and Office in order to make sure that all government documents that were created using Microsoft products can be opened and read. Naturally, that isn't everything; but it's a good start. Gordon Frazer, Microsoft's UK managing director, promised open formats (or at least, Open XML) for the future. The whole mess is part of a four-year Europe-wide project called Planets.

Digital storage is surprisingly expensive compared to, say, books or film. A study reported by the head of preservation for the Swedish national archives shows that digital can cost up to eight times as much (PDF, see p4) as the same text on paper. But there is a valuable trade-off: the digital version can be easily accessed and searched by far more people. The National Archives' Web site had 66 million downloads in 2006, compared to the 250,000 visitors to its physical premises in Kew.

Listening to this discussion live, you longed to say, "Well, just print it all out, then." But even if you decided to waive the requirements for original look, feel, and functionality, not everything could be printed out anyway. (Plus, the National Archives casually mentions that its current collection of government papers is already 175 kilometres long.) The most obvious case in point is video evidence, now being kept by police in huge amounts – and, in cases of unsolved crimes or people who have been sentenced for serious crimes, for long periods. It can't be printed. But consider even text-based government documents: when these were created on paper, you saved the paper. The documents of the last 20 years were born digital. Paper is no longer the original but the copy. The National Archives is in the business of preserving originals.

Nor, of course, does it work to say, "Let the Internet Archive take care of it": too much of the information is not published on the Web but held in internal government systems, from which it will be due to emerge in a few decades under Britain's 30-year rule. Hopefully we'll know before then that this initiative has been successful.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

June 15, 2007

Six degrees of defamation

We used to speculate about the future of free speech on the Internet if every country got to impose its own set of cultural quirks and censorship dreams on it. The lowest common denominator would win – probably Singapore.

We forgot Canada. Michael Geist, the Canada Research Chair of Internet and E-Commerce Law at the University of Ottawa, is being sued for defamation by Wayne Crookes, a Vancouver businessman (it says here). You might think that Geist, who doubles as a columnist for the Toronto Star (so enlightened, a newspaper with a technology law column!), had slipped up and said something unfortunate in one of his public pronouncements. But no. Geist is part of an apparently unlimited number of targets that have linked to other sites that have linked to sites that allegedly contained defamatory postings.

In Geist's words on his blog at the end of May, "I'm reportedly being sued for maintaining a blogroll that links to a site that links to a site that contains some allegedly defamatory third party comments." (Geist has since been served.)

Crookes is also suing Yahoo!, MySpace, and Wikipedia. (If you followed the link to the Wikipedia stub identifying Wayne Crookes, now you know why it's so short. Wikipedia's own logs, searchable via Google, show that it's replacing the previous entry.) Plus P2Pnet, OpenPolitics.ca, DomainsByProxy, and Google. In fact, it's arguable that if Crookes isn't suing you, your Net presence is so insignificant that you should put your head in a bucket.

One of the things about a very young medium – as the Net still is – is that the legal precedents about how it operates may be set by otherwise obscure individuals. In Britain, one of the key cases determining the liability of ISPs for material they distribute was 1999's Laurence Godfrey vs Demon Internet. Godfrey was, or is, an otherwise unremarkable British physics lecturer working in Canada until he discovered Usenet; his claim to fame (see for example the Net.Legends FAQ) is a series of libel suits he launched to protect his reputation after a public dispute whose details probably few remember or understand. In 2000 Demon settled the case, paying Godfrey £15,000 and legal costs. And thus were today's notice and takedown rules forged.

The truly noticeable thing about Godfrey's case against Demon was that Demon was not Godfrey's ISP, nor was it the ISP used by the poster whose 1997 contributions to soc.culture.thai were at issue. Demon was merely the largest ISP in Britain that carried the posting, along with the rest of the newsgroup, on its servers. The case therefore is one of a string of cases that loosely circled a single issue: the liability of service providers for the material they host. US courts decided in 1991, in Cubby vs CompuServe, that an online service provider was more like a bookstore than a publisher. But under the Digital Millennium Copyright Act it has become alarmingly easy to frighten individuals and service providers into taking down material based on an official-looking lawyer's letter. (The latest target, apparently, is guitar tablature, which, speaking as a musician myself, I think is shameful.)

But the more important underlying thread is the attempt to keep widening the circle of liability. In Cubby, at least the material at issue appeared on the Journalism Forum which, though independently operated, was part of CompuServe's service. That particular judgement would not have helped any British service provider: in Britain, bookstores, as well as publishers, can be held responsible for libels that appear in the books they sell, a fact that didn't help Demon in the Godfrey case.

In the US, the next step was the 2600 DeCSS case (formally known as Universal City vs Reimerdes), which covered not only posting copies of the DVD-decrypting software but linking to sites that had it available. This, of course, was a copyright infringement case, not a libel case; with respect to libel the relevant law seems to be, of all things, the 1996 Communications Decency Act, which allocated sole responsibility to the original author. Google itself has already won at least one lawsuit over including allegedly defamatory material in its search results.

But legally Canada is more like Britain than like the US, so the notion of making service providers responsible may be a more comfortable one. In his column on the subject, Geist argues that if Crookes' suits are successful Canadian free speech will be severely curtailed. Who would dare run a wiki or allow comments on their blog if they are to be held to a standard that makes them liable for everything posted there? Who would even dare put a link to a third-party site on a Web site or in a blogroll if they are to be held liable for all the content not only on that site but on all sites that site links to? Especially since Crookes's claim against Wikimedia is not that the site failed to remove the offending articles when asked, but that the site failed to monitor itself proactively to ensure that the statements did not reappear.

The entire country may have to emigrate virtually. Are you now, or have you ever been, Canadian?


March 16, 2007

Going non-linear

OK, you're the BBC. How do you balance what consumers want with what the rightsholders demand, all the while trying to serve the public interest as required by your charter and bearing in mind the revenues you derive from secondary markets to non-license fee payers in other countries?

There are some things we can guess pretty accurately about what consumers want: control over schedules and access at will. People who have TiVos or other PVRs, for example, do not in general decide to throw them away and go back to scrutinizing schedules for the moment when their favorite shows will be on ("linear viewing", in newspeak). As channels and conduits continue to proliferate, the only way to make sense of it all is to have a computer do it for you. Similarly, the big action online is in two areas: downloads (legal or illegal) and streamed video clips. In both cases, it's up to the viewer to determine when they watch.

YouTube might seem close to the traditional broadcast model, since people do have to sit on the "channel" (site) to watch streaming video and can't download or save it for future viewing without a third-party hack. But in fact it points to a very different future in which it will be more common to watch pieces of programs than whole ones. This is a trend throughout digital media – people buy songs more than albums, professors put together course packs of chapters rather than assigning whole textbooks, National Public Radio lets listeners pick the sections of "All Things Considered" that they want to hear, and ring tones capture a quick snip of a favorite sound. Why should video be any different? Great movies and TV shows are the sum of their great moments.

The BBC is trying to take the next steps into its digital future against this media landscape, and some time back it published its proposals. Ofcom has published comments (PDF), and the BBC Trust has published its comments (PDF) as part of a public consultation. You have until March 28 to respond as a member of the fee-paying public.

For the past few years, the BBC has seemed like the one big organization that could really lead the way away from Big Media's take on what digital media should look like, especially when it began opening up its archives online for anyone to mix, rip, or burn. And in fact its original proposals seem to have been similarly far-ranging, including audio podcasts of classical music, a seven-day "catch-up window" in which viewers could download shows they'd missed, and "series stacking", allowing viewers to download all the previous episodes of a series that's still in progress.

Ofcom and the BBC Trust are seeking to modify these proposals. Some of their suggestions make sense. For example, allowing series stacking on 20 years of the soap EastEnders would be pretty extreme, as the BBC Trust points out, especially since the BBC derives revenue from the earlier years of the soap in syndication on UK Gold. I think decisions about what series can be stacked should include some consideration of whether the series is going to be commercially available in other formats within a reasonable amount of time – say, a year. If it's not, that would argue for greater availability via download.

What's unnerving is to read this passage against series stacking, from the BBC Trust's report:
"A window of 13 weeks could allow users to create sizeable archives of programming on their computers."

Compare and contrast with the decision in the Sony Betamax case, in which Universal Studios and Disney complained about the potential for "library-building" that might "result in a decrease in their revenue from licensing their works to television and from marketing them in other ways."

We now know what they didn't in 1984: that people did create libraries of videotapes – but many of those tapes were purchased, and home video/DVD sales now make up a vital source of revenue for the studios. While the BBC also makes money from its secondary markets, it has a chance to be a real innovator here. It should not echo Disney in clinging to old business models.

A bigger issue is whether audio podcasts should be protected with digital rights management. The Trust and Ofcom are in favor of this, on the grounds that without it the BBC might be distorting the market for competitors. Hogwash. The BBC has batches of free-to-air radio channels in the UK; does that mean no one listens to any other radio? We know two things about DRM. First, it puts control over access to the content into the hands of a third-party vendor, one with no public service charter or accountability to the license fee payers. Second, users hate it. The BBC could spend silly amounts of money into the infinite future DRMing its audio content, and the first thing that will happen is that someone will write a nice little program to strip it all out. It has always been true that the best way to fight copyright infringing file-sharing is to build a service that's fast, reliable, and reasonably priced. As things are, if the BBC adopts the Trust's recommendations it will just fuel the extralegal file-sharing that all these guys are supposed to be against.


January 5, 2007

Stonewalling

"We made you," one or more fans once reputedly told Katharine Hepburn, chastising her for refusing to give them an autograph.

"Like hell you did," she is supposed to have replied.

On Tuesday, LA Times columnist Joel Stein wrote a column entitled, Have something to say? I don't care. From the number of people saying heatedly on blogs that in the face of such monumental arrogance they don't care, either, you have to figure Stein is totally doing the job the newspaper is paying him for: getting read and talked about. Which is why, folks, his column doesn't mean all of print media is doomed. If you really think he's an asshole, your best response is to stop writing about him.

Of course, we should also remember that this is the same newspaper that panicked and took down its (badly conceived) wikitorial as soon as people predictably started posting obscene photographs to it. But given that Stein says in the actual column that he personally spends four or five hours a week answering reader email, it might be logical to think that maybe he's just kidding.

That said, if you don't want to be accused of arrogance as a columnist you probably shouldn't compare yourself to Martin Luther. Especially if, as Brad de Long points out, that comparison is inaccurate. Luther probably didn't, as popular mythology has it, publish his 95 Theses by nailing them to the church door. But he did send them out to scholars, friends, and even the Pope, encouraging general debate and asking people to send him their comments. The same Internet that is enabling Stein to "not care" about his readers was built by exactly the same process. Internet pioneers published Requests for Comments and incorporated the best suggestions into their work, which itself was adopted on merit, not because someone talked "at" everyone else to insist it was a good idea. Collaboration is as old as human culture.

But that's the significant difference between what Luther and the Internet pioneers were doing and what Joel Stein is doing: they were trying to build something. Not at all the same thing.

I don't know Stein, but if he's anything like me he's just showing off in public. There is some evidence to suggest that this is true: "Joel Stein is desperate for attention". Adding a comments page to the LA Times site kind of supports this thesis. The big frustration about emailed comments isn't that they're there demanding to be answered, but that they're private. A comments page, even one that is filled with entries calling you an asshole, is a public display of how important and interesting you are: look how many people had something to say about it! Much more satisfying if you're a publicity hound.

Any reader determined enough to send a letter or, more recently, make a phone call has always been able to send a journalist feedback on stories. Often this is welcomed because the feedback includes leads for new stories. Duh. Even so, it isn't always easy to face that feedback. Few journalists have hides thick enough not to panic slightly every time a reader communication arrives: this could be the one that shows us definitively that we are idiots who should not be allowed to think in public.

Aside from the silliness, there is a real point here: how much interactivity do we want, and what form should it take? When we talk about citizen journalism, is this what we mean? Chicago Tribune columnist Eric Zorn seized the opportunity to ask his readers exactly that.

One problem for anyone working these days is that adding reader interactivity in which you are expected to participate may add to your workload without adding to your overall pay. That doesn't always matter; if you're a staff writer with a load of interesting research material that won't fit in the limited print space, being able to publish the rest of it on the Web may be satisfying.

If you're freelance, not participating in the new world makes you more marginal; but the realities of making a living can make the time drain prohibitive. George Bernard Shaw estimated that he could have written another play if he had gotten less mail; he actually had a system of printed, coloured postcards he could send as standard replies to frequently asked questions to save himself time. (The volumes of Shaw's collected letters attest to the fact that he wasn't rigorous about using them without additional comment.)

Most writers, not being Shaw, have to find the time. Because what makes it possible to earn a living as a creative person over a long period of time is the community of readers and fans you build around your work. The sign that you are really successful is that your particular fan community thinks it owns part of your success and has an emotional investment in your work. If they didn't, they wouldn't be fans. Hepburn was right, but she was also wrong.

She still didn't have to give the autographs, though – and she didn't. She told those fans to "Go sit on a tack."


December 29, 2006

Resolutions for 2007

A person can dream, right?

- Scrap the UK ID card. Last week's near-buried Strategic Action Plan for the National Identity Scheme (PDF) included two big surprises. First, that the idea of a new, clean, all-in-one National Identity Register is being scrapped in favor of using systems already in use in government departments; second, that foreign residents in the UK will be tapped for their biometrics as early as 2008. The other thing that's new: the bald, uncompromising statement that it is government policy to make the cards compulsory.

No2ID has pointed out the problems with the proposal to repurpose existing systems, chiefly that they were not built to provide the security the legislation promised. The notion is still that everyone will be re-enrolled with a clean, new database record (at one of 69 offices around the country), but we still have no details of what information will be required from each person or how the background checks will be carried out. And yet, this is really the key to the whole plan: the project to conduct background checks on all 60 million people in the UK and record the results. I still prefer my idea from 2005: have the ID card if you want, but lose the database.

The Strategic Action Plan includes the list of purposes of the card; we're told it will prevent illegal immigration and identity fraud, become a key "defence against crime and terrorism", "enhance checks as part of safeguarding the vulnerable", and "improve customer service".

Recall that none of these things was the stated purpose of bringing in an identity card when all this started, back in 2002. Back then, first it was to combat terrorism, then it was an "entitlement card" and the claim was that it would cut benefit fraud. I know only a tiny mind criticizes when plans are adapted to changing circumstances, but don't you usually expect the purpose of the plans to be at least somewhat consistent? (Though this changing intent is characteristic of the history of ID card proposals going back to the World Wars. People in government want identity cards, and try to sell them with the hot-button issue of the day, whatever it is.)

As far as customer service goes, William Heath has published some wonderful notes on the problem of trust in egovernment that are pertinent here. In brief: trust is in people, not databases, and users trust only systems they help create. But when did we become customers of government, anyway? Customers have a choice of supplier; we do not.

- Get some real usability into computing. In the last two days, I've had distressed communications from several people whose computers are, despite their reasonable and best efforts, virus-infected or simply non-functional. My favourite recent story, though, was the US Airways telesales guy who claimed that it was impossible to email me a ticket confirmation because according to the information in front of him it had already been sent automatically and bounced back, and they didn't keep a copy. I have to assume their software comes with a sign that says, "Do not press this button again."

Jakob Nielsen published a fun piece this week, a list of top ten movie usability bloopers. In movies, computers only crash when they're supposed to, there is no spam, on-screen messages are always easily readable by the camera, and time travellers have no trouble puzzling out long-dead computer systems. But of course the real reason computers are usable in movies isn't some marketing plot by the computer industry but the same reason William Goldman gave for the weird phenomenon that movie characters can always find parking spaces in front of their destination: it moves the plot along. Though if you want to see the ultimate in hilarious consumer struggles with technology, go back to the 1948 version of Unfaithfully Yours (out on DVD!), starring Rex Harrison as a conductor convinced his wife is having an affair. In one of the funniest scenes in cinema, ever, he tries to follow printed user instructions to record a message on an early gramophone.

- Lose the DRM. As Charlie Demerjian writes, the high-def wars are over: piracy wins. The more hostile the entertainment industries make their products to ordinary use, the greater the motivation to crack the protective locks and mass-distribute the results. It's been reasonably argued that Prohibition in the US paved the way for organized crime to take root because people saw bootleggers as performing a useful public service. Is that the future anyone wants for the Internet?

Losing the DRM might also help with the second item on this list, usability. If Peter Gutmann is to be believed, Vista will take a nosedive in that direction because of embedded copy-protection requirements.

- Converge my phones. Please. Preferably so people all use just the one phone number, but all routing is least-cost to both them and me.

- One battery format to rule them all. Wouldn't life be so much easier if there were just one battery size and specification, and to make a bigger battery you'd just snap a bunch of them together?

Happy New Year!


October 20, 2006

Spam, spam, spam, and spam

Illinois is a fine state. It is the Land of Lincoln. It is the birthplace of such well-known Americans as Roger Ebert and Ronald Reagan, and the adopted home of Oprah Winfrey. It has a baseball team so famous that even I know it's called the Chicago Cubs. The philosopher John Dewey (not to be confused with Melvil Dewey, of library-cataloguing fame) taught in Illinois. The state also claims the famous pro-evolution lawyer Clarence Darrow, Mormon church founder Joseph Smith, the nuclear physicist Enrico Fermi, transistor co-inventor William Shockley, and Frank Lloyd Wright.

I say all this because I don't want anyone to think I don't like or respect Illinois or the intelligence and honor of its judges, including Charles Kocoras, who awarded $11.7 million in damages to e360Insight, a company branded a spammer by the Spamhaus Project.

The story has been percolating for a while now, but is reasonably simple. e360Insight says it's not a bad spammer guy but a good opt-in marketing guy; Spamhaus first said the Illinois court didn't have jurisdiction over a British company with no offices, staff, or operations in the US, then decided to appeal against the court's $11.7 million judgement. e360Insight filed a motion asking the court to have ICANN and/or Spamhaus's domain registrar, the Canadian company Tucows, remove Spamhaus's domain from the Net. The judge refused to grant this request, partly because doing so would cut off Spamhaus's lawful activities, not just those in contravention of the order he issued against Spamhaus. And a good time is being had by all the lawyers.

The case raises so many problems you almost don't know where to start. For one thing, there's the arms race that is spam and anti-spam. This lawsuit escalates it, in that if you can't get rid of an anti-spammer through DDoS attacks, well, hey, bankrupt them through lawsuits.

Spam, as we know, is a terrible, intractable problem that has broken email, and is trying to break blogs, instant messaging, online chat, and, soon, VOIP. (The net.wars blog, this week, has had hundreds of spam comments, all appearing to come from various Gmail addresses, all landing in my inbox, breaking both blogs and email in one easy, low-cost plan.) The breakage takes two forms. One is the spam itself – up to 90 percent of all email. The second is the steps people take to stop it. No one can use email with any certainty now.

Some have argued that real-time blacklists are censorship. I don't think it's fair to invoke the specter of Joseph McCarthy. For one thing, using these blacklists is voluntary. No one is forced to subscribe, not even free Webmail users. That single fact ought to be the biggest protection against abuse. For another thing, spam email in the volumes it's now going out is effectively censorship in itself: it fills email boxes, often obscuring and sometimes blocking entirely wanted email. The fact that most of it either is a scam or advertises something illegal is irrelevant; what defines spam, I have long argued, is the behavior that produces it. I have also argued that the most effective way to put spammers out of business is to lean on the credit card companies to pull their authorisations.

Mail servers are private property; no one has an automatic right to expect mine to receive unwanted email, just as I am not obliged to speak to a telemarketer who phones during dinner.

That does not mean all spambusters are perfect. Spamhaus provides a valuable public service. But not all anti-spammers are sane; in 2004 journalist Brian McWilliams made a reasonable case in his book Spam Kings that some anti-spammers can be as obsessive as the spammers they chase.

The question that's dominated a lot of the Spamhaus coverage is whether an Illinois court has jurisdiction over a UK-based company with no offices or staff in the US. In the increasingly connected world we live in, there are going to be a lot of these jurisdictional questions. The first one I remember – the 1996 case United States vs. Thomas – came down in favor of the notion that Tennessee could impose its community decency standards on a bulletin board system in California. It may be regrettable – but consumers are eager enough for their courts to have jurisdiction in case of fraud. Spamhaus is arguably as much in business in the US as any foreign organisation whose products are bought or used in the US. Ultimately, "Come here and say that" just isn't much of a legal case.

The really tricky and disturbing question is: how should blacklists operate in future? Publicly listing the spammers whose mail is being blocked is an important – even vital – way of keeping blacklists honest. If you know what's being blocked and can take steps to correct it, it's not censorship. But publishing those lists makes legal action against spam blockers of all types – blacklists, filtering software, you name it – easier.

Spammers themselves, however, should not rejoice if Spamhaus goes down. Spam has broken email; that's not news. But if Spamhaus goes, and we actually receive all the spam it's been weeding out for us, the flood will be so great that spam will finally break spam itself.


October 13, 2006

GoogTube

Lawsuits!

That seems to have been most people's gleeful reaction earlier this week when Google announced its acquisition of YouTube for $1.65 billion. That and "They paid too much." Anyone would think you'd never heard of a site with user-submitted content before. Or stock swaps.

First, the lawsuit prospects. YouTube isn't a file-sharing network; it's not Napster, KaZaA, BitTorrent, Gnutella, or eDonkey. It has more in common with free Web site services. Except for "Director" accounts, videos posted to YouTube are limited to ten minutes. That certainly doesn't stop anyone from posting something that's copyrighted, but it does mean that YouTube isn't the most convenient way to share a feature film, hour-long TV drama, or even, really, a sitcom unless you get a Director account, which requires you to indemnify YouTube in case of your misuse and supply a bunch of personal details. A ten-minute video clip from, say, a five-hour tennis match (or from an old comedy show) still violates someone's copyright, but there's less point to suing over it. The prediction that it will be individuals and "little guys" who sue YouTube rather than large rightsholders seems to me to be the right one; the little guys have more to lose in this situation. And also: a little guy will sue a big guy hoping for a nice settlement; a big guy will squash a little guy like Bambi Meets Godzilla; but two big guys make a deal.

The bigger copyright issue is to do with music, since you can easily upload a single album cut and stay within the time and file-size limits. But the purchase announcement was accompanied by the news of licensing agreements with Sony BMG and Warner Music Group. That's so smart it's almost unbelievable, given the recent history of the copyright wars: the easiest way to get people not to violate your copyrights is to do it yourself. If you can readily access good, official copies of something, why would you bother with illegal, less good ones unless those had some creative added value of their own (or the official ones were really expensive)?

There's also the point that although you can download YouTube videos, doing so is the kind of hack that most mainstream watchers probably won't bother with.

It would be really helpful if someone of a statistical bent and possessed of a lot of patience would go in and do a survey of YouTube to determine what percentage of the content is in violation of copyright, what percentage of the violating material is not available commercially, and what percentage could actually damage anyone's sales. My own guess is that the answers to those are: a lot, most, and hardly any. But in any case, we've gone through this same thing with online file libraries and free home pages, and we know how it's going to come out: YouTube will operate, as it does now, a notice and takedown policy, and courts will eventually agree that as long as it operates that policy consistently and promptly it can't be held responsible for the sins of its users.

Even longer term, probably two other things will happen. Backed by Google's clout, YouTube should be able to sign long-term licensing deals with the major rightsholders that will cover a lot of the copyrighted material (though clearly not that of the "little guys"). Second, fair use could be extended to cover video and music; Google is currently arguing that its book scanning project (reminiscent of MP3.com's failed MyMP3 service) qualifies as fair use, although the publishers involved disagree violently.

The second thing, the paying too much, ignores the fact that Google paid entirely in stock. It's still a lot of money – or would be, if the YouTube guys could sell it and leave – but in real terms last year's $1 billion investment for a 5 percent stake in AOL cost the company more and ultimately will probably profit it less. But the fact is the price didn't matter: Google had to buy YouTube to keep it out of the hands of Yahoo! and MSN. With YouTube in the hands of one of those other companies, Google Video was dead in the water. Google makes its money on advertising; advertising goes where the people go and in direct proportion. People have known for a long time that video would eventually be successful on the Net; they just weren't sure how. In combining a video service with social networking, YouTube seems to have found the answer.

To the suggestion that YouTube is a flash-in-the-Net fad, I say nerts. YouTube has all the same characteristics as such passing fads as eBay, mobile phones, the Internet in general, and Google itself not so long ago. Individual videos will flash in and out of popularity, to be sure, just as you don't hear a whole lot any more about the Hampster Dance, the phone booth in the Mojave Desert, or "I kiss you!" Mahir Cagri, all Net stars in their time. But as video channels proliferate, a networking site where you can watch just the good bits by word-of-Net is as valuable as Google News.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

September 8, 2006

Crossing the streams

OK, this is weird. I'm sitting at my desk in London watching a match from the U.S. Open (a modestly sized tennis tournament finishing up this week in New York City). I'm watching it on the laptop. Not so strange; lots of people watch TV on their computers these days. Only in this case I'm watching the match as broadcast on USA Network, a satellite channel people get by cable. In the US.

Some months back in the online tennis forum I hang out in, you started seeing mentions of "streams" of live tennis, all coming from Asia somewhere, somehow. And damn if it wasn't true. Forget all those P2P networks that make you wait a day or two while someone seeds their digital copy of last night's broadcast – if anyone else is even interested enough in that quarter-final Jankovic-Dementieva match to upload it. Pick a player, and although the picture is small, you can have it live. Complete with commercials. At last I can see the ads repeating 12 times an hour that everyone else is complaining about. Whee!

It's weird the frisson of excitement with which you can welcome ads when they're part of something exotic and slightly forbidden. Believe me, if I were sitting in my friends' living room in Pennsylvania – I'd be complaining away with the best of them about *how many times* do we have to see that Sharapova-as-Leona Helmsley commercial (what's she supposed to be selling, anyway? Noblesse oblige?). But viewed this way it's suddenly so cool, like huddling around the short wave radio and tuning in South Africa.

The closer analogy is the early days of satellite television, when satellite nuts (this was before we learned the politically correct phrase "early adopters") had big dishes in their backyards, and found all sorts of interesting things in the sky, like free HBO (in those days, still known as Home Box Office). When dish owners numbered 1.7 million, the pay-TV services got bothered and began encrypting their services to force dish owners to pay cable rates. The upshot was one of the great moments of satellite television: "Captain Midnight" hijacked HBO's output for four and a half minutes in protest (http://www.findarticles.com/p/articles/mi_m1511/is_v7/ai_4293600). Captain Midnight was later identified as John MacDougall, a satellite TV salesman, and he was eventually fined $5,000.

Things are likely to be less kindly in the Internet era. For one thing, the companies that own the biggest broadcast networks are bigger, meaner, and have more laws on their side. The first Internet TV casualty was probably the Canadian-targeted iCraveTV, which for a few months in 2000 had 17 American and Canadian channels online. The service got squashed like a bug, despite offering to pay broadcasters. Bear in mind that the first cable companies operated much the way iCraveTV did: they put up a repeater and ran a bunch of wires.

Well, we know how the Internet works. Take out one guy and in return you get a lot more guys who are harder to deal with. I've lost count now of the players and sites: TVUPlayer, TVAnts, PPLive, Sopcast. All are Asian, all stream live TV, and all use peer-to-peer networking technologies to spread the load. Which means, in turn, that the single biggest expense in streaming – bandwidth – is shared among the users. Most of whom, as far as I can tell, are sports nuts, which is only logical. The picture you get from these players is, while good enough to watch, still relatively small and low-resolution. For scripted television, you can get a better experience by waiting a day and downloading a torrent or a legal copy from the pay services that are beginning to open up.

But the whole point of sports is that it is live, and no one really knows how it's going to come out. Within some limits, a bad, live picture is often preferable to a perfect, delayed one. Even if you can't really see what Federer is doing when he hits the ball, you want the emotional rush of being there with him. You can always watch the full-size version later for artistic appreciation.

Theoretically, the fact that the pictures are small ought to give broadcasters the same kind of confidence that publishers have when it comes to file-sharing. People will pay for big-screen viewing just as they'll pay for books. Nonetheless, we're standing on the brink of the WIPO broadcast treaty that net.wars wrote about in February, 2005.

James Love has a lengthy critique of the current proposals (PDF). But one thing he leaves out is that as far as I can make out, today's streaming players "rebroadcast" their signals by pointing at an IP address where the broadcaster itself is streaming its own output. Are we talking about making it illegal to access or publish IP addresses based on the content that's available at them? TEOTIAWKI. (The End of the Internet as we know it.)

I can't believe these streams are really legal, despite this argument regarding law enforcement actions in Italy. Even if they include ads, someone in London is not in the target demographic for the USTA. Presumably, eventually everybody will encrypt their streams and we'll all have to have protected players and subscriptions in order to view them. In the meantime, enjoy your giant satellite dish.

July 21, 2006

I blog, therefore I am

According to a new report (PDF) from Pew Internet, the US is home to 12 million bloggers. It only seems like more. About 57 million Americans read them. And guess what? Bloggers are just like us.

Pew came up with some interesting numbers. More than half of bloggers (54 percent) are under 30. Gender is balanced. Race is not: 74 percent of American Internet users are white, but only 60 percent of bloggers. Most view it as a personal pursuit, and the biggest share – 37 percent – say that the topic of their blog is "my life and experience". Only 11 percent name politics as their chief topic. A tenth spend ten or more hours a week on their blog.

A third see blogging as a form of journalism. This bit led CBS News to crow, "Blogs not replacing journalism just yet". Foolish. About half, the report also notes (further down, past the executive summary that's all a deadlined journalist may have time to read), spend time trying to verify facts and include links to original source material, more commonly among those over 30 or with a college degree. Somewhat fewer – 40 percent – quote other people and/or media directly; understandable, since if you don't have the imprimatur of a major media outlet you are likely to think you can't get access, and sources may indeed not be willing to give their time. Fewer – 38 percent – post corrections; fewest of all – 30 percent – get permission to post copyrighted material sometimes or often. It's not clear from the report how often those correction-posters make mistakes (perhaps a better key to whether it's journalism). I think the copyright question is irrelevant; you do that only if you have a lot of readers, influence, or money. You're unlikely to think it matters otherwise.

But more importantly, who cares? Certainly, some of the best blogs are written by journalists or professional writers. But by no means all: others are written by scientists, lawyers, and technologists. And it's sophistry to worry about whether the results are journalism. It's one of those angels-on-the-head-of-a-pin questions: what's the difference between a newspaper, a news site, a community blog, and a different community blog? In fact, although journalists seem obsessed with subsuming blogging into journalism, it's arguable that eventually all media will be a subset of blogging.

What's really frustrating is the stuff they didn't ask. Only 15 percent (mostly people over 50) say making money is a major or minor reason for blogging. Only 8 percent say they make any. Those who do make money do so from tip jars, selling stuff, Amazon Associates, Google AdSense, and, for one in five, premium content. Well? How much money do they make? Which of those income-producing options do they find most successful? Have they changed how they blog to try to increase revenues? Are we including the people who are paid to write blogs for Gawker or one of the other Blog Empires? Many blogs – for example, Lawrence Lessig's – seem to me to fall under the category of "professional development": they are a way of thinking through ideas that will eventually wind up in books or lectures, a process helped by the feedback commenters provide. That's not directly making money, but it's not a hobby either. On this point, Pew demonstrated a problem I categorize as "PWJs": People With Jobs have trouble understanding the seamless lives some of us have, where there is no clear division between "work" and "recreation", and where anything that might be a "hobby" for a PWJ is subsumed, as much as possible, into what a PWJ might call "work".

Which leads to the other mainstream media complaint. More than half of bloggers blog "for themselves", which Information Week boiled down to "All About Me". Again: how silly. You can blog for yourself, while simultaneously keeping notes on things you're afraid you'll forget, documenting the weird things that happen around you, keeping your friends up-to-date, and even campaigning for political change. Do journalists criticize filmmakers for making movies to express themselves? Do journalists point the finger at themselves for writing as a way of showing off in public? Are they seriously saying that it's somehow less noble to write about things you care about than things you are required to care about if you want to keep your job (the reality for many, if not most, working journalists)? Do I sense a little envy here?

In fact, frustration – not mentioned in the Pew Internet study (which cautions, by the way, that its sample of 223 was very small, though statistically representative) – is, in my experience, a key driver of why a lot of people blog. I'd bet that the racial disproportion Pew notes is due to non-whites' frustration with traditional media, which is disproportionately white. Journalists blog so they can write about the stories they can't get into their own papers. Some people blog because they are so frustrated with the state of the nation. If Alf Garnett, recreated in the US as Archie Bunker, were alive now, he'd be blogging those pub rants.

July 14, 2006

Not too cheap to meter

An old Net joke holds that the best way to kill the Net is to invent a new application everyone wants. The Web nearly killed the Net when it was young. Binaries on Usenet. File-sharing. Video on demand may finally really do it – not necessarily because it swamps servers and consumes all available bandwidth, but because, like spam, it causes people to adopt destructive schemes.

Two such examples turned up this week. The first, HD-TV over IP: Who Pays the Bill? (PDF), comes from the IP Development Network, the brainchild of Jeremy Penston, formerly of UUnet and Pipex. It argues that present pricing models will not work in the HDTV future, and that ISPs will need to control or provide their own content. It estimates, for example, that a consumer's single download of a streamed HD movie could cost an ISP £21.13, more than some users pay a month. The report has been criticized, and its key assumption – that the Internet will become the chief or only gateway to high-definition content – is probably wrong. Niche programming will get downloaded because any other type of distribution is uneconomical, but broadcast will survive for the mass market.
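The report's inputs aren't quoted beyond that £21.13 figure, so the numbers below are purely hypothetical placeholders; the sketch just shows the shape of the arithmetic behind the claim:

```python
# Back-of-the-envelope version of the per-movie cost claim.
# Both inputs are assumptions for illustration -- the report's actual
# figures aren't given in the column.
HD_MOVIE_GB = 8.0       # assumed size of one streamed HD feature film
COST_PER_GB = 2.64      # hypothetical delivery cost to the ISP, in pounds

cost_per_movie = HD_MOVIE_GB * COST_PER_GB
monthly_fee = 20.0      # roughly what a 2006 UK broadband customer paid

print(f"Cost to ISP per movie: £{cost_per_movie:.2f}")
print(f"Movies one month's fee covers: {monthly_fee / cost_per_movie:.1f}")
```

On those invented numbers a single movie already costs the ISP more than a month's subscription brings in, which is the report's point regardless of the exact inputs.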

The germ that isn't so easily dismissed is the idea that bandwidth is not necessarily going to continue to get cheaper, at least for end users.

Which leads to exhibit B, the story that's gotten more coverage, a press release – the draft discussion paper isn't available yet – from the London-based Association of Independent Music (AIM) proposing that ISPs should be brought "into the official value chain". In other words, ISPs should be required to have and pay for licenses agreed with the music industry and a new "Value Recognition Right" should be created. AIM's reasoning: according to figures they cite from MusicAlly Research, some 60 percent of Internet traffic by data volume is P2P, file-sharing, and music has been the main driver of that. Therefore, ISPs are making money from music. Therefore, AIM wants some.

Let's be plain: this is madness.

First of all, the more correct verb there is "was", and even then it's only partially true. Yes, music was the driver behind Napster eight years ago, and Gnutella six years ago, and the various eHoofers. But now BitTorrent is the biggest bandwidth gobbler, and the biggest proportion of data transferred is video, not music. This ought to be obvious: an MP3 runs about 4MB, a one-hour TV show 350MB, a movie 700MB to 4.7GB. Music downloads started first and were commercialized first, but that doesn't make music the main driver; it just makes it the historically *first* driver. In any event, music certainly isn't the main reason people get online: that is and was email and the Web.
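Those sizes make the point by themselves; laid out as arithmetic (sizes in megabytes, as quoted above):

```python
# File sizes as quoted in the column, in megabytes.
MP3 = 4
TV_SHOW_HOUR = 350
MOVIE_SMALL = 700     # a compressed movie
MOVIE_DVD = 4700      # a DVD-sized movie, 4.7GB

# How many songs' worth of traffic each video download represents:
print(TV_SHOW_HOUR // MP3)   # one TV episode ~ 87 songs
print(MOVIE_SMALL // MP3)    # one compressed movie ~ 175 songs
print(MOVIE_DVD // MP3)      # one DVD-sized movie ~ 1175 songs
```

One torrented movie, in other words, moves as many bytes as a thousand-odd MP3s, which is why video, not music, dominates the traffic.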

Second of all, one of the key, underrated problems for any charging mechanism that involves distinguishing one type of bits from another type of bits in order to compensate someone is the loss of privacy. What you read, watch, and listen to is all part of what you think about; surely the inner recesses of your mind should be your own. A regime that requires ISPs to police what their customers do – even if it's in their own financial interests to do so – edges towards Orwell's Thought Police.

Third of all, anyone who believes that ISPs are making money from P2P needs remedial education. Do they seriously think that at something like £20 per month for up to 8Mbps ADSL anyone's got much of a margin? P2P is, if anything, the bane of ISPs' existence, since it turns ordinary people into bandwidth hogs. Chris Comley, managing director of Wizards, the small ISP that supplies my service (it resells BT connections), says that although his company applies no usage caps, if users begin maxing out their connections (that is, using all their available bandwidth 24 hours a day, seven days a week), the company will start getting complaining email messages from BT and face having to pay higher charges for the connections it resells. Broadband pricing, like that of dial-up before it (when telephone bills could be relied upon to cap users' online hours), is predicated on the understanding that even users on an "unlimited" service will not in fact consume all the bandwidth that is available to them. In Comley's analogy, the owner of an all-you-can-eat buffet sets his pricing on the assumption that people who walk in for a meal are not in fact going to eat everything in the place.
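Comley's buffet analogy can be put into numbers. The figures below are invented for illustration – the column doesn't give BT's real contention ratios – but they show why one round-the-clock file-sharer breaks the pricing model:

```python
# Toy oversubscription (contention) model; all three parameters are
# assumptions, not figures from the column.
link_mbps = 8.0            # each customer's "up to" connection speed
contention_ratio = 50      # assumed customers sharing each slice of backhaul
typical_duty_cycle = 0.02  # typical customer busy ~2% of the time

backhaul_per_customer = link_mbps / contention_ratio   # what pricing assumes
typical_demand = link_mbps * typical_duty_cycle        # what most people use
hog_demand = link_mbps                                 # a maxed-out 24/7 P2P user

print(backhaul_per_customer)               # Mbps provisioned per head
print(typical_demand)                      # ordinary use fits the model exactly
print(hog_demand / backhaul_per_customer)  # one hog consumes 50 customers' share
```

The model works only while typical demand stays near the provisioned share; a single customer running flat out consumes as much backhaul as the pricing assumed for fifty.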

"The price war over bandwidth is going to have to be reversed," he says, "because we have effectively discounted what the user pays for IP to such a low level that if they start to use it they're in trouble, and they will if they start using video on demand or IPTV."

We began with an old Internet joke. We end with an old Internet saying, generally traced back to the goofy hype of Nicholas Negroponte and George Gilder: that bandwidth is or will be too cheap to meter. It ought to be, given that the price of computing power keeps dropping. But if that's what we want it looks like we'll have to fight for it.

July 7, 2006

If it's Wimbledon it must be television

Without a lot of fanfare, Wimbledon this year has been offering video on demand from its Web site: an all-access pass for the entire fortnight cost £9.95, and I paid it to try out the service.

In recent years, Wimbledon has become the time to catch up on New TV. This year I've been playing off the video-on-demand downloads against digital and analog terrestrial and interactive cable. On Mad Monday – the second Monday, when Wimbledon insanely packs all 16 men's and women's fourth-round singles matches into one day – I had them all going at once.

The best quality, at least here, was digital terrestrial, viewed on a 15.4in widescreen laptop via an external Freeview box. Widescreen format really suits tennis. The biggest choice of channels, though (five, to digital terrestrial's three or four), was on interactive cable. Analog displays here in 4:3, and although the picture quality is nice, the format is decidedly second-rate.

The official Wimbledon downloads are 4:3. Each time you download or open a file you are required to log in with your email address and password: they are protected content. If, however, you cheat by finding a utility to strip off the DRM (thereby breaking several national laws), the 1Mb versions look watchably good sized up. (Some of these hacked versions are beginning to appear on torrent sites.)

Let's leave aside the whole DRM-is-evil thing, aside from noting that the Wimbledon site says you have access to the files you have paid to download for 45 days. I assume they turn into pumpkins after that.

Traditionally, the BBC and Wimbledon collaborated so that the matches people most wanted to see were on the biggest courts and the most available channel at the most convenient viewing time for the biggest number of people. In other words, Henman on Centre at 5:30pm, when people are coming home from work. Today's interactive coverage grows out of that idea, and so beyond a few basic principles it's difficult to predict what match will be broadcast when on which channel.

What is a channel? Wimbledon publishes its match schedule by the court. You can't predict exactly what time any match after the first will start. Anything can happen: rain, player injury, straight-set wipeout, six-hour marathon. And they keep switching around, which is unhelpful if you're going out.

Logically, in our new world, a channel should be a court. Occasionally, the digital terrestrial coverage worked like this, and it was helpful during rain delays that while the main broadcast channel (BBC2) busied itself with nature documentaries and replays, you could see the covers being rolled back and estimate accurately what time play would resume. Given enough cameras on site, you, the obsessive viewer, could deploy tuners and displays so you had a window onto every court and could move among them any way you liked.

Or a single topic. Let's have the "Practice Court channel." You can learn a lot about what the players are working on and how they build their strategies and games. Or how about the "Interview room channel", perhaps complete with a competition in which viewers get to pretend to be players and prizes are awarded for the most absurdly cliched answers? IBM competitors might particularly like the "IBM Hospitality Suite channel". And all of that is without the video clips that fans film and post.

The online Wimbledon Live service works more like that, but its basic unit is the match, not the court, and in my experience when a match finishes you have to restart the stream for the next match. The more useful thing is the archive, which lets you download and watch all sorts of stuff that generally doesn't get broadcast, such as veterans' matches, juniors, and early round mixed doubles. It's still not complete – of the 64 first-round women's singles matches 35 are available for download (compared to 40 of the men's) – but it's a lot closer.

We asked what a channel was, but that's small fry: writing in the Guardian this week on the contentious Television Without Frontiers EU directive, Peter Warren asked: what is television? It used to be defined by the physics of its transmission. The BBC transmission of the match between Anastasia Myskina and Amelie Mauresmo is obviously television; is it still television if it's downloaded from the Wimbledon site? Or if someone sits courtside and sends clips to YouTube? Or if you happen to live overlooking the courts and set up your own camera, which you stream only to your circle of IPTV buddies?

We are rapidly moving towards a world where what we have thought of as television is increasingly a giant pool of video clips of varying lengths made with varying levels of funding and skill and transmitted via many different means. In the traditional channels' struggle to stay afloat, it seems to me that sports are going to be increasingly important because they have a characteristic almost nothing else shares: people want the emotional experience of seeing big pictures of them from faraway places in real time when they are actually happening.

June 30, 2006

Technical enough for government work

Wednesday night was a rare moment of irrelevant glamor in my life, when I played on the Guardian team in a quiz challenge grudge match.

In March, Richard Sarson (intriguingly absent, by the way) accused MPs of not knowing which end was up, technically speaking, and BT funded a test. All good fun.

But Sarson had a serious point: MPs are spending billions and trillions of public funds without the technical knowledge to assess them. His particular focus was the ID card, which net.wars has written about so often. Who benefits from these very large IT contracts besides, of course, the suppliers and contractors? It must come down to Yes, Minister again: commissioning a huge, new IT system gives the Civil Service a lot of new budget and bureaucracy to play with, especially if the ministers don't understand the new system. Expanded budgets are expanded power, we know this, and if the system doesn't work right the first time you need an even bigger budget to fix it.

And at that point, the issue collided in my mind with this week's other effort, a discussion of Vernor Vinge's ideas of where our computer-ridden world might be going. Because the strangest thing about the world Vernor Vinge proposes in his new book, Rainbows End, is that all the technology pretty much works as long as no one interferes with it. For example: this is a world filled with localizer sensors and wearable computing; it's almost impossible to get out of view of a network node. People decide to go somewhere and snap! a car rolls up and pops open its doors.

I'm wondering if Vinge has ever tried to catch a cab when it was raining in Manhattan.

There are two keys to this world. First: it is awash in so many computer chips that IPv6 might not have enough addresses (yeah, yeah, I know, no electron left behind and all that). Second: each of these chips has a little blocked off area called the Secure Hardware Environment (SHE), which is reserved for government regulation. SHE enables all sorts of things: detailed surveillance, audit trails, the blocking of undesirable behavior. One of my favorite of Vinge's ideas about this is that the whole system inverts Lawrence Lessig's idea of code is law into "law is code". When you make new law, instead of having to wait five or ten years until all the computers have been replaced so they conform to the new law, you can just install the new laws as a flash regulatory update. Kind of like Microsoft does now with Windows Genuine Advantage. Or like what I call "idiot stamps" – today's denominationless stamps, intended for people who can never remember how much postage is.

There are a lot of reasons why we don't want this future, despite the convenience of all those magically arriving cars, and despite the fact that Vinge himself says he thinks frictional costs will mean that SHE doesn't work very well. "But it will be attempted, both by the state and by civil special interest petitioners." For example, he said, take the reaction of a representative he met from a British writers' group who thought it was a nightmare scenario – but loved the bit where microroyalties were automatically and immediately transmitted up the chain. "If we could get that, but not the monstrous rest of it…"

For another, "You really need a significant number of people who are willing to be Amish to the extent that they don't allow embedded microprocessors in their lifestyle." Because, "You're getting into a situation where that becomes a single failure point. If all the microprocessors in London went out, it's hard to imagine anything short of a nuclear attack that would be a deadlier disaster."

Still, one of the things that makes this future so plausible is that you don't have to posit the vast, centralized expenditure of these huge public IT projects. It relies instead on a series of developments coming together. There are examples all around us. Manufacturers and retailers are leaping gleefully onto RFID in everything. More and more desktop and laptop computers are beginning to include the Trusted Platform Module, which is intended to provide better security by blocking all unsigned programs from running but, as a by-product, could also allow the widescale, hardware-level deployment of DRM. The business of keeping software updated has become so complex that most people are greatly relieved to be able to make it automatic. People and municipalities all over the place are installing wireless Internet for their own use and sharing it. To make Vinge's world, you wait until people have voluntarily bought or installed much of the necessary infrastructure and then do a Project Lite to hook it up to the functions you want.

What governments would love about the automatic regulatory upgrade is the same thing that the Post Office loves about idiot stamps: you can change the laws (or prices) without anyone's really being aware of what you're doing. And there, maybe, finally, is some real value for those huge, failed IT projects: no one in power can pretend they aren't there. Just, you know, God help us if they ever start being successful.

April 28, 2006

Who's afraid of the big, bad Google?

I honestly think that one of the happiest days in my life online was the day I found Google. At the time, the best search engine was Altavista, and it had become dog-awful to use: cluttered, slow, messy, and annoying. Google was the online user's dream: clean, white screen, cute logo, speed, good results. Home. No one may ever really beat a path to the door of the manufacturer of a better mouse trap, but they certainly did to a better search engine.
It took a long time - or it seemed like it - for the world to catch up. "Only geeks use it," I was told for some years.

"But how can they make any money?" financial analysts complained. The received wisdom, even when Google reinvented advertising with paid search, was that Google's audience could vanish overnight if someone came along with a better search engine. Two problems with that. First, it's really hard to come up with a better search engine than Google. Second, Google was capable of finding people working on technology to improve search engines - and hiring them.

Search engine audiences turn out to be more locked in than it appeared in the rearview mirror. True, most people don't care what search engine they use as long as they get good results; but that also means they are unlikely to change. Google won on usability as much as quality of results, and the longer you use it the more you learn about how to work with it.

The first stirrings of Google dislike probably showed up when Google bought Dejanews and began constructing the most comprehensive Usenet archive available. People who had posted to Usenet in the early days had thought of their ramblings as ephemeral. Now, they were going to be available for every Tom, Dick, and Maureen in Human Resources to search. Of course, then everyone loved Google again when it did Google Earth.

Concern about Google seemed to begin among privacy advocates with Gmail, because of its vast storage and the automatic searching that inserts ads. Log in, read your email, do your searches, and Google collects all the data. Valuable stuff. Search your hard drive. Upload its contents to Google as a backup.

At this point, there seems to be no doubt that Google is becoming the Microsoft of online information. You do not have to own the information itself - any more than Microsoft had to make computers. It is sufficient to control the gateways. In Microsoft's case, that was the operating system. In Google's case, it's the search engines and, just as important, the advertising. Almost every blog that can carry advertising - even Nick Carr's discussion this week of Google's float - carries Google AdSense.

In it, Carr noted something I've been pondering for some time: the vastness of the universe of people who have signed up for AdSense, and the revenues Google has derived from the clicks on their sites, and the percentage of those people whose payouts have not reached the $100 necessary to actually get paid. I am one of the people in that universe; there must be millions of us.

The float thus generated seems hardly likely to make a dent in a company of the size that Google now is - but on the other hand, float is how Warren Buffett became the second richest man in America.

Carr also notes that the terms and conditions that accompany signing up for AdSense ban people from disparaging Google. And it appears that you can be thrown out of AdSense for other reasons, such as displaying Google ads next to content that might upset the advertisers. I'm not convinced that being dumped out of AdSense has to end a successful blog, though it certainly means you need to rethink where your income will come from. But AdSense is like eBay: it matches, through search, buyers and sellers. And like eBay, the bigger its user base gets, the more successful it will be at doing so.

Every move Google makes now is offending someone. Google Print upsets publishers and some authors. Google's plans to digitize books for online reading upset many libraries. Telephone companies. Newspaper publishers losing classified ads. All media sources, convinced that Google News will cost them the differentiation between the Kew Society Newsletter and CNN. Human rights groups, when Google cooperated with China. eBay, which would like to reduce its dependence on foreign search engines. Business Week has a long list. Even its doodles, surely the best part of its service - a search engine that's fast and sometimes makes me laugh! - got it in trouble with the Miró estate. You must, I suppose, be doing something right in business terms if this many people feel threatened.

Even so, the reality is that Google is never going to be hated the way Microsoft, which is back in antitrust court in the EU this week, is. For a very simple reason. Everyone's primary contact with Microsoft is the daily frustrations of using Windows. It is all negative. Everyone's primary contact with Google, however, is that it finds you things you wanted. That's a hard burst of positive fuzzies to overcome.

April 21, 2006

Adblogging

LiveJournal, home to approximately 1.3 million active blogs and 10 million overall, announced this week that it's inaugurating ads on its site, following on from a post on the subject from the founder about six weeks ago. The general idea isn't all that dissimilar to what Salon has been doing for the last five years with its Premium service: you pay for the value you receive with either ads or money, your choice. LiveJournal has always offered free and paid accounts, basing the incentive to pay on limiting the features available to the free accounts. Now, it will offer an intermediate "Sponsored" level which will include ads. If you're a logged-in paid user you will never see ads; if you're a free or sponsored user (or visitor) you will see ads on LiveJournal's main site and on sponsored journals. No one has to display ads on their journal.

Almost simultaneously, Six Apart, the owner of LiveJournal, announced that it had secured $12 million in venture capital funding. LiveJournal was a cooperative community; now it's a business.

We will pass over quickly the storm-in-a-Slashdot about the changes to LiveJournal's terms of service that banned the use of ad-blocking software on pain of having your account deleted. LiveJournal has already said the clause was a lawyer's error. It must have been: it would have been such a pointless, stupid, and self-destructive clause that it's hard to believe anyone ever seriously intended to implement it. For one thing, such a stick would have had no effect on anyone not logged in. For another, we've had so many of these TOS thrashes by now that there can't be a remotely technically savvy management team that doesn't know how much negative attention they'll get from this kind of announcement. Posting new TOS that no one has read and considered carefully isn't too bright either, but it's a mistake, not evidence of Evil.

The advent of ads will be interesting. LiveJournal isn't just a blogging site; it's a powerful cross between blogging and social networking. On most social networking sites, the links between people are tenuous and ill-defined. There is no distinction on Orkut, for example, between a friend I've known intimately for years and a "friend" I met for five minutes last week. There is only one kind of link, and it doesn't tell you much.

On LiveJournal, however, adding someone to the list of blogs on my friends list implies real interest, if not friendship. There is still only one kind of link, but it is more meaningful, and some conclusions about its strength can be derived from seeing whether it's one-way or two-way, and how large a cluster it's part of. If the goal is ads targeted to people's interests – which would be logical – LiveJournal has a very rich structure to mine and an even richer base of information about all its users based on what they post about themselves in their blogs. You can see why ad sales people might salivate over the notion, particularly in the wake of this week's other blogging story, the one about graphing the mood of the blogosphere, specifically LiveJournal's part of it.

That technology strikes me as gimmicky: it relies on self-reporting. I look forward to the first mass protest via LiveJournal mood tags, when users club together to post specific moods so they'll show up on the graphs. The concept, however, is likely to develop further into data mining and textual analysis of the blog entries themselves (which are less likely to be spoofed, because of the amount of effort involved), and it's easy to imagine how valuable those results could be to marketing people trying to spot trends they can capitalize on.

The problem LiveJournal is up against in all this is that its lock-in on customers isn't as strong as it might seem at first. In fact, LiveJournal's lock-in is considerably weaker than Google's, whose audience financial analysts used to regard as alarmingly easy to lose. That wasn't true of Google, largely because Google was so much better at what it did than everyone else was.

But it doesn't take much to start a blog somewhere else. The tools aren't much different, and you don't have to take down or lose the old blog to do it. You just start the new one with an entry pointing at the old one, and move on. The phenomenon of RSS readers, online and off, means that the socially networked LiveJournal structure can be mimicked almost anywhere by anyone – and those tools are going to continue improving. More than that, if you have any hope of turning your blog into a source of income, however tiny, you move off LiveJournal because you can't run your own ads. If LiveJournal wants its new gambit to work, it's going to have to give sponsored users a slice of the action. Because, aside from anything else, sponsored users are going to find themselves at the bottom of the social LiveJournal totem pole. Everyone will know who they are. And they won't get anything for it.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here or at net.wars home.