" /> net.wars: August 2012 Archives


August 31, 2012

Remembering the moon

"I knew my life was going to be lived in space," a 50-something said to me in 2009 on the anniversary of the moon landings, trying to describe the impact they had on him as a 12-year-old. I understood what he meant: on July 20, 1969, a late summer Sunday evening in my time zone, I was 15 and allowed to stay up late to watch; awed at both the achievement and the fact that we could see it live, we took Polaroid pictures (!) of the TV image showing Armstrong stepping onto the Moon's surface.

The science writer Tom Wilkie remarked once that the real impact of those early days of the space program was the image of the Earth from space, that it kicked off a new understanding of the planet as a whole, fragile ecosystem. The first Earth Day was just nine months later. At the time, it didn't seem like that. "We landed on the moon" became a sort of yardstick; how could we put a man on the moon yet be unable to fix a bicycle? That sort of thing.

To those who've grown up always knowing we landed on the moon in ancient times (that is, before they were born), it's hard to convey what a staggering moment of hope and astonishment that was. For one thing, it seemed so improbable and it happened so fast. In 1961, President Kennedy promised to put a man on the moon by the end of the decade - and it happened, even though he was assassinated. For another, it was the science fiction we all read as teens come to life. Surely the next steps would be other planets, greater access for the rest of us. Wouldn't I too, in my lifetime, eventually be able to look out the window of a vehicle in motion and see the Earth getting smaller?

Probably not. Many years later, I was on the receiving end of a rant from an English friend about the wasteful expense of sending people into space when unmanned spacecraft could do so much more for so much less money. He was, of course, right, and it's not much of a surprise that the death of the first human to set foot on the Moon, Neil Armstrong, so nearly coincided with the success of the Mars rover Curiosity. What Curiosity also reminds us, or should, is that although we admire Armstrong as a hero, landing on the Moon wasn't so much his achievement as that of the probably thousands of engineers, programmers, and scientists who developed and built the technology necessary to get him there. As a result, the thing that makes me saddest about Armstrong's death on August 25 is the loss of his human memory of the experience of seeing and touching that off-Earth orbiting body.

The science fiction writer Charlie Stross has a lecture transcript I particularly like about the way the future changes under your feet. The space program - and, in the UK and France, Concorde - seemed like a beginning at the time, but has so far turned out to be an end. Sometime between 1950 and 1970, Stross argues, progress was redefined from being all about the speed of transport to being all about the speed of computers or, more precisely, Moore's Law. In the 1930s, when the moon-walkers were born, the speed of transport was doubling in less than a decade; but it doubled only once in the 40 years from the late 1960s to 2007, when he wrote this talk. The rate of improvement had slowed dramatically.

Moore's Law is Intel founder Gordon Moore's observation that the number of transistors that can fit on an integrated circuit doubles about every 24 months, increasing computing speed and power proportionately. Applying the transport precedent to it, Stross was happy to argue that despite what we all think today, and despite the obsessive belief among Singularitarians that computers will surpass the computational power of humans any day now (but certainly by 2030), "Computers and microprocessors aren't the future. They're yesterday's future, and tomorrow will be about something else." His suggestion: bandwidth, bringing things like lifelogging and ubiquitous computing so that no one ever gets lost. If we'd had that in 1969, the astronauts would have been sending back first-person total-immersion visual and tactile experiences that would now be in NASA's library for us all to experience as if at first hand, instead of just the external image we all know.
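For a sense of the scale gap Stross is pointing at, here is a back-of-the-envelope calculation in Python. The doubling periods are the ones quoted above; the 40-year window is his late-1960s-to-2007 span, and everything else is just illustration.

```python
# Back-of-envelope comparison of the two growth regimes Stross contrasts.
# Doubling periods come from the text above; the 40-year window matches
# his late-1960s-to-2007 span.

def growth_factor(years: float, doubling_period_years: float) -> float:
    """Cumulative multiplier after `years` of doubling every `doubling_period_years` years."""
    return 2 ** (years / doubling_period_years)

WINDOW = 40  # late 1960s to 2007

# Transport speed: roughly one doubling over the whole window.
transport = growth_factor(WINDOW, 40)

# Moore's Law: transistor counts doubling about every 24 months.
computing = growth_factor(WINDOW, 2)

print(f"Transport speed over {WINDOW} years: x{transport:,.0f}")    # x2
print(f"Transistor counts over {WINDOW} years: x{computing:,.0f}")  # x1,048,576
```

A factor of two against a factor of a million over the same four decades: that is the redefinition of progress Stross is describing.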

The science fiction I grew up with assumed that computers would remain rare (if huge), expensive items operated by the elite and knowledgeable (except, perhaps, for personal robots). Space flight, and personal transport, on the other hand, would be democratized. Partly, let's face it, that's because space travel and robots make compelling images and stories, particularly for movies, while sitting and typing...not so much. I didn't grow up imagining my life being mediated and expanded by computer use; I, like countless generations before me, grew up imagining the places I might go and the things I might see. Armstrong and the other astronauts were my proxies. One day in the not-too-distant future, we will have no humans left who remember what it was actually like to look up and see the Earth in the sky while standing on a distant rock. There have only ever been, Wikipedia tells me, 12 of them, all born in the 1930s.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


August 24, 2012

Look and feel

Reading over the accounts of the deliberations in Apple vs Samsung, the voice I keep hearing in my head is that of Philippe Kahn, the former CEO of Borland, one of the very first personal computing software companies, founded in 1981. I hear younger folks scratching their heads at that and saying, "Who?" Until 1992 Borland was one of the top three PC software companies, dominant in areas like programming languages and compilers; it faltered when it tried to compete with Lotus (long since swallowed by IBM) and Microsoft in office suites. In 1995 Kahn was ousted, going on to found three other companies.

What Kahn's voice is saying is, "Yes, we copied."

The occasion was an interview I did with him in July 1994 for the now-defunct Personal Computer World, then a monthly magazine the size of a phone book. (Oh - phone book. Let's call it two 12.1 inch laptops, stacked, OK?). Among the subjects we rambled through was the lawsuit between Borland and Lotus, one of the first to cover the question of whether and when reverse-engineering infringes copyright. After six years of litigation, the case was finally decided by the Supreme Court in 1996.

The issue was spreadsheet software; Lotus 1-2-3 was the first killer application that made people want - need - to buy PCs. When Borland released its competing Quattro Pro, the software included a mode that copied Lotus's menu structure and a function to run Lotus's macros (this was when you could still record a macro with a few easy keyboard strokes; it was only later that writing macros began to require programming skills). In the district court, Lotus successfully argued that this was copyright infringement. In contrast, Borland, which eventually won the case on appeal, argued that the menu structure constituted a system. Kahn felt so strongly about pursuing the case that he called it a crusade and the company spent tens of millions of dollars on it.

"We don't believe anyone ever organized menus because they were expressive, or because the looked good," Kahn said at the time. "Print is next to Load because of functional reasons." Expression can be copyrighted; functionality instead is patented. Secondly, he argued, "In software, innovation is driven fundamentally by compatibility and interoperability." And so companies reverse-engineer: someone goes in a room by themselves and deconstructs the software or hardware and from that produces a functional specification. The product developers then see only that specification and from it create their own implementation. I suppose a writer's equivalent might be if someone read a lot of books (or Joseph Campbell's Hero With a Thousand Faces), broke down the stories to their essential elements, and then handed out pieces of paper that specified, "Entertaining and successful story in English about an apparently ordinary guy who finds out he's special and is drawn into adventures that make him uncomfortable but change his life." Depending on whether the writer you hand that to is Neil Gaiman, JRR Tolkien, or JK Rowling, you get a completely different finished product.

The value to the public of the Lotus versus Borland decision is that it enabled standards. Imagine if every piece of software had to implement a different keystroke to summon online help, for example (or pay a license fee to use F1). Or think of the many identical commands shared among Internet Explorer, Firefox, Opera, and Chrome: would users really benefit if each browser had to be completely different, or if Mosaic had been able to copyright the lot and lock out all other comers? This was the argument the EFF made in its amicus brief: that allowing the first developer of a new type of software to copyright its interface could lock up that technology and its market for 75 years or more.

In the mid 1990s, Apple - in a case that, as James Allworth highlights in Harvard Business Review, was very similar to this one - sued Microsoft over the "look and feel" of Windows. (That took a particular kind of hubris, given that everyone knows that Apple copied what it saw at Xerox to make that interface in the first place.) Like Lotus versus Borland, that case turned on copyright (expression); Apple versus Samsung, by contrast, revolves around patents (functionality). But the fundamental questions in all three cases are the same: what is a unique innovation, what builds on prior art, and what is dictated by such externalities as human anatomy and psychology and the expectations we have developed over decades of phone and computer use?

What matters to Apple and Samsung is who gets to sell what in which markets. We, however, have much more important skin in this game: what is the best way to foster innovation and serve consumers? In its presentation on Samsung's copying, Apple makes the same tired argument as the music industry: that if others can come along and copy its work it won't have any incentive to spend five years coming up with stuff like the iPad. Really? As Allworth notes, is that what Apple did after losing the Microsoft case? If Apple had won then and owned the entire desktop market, do you think it would ever have had the incentive to develop the iPad? We have to hope that copying wins.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


August 17, 2012

Bottom dwellers

This week Google announced that it would downgrade, in its search results, sites with an exceptionally high number of valid copyright notices filed against them. As the EFF points out, the details of exactly how this will work are scarce and there is likely to be a big, big problem with false positives - that is, sites that are downgraded unfairly. You have only to look at the recent authorial pile-on that took down the legitimate ebook lending site LendInk for what can happen when someone gets hold of the wrong end of the copyright stick.

Until we know how Google will factor these copyright notice statistics into its rankings, how do we know what will be affected, how, and for how long? There is no transparency to let a site know what's happening to it, and no appeals process. Given the many abuses of the Digital Millennium Copyright Act, under which such copyright notices are issued, it's hard to know how fair such a system will be. Though, granted: the company could have simply done it and not told us. How would we know?
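To see how much hangs on those unpublished details, here is a purely hypothetical sketch, in Python, of one shape such a demotion signal could take. Google has said nothing about its actual mechanics, so every name, threshold, and formula below is invented for illustration.

```python
# Purely hypothetical sketch of a notice-based demotion signal. Google has
# published no details, so the threshold, the penalty curve, and the choice
# to normalize by site size are all invented here for illustration.
import math

def demotion_multiplier(valid_notices: int, indexed_pages: int,
                        threshold: float = 0.01) -> float:
    """Return a multiplier in (0, 1] applied to a site's ranking score.

    Sites whose ratio of valid copyright notices to indexed pages stays
    under `threshold` are untouched; above it, the penalty deepens with
    each order of magnitude of excess.
    """
    ratio = valid_notices / max(indexed_pages, 1)
    if ratio <= threshold:
        return 1.0
    # Each order of magnitude above the threshold halves the score.
    return 0.5 ** math.log10(ratio / threshold)

# The same 50 notices leave a million-page site untouched...
print(demotion_multiplier(valid_notices=50, indexed_pages=1_000_000))  # 1.0
# ...but noticeably demote a thousand-page site.
print(demotion_multiplier(valid_notices=50, indexed_pages=1_000))      # ~0.62
```

Every one of those unstated choices - the threshold, the curve, whether a penalty ever decays - determines who gets hurt; a LendInk-style pile-on of notices against a small legitimate site is exactly the false positive the EFF is worried about.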

The timing of this move is interesting because it comes only a few months after Google began advocating for the notion that search engine results are, like newspaper editorial matter, a form of free speech under the First Amendment. The company went as far as to commission the legal scholar Eugene Volokh to write a white paper outlining the legal arguments. These basically revolve around the idea that a search algorithm is merely a new form of editorial judgment; Google returns search results in the order in which, in its opinion, they will be most helpful to users.

In response, Tim Wu, author of The Master Switch, argued in the New York Times that conceding the right of free speech to computerized decisions brings serious problems with it in the long run. Supposing, for example, that antitrust authorities want to regulate Google to ensure that it doesn't use its dominance in search to unfairly advantage its other online properties - YouTube, Google Books, Google Maps, and so on. If search results are free speech, that type of regulation becomes unconstitutional. On BoingBoing, Cory Doctorow responded that one should regulate the bad speech without denying it is speech. Earlier, in the Guardian, Doctorow argued that Google's best gambit was making the argument about editorial integrity; publications make esthetic judgments, but Google famously loves to live by numbers.

This part of the argument is one that we're going to be seeing a lot of over the next few decades, because it boils down to this bit of Philip K. Dick territory: should machines programmed by humans have free speech rights? And if so, under what circumstances? If Google search results are free speech, is the same true of the output of credit-scoring algorithms or speed cameras? A magazine editor can, if asked, explain the reasoning process by which material was commissioned for, placed in, or rejected by her magazine; Google is notoriously secretive about the workings of its algorithms. We do not even know the criteria Google uses to judge the quality of its search results.

These are all questions we're going to have to answer as a society; and they are questions that may be answered very differently in countries without a First Amendment. My own first inclination is to require some kind of transparency in return: for every generation of separation between human and result, there must be an additional layer of explanation detailing how the system is supposed to work. The more people the results affect, the bigger the requirement for transparency. Something like that.

The more immediate question, of course, is whether Google's move will have any impact on curbing unauthorized file-sharing. My guess is that it won't have much; few file-sharers of my acquaintance use Google to find files to download.

Yet, in an otherwise sensible piece in the Guardian about the sentencing of Surfthechannel.com owner Anton Vickerman to four years in prison, Dan Sabbagh winds up praising Google's decision while making a number of errors. First of all, he blames the music industry's problems on mistakes "such as failing to introduce copy protection". As the rest of us know, the music industry only finally dropped copy protection in 2009 - because consumers hate it. Arguably, copy protection delayed the adoption of legal, paid services by years. He also calls the decision to sell all-you-can-eat subscriptions to music back catalogues a mistake; on what grounds is not made clear.

Finally, he argues, "Had Google [relegated pirate sites' results] a decade ago, it might not have been worthwhile for Vickerman to set up his site at all."

Ten years ago? In 2002, Napster had been gone for less than a year. Gnutella and BitTorrent were measuring their age in months. iTunes was a year old. The Pirate Bay wouldn't exist for some months more. Google was two years away from going public. The mistake then wasn't failing to downgrade sites oft accused of copyright infringement. The mistake then was failing to build legal, paid downloading services and get them up and running as fast as possible.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


August 10, 2012

Wiped out

There are so many awful things in the story of what happened this week to technology journalist Mat Honan that it's hard to know where to start. The fundamental part - that through not particularly clever social engineering an outsider was able in about 20 minutes to take over and delete his Google account, take over and defame his Twitter account, and then wipe all the data on his iPhone, iPad, and MacBook - would make a fine nightmare, or maybe a movie with some of the surrealistic quality of Martin Scorsese's After Hours (1985). And all, as Honan eventually learned, because the hacker coveted his three-character Twitter handle, a threat so unexpected that no one would think to build it into their threat model.

Honan's first problem was the thing Suw Charman-Anderson put her finger on for an Infosecurity Magazine piece I did earlier this year: when every other part of your digital life - ecommerce accounts, financial accounts, social media accounts, password resets all over the Web - is locked to a single email address, anyone who gains access to that address puts you in for "a world of hurt". If you use only one email account for everything, an attacker who gets into it can simply request password resets all over the place - and then he has access to your accounts and you don't. There are separate problems around the fact that the information required for resets is both the kind of stuff people disclose without thinking on social networks and commonly reused. None of this requires a fancy technology fix, just smarter, broader thinking.

There are simple solutions to the email problem: don't use one email account for everything and, in the case of Gmail, use two-factor authentication. If you don't operate your own server (and maybe even if you do) it may be too complicated to create a separate address for every site you use, but it's easy enough to have a public address you use for correspondence, a private one you use for most of your site accounts, and then maybe a separate, even less well-known one for a few selected sites that you want to protect as much as you can.
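As a sketch of how cheap that middle option can be: providers that support plus-addressing (Gmail is one; mail to user+anything@gmail.com lands in user@gmail.com) let you derive a distinct address per site from a single private mailbox. The tagging scheme below is my own invention for illustration, not any standard.

```python
# Minimal sketch of deriving per-site aliases from one plus-addressing
# mailbox. The tag scheme and the secret are invented for illustration;
# any provider that honors plus-addressing would deliver these to the
# base mailbox.
import hashlib

def site_address(mailbox: str, site: str, secret: str) -> str:
    """Derive an alias like user+shop-3f2a9c@example.com for a given site."""
    user, domain = mailbox.split("@")
    tag = hashlib.sha256(f"{secret}:{site}".encode()).hexdigest()[:6]
    return f"{user}+{site}-{tag}@{domain}"

print(site_address("alice@example.com", "shop", "my-secret"))
# -> alice+shop-<6 hex chars>@example.com
```

This won't stop an attacker who strips the plus-tag to find the base mailbox, but it compartmentalizes your accounts - and when an address leaks, the tag tells you which site leaked it.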

Honan's second problem, however, is not so simple to fix unless an incident like this commands the attention of the companies concerned: the interaction of two companies' security practices that on their own probably seemed quite reasonable. The hacker needed just two small bits of information: Honan's address (sourced from the Whois record for his Internet domain name), and the last four digits of a credit card number. The hack to get the latter involved adding a credit card to Honan's Amazon.com account over the phone and then using that card number, in a second phone call, to add a new email address to the account. Finally, you do a password reset to the new email address, access the account, and find the last four digits of the cards on file - which Apple then accepted, along with the billing address, as sufficient evidence of identity to issue a temporary password into Honan's iCloud account.

This is where your eyes widen. Who knew Amazon or Apple did any of those things over the phone? I can see the point of being able to add an email address; what if you're permanently locked out of the old one? But I can't see why adding a credit card was ever useful; it's not as if Amazon did telephone ordering. And really, the two successive calls should have raised a flag.

The worst part is that even if you did know, you'd likely have no way to require any additional security to block off that route to impersonators; telephone, cable, and financial companies have been securing telephone accounts with passwords for years, but ecommerce sites don't (or didn't) think of themselves as possible vectors for hacks into other services. Since the news broke, both Amazon and Apple have blocked off this phone access. But given the extraordinary number of sites we all depend on, the takeaway from this incident is that we ultimately have no clue how well any of them protect us against impersonation. How many other sites can be gamed in this way?

Ultimately, the most important thing, as Jack Schofield writes in his Guardian advice column, is not to rely on one service for everything. Honan's devastation was as complete as it was because all his devices were synched through iCloud and could be remotely wiped. Yet this is the service model that Apple has and that Microsoft and Google are driving towards. The cloud is seductive in its promises: your data is always available, on all your devices, anywhere in the world. And it's managed by professionals, who will do all the stuff you never get around to, like make backups.

But that's the point: as Honan discovered to his cost, the cloud is not a backup. If all your devices are hooked to it, it is your primary data pool, and, as Apple co-founder Steve Wozniak pointed out this week, it is out of your control. Keep your own backups, kids. Develop multiple personalities. Be careful out there.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


August 3, 2012

Social advertising

It only takes two words to sum up Facebook's sponsored stories, the program under which you click the "Like" button on a brand's page and the system picks up your name and photograph and includes them in ads seen by your friends. The two words: social engineering.

The co-option of that phrase into common language, and the workings of time, mean that its origins are beginning to be lost. In fact, it came from 1980s computer hacking, and was, to the best of my knowledge, coined by Kevin Mitnick in the days when he was the New York Times's most dangerous hacker. (Compared to today's genuinely criminal hacking enterprises, Mitnick was almost absurdly harmless; but he scared the wrong people at the wrong time.) The thing itself, of course, is basically the confidence game that is probably as old as consciousness: you, the con man, get the mark to trust you so you can then manipulate that trust to your benefit. By the time the mark figures out the game, you yourself expect to be long gone and out of reach. Trust can be abruptly severed, but the results of having granted it in the first place can't be so easily undone.

Where Facebook messed up was in that last bit: it's hard for a company to leave town. Naturally, there was litigation, and now there's a settlement under consideration that would require the company to pay millions to privacy advocacy organisations.

This hasn't, of course, been a good week for Facebook for other reasons: it released its first post-IPO financial statements last week, and, for the same reasons we gave when the IPO failed to impress us, those earnings were, as predicted, disappointing. At the same time, the company admitted that 83 million of its user accounts are fakes or duplicates (so the service's user base is maybe 912 million instead of 995 million). And a music company complains that it was paying for ads clicked on by bots, a claim Facebook says it can't substantiate. Small wonder the shares have halved in price since the IPO - and I'd say they're still too expensive.

The observation that the individuals whose faces and names were used were serving as unpaid spokespeople, however, sparks some interesting thoughts about the democratization of celebrity endorsements and product placement. Ever since I first encountered MIT's work on wearable computing in the mid 1990s, I've wondered when we would start seeing people wearing clothing that's not just branded but displaying video ads. In the early 2000s, I recall attending an Internet Advertising Bureau event where one of the speakers talked baldly about the desirability of getting messages into the workplace, which until then had been a no-go area. Well, I say no-go; to them I think it seemed more like a green field or an unbroken pasture of fresh snow.

Spammers were way ahead on this one, invading people's email inboxes and instant messaging and then, when filtering got good, spoofing the return addresses of people you know and trust in order to get you to click on the bad stuff. It's hard not to see Facebook's sponsored stories as the corporate version of this.

But what if they did pay, as that observation suggests? What if instead of casually telling your friends how great Lethal Police Hogwarts XXII is, you could get paid to do so? You wouldn't get much, true, but if sports stars can be paid millions of dollars to endorse tennis racquets (which are then customized to the point where they bear little resemblance to the mass market product sold to the rest of us), why shouldn't we be paid a few cents? Of course, after a while you wouldn't be able to trust your friends' opinions any more, but is that too high a price?

Recently, I've spent some time corresponding with a couple of people from Premiumlinkadvertising.com, who contacted me with an offer to pay me to insert a link to Musician's Friend into one of the music pages on my Web site. Once I realized that the deal was that the link could not be identified in any way as a paid link - it couldn't be put in a box, or a different font, or include the text "paid for", or anything like that - I bailed. They then offered more money. The last offer was $250 for a year, I think. I do allow ads on my site - a few pages have AdSense, and in the past a couple had paid-for text ads clearly labeled as such - but not ads masquerading as personal recommendations. I imagine there's some price at which I could be bought, but $250 is several orders of magnitude too low.


Week links:

- Excellent debunking of the "cybercrime costs $1 trillion" urban legend (is that including Facebook's vanishing market cap?)

- The Featured Artists Coalition has an interesting proposal to give artists and creators some rights in the proposed Universal/EMI merger.

- Wouldn't you think people would test their software before unleashing it on an unsuspecting stock market?


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.