" /> net.wars: December 2012 Archives


December 28, 2012

Apocalypse interrupted

We seem to have survived - again. This does not mean the *Maya* were wrong: as The Skeptic's Dictionary points out, their Long Count Calendar was coming to an end, not the world.

Yet predictions of apocalypse, Internet-style, are still all around us, largely inspired by the recently concluded - or imploded - World Congress on International Telecommunications (WCIT). The Guardian's John Naughton, for example, ponders the Net's ownership by politicians and big corporations. As he says (and as I wrote in New Scientist), we will not know for at least a generation, maybe two, whether the Internet will, like radio and television before it, become a closed, controlled medium. It is that specter that this column and the many digital rights organizations argue against.

In that article, Naughton refers to John Perry Barlow's Declaration of the Independence of Cyberspace. It's interesting that there's a sort of revisionism going on about this document. Naughton says, "We nodded approvingly".

Maybe he did, but at the time Barlow's Declaration was widely viewed with embarrassment, even by his peers in the Internet punditry business. Quite apart from the hyperbole, it was obviously wrong to think that governments could be kept entirely out of the Internet - and it was also obvious that when it came to ecommerce and protection from fraud and crime, their citizens would demand government action.

A little more history: the Declaration was written in response to a very specific threat: it was dated February 8, 1996, the day President Clinton signed the Communications Decency Act into law. Ultimately, the censorship provisions of that law, a rider to the 1996 Telecommunications Act, were struck down as infringing the First Amendment; the rest of the act, largely overlooked at the time, paved the way for the telecommunications companies formed in the 1984 court-mandated breakup of AT&T to merge back together.

The issues surrounding the CDA are of course still with us and formed one of the key areas of concern (and disagreement) at WCIT. The concern of the mid-1990s - that the Internet would evolve into a sort of Least Common Denominator medium, where only material acceptable to every government would be allowed online - has not materialized. Instead, the organizations founded then have managed to push back as, for example, last month in Britain, where the Open Rights Group led the way on campaigning against a system that would have required adults to opt in to receive...well, material intended for adults. ("Hi. I'm a subscriber and I'm calling because I want pornography with my broadband? Yes, I am over 18. Yes, I understand some of the material I see may be objectionable. OK, I'll hold...") Thousands of parents wrote in to object, and the result is that parents are to be given help in installing filtering software if they ask for it. Sense - and perhaps the British horror of embarrassment - has prevailed.

Note, however, that the Declaration - like the US Constitution before it - says nothing about corporations. In 1996 the largest New Era online businesses were eBay, Yahoo!, and Amazon, minnows compared to older computer industry behemoths and telcos, like IBM, AT&T, and Microsoft. Apple was a non-entity, and mobile phones were still a separate industry. Net pioneers tended to think these companies would not be able to adapt quickly enough to colonize the Internet.

Today, however, it's clearer that arguably the biggest threat to the open and free Internet is in fact not governments - who spectacularly failed to agree at WCIT - but large companies and the progressive closure of the devices we use to access it. Today's biggest computing trend - tablets - emulates mobile phones in curating the additions and alterations users can make, and ordinary Web browsers, too, are beginning to lock down their extensions.

All in the name of security and convenience, of course, and the companies leading the way - Apple and Google - are not wrong in trying to protect us from the constantly increasing array of threats. The material on Blackhole in Sophos' 2013 Security Threat report is truly scary in terms of the complexity of this particular malware; its morphing abilities let it take advantage of any and every hole. We're going to need a layered system, in which the devices we use for sensitive applications - medical interactions, banking - are locked down but we still have open devices for less sensitive ones. Hard to do, when everyone wants to do everything on their phones, making those the most lucrative targets. Nonetheless, the result will be increased control over the gateways to the Net by a few large companies.

What is likely to be the emerging threat of 2013 and beyond is the convergence of the physical and virtual worlds. All of these will be factors in this: smart street furniture, DNA databases, the large collections of tagged faces being built by social media becoming useful tools to law enforcement and others watching the output of CCTV, tracking via GPS in phones that then in turn enables personalized real-world environments that emulate today's personalized Web pages.

All of that is much more complicated than the last 20 years of threats to the Internet.

So, onward to 2013. Happy apocalypse, everyone!

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series.

December 22, 2012

The personal connection

This week my previously rather blasé view of online education - known in its latest incarnation as MOOCs, for massive open online courses - got a massive shove toward the enthusiastic by the story of 17-year-old Daniel Bergmann's experience in one such course in modern poetry, a ten-week class given by Al Filreis at the University of Pennsylvania and offered by Coursera.

How Daniel, the son of two of my oldest friends, got there is a long story better told on Filreis's blog by Filreis and Daniel himself. The ultra-brief summary is this: Daniel is, as he writes to Filreis (reproduced in that blog entry), "emerging from autism"; he communicates by spelling out words - sentences - paragraphs - on a letterboard app on his iPad. That part of the story was told here in 2010. But the point is this: 36,000 students signed up for the class, and 2,200 finished with a certificate. Daniel was one of them.

The standard knocks on online education have been that:

- students lose most or all of the interaction with each other that really defines the benefit of the college experience;

- the drop-out rate is staggering;

- there are issues surrounding accreditation and credentials;

- it's just not as good as "the real thing";

- there may not be a business model (!);

- but maybe for some people who don't have good access to traditional education and who have defined goals it will work.

To some extent all these things are true, but with caveats. For one thing, what do we mean by "as good"? If we mean that the credential from an online course isn't as likely to land you a high-paying or high-influence job as a diploma from Cornell or Cambridge, that's true of all but some handfuls of the world's institutions of higher learning. If we mean the quality of the connections you make with professors who write the textbooks and students who are tomorrow's stars, that's true of many real-world institutions as well. If we mean the quality of the education - which hardly anyone seems to mean these days - that's less clear. Only a small minority can get into - or afford - the top universities of this world; education you can have has to be better than education you can't.

The drop-out numbers are indeed high, but as The Atlantic points out, we're at the beginning of an experiment. The 160,000 people who signed up for Sebastian Thrun's Udacity course on AI aren't losing their only chance by not completing it; how you spend the four years between 18 and 22 is a zero-sum game, but education in other contexts is not.

In July 1999, when I wrote about the first push driving education online for Scientific American, a reader wrote in accusing me of elitism. She was only a little bit right: I was and am dubious that in the credential-obsessed United States any online education will be carry as much clout as the traditional degree from a good school. But the perceived value of that credential lies behind the grotesque inflation of tuition fees. The desire to learn is entirely different, and I cannot argue against anything that will give greater opportunities to exercise that.

At this year's Singularity Summit, Peter Norvig, the director of research at Google, recounted his experience of teaching Udacity's artificial intelligence class with Udacity founder Sebastian Thrun. One of the benefits of MOOCs, he said, is that the scale helps you improve the teaching. They found, for example, a test problem where the good students were not doing well; analysis showed the wording was ambiguous. Vernor Vinge, the retired mathematics professor and science fiction writer, at the same press conference, was impressed: you can do that in traditional education, but it would take 20 years to build up an adequate (though still comparatively tiny) sample size. Norvig also hopes that watching millions of people learn might help inform research in modeling intelligence. There's a certain elegance to this.

Of course in education you always hope for a meritocracy in which the best minds earn both the best grades and the best attention. But humans being what they are, we know from studies that prejudices apply here as elsewhere. In his recently published book, Oddly Normal, John Schwartz of the New York Times recounts the many difficulties his gay son faced in navigating a childhood in which his personal differences from the norm sentenced him to being viewed as trouble. If on the Internet nobody knows you're a dog, equally, nobody has to know you're a ten-year-old boy wearing pink light-up shoes. Or, as in Daniel's case, a 17-year-old who has struggled with severe autism for nearly all his life and for whom traditional classroom-based education is out of reach physically - but not mentally.

In Filreis's blog entry, Daniel writes, "Your notion that digital learning need not be isolating is very right where I am concerned."

Norvig, from the other side of the teaching effort, similarly said: "We thought it was all about recording flawless videos and then decided it was not that important. We made mistakes and students didn't care. What mattered was the personal connection."

This is the priceless thing that online education has struggled to emulate from successful classrooms. Maybe we're finally getting there.

"Emerging from autism." Such a wonderful and hopeful phrase.


December 14, 2012

Defending Facebook

The talks at the monthly Defcon London are often too abstruse for the "geek adjacent". Not so this week, when Chris Palow, Facebook's engineering manager, site integrity London, outlined the site's efforts to defend itself against attackers.

This is no small thing: the law of truly large numbers means that a tiny percentage of a billion users is still a lot of abusers. And Palow has had to scale up very quickly: when he joined five years ago, the company had 30 million users. Today, that 30 million is just a little more than a third of the site's *fake* accounts, based on the 83 million the company claimed in its last quarterly SEC filing.

As became rapidly apparent, there are fakes and there are fakes. Most of those 83 million are relatively benign: accounts for people's dogs, public/private variants, duplicate accounts created when a password is lost, and so on. The rest, about 1.5 percent - which is still 14 million - are the troublemakers, spreading spam and malicious links such as the Koobface worm. Eliminating these is important; there is little more damaging to a social network than rampant malware that leverages the social graph to put users in danger in a space they use because they believe it is safe.

This is not an entirely new problem, but none of the prior solutions are really available to Facebook. Prehistoric commercial social environments like CompuServe and AOL, because people paid to use them, could check credit cards. (Yes, the irony was that in the window between sign-up and credit card verification lay a golden opportunity for abusers to harass the rest of the Net from throwaway email accounts.) Usenet and other free services were defenseless against malicious posters, and despite volunteer community efforts most of the audience fled as a result. As a free service whose business model requires scale, Facebook can't require a credit card or heavyweight authentication, and its ad-supported business model means it can't afford to lose any of its audience, so it's damned in all directions. It's also safe to say that the online criminal underground is hugely more developed and expert now.

Fake accounts are the entry points for all sorts of attacks; besides the usual issues of phishing attacks and botnet recruitment, the more fun exploit is using those links to vacuum up people's passwords in order to exploit them on all the other sites across the Web where those same people have used those same passwords.

So a lot of Palow's efforts are directed at making sure those accounts don't get opened in the first place. Detection is a key element; among other techniques is a lightweight captcha-style request to identify a picture.

"It's still easy for one user to have three or four accounts," he said, "but we can catch anyone registering 1 million fakes. Most attacks need scale."

For the small-scale 16-year-old in the bedroom, he joked that the most effective remedy is found in the site's social graph: their moms are on Facebook. In a more complicated case from the Philippines, where cheap human labor was used to open 500 accounts a day to spam links selling counterfeit athletic shoes, the miscreants talked about their efforts *on* Facebook.

Another key is preventing, or finding and fixing, bugs in the code that runs the site. Among the strategies Palow listed for this, which included general improvements to coding practice such as better testing, regular reviews, and static and dynamic analysis, is befriending the community of people who find and report bugs.

Once accounts have been created, spotting the spammers involves looking for patterns that sound very much like the ones that characterize Usenet spam: are the same URLs being posted across a range of accounts, do those accounts show other signs of malware infection, are they posted excessively on a single channel, and so on.
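That Usenet-style pattern matching can be sketched as a simple co-occurrence count over posts. Again, this is an illustrative sketch, not the site's real pipeline; the data shape and the threshold are assumptions.

```python
from collections import defaultdict

def suspicious_urls(posts, account_threshold=50):
    """Flag URLs posted by an unusually large number of distinct accounts.

    `posts` is an iterable of (account_id, url) pairs. The threshold is
    an invented illustrative value, not a real production setting.
    """
    accounts_per_url = defaultdict(set)
    for account_id, url in posts:
        accounts_per_url[url].add(account_id)
    return {url for url, accounts in accounts_per_url.items()
            if len(accounts) >= account_threshold}
```

A real system would combine this with the other signals mentioned - malware indicators on the posting accounts, posting frequency per channel - but the core idea is the same: legitimate links spread organically, spam links appear near-identically across many accounts at once.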

Other more complex historical attacks include the Tunisian government's effort to steal passwords. Palow also didn't have much nice to say about ad-replacement schemes such as the now-defunct Phorm.

The current hot issue is what Palow calls "toolbars" and I would call browser extensions. Many of these perform valuable functions from the user's point of view, but the price, which most users don't see until it's too late, is that they operate across all open windows, from your insecure reading of the tennis headlines to your banking session. This particular issue is beginning to be locked down by browser vendors, who are implementing content security policies, essentially the equivalent of the Android and iOS curated app stores. As this work is progressing at different rates, in some cases Facebook can leverage the browsers' varying blocking patterns to identify malware.

More complex responses involve partnerships with software and anti-virus vendors. There will be more of this: the latest trend is stealing tokens on Facebook (such as the iPhone Facebook app's token) to enable spamming off-site.

A fellow audience member commented that sometimes it's more effective long-term to let the miscreants ride for a month while you formulate a really heavy response and then drop the anvil. Perhaps: but this is the law of truly large numbers again. When you have a billion users the problem is that during that month a really shocking number of people can be damaged. Palow's life, therefore, is likely to continue to be patch, patch, patch.


December 7, 2012

The fifth estate

Lord Justice Leveson, who has been largely silent since introducing his 2,000-page report on press standards and ethics a little over a week ago, commented at an event in Australia yesterday that we would be needing laws to maintain privacy and freedom of expression on the Internet and that it would "take time to civilize the Internet".

Excuse me? I've been on the Internet for more than 20 years, and I think I'm pretty civilized.

Those of us who wish to protect the Internet as a still-fledgling medium get nervous whenever anyone starts talking about laws governing it. Leveson's comments, as far as they've been reported, are not particularly extreme; perhaps more so is the single page of his report on the press, in which he calls the Internet an ethical vacuum (PDF, p 736) (or see, more accessibly, the Guardian).

Even with Internet blinkers on I can see that to someone who has little contact with the Internet outside of mainstream press reporting the online world must seem like a vast wasteland of idle, undifferentiated self-importance punctuated by illegal acts. What does the general press cover? Hacking, crime, "piracy" (that is, file-sharing, without the nuance of whether the material being shared is legal or not), vigilantism, the amount of strange and stupid stuff people post about themselves on social networks, and flash mobs. Small wonder that Leveson, so recently steeped in the workings of the nation's newsrooms, worries about "mob rule" and "trial by Twitter".

I had actually been well on the way to not caring much about the conclusions of the Leveson Report until or unless the regime proposed in its wake turns out to have nasty ramifications. The journalism of power it describes - intimately intertwined with politicians' agendas and police corruption - could not be further removed from any work I've ever done. What the inquiry did in exposing the level of illegal activity in the UK media was highly important, and exposing people who hacked into others' phones and hired rogue investigators is entirely appropriate. The thousands of pages documenting all this remain as a necessary and vital historical document. But what's the result? More self-regulation, hopefully improved enough to avoid all this happening again in another ten years: as Leveson himself said in introducing the report, this is the seventh such inquiry in less than 70 years. Ten years from now, what will the press look like? Are the conditions that prevailed 30 years ago and set the ground for the shameful culture that's grown up since replicable?

It is clearly nonsense to speak of "the Internet" as though it were a single medium operating in a consistent way; "the Internet" is billions of people grouping and ungrouping in astronomical numbers of ways. The community standards and norms of Facebook are entirely different from those of 4chan or Anonymous, and none of those has a clear offline counterpart; CNN operates little differently online than it does offline; bloggers may be anything from a clearly identifiable person at a recognized institution to a troll with a camera phone. It's certainly understandable that the commercial press resents being held to a different standard: how come France gets to publish its nude photos of English royalty and Britain doesn't? 'Snot fair. Everyone in England can access them, and still we can't publish?

Leveson argues that there is "a qualitative difference between photographs being available online and being displayed, or blazoned, on the front page of a newspaper such as The Sun". The imprimatur of the newspaper, he argues, gives the photographs greater importance and weight. But - as he does not say but a newspaper proprietor might - the Internet gives the photographs greater longevity and wider, much faster distribution.

Ultimately, what is going to happen with the Internet is what has happened with the press: we are going to learn to live with a medium that is only imperfectly regulated. The toxic culture that Leveson studied was made up of professional people paid to work for commercial organizations during a time when most of their income was earned offline. People who hack into other people's phones and publish the results are breaking the law and deserve to be punished (unless there's a really good public interest defense); it doesn't matter whether the publication is online or offline, professional or amateur. And yet his recommendations ultimately merely seek to create sterner replicas of the structure that's already in place: no one will accept trading away the traditional freedoms of the press in the interests of forcing them to behave better. We take on the risk that the press will overstep because we believe that their role as the fourth estate - "speaking truth to power" - is a vital one in a democracy.

I would say that if anything keeps the press in line it will be the Internet. Who investigates press stories when they've gotten it wrong, shames the perpetrators, and publishes the results? People on the Internet. Where can factual errors be most quickly corrected? The Internet. Who speaks truth to the fourth estate? The Internet. That's not all it does, of course, and it's not all good. But we will have to accept the same trade-offs for the fifth estate that we have in other media.
