" /> net.wars: November 2012 Archives


November 30, 2012

Robot wars

Who'd want to be a robot right now, branded a killer before you've even really been born? This week, Huw Price, a philosophy professor, Martin Rees, an emeritus professor of cosmology and astrophysics, and Jaan Tallinn, co-founder of Skype and a serial speaker at the Singularity Summit, announced the founding of the Cambridge Project for Existential Risk. I'm glad they're thinking about this stuff.

Their intention is to build a Centre for the Study of Existential Risk. There are many threats listed in the short introductory paragraph explaining the project - biotechnology, artificial life, nanotechnology, climate change - but the one everyone seems to be focusing on is: yep, you got it, KILLER ROBOTS - that is, artificial general intelligences so much smarter than we are that they may not only put us out of work but reshape the world for their own purposes, not caring what happens to us. Asimov would weep: his whole purpose in creating his Three Laws of Robotics was to provide a device that would allow him to tell some interesting speculative, what-if stories and get away from the then standard fictional assumption that robots were eeeevil.

The list of advisors to the Cambridge project has some interesting names: Hermann Hauser, now in charge of a venture capital fund, whose long history in the computer industry includes founding Acorn and an attempt to create the first mobile-connected tablet (it was the size of a 1990s phone book, and you had to write each letter in an individual box to get it to recognize handwriting - just way too far ahead of its time); and Nick Bostrom of the Future of Humanity Institute at Oxford. The other names are less familiar to me, but it looks like a really good mix of talents, everything from genetics to the public understanding of risk.

The killer robots thing goes quite a way back. A friend of mine grew up in the time before television when kids would pay a nickel for the Saturday show at a movie theatre, which would, besides the feature, include a cartoon or two and the next chapter of a serial. We indulge his nostalgia by buying him DVDs of old serials such as The Phantom Creeps, which features an eight-foot, menacing robot that scares the heck out of people by doing little more than wave his arms at them.

Actually, the really eeeevil guy in that movie is the mad scientist, Dr Zorka, who not only creates the robot but also a machine that makes him invisible and another that induces mass suspended animation. The robot is really just drawn that way. But, like CSER, what grabs your attention is the robot.

I have a theory about this, developed over the last couple of months while working on a paper on complex systems, automation, and other computing trends: it's all to do with biology. We - and other animals - are pretty fundamentally wired to see anything that moves autonomously as more intelligent than anything that doesn't. In survival terms, that makes sense: the most poisonous plant can't attack you if you're standing out of reach of its branches. Something that can move autonomously can kill you - yet is also more cuddly. Consider the Roomba versus a modern dishwasher. Counterintuitively, the Roomba is not the smarter of the two.

And so it was that on Wednesday, when Voice of Russia assembled a bunch of us for a half-hour radio discussion, the focus was on KILLER ROBOTS, not synthetic biology (which I think is a much more immediately dangerous field) or climate change (in which the scariest new development is the very sober, grown-up, businesslike this-is-getting-expensive report from the insurer Munich Re). The conversation was genuinely interesting, roaming from the mysteries of consciousness to the problems of automated trading and the 2010 flash crash. Pretty much everyone agreed that there really isn't sufficient evidence to predict a date at which machines might be intelligent enough to pose an existential risk to humans. You might be worried about self-driving cars, but they're likely to be safer than drunk humans.

There is a real threat from killer machines; it's just that it's not super-human intelligence or consciousness that's the threat here. Last week, Human Rights Watch and the International Human Rights Clinic published Losing Humanity: the Case Against Killer Robots, arguing that governments should act pre-emptively to ban the development of fully autonomous weapons. There is no way, that paper argues, for autonomous weapons (which the military wants so fewer of *our* guys have to risk getting killed) to distinguish reliably between combatants and civilians.

There were some good papers on this at this year's We Robot conference from Ian Kerr and Kate Szilagyi (PDF) and Markus Wegner.

From various discussions, it's clear that you don't need to wait for *fully* autonomous weapons to reach the danger point. In today's partially automated systems, the operator may be under pressure to make a decision in seconds, and "automation bias" means the human will most likely accept whatever the machine suggests - the military equivalent of clicking OK. The human in the loop isn't as much of a protection as we might hope against the humans designing these things. Dr Zorka, indeed.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series.

November 23, 2012

Democracy theater

So Facebook is the latest to discover that it's hard to come up with a governance structure online that functions in any meaningful way. This week, the company announced plans to disband the system of voting on privacy changes that it put in place in 2009. To be honest, I'm surprised it took this long.

TechCrunch explains the official reasons. First, with 1 billion users, it's now too easy to hit the threshold of 7,000 comments that triggers a vote on proposed changes. Second, with 1 billion users, amassing the 30 percent of the user base necessary to make the vote count has become...pretty much impossible. (Look, if you hate Facebook's policy changes, it's easier to simply stop using the system. Voting requires engagement.) The company also complained that the system as designed encourages comment "quantity over quality". Really, it would be hard to come up with an online system that didn't, unless it was so hard to use that no one would bother anyway.

The fundamental problem for any kind of online governance is that no one except some lawyers thinks governance is fun. (For an example of tedious meetings producing embarrassing results, see this week's General Synod.) Even online, where no one can tell you're a dog watching the Outdoor Channel while typing screeds of debate, it takes strong motivation to stay engaged. That in turn means that ultimately the people who participate, once the novelty has worn off, are either paid, obsessed, or awash in free time.

The people who are paid - either because they work for the company running the service or because they work for governments or NGOs whose job it is to protect consumers or enforce the law - can and do talk directly to each other. They already know each other, and they don't need fancy online governmental structures to make themselves heard.

The obsessed can be divided into two categories: people with a cause and troublemakers - trolls. Trolls can be incredibly disruptive, but they do eventually get bored and go away, IF you can get everyone else to starve them of the oxygen of attention by just ignoring them.

That leaves two groups: those with time (and patience) and those with a cause. Both tend to fall into the category Mark Twain neatly summed up in: "Never argue with a man who buys his ink by the barrelful." Don't get me wrong: I'm not knocking either group. The cause may be good and righteous and deserving of having enormous amounts of time spent on it. The people with time on their hands may be smart, experienced, and expert. Nonetheless, they will tend to drown out opposing views with sheer volume and relentlessness.

All of which is to say that I don't blame Facebook if it found the comments process tedious and time-consuming, and as much of a black hole for its resources as the help desk for a company with impenetrable password policies. Others are less tolerant of the decision. History, however, is on Facebook's side: democratic governance of online communities does not work.

Even without the generic problems of online communities, which have been replicated mutatis mutandis since the first modem uploaded the first bit, Facebook was always going to face problems of scale if it kept growing. As several stories have pointed out, how do you get 300 million people to care enough to vote? It's understandable why the company set a minimum percentage: so that a small but vocal minority could not hijack the process. But scale matters, and that's why every democracy of any size has representative government rather than direct voting, like Greek citizens in the Acropolis. (Pause to imagine the complexities of deciding how to divvy up Facebook into tribes: would the basic unit of membership be nation, family, or circle of friends, or should people be allocated into groups based on when they joined or perhaps their average posting rate?)

The 2009 decision to allow votes came at a time when Facebook was under recurring and frequent pressure over a multitude of changes to its privacy policies, all going one way: toward greater openness. That was the year, in fact, that the system effectively turned itself inside out. EFF has a helpful timeline of the changes from 2005 to 2010. Putting the voting system in place was certainly good PR: it made the company look like it was serious about listening to its users. But, as the Europe vs Facebook site says, the choice was always constrained to old policy or new policy - never old policy, new policy, or an entirely different policy proposed by users.

Even without all that, the underlying issue is this: what company would want democratic governance to succeed? The fact is that, as Roger Clarke observed before Facebook even existed, social networks have only one business model: to monetize their users. The pressure to do that has only increased since Facebook's IPO, even though founder Mark Zuckerberg created a dual-class share structure that means his decisions cannot be effectively challenged. A commercial company - especially a *public* commercial company - cannot be run as a democracy. It's as simple as that. No matter how much their engagement makes them feel they own the place, the users are never in charge of the asylum. Not even on the WELL.


November 16, 2012

Grabbing at governance

Someday the development of Internet governance will look like a continuous historical sweep whose outcome, in hindsight, is obvious. At the beginning will be one man, Jon Postel, who in the mid-1990s was, if anyone was, the god of the Internet. At the end will be...well, we don't know yet. And the sad thing is that the road to governance is so long and frankly so dull: years of meetings, committees, proposals, debate, redrafted proposals, and diplomatic language, all of it, worst of all, remote from the mundane experience of everyday Internet users, such as spam and whether they can trust their banks' Web sites.

But if we care about the future of the Internet we must take an interest in what authority should be exercised by the International Telecommunication Union or the Internet Corporation for Assigned Names and Numbers or some other yet-to-be-defined body. In fact, we are right on top of a key moment in that developmental history: from December 3 to 14, the ITU is convening the World Conference on International Telecommunications (WCIT, pronounced "wicket"). The big subject for discussion: how and whether to revise the 1988 International Telecommunication Regulations.

Plans for WCIT have been proceeding for years. In May, civil society groups concerned with civil liberties and human rights signed a letter to ITU secretary-general Hamadoun Touré asking the ITU to open the process to more stakeholders. In June, a couple of frustrated academics changed the game by setting up WCITLeaks, asking anyone who had copies of the proposals being submitted to the ITU to send them in. Scrutiny of those proposals showed the variety and breadth of some countries' desires for regulation. On November 7, Touré wrote an op-ed for Wired arguing that nothing would be passed except by consensus.

On Monday, he got a sort of answer from the International Trade Union Confederation's general secretary, Sharon Burrow, who, together with former ICANN head Paul Twomey and, by video link, Internet pioneer Vint Cerf, launched the Stop the Net Grab campaign. The future of the Internet, they argued, is too important to too many stakeholders to leave decisions about it up to governments bargaining in secret. The ITU, in its response, argued that Greenpeace and the ITUC have their facts wrong; after the two sides met, the ITUC reiterated its desire for some proposals to be taken off the table.

But stop and think. Opposition to the ITU is coming from Greenpeace and the ITUC?

"This is a watershed," said Twomey. "We have a completely new set of players, nothing to do with money or defending the technology. They're not priests discussing their protocols. We have a new set of experienced international political warriors saying, 'We're interested'."

Explained Burrow, "How on earth is it possible to give the workers of Bahrain or Ghana the solidarity of strategic action if governments decide unions are trouble and limit access to the Internet? We must have legislative political rights and freedoms - and that's not the work of the ITU, if it requires legislation at all."

At heart for all these years, the debate remains the same: who controls the Internet? And does governing the Internet mean regulating who pays whom or controlling what behavior is allowed? As Vint Cerf said, conflating those two is confusing content and infrastructure.

Twomey concluded, "[Certain political forces around the world] see the ITU as the place to have this discussion because it's not structured to be (nor will they let it be) fully multi-stakeholder. They have taken the opportunity of this review to bring up these desires. We should turn the question around: where is the right place to discuss this and who should be involved?"

In the journey from Postel to governance, this is the second watershed. The first step change came in 1996-1997, when it was becoming obvious that governing the Internet - which at the time primarily meant managing the allocation of domain names and numbered Internet addresses (under the aegis of the Internet Assigned Numbers Authority) - was too complex and too significant a job for one man, no matter how respected and trusted. The Internet Society and IANA formed the Internet Ad-Hoc Committee, which, in a published memorandum, outlined its new strategy. And all hell broke loose.

Long-term, the really significant change was that until that moment no one had much objected to either the decisions the Internet pioneers and engineers made or their right to make them. After some pushback, in the end the committee was disbanded and the plan scrapped, and instead a new agreement was hammered out, creating ICANN. But the lesson had been learned: there were now more people who saw themselves as Internet stakeholders than just the engineers who had created it, and they all wanted representation at the table.

In the years since, the make-up of the groups demanding to be heard has remained pretty stable, as Twomey said: engineers and technologists, plus representatives of civil society groups, usually working in some aspect of human rights or civil liberties, such as EFF, ORG, CDT, and Public Knowledge, all of whom signed the May letter. So yes, for labor unions and Greenpeace to decide that Internet freedoms are too fundamental to what they do for them not to participate in the decision-making about its future is a watershed.

"We will be active as long as it takes," Burrow said Monday.


November 9, 2012

The billion-dollar spree

"This will be the grossest money election we've seen since Nixon," Lawrence Lessig predicted earlier this year. And the numbers are indeed staggering.

Never mind the 1%. In October, Lessig estimated that 42 percent of the money spent so far in the 2012 election cycle had come from just 47 Americans - the .000015 percent. At this rate, politicians - congressional as well as presidential - are perpetual candidates; fundraising leaves no time to do anything else. By comparison, the total UK expenditure by all candidates in the last general election (PDF) was £31 million - call it $50 million. A mere snip.

Some examples. CNN totals up $506,417,910 spent on advertising in just the eight "battleground" states - since April 10, 2012. Funds raised - again since April 10, 2012 - $1,021,265,691, much of it from states not in the battleground category - like New York, Texas, and California. In October, the National Record predicted that Obama's would be the first billion-dollar campaign.

The immediate source of these particular discontents is the 2010 Supreme Court decision in Citizens United v. Federal Election Commission, which held that restricting political expenditure on "electioneering communications" by organizations contravened the First Amendment's provisions on freedom of expression. This is a perfectly valid argument if you accept the idea that organizations - corporations, trade unions, and so on - are people who should not be prevented from spending their money to buy themselves airtime in which to speak freely.

An earlier rule retained in Citizens United was that donors to so-called SuperPACs (that is, political action committees that can spend unlimited amounts on political advertising as long as their efforts are independent of those of the campaigns) must be identified. That's not much of a consolation: just as with money laundering in other contexts, if you want to buy yourself a piece of a president without being identified, you donate to a non-profit advocacy group, which spends or donates it for you, and you remain anonymous, at least to the wider public outside the SuperPAC.

And they worry about anonymous trolling on the Internet.

CNN cites Public Citizen as the source of the news that 60 percent of PACs spend their funds on promoting a single candidate, and that often these are set up and run by families, close associates, or friends of the politicians they support. US News has a handy list of the top 12 donors willing to be identified. Their interests vary; it's not as though they're all ganging up on the rest of us with a clear congruence of policy desires; similarly, SuperPACs cover causes I like as well as causes I don't. And even if they didn't, this is not the kind of straightforward corruption with an obvious chain where you can say: money here, policy there.

If securing access for your views is your game, donating huge sums to a single candidate or party makes little sense; traditionally you donate to both sides, so that no matter who gets into office they'll listen to you. Nor is it a straightforward equation of more money here, victory there, although it's true that Obama outcompeted Romney on the money front, perhaps because so many Democrats were afraid he wouldn't be able to keep up. But, as Lessig has commented, even if the direct corrupt link is not there, the situation breeds distrust, doubt, and alienation in voters' minds.

The Washington Post argues that the big explosion of money this time is at least partly due to the one cause most rich people can agree on: tax policy. Some big decisions - the fiscal cliff - lie ahead in the next few months, as tax cuts implemented during the Bush (II) administration automatically expire. When those cuts were passed, the Republicans must have expected the prospect would push the electorate to vote them back in. Oops.

Some more details. Rootstrikers, the activist group Lessig founded to return the balance of power in American politics to the people, has a series of graphics intended to illustrate the sources of money behind SuperPACs, the president, and their backers. The Sunlight Foundation has an assessment of donors' return on investment.

An even better one comes from the Federal Election Commission via Radio Boston, showing the distribution of contributions. The pattern is perfectly clear: the serious money is coming from the richer, more populated, more urbanized states. The way this can distort policy is also perfectly clear.

One of the big concerns in this election was that measures enacted in the name of combating voter fraud (almost non-existent) would block would-be voters from being able to cast ballots. Instead, it seems that Obama was more successful in getting out the vote.

The conundrum I'd like answered is this. Money is clearly a key factor in US elections - it can't get you elected, but the lack of it can certainly keep you out of office. It's clearly much less so elsewhere. So, if the mechanism by which distorted special-interest policies get adopted in the US is money, then what's the mechanism in other countries? I'd really like to know.


November 2, 2012

Survival instincts

The biggest divide in New York this week, in the wake of Hurricane Sandy, has been, as a friend pointed out, between the people who had to worry about getting to work and the people who didn't. Reuters summed this up pretty well. Slightly differently, The Atlantic had it as three New Yorks: one underwater, one dark and dry, and one close to normal. The stories I've read since by people living in "dark and dry" about emerging into the light at around 40th Street bear out just how profound the difference is between the powerless and the empowered - in the electrical sense.

This is not strictly speaking about rich and poor (although the Reuters piece linked above makes the point that the city is more economically divided than it has been in some time); the Lower Manhattan area known as Tribeca, for example, is home to plenty of wealthy people - and was flooded. Instead, my friend's more profound divide is about whether you do the kind of work that requires physical presence. Freelance writers, highly paid software engineers, financial services personnel, and a load of other people can work remotely. If your main office is a magazine or a large high-technology company like, say, Google, whose New York building is at 15th and 8th, as long as you have failover systems in place so that your network and data centers keep operating, your staff can work from wherever they can find power and Internet access. Even small companies and start-ups can keep going if their systems are built on or can failover to the right technology.

One of my favorite New York retailers, J&R (they sell everything from music to computers from a series of stores in lower Manhattan, not far from last year's Occupy Wall Street site), perfectly demonstrated this digital divide. Its Web site noted yesterday (Thursday) that the shops, located in "dark and dry", are all closed - but the site is taking orders as normal.

Plumbers, doormen, shop owners, restaurateurs, and fire fighters, on the other hand, have to be physically present - and they are vital in keeping the other group functioning. So in one sense the Internet has made cities much more resilient, and in another it hasn't made a damn bit of difference.

The Internet was still very young when people began worrying about the potential for a "digital divide". Concerns surfaced early about the prospects for digital exclusion of vulnerable groups such as the elderly and the cognitively impaired, as well as the poor and those in rural areas poorly served by the telecommunications infrastructure. And these are the groups that, in the UK, efforts at digital engagement are intended to help.

Yet the more significant difference may be not who *is* online - after all, why should anyone be forced online who doesn't want to go? - but who can *work* online. Like traveling with only carry-on luggage, it makes for a more flexible life that can be altered to suit conditions. If your physical presence is not required, today you avoided long lines and fights at gas stations, traffic jams approaching the bridges and tunnels, waits for buses, and long trudges from the last open subway station to your actual destination.

This is not the place to argue about climate change. A single storm is one data point in a pattern that is measured in timespans longer than those of individual human lives.

Nonetheless, it's interesting to note that this storm may be the catalyst the US needed to stop dragging its feet. As Business Week indicates, the status quo is bad for business, and the people making this point are the insurance companies, not scientists who can be accused of supporting the consensus in the interests of retaining their grant money (something that's been said to me recently by people who normally view a scientific consensus as worth taking seriously).

There was a brief flurry of argument this week on Dave Farber's list about whether or not the Internet was designed to survive a bomb outage. I thought this had been made clear by contemporary historians long ago: while the immediate impetus was to make it easy for people to share files and information, DARPA's goal was very much also to build resilient networks. And, given that New York City is a telecommunications hub, it's clear we've done pretty well with this idea, especially after the events of September 11, 2001 forced network operators to rethink their plans for coping with emergencies.

It seems clear that the next stage will be to come up with better strategies for making cities more resilient. Ultimately, the cause of climate change doesn't matter: if "freak" weather patterns are going to result in more and more extreme storms and natural disasters, then it's only common sense to plan for them: disaster recovery for municipalities rather than businesses. The world's reinsurance companies - the companies that eventually bear the brunt of the costs - are going to insist on it.
