" /> net.wars: September 2012 Archives


September 28, 2012

Don't take ballots from smiling strangers

Friends, I thought it was spam, and when I explain I think you'll see why.

Some background. Overseas Americans typically vote in the district of their last US residence. In my case, that's a county in the fine state of New York, which for much of my adult life, like clockwork, has sent me paper ballots by postal mail. Since overseas residents do not live in any state, however, they are eligible to vote only in federal elections (US Congress, US Senate, and President). I have voted in every election I have ever been eligible for, going back to 1972.

So last weekend three emails arrived, all beginning, "Dear voter".

The first one, from nysupport@secureballotusa.com, subject line "Electronic Ballot Access for Military/Overseas Voters":

An electronic ballot has been made available to you for the GE 11/6/12 (Federal) by your local County Board of Elections. Please access www.secureballotusa.com/NY to download your ballot.

Due to recent upgrades, all voters will need to go through the "First Time Access" process on the site in order to gain access to the electronic ballot delivery system.

The second, from "NYS Board of Elections", move@elections-ny.gov, subject "Your Ballot is Now Available":

An electronic ballot has been made available to you for the November 6, 2012 General Election. Please access https://www.secureballotusa.com/NY to download your ballot.

Due to recent upgrades, all voters will need to go through the "First Time Access" process on the site in order to gain access to the electronic ballot delivery system.

If you have any questions or experience any problems, please email NYsupport@secureballotusa.com or visit the NYS Board of Elections' website at http://www.elections.ny.gov for additional information.

The third, from nysupport@secureballot.com, subject, "Ballot Available Notification":

An electronic ballot has been made available to you for the GE 11/6/12 (Federal) by your local County Board of Elections. Please access www.secureballotusa.com/diaspora_ny-1.5/NY_login.action to download your ballot.

Due to recent upgrades, all voters will need to go through the "First Time Access" process on the site in order to gain access to the electronic ballot delivery system.

In all my years as a voter, I've never had anything to do with the NY Board of Elections. I had not received any notification from the county board of elections telling me to expect an email, confirming the source, or giving the Web site address I would eventually use. Nor did the county board of elections Web site have any information indicating they were providing electronic ballots for overseas voters. So I ask you: what would you think?

What I thought was that the most likely possibilities were both evil. One was that it was just ordinary, garden-variety spam intended to facilitate a more than usually complete phishing job. That possibility made me very reluctant to check out the URL in the message, even by typing it in. The security expert Rebecca Mercuri, whose PhD dissertation in 2000 was the first to really study the technical difficulties of electronic voting, was more intrepid. She examined the secureballotusa.com site and noted errors, such as the request for the registrant's Alabama driver's license number on this supposedly New York state registration page. Plus, the amount of information requested for verification is unnerving; I don't know these people, even though secureballotusa.com checks out as belonging to the Spanish company Scytl, which provides election software to a variety of places, including New York state.

The second possibility was that these messages were the latest outbreak of longstanding deceptive election practices, which include disseminating misinformation with the goal of disenfranchising particular groups of voters. All I know about this comes from a panel at the 2008 Computers, Freedom, and Privacy conference organized by EPIC's Lillie Coney. And it's nasty stuff: leaflets, phone calls, and mailings saying things like Republicans vote on Tuesday (the real US election day), Democrats on Wednesday. Or that you can't vote if you've ever been found guilty of anything. Or if you voted in an earlier election this year. Or the polling location has changed. Or you'll be deported if you try to vote and you're an illegal immigrant. Typically, these efforts have been targeted at minorities and the poor. But the panel fully expected them to move increasingly online and to target a wider variety of groups, particularly through spam email. So that was my second thought. Is this it? Someone wants me not to vote?

This election year, of course, the efforts to disenfranchise groups of voters are far more sophisticated. Why send out leaflets when you can push for voter identification laws on the basis that voter fraud is a serious problem? This issue is being discussed at length by the New York Times, the Atlantic, and elsewhere. Deceptive email seems amateurish by comparison.

I packed up the first two emails and forwarded them to an email address at my county's board of elections from which I had previously received a mailing. On Monday came a prompt response. No, the messages are genuine. At some point that I don't remember, I ticked a box saying "OR email", and hence I was being notified that an electronic ballot was available. I wrote back, horrified: paper ballot, by postal mail, please. And get a security expert to review how you've done this. Because seriously: the whole setup is just dangerously wrong. Voting security matters. Think like a bank.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of the earlier columns in this series.


September 21, 2012

This is not (just) about Google

We had previously glossed over the news, in February, that Google had overridden the "Do Not Track" settings in Apple's Safari Web browser, used on both its desktop and mobile machines. For various reasons, Do Not Track is itself a divisive issue, pitting those who favour user control over privacy issues against those who ask exactly how people plan to pay for all that free content if not through advertising. But there was little disagreement about this: Google goofed badly in overriding users' clearly expressed preferences. Google promptly disabled the code, but the public damage was done - and probably made worse by the company's initial response.

In August, the US Federal Trade Commission fined Google $22.5 million for that little escapade. Pocket change, you might say, and compared to Google's $43.6 billion in 2011 revenues you'd be right. As the LSE's Edgar Whitley pointed out on Monday, a sufficiently large company can also view such a fine strategically: paying might be cheaper than fixing the problem. I'm less sure: fines have a way of going up a lot if national regulators believe a company is deliberately and repeatedly flouting their authority. And to any of the humans reviewing the fine - neither Page nor Brin grew up particularly wealthy, and I doubt Google pays its lawyers more than six figures - I'd bet $22.5 million still seems pretty much like real money.

On Monday, Simon Davies, the founder and former director of Privacy International, convened a meeting at the LSE to discuss this incident and its eventual impact. This was when it became clear that whatever you think about Google in particular, or online behavioral advertising in general, the questions it raises will apply widely to the increasing numbers of highly complex computer systems in all sectors. How does an organization manage complex code? What systems need to be in place to ensure that code does what it's supposed to do, no less - and no more? How do we make these systems accountable? And to whom?

The story in brief: Stanford PhD student Jonathan Mayer studies the intersection of technology and privacy, not by writing thoughtful papers studying the law but empirically, by studying what companies do and how they do it and to how many millions of people.

"This space can inherently be measured," he said on Monday. "There are wide-open policy questions that can be significantly informed by empirical measurements." So, for example, he'll look at things like what opt-out cookies actually do (not much of benefit to users, sadly), what kinds of tracking mechanisms are actually in use and by whom, and how information is being shared between various parties. As part of this, Mayer got interested in identifying the companies placing cookies in Safari; the research methodology involved buying ads that included codes enabling him to measure the cookies in place. It was this work that uncovered Google's bypassage of Safari's Do Not Track flag, which has been enabled by default since 2004. Mayer found cookies from four companies, two of which he puts down to copied and pasted circumvention code and two of which - Google and Vibrant - he were deliberate. He believes that the likely purpose of the bypass was to enable social synchronizing features (such as Google+'s "+1" button); fixing one bit of coded policy broke another.

This wasn't much consolation to Whitley, however: where are the quality controls? "It's scary when they don't really tell you that's exactly what they have chosen to do as explicitly corporate policy. Or you have a bunch of uncontrolled programmers running around in a large corporation providing software for millions of users. That's also scary."

And this is where, for me, the issue at hand jumped from the parochial to the global. In the early days of the personal computer or of the Internet, it didn't matter so much if there were software bugs and insecurities, because everything based on them was new and understood to be experimental enough that there were always backup systems. Now we're in the computing equivalent of the intermediate period in a pilot's career, which is said to be the more dangerous time: that between having flown enough to think you know it all, and having flown enough to know you never will. (John F. Kennedy, Jr, was in that window when he crashed.)

Programmers are rarely brought into these kinds of discussions, yet are the people at the coalface who must transpose human language laws, regulations, and policies into the logical precision of computer code. As Danielle Citron explains in a long and important 2007 paper, Technological Due Process, that process inevitably generates many errors. Her paper focuses primarily on several large, automated benefits systems (two of them built by EDS) where the consequences of the errors may be denying the most needy and vulnerable members of society the benefits the law intends them to receive.

As the LSE's Chrisanthi Avgerou said, these issues apply across the board, in major corporations like Google, but also in government, financial services, and so on. "It's extremely important to be able to understand how they make these decisions." Just saying, "Trust us" - especially in an industry full of as many software holes as we've seen in the last 30 years - really isn't enough.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


September 14, 2012

What did you learn in school today?

One of the more astonishing bits of news this week came from Big Brother Watch: 207 schools across Britain have placed 825 CCTV cameras in toilets or changing rooms. The survey included more than 2,000 schools, which means roughly a tenth of the schools surveyed apparently saw nothing wrong with spying on their pupils in these most intimate situations. Overall, the survey found that English, Welsh, and Scottish secondary schools and academies have a total of 106,710 cameras, or an average camera-to-pupil ratio of 1:38. As a computer scientist would say, this is non-trivial.

Some added background: the mid 2000s saw the growth of fingerprinting systems for managing payments in school cafeterias, checking library books in and out, and registering attendance. In 2008, the Leave Them Kids Alone campaign, set up by a concerned parent, estimated that more than 2 million UK kids had been fingerprinted, often without the consent of their parents. The Protection of Freedoms Act 2012 finally requires schools and colleges to get parental consent before collecting children's biometrics. That doesn't stop the practice but at least it establishes that these are serious decisions whose consequences need to be considered.

Meanwhile, Ruth Cousteau, the editor of the Open Rights Group's ORGzine, one of the locations where you can find net.wars every week, sends along the story that a Texas school district is requiring pupils to carry RFID-enabled cards at all times while on school grounds. The really interesting element is that the goal here is primarily, and unashamedly, financial, imposed on the school by its district: the school gets paid per pupil per day, and if a student isn't in homeroom when the teacher takes attendance, that's a little less money to finance the school in doing its job. The RFID cards enable the school to count the pupils who are present somewhere on the grounds but not in their seats, as if they were laptops in danger of being stolen. In the Wired write-up linked above, the school's principal seems not to see any privacy issues in the fact that the school can track kids anywhere on the campus. It's good for safety. And so on.

There is constant debate about what kids should be taught in schools with respect to computers. In these discussions, the focus tends to be on what kids should be directly taught. When I covered Young Rewired State in 2011, one of the things we asked the teams I followed about was the state of computer education in their schools. Their answers: dire. Schools, apparently under the impression that their job was to train the office workforce of the previous decade, were teaching kids how to use word processors, but little or nothing about how computers work, how to program, or how to build things.

There are signs that this particular problem is beginning to be rectified. Things like the Raspberry Pi and the Arduino, coupled with open source software, are beginning to provide ways to recapture teaching in this area, essential if we are to have a next generation of computer scientists. This is all welcome stuff: teaching kids about computers by supplying them with fundamentally closed devices like iPads and Kindles is the equivalent of teaching kids sports by wheeling in a TV and playing a videotape of last Monday's US Open final between Andy Murray and Novak Djokovic.

But here's the most telling quote from that Wired article: "The kids are used to being monitored."

Yes, they are. And when they are adults, they will also be used to being monitored. I'm not quite paranoid enough to suggest that there's a large conspiracy to "soften up" the next generation (as Terri Dowty used to put it when she was running Action for the Rights of Children), but you can have the effect whether or not you have the intent. All these trends are happening in multiple locations: in the UK, for example, there were experiments in 2007 with school uniforms with embedded RFID chips (that wouldn't work in the US, where school uniforms are a rarity); in the trial, these not only tracked students' movements but pulled up data on academic performance.

These are the lessons we are teaching these kids indirectly. We tell them that putting naked photos on Facebook is a dumb idea and may come back to bite them in the future - but simultaneously we pretend to them that their electronic school records, down to the last, tiniest infraction, pose no similar risk. We tell them that plagiarism is bad and try to teach them about copyright and copying - but real life is meanwhile teaching them that a lot of news is scraped almost directly from press releases and that cheating goes on everywhere from financial markets and sports to scientific research. And although we try to tell them that security is important, we teach them by implication that it's OK to use sensitive personal data such as fingerprints and other biometrics for relatively trivial purposes, even knowing that these data's next outing may be to protect their bank accounts and validate their passports.

We should remember: what we do to them now they will do to us when we are old and feeble, and they're the ones in charge.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

September 7, 2012

Robocops

Great, anguished howls were heard on Twitter last Sunday when Ustream silenced Neil Gaiman's acceptance speech at the Hugo awards, presented at the World Science Fiction Convention. On Tuesday, something similar happened when, Slate explains, YouTube blocked access to Michelle Obama's speech at the Democratic National Convention once the live broadcast had concluded. Yes, both one of our premier fantasy writers and the First Lady of the United States were silenced by over-eager, petty functionaries. Only, because said petty functionaries were automated copyright robots, there was no immediately available way for the organizers to point out that the content identified as copyrighted had been cleared for use.

TV can be smug here: this didn't happen when broadcasters were in charge. And no, it didn't: because a large broadcaster clears the rights and assumes the risks itself. By opening up broadcasting to the unwashed millions, intermediaries like Google (YouTube) and Ustream have to find a way to lay off the risk of copyright infringement. They cannot trust their users. And they cannot clear - or check - the rights manually for millions of uploads. Even rights holder organizations like the RIAA, MPAA, and FACT, who are the ones making most of the fuss, can't afford to do that. Frustration breeds market opportunity, and so we have automated software that crawls around looking for material it can identify as belonging to someone who would object. And then it spits out a complaint and down goes the material.

In this case, both the DNC and the Hugo Awards had permission to use the bit of copyrighted material the bots identified. But the bot did not know this; that's above its pay grade.
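It's worth pausing on what such a bot actually does. The sketch below is a deliberately simplified, hypothetical TypeScript illustration - the fingerprinting, registry, and takedown functions are invented stand-ins, not Vobile's or YouTube's actual systems. It matches fingerprints of incoming stream segments against a registry of reference works and files a complaint on any hit. Note what the loop never asks: whether this particular broadcast was licensed to use the work.

    // Deliberately simplified, hypothetical sketch of an automated copyright
    // matcher. Real systems use perceptual audio/video fingerprints and far
    // more sophisticated matching; the helpers here are toy stand-ins.
    interface ReferenceWork {
      title: string;
      rightsHolder: string;
      fingerprint: string; // stands in for a perceptual fingerprint
    }

    // Toy "fingerprint": sum the bytes of a segment. A real matcher would
    // compute a robust perceptual hash instead.
    function fingerprintSegment(segment: Uint8Array): string {
      let sum = 0;
      for (const byte of segment) sum = (sum + byte) % 1_000_000;
      return `fp-${sum}`;
    }

    function fileTakedownNotice(work: ReferenceWork, streamId: string): void {
      console.log(`Complaint filed: "${work.title}" (${work.rightsHolder}) detected in ${streamId}`);
    }

    function scanStream(streamId: string, segments: Uint8Array[], registry: ReferenceWork[]): void {
      for (const segment of segments) {
        const fp = fingerprintSegment(segment);
        const match = registry.find((work) => work.fingerprint === fp);
        if (match) {
          // Nothing here checks whether this broadcast had permission to use
          // the work - that question is above the bot's pay grade.
          fileTakedownNotice(match, streamId);
        }
      }
    }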

This is all happening at a key moment in Europe: early next week, the public consultation closes on the notice-and-takedown rules that govern, among other things, what ISPs and other hosts are supposed to do when users upload material that infringes copyright. There's a questionnaire for submitting your opinions; you have until Tuesday, September 11.

Today's notice-and-takedown rules date to about the mid-1990s and two particular cases. One, largely but not wholly played out in the US, was the several-year fight between the Church of Scientology and a group of activists who believed that the public interest was served by publishing as widely as possible the documents Scientology preserves from the view of all but its highest-level adherents, which I chronicled for Wired in 1995. This case - and other early cases of claimed copyright infringement - led to the passage in 1998 of the Digital Millennium Copyright Act, which is the law governing the way today's notice-and-takedown procedures operate in the US and therefore, since many of the Internet's biggest user-generated content sites are American, worldwide.

The other important case was the 1997 British case of Laurence Godfrey, who sued Demon Internet for libel over a series of Internet postings, spoofed to appear as though they came from him, which the service failed to take down despite his requests. At the time, a fair percentage of Internet users believed - or at least argued - that libel law did not apply online; Godfrey, through the Demon case and others, set out to prove them wrong, and succeeded. The Demon case was eventually settled in 2000, and set the precedent that ISPs could be sued for libel if they failed to have procedures in place for dealing with complaints like these. Result: everyone now has procedures and routinely operates notice-and-takedown, just as cyber rights lawyer Yaman Akdeniz predicted in 1999.

A different notice-and-takedown regime is operated, of course, by the Internet Watch Foundation, which was founded in 1996 and recommends that ISPs remove material that IWF staff have examined and believe is potentially illegal. This isn't what we're talking about here: the IWF responds to complaints from the public, and at all stages humans are involved in making the decisions.

Granted that it's not unreasonable that there should be some mechanism to enable people to complain about material that infringes their copyrights or is libellous, what doesn't get sufficient attention is that there should also be a means of redress for those who are unjustly accused. Even without this week's incidents we have enough evidence - thanks to the detailed record of how DMCA notices have been used and abused in the years since the law's passage, continuously compiled at Chilling Effects - to be able to see the damage that overbroad, knee-jerk deletion can do.

It's clear that balance needs to be restored. Users should be notified promptly when the content they have posted is removed; there should be a fast turnaround means of redress; and there clearly needs to be a mechanism by which users can say, "This content has been cleared for use".

By those standards, Ustream has actually behaved remarkably well. It has apologized and is planning to rebroadcast the Hugo Awards on Sunday, September 9. Meanwhile, it has pulled its automated copyright policing system while it works out what went wrong. To be fair, the company that supplies the automated copyright policing software, Vobile, argues that its software wasn't at fault: it merely reports what it finds. It's up to the commissioning company to decide how to act on those reports. Like we said: above the bot's pay grade.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.