" /> net.wars: August 2019 Archives


August 16, 2019

The law of the camera

As if cued by the end of last week's installment, this week the Financial Times (paywalled), followed by many others, broke the news that Argent LLP, the lead developer in the mid-2000s regeneration of the Kings Cross district of London, is using facial recognition to surveil the entire area. The 67-acre site includes two mainline railway stations, a major Underground interchange station, schools, retailers, the Eurostar terminal, a local college, ten public parks, 20 streets, 50 buildings, 1,900 homes...and, because it happens to be there, Google's UK headquarters. (OK, Google: how do you like it when you're on the receiving end instead of dishing it out?)

So, to be clear: this system has been installed - doubtless "for your safety" - even though these automated facial recognition systems have been shown, over and over, to be almost laughably inaccurate: in London, Big Brother Watch found a 95% inaccuracy rate (PDF); in California, the ACLU found that the software incorrectly matched one in five lawmakers to criminals' mugshots. US cities - San Francisco, Oakland, and Somerville, Massachusetts - are legislating bans as a result. In London, however, Canary Wharf, a large development area in east London, told the BBC and the Financial Times that it is considering following Kings Cross's lead.
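A quick worked example shows why those headline figures are so damning: when almost everyone scanned is innocent, even a seemingly small false-positive rate means most alerts are wrong. A minimal sketch in Python, with purely illustrative numbers (none taken from the Big Brother Watch or ACLU findings):

```python
# Illustrative base-rate arithmetic for a face-matching deployment.
# Every number here is hypothetical, chosen only to show the effect.

faces_scanned = 100_000      # people passing the cameras
on_watchlist = 10            # genuine targets among them
false_positive_rate = 0.001  # 0.1% of innocent faces wrongly flagged
true_positive_rate = 0.80    # 80% of genuine targets correctly flagged

false_alerts = (faces_scanned - on_watchlist) * false_positive_rate
true_alerts = on_watchlist * true_positive_rate

share_wrong = false_alerts / (false_alerts + true_alerts)
print(f"False alerts: {false_alerts:.0f}, true alerts: {true_alerts:.0f}")
print(f"Share of alerts that are wrong: {share_wrong:.0%}")
# ~100 false alerts against 8 true ones: about 93% of alerts are
# mistakes, even though the error rates look respectable on paper.
```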

Inaccuracy is only part of the problem with the Kings Cross situation - and the deeper problem will persist even if and when the systems become accurate enough for prime time (which will open a whole new can of worms). The deeper problem is the effective privatization of public space: here, a private entity has installed a facial recognition system with no notice to any of the people being surveilled, with no public debate, and, according to the BBC, no notice to either local or central government.

To place this in context, it's worth revisiting the history of the growth of CCTV cameras in the UK, the world leader (if that's the word you want) in this area. As Simon Davies recounts in his recently-published memoir about his 30 years of privacy campaigning (and as I also remember), the UK began embracing CCTV in the mid-1990s (PDF), fueled in part by the emotive role it played in catching the murderers in the 1993 Jamie Bulger case. Central government began offering local councils funding to install cameras. Deployment accelerated after 9/11, but the trend had already been set.

By 2012, when the Protection of Freedoms Act was passed to create the Surveillance Camera Commissioner's office, public resistance had largely vanished. At the first Surveillance Camera Conference, in 2013, representatives from several local councils said they frequently received letters from local residents requesting additional cameras. They were not universally happy about this; around that time the responsibility for paying for the cameras and the systems to run them was being shifted to the councils themselves, and many seemed to be reconsidering their value. There has never been much research assessing whether the cameras cut crime; what there is suggests CCTV diverts it rather than stops it. A 2013 briefing paper by the College of Policing (PDF) says CCTV provides a "small, but statistically significant, reduction in crime", though it notes that effectiveness depends on the type of crime and the setting. "It has no impact on levels of violent crime," the paper concludes. A 2014 summary of research to date notes the need to balance privacy concerns and assess cost-effectiveness. Adding on highly unreliable facial recognition won't change that - but it will entrench unnecessary harassment.

The issue we're more concerned about here is the role of private operators. At the 2013 conference, public operators complained that their private counterparts, operating at least ten times as many cameras, were not required to follow the same rules as public bodies (although many did). Reliable statistics are hard to find. A recent estimate claims London hosts 627,707 CCTV cameras, but it's fairer to say that not even the Surveillance Camera Commissioner really knows. It is clear, however, that the vast majority of cameras are privately owned and operated.

Twenty years ago, Davies correctly foresaw that networking the cameras would enable tracking people across the city. Neither he nor the rest of us saw that (deeply flawed) facial recognition would arrive this soon, if only because it's the result of millions of independent individual decisions to publicly post billions of facial photographs. This is what created the necessary mass of training data that, as Olivia Solon has documented, researchers have appropriated.

For an area the size and public importance of Kings Cross to be monitored via privately-owned facial recognition systems that have attracted enormous controversy in the public sector is profoundly disturbing. You can sort of see the developer's logic: Kings Cross is now in effect a large shopping mall surrounding a major train station, so what's the difference between that and a shopping mall without a train station? But effectively, in setting the rules of engagement for part of our city that no one voted to privatize, Argent is making law, a job no one voted to give it. A London - or any other major city - carved up into corporately sanitized districts connected by lawless streets is not where any of us asked to live.


Illustrations: The new Kings Cross Western Concourse (via Colin on Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 9, 2019

Collision course

The walk from my house to the tube station has changed very little in 30 years. The houses and their front gardens look more or less the same, although at least two have been massively remodeled on the inside. More change is visible around the tube station, where shops have changed hands as their owners retired. The old fruit and vegetable shop now sells wine; the weird old shop that sold crystals and carved stones is now a chain drug store. One of the hardware stores is a (very good) restaurant and the other was subsumed into the locally-owned health food store. And so on.

In the tube station itself, the open platforms have been enclosed with ticket barriers, and the second generation of machines has closed down the ticket office. It's imaginable that, had the ID card proposed in the early 2000s made it through to adoption, the experience of buying a ticket and getting on the tube would be quite different. Perhaps instead of an Oyster card or credit card tap, we'd be tapping in and out using a plastic ID smart card that would both ensure that only I could use my free tube pass and ensure that all my local travel could be tracked and tied to me. For our safety, of course - as we would doubtless be reminded via repetitive public announcements, like the propaganda we hear every day about the watching eye of CCTV.

Of course, tracking still goes on via Oyster cards, credit cards, and, now, wifi, although I do believe Transport for London when it says its goal is to better understand traffic flows through stations in order to improve service. However, what new, more intrusive functions TfL may choose - or be forced - to add later will likely be invisible to us until an expert outsider closely studies the system.

In his recently published memoir, the veteran campaigner and Privacy International founder Simon Davies tells the stories of the ID cards he helped to kill: in Australia, in New Zealand, in Thailand, and, of course, in the UK. What strikes me now, though, is that what seemed like a win nine years ago, when the incoming Conservative-Liberal Democrat coalition killed the ID card, is gradually losing its force. (This is very similar to the early 1990s First Crypto Wars "win" against key escrow; the people who wanted it have simply found ways to bypass public and expert objections.)

As we wrote at the time, the ID card itself was always a brightly colored decoy. To be sure, those pushing the ID card played on British wartime associations, swearing blind that no one would ever be required to carry the card or be forced to produce it. This was an important gambit because, to much of the population at the time, being forced to carry and show ID meant the end of the freedoms two world wars were fought to protect. But it was always obvious to those who were watching technological development that what mattered was the database, because identity checks would be carried out online, on the spot, via wireless connections and handheld computers. All that was needed was a way of capturing a biometric that could be sent into the cloud to be checked. Facial recognition fits perfectly into that gap: no one has to ask you for papers - or a fingerprint, iris scan, or DNA sample. So even without the ID card we *are* now moving stealthily into the exact situation that would have prevailed if we had adopted it. Increasing numbers of police departments - South Wales, London, LA, India, and, notoriously, China - are adopting live facial recognition, as Big Brother Watch has been documenting for the UK. There are many more remotely observable behaviors to be pressed into service, enhanced by AI, as the ACLU's Jay Stanley warns.
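To make the architecture concrete: the check described above needs nothing from the subject at all, only a capture device and a network connection. A hypothetical sketch follows - the endpoint, field names, and response format are all invented for illustration, and this is not any real system's API:

```python
import requests  # any HTTP client would do; the point is the round trip

# Invented endpoint standing in for a central identity database.
CHECK_URL = "https://id-database.example.gov/verify"

def identity_check(biometric_template: bytes) -> bool:
    """Send a biometric captured on the spot - face, fingerprint,
    iris - to the central database and return whether it matched a
    registered identity. No card is asked for, and the subject need
    not know the check happened."""
    response = requests.post(
        CHECK_URL,
        files={"template": biometric_template},
        timeout=5,
    )
    return response.json().get("match", False)
```

Everything that matters in this sketch lives on the server; the card was always incidental.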

The threat now of these systems is that they are wildly inaccurate and discriminatory. The future threat of these systems is that they will become accurate and discriminatory, allowing much more precise targeting that may even come to seem reasonable *because* it only affects the bad people.

This train of thought occurred to me because this week Statewatch released a leaked document indicating that most of the EU would like to expand airline-style passenger data collection to trains and even roads. As Daniel Boffey explains at the Guardian (and as Edward Hasbrouck has long documented), the passenger name records (PNRs) airlines create for every journey include as many as 42 pieces of information: name, address, payment card details, itinerary, fellow travelers... This is information that gets mined in order to decide whether you're allowed to fly. So what this document suggests is that many EU countries would like to turn *all* international travel into a permission-based system.
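For concreteness, a PNR is essentially a per-journey record. Here is a hypothetical sketch using just the fields named above (a real record can run to the full 42 fields Hasbrouck describes), plus a toy version of the permission check such a system implies:

```python
from dataclasses import dataclass, field

@dataclass
class PassengerNameRecord:
    """A handful of the fields a carrier files for every journey."""
    name: str
    address: str
    payment_card: str             # payment card details
    itinerary: list[str]          # e.g. ["LHR-BRU", "BRU-LHR"]
    fellow_travelers: list[str] = field(default_factory=list)
    # ...a real PNR can carry dozens more fields

def may_travel(pnr: PassengerNameRecord, watchlist: set[str]) -> bool:
    """Toy permission-based check: the journey goes ahead only if
    mining the record raises no flags."""
    flagged = {pnr.name, *pnr.fellow_travelers} & watchlist
    return not flagged
```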

What is astonishing about all of this is the timing. One of the key privacy-related objections to building mass surveillance systems is that you do not know who may be in a position to operate them in the future or what their motivations will be. So at the very moment that many democratic countries are fretting about the rise of populism and the spread of extremism, those same democratic countries are proposing to put in place a system that extremists who get into power can operate in anti-democratic ways. How can they possibly not see this as a serious systemic risk?


Illustrations: The light of the oncoming train (via Andrew Gray at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 2, 2019

Unfortunately recurring phenomena

It's summer, and the current comprehensively bad news is all stuff we can do nothing about. So we're sweating the smaller stuff.

It's hard to know how seriously to take it, but US Senator Josh Hawley (R-MO) has introduced the Social Media Addiction Reduction Technology (SMART) Act, intended to disrupt the addictive aspects of social media design. *Deceptive* design - which figured in last week's widely criticized $5 billion FTC settlement with Facebook - is definitely wrong, and the dark patterns site has long provided a helpful guide to those practices. But the bill is too feature-specific - it would ban infinite scroll and autoplay - and fails to recognize that one size of addiction disruption cannot possibly fit all. Spending more than 30 minutes at a stretch reading Twitter may be a dangerous pastime for some but a business necessity for journalists, PR people - and Congressional aides.

A better approach might be to require sites to replay the first video someone chooses at regular intervals until they get sick of it and turn off the feed. That is about how I feel about the latest reiteration of the perennial demand for back doors in encrypted messaging. The fact that every new home secretary - in this case, Priti Patel - calls for this suggests there's an ancient infestation in their office walls that needs to be found and doused with mathematics. Don't Patel and the rest of the Five Eyes realize the security services already have bulk device hacking?

From the moment Microsoft announced it was acquiring the software repository GitHub, it should have been obvious that the community would soon be forced to change. And here it is: Microsoft is blocking developers in countries subject to US trade sanctions. The formerly seamless site supporting global collaboration and open source software is being fractured at the expense of individual PhD students, open source developers, and others who trusted it - and of everyone who relies on the software they produce.

It's probably wrong to blame Microsoft alone; save some for the present US administration. Still, throughout Internet history, communities bought by corporate owners have wound up destroyed: CompuServe, Geocities, Television Without Pity, and endless others. More recently, Verizon, which bought Yahoo and AOL for its Oath subsidiary (now Verizon Media), de-porned Tumblr. People! Whenever the online community you call home gets sold to a large company, it is time *right then* to begin building your own replacement. Large companies do not care about the community you built, and this is never gonna change.

Also never gonna change: software is forever, as I wrote in 2014, when Microsoft turned off life support for Windows XP. The future is living with old software installations that can't, or won't, be replaced. The truth of this resurfaced recently, when a survey by Spiceworks (PDF) found that a third of all businesses' networks include at least one computer running XP, and 79% of all businesses are still running Windows 7, which dies in January. In the 1990s the installed base updated regularly because hardware was upgraded so rapidly. Now, a computer's lifespan exceeds the length of a software generation, and the accretion of applications and customization makes updating hazardous. If Microsoft refuses to support its old software, require it at least to open the code to third parties. Now, *there* would be a law we could use.

The last few years have seen repeated news about the many ways that machine learning and AI discriminate against those with non-white skin, typically because of the biased datasets they rely on. The latest such story is startling: Wearables are less reliable in detecting the heart rate of people with darker skin. This is a "huh?" until you read that the devices use colored light and optical sensors to measure the volume of your blood in the vessels at your wrist. Hospital-grade monitors use infrared. Cheaper devices use green light, which melanin tends to absorb. I know it's not easy for people to keep up with everything, but the research on this dates to 1985. Can we stop doing the default white thing now?

Meanwhile, at the Barbican exhibit AI: More than Human... In a video, a small, medium-brown poodle turns his head toward the camera with a - you should excuse the anthropomorphism - distinct expression of "What the hell is this?" Then he turns back to the immediate provocation and tries again. This time, the Sony Aibo he's trying to interact with wags its tail, and the dog jumps back. The dog clearly knows the Aibo is not a real dog: it has no dog smell, and although it attempts a play bow and moves its head in vaguely canine fashion, it makes no attempt to smell his butt. The researcher begins gently stroking the Aibo's back. The dog jumps in the way. Even without a thought bubble you can see the injustice forming: "Hey! Real dog here! Pet *me*!"

In these two short minutes the dog perfectly models the human reaction to AI development: 1) what is that?; 2) will it play with me?; 3) this thing doesn't behave right; 4) it's taking my job!

Later, I see the Aibo slumped, apparently catatonic. Soon, a staffer strides through the crowd clutching a woke replacement.

If the dog could talk, it would be saying "#Fail".


Illustrations: Sunrise from the 30th floor.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.