
November 8, 2019

Burn rate

One of my favorite moments in the 1996 sitcom 3rd Rock from the Sun was when Dick (John Lithgow), the high commander of the aliens' mission to Earth, marveled at humans' ability to live every day as though they didn't know they were going to die. For everyone but Woody Allen and the terminally ill, that denial is useful: it allows us to get up every day and do things like watch silly sitcoms without being overwhelmed by the sense of doom.

In other contexts, the denial of existential limits is less helpful: being aware of the limits of capital reminds us to use it wisely. During those 3rd Rock years, I was baffled by the recklessly rapid adoption of the Internet for serious stuff - banking, hospital systems - apparently without recognizing that the Internet was still a somewhat experimental network and lacked the service level agreements and robust engineering provided by the legacy telephone networks. During Silicon Valley's 2007 to 2009 bout of climate change concern, it was an exercise in cognitive dissonance to watch CEOs explain the green values they were imposing on themselves and their families while simultaneously touting their companies' products and services, which required greater dependence on electronics, power grids, and always-on connections. At an event on nanotechnology in medicine, it was striking that the presenting researchers never mentioned power use. The mounting consciousness of the climate crisis has proceeded in a separate silo from the one in which the "there's an app for that" industries have gone on designing a lifestyle of total technological dependence, apparently on the basis that electrical power is a constant and the Internet is never interrupted. (Tell that to my broadband during those missing six hours last Thursday.)

The last few weeks in California have shown that we need to completely rethink this dependence. At The Verge, Nicole Wetsman examines the fragility of American hospital systems. Many do have generators, but few have thought-out plans for managing during a blackout. As she writes, hospitals may be overwhelmed by unexpected influxes of patients from nursing homes that never mentioned the hospital was their fallback plan and local residents searching for somewhere to charge their phones. And, Wetsman notes, electronic patient records bring hard choices: do you spend your limited amount of power on keeping the medicines cold, or do you keep the computer system running?

Right now, with paper records still so recent, staff may be able to dust off their old habits and revert, but ten years hence that won't be true. British Airways' 2017 holiday weekend IT collapse at Heathrow provides a great example of what happens when there is (apparently) no plan and less experience.

At The Atlantic, Alexis Madrigal warns that California's blackouts and wildfires are samples of our future; the toxic "technical debt" of accumulated underinvestment in American infrastructure is being exposed by the abruptly increased weight of climate change. How does it happen that the fifth largest economy in the world has millions of people with no electric power? The answer, Madrigal (and others) writes, is that capital that should have been spent improving the grid and burying power lines was diverted into shareholders' dividends. Add higher temperatures, less rainfall, and exceptional drought, and here's your choice: power outages or fires?

Someone like me, with a relatively simple life, a lot of paper records, sufficient resources, and a support network of friends and shopkeepers, can manage. Someone on a zero-hours contract, whose life and work depend on their phone, who can't cook, and doesn't know how to navigate the world of people if they can't check the website to find out why the water is out...can't. In these crises we always hear about the sick and the elderly, but I also worry about the 20-somethings whose lives are predicated on the Internet always being there because it always has been.

A forgotten aspect is the loss of social infrastructure, as Aditya Chakrabortty writes in the Guardian. Everyone notes that since online retail has bitten great chunks off Britain's high streets, stores have closed and hub businesses like banks have departed. Chakrabortty points out that this is only half of the depredation in those towns: the last ten years of Conservative austerity have sliced away social support systems such as youth clubs and libraries. Those social systems are the caulk that gives resilience in times of stress, and they are vanishing.

Both pieces ought to be taken as a serious warning about the many kinds of capital we are burning through, especially when read in conjunction with Matt Stoller's contention that the "millennial lifestyle" is ending. "If you wake up on a Casper mattress, work out with a Peloton before breakfast, Uber to your desk at a WeWork, order DoorDash for lunch, take a Lyft home, and get dinner through Postmates, you've interacted with seven companies that will collectively lose nearly $14 billion this year," he observes. He could have added Netflix, whose 2019 burn rate is $3 billion. And, he continues, WeWork's travails are making venture capitalists and bond markets remember that losing money, long-term, is not a good bet, particularly when interest rates start to rise.

So: climate crisis, brittle systems, and unsustainable lifestyles. We are burning through every kind of capital at pace.

Illustrations: California wildfire, 2008.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 18, 2019

I never paid for it in my life

So Jaron Lanier is back, arguing that we should be paid for our data. He was last seen in net.wars two years back, arguing that if people had started by charging for email we would not now be the battery fuel for "behavior modification empires". In a 2018 TED talk, he continued that we should pay for Facebook and Google in order to "fix the Internet".

Lanier's latest disquisition goes like this: the big companies are making billions from our data. We should have some of it. That way lies human dignity and the feeling that our lives are meaningful. And fixing Facebook!

The first problem is that fixing Facebook is not the same as fixing the Internet, a distinction Lanier surely understands. The Internet is a telecommunications network; Facebook is a business. You can profoundly change a business by changing who pays for its services and how, but changing a telecommunications network that underpins millions of organizations and billions of people in hundreds of countries is a wholly different proposition. If you mean, as Lanier seems to, that what you want to change is people's belief that content on the Internet should be free, then what you want to "fix" is the people, not the network. And "fixing" people at scale is insanely hard. Just ask health professionals or teachers. We'd need new incentives.

Paying for our data is not one of those incentives. Instead of encouraging people to think more carefully about privacy, being paid to post to Facebook would encourage people to indiscriminately upload more data. It would add payment intermediaries to today's merry band of people profiting from our online activities, thereby creating a whole new class of metadata for law enforcement to claim it must be able to access.

A bigger issue is that even economists struggle to understand how to price data; as Diane Coyle asked last year, "Does data age like fish or like wine?" Google's recent announcement that it would allow users to set their browser histories to auto-delete after three or 18 months has been met by the response that such data isn't worth much three months on, though the privacy damage may still be incalculable. We already have a class of people - "influencers" - who get paid for their social media postings, and as Chris Stokel-Walker portrays some of their lives, it ain't fun. Basically, while paying us all for our postings would put a serious dent into the revenues of companies like Google and Facebook, it would also turn our hobbies into jobs.

So a significant issue is that we would be selling our data with no concept of its true value or what we were actually selling to companies that at least know how much they can make from it. Financial experts call this "information asymmetry". Even if you assume that Lanier's proposed "MID" intermediaries that would broker such sales will rapidly amass sufficient understanding to reverse that, the reality remains that we can't know what we're selling. No one happily posting their kids' photos to Flickr 14 years ago thought that in 2014 Yahoo, which owned the site from 2005 to 2015, was going to scrape the photos into a database and offer it to researchers to train their AI systems that would then be used to track protesters, spy on the public, and help China surveil its Uighur population.

Which leads to this question: what fire sales might a struggling company with significant "data assets" consider? Lanier's argument is entirely US-centric: data as commodity. This kind of thinking has already led Google to pay homeless people in Atlanta to scan their faces in order to create a more diverse training dataset (a valid goal, but oh, the execution).

In a paywalled paper for Harvard Business Review, Lanier apparently argues that data should instead be viewed as labor. That view, he claims, opens the way to collective bargaining via "data labor unions" and mass strikes.

Lanier's examples, however, are all drawn from active data creation: uploading and tagging photos, writing postings. Yet much of the data the technology companies trade in is stuff we unconsciously create - "data exhaust" - as we go through our online lives: trails of web browsing histories, payment records, mouse movements. At Tech Liberation, Will Rinehart critiques Lanier's estimates, both the amount (Lanier suggests a four-person household could gain $20,000 a year) and the failure to consider the differences between and interactions among the three classes of volunteered, observed, and inferred data. It's the inferences that Facebook and Google really get paid for. I'd also add the difference between data we can opt to emit (I don't *have* to type postings directly into Facebook knowing the company is saving every character) and data we have no choice about (passport information to airlines, tax data to governments). The difference matters: you can revise, rethink, or take back a posting; you have no idea what your unconscious mouse movements reveal and no ability to edit them. You cannot know what you have sold.

Outside the US, the growing consensus is that data protection is a fundamental human right. There's an analogy to be made here between bodily integrity and personal integrity more broadly. Even in the US, you can't sell your kidney. Isn't your data just as intimate a part of you?


Illustrations: Jaron Lanier in 2017 with Luke Robert Mason (photo by Eva Pascoe).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 11, 2019

The China Syndrome

About five years ago, a friend commented that despite the early belief - promulgated by, among others, then-US president Bill Clinton and vice-president Al Gore - that the Internet would spread democracy around the world, so far the opposite seemed to be the case. I suggested perhaps it's like the rising sea level, where local results don't give the full picture.

Much longer ago, I remember wondering how Americans would react when large parts of the Internet were in Chinese. My friend shrugged. Why should they care? They don't have to read them.

This week's news shows that we may both have been wrong. The reality, as the veteran technology journalist Charles Arthur suggested in the Wednesday and Thursday editions of his weekday news digest, The Overspill, is that the Hong Kong protests are exposing and enabling the collision between China's censorship controls and Western standards for free speech, aided by companies anxious to access the Chinese market. We may have thought we were exporting the First Amendment, but it doesn't apply to non-government entities.

It's only relatively recently that it's become generally acknowledged that governments can harness the Internet themselves. In 2008, the New York Times thought there was a significant domestic backlash against China's censors; by 2018, the Times was admitting China's success, first in walling off its own edited version of the Internet, and second in building rival giant technology companies and speeding past the US in areas such as AI, smartphone payments, and media creation.

So, this week. On Saturday, Demos researcher Carl Miller documented an ongoing edit war at Wikipedia: 1,600 "tendentious" edits across 22 articles on topics such as Taiwan, Tiananmen Square, and the Dalai Lama to "systematically correct what [officials and academics from within China] argue are serious anti-Chinese biases endemic across Wikipedia".

On Sunday, the general manager of the Houston Rockets, an American professional basketball team, withdrew a tweet supporting the Hong Kong protesters after it caused an outcry in China. Who knew China was the largest international market for the National Basketball Association? On Tuesday, China responded that it wouldn't show NBA pre-season games, and Chinese fans may boycott the games scheduled for Shanghai. The NBA commissioner eventually released a statement saying the organization would not regulate what players or managers say. The Americanness of basketball: restored.

Also on Tuesday, Activision Blizzard suspended Chung Ng Wai, a professional player of the company's digital card game, Hearthstone, after he expressed support for the Hong Kong protesters in a post-win official interview; the company also fired the two interviewers. Chung's suspension is set to last for a year, and includes forfeiting his thousands of dollars of 2019 prize money. A group of the company's employees walked out in protest, and the gamer backlash against the company was such that the moderators briefly took the Blizzard subreddit private in order to control the flood of angry posts (it was reopened within a day). By Wednesday, EU-based Hearthstone gamers were beginning to consider mounting a denial-of-service attack against Blizzard by sending so many subject access requests under the General Data Protection Regulation that complying with the legal requirement to fulfill them would swamp the company's resources.

On Wednesday, numerous media outlets reported that in its latest iOS update Apple has removed the Taiwan flag emoji from the keyboard for users who have set their location to Hong Kong or Macau - you can still use the emoji, but the procedure for doing so is more elaborate. (We will save the rant about the uselessness of these unreadable blobs for another time.)

More seriously, also on Wednesday, the New York Times reported that Apple has withdrawn the HKmap.live app that Hong Kong protesters were using to track police after China's state media accused the company of abetting and protecting the protesters.

Local versus global is a long-standing variety of net.war, dating back to the 1994 Amateur Action bulletin board case. At Stratechery, Ben Thompson discusses the China-US cultural clash, with particular reference to TikTok, the first Chinese company to reach a global market; a couple of weeks ago, the Guardian revealed the site's censorship policies.

Thompson argues that, "Attempts by China to leverage market access into self-censorship by U.S. companies should also be treated as trade violations that are subject to retaliation." Maybe. But American companies can't win at this game.

In her recent book, The Big Nine, Amy Webb discusses China's AI advantage as it pours resources and, above all, data into becoming the world leader via Baidu, Alibaba, and Tencent, which have grown to rival Google, Amazon, and Facebook, without ever needing to leave home. Beyond that, China has been spreading its influence by funding telecommunications infrastructure. The Belt and Road initiative has projects in 152 countries. In this, China is taking advantage of the present US administration's inward turn and worldwide loss of trust.

After reviewing the NBA's ultimate decision, Thompson writes, "I am increasingly convinced this is the point every company dealing with China will reach: what matters more, money or values?" The answer will always be money; whose values count will depend on which market they can least afford to alienate. This week is just a coincidental concatenation of early skirmishes; just wait for the Internet of Things.

Illustrations: The Great Wall of China (by Hao Wei, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 20, 2019

Jumping the shark

This week, the Wall Street Journal claimed that Amazon has begun ordering item search results according to their profitability for the company. (The story is summarized at Ars Technica, for non-WSJ subscribers.) Amazon has called the story "not factually accurate", though, unsurprisingly, it declined to explain its algorithm's inner workings.

My reaction: "Well, that's a jump the shark moment."

Of course we know that every business seeks to optimize profits. Supermarkets - doubtless including Amazon's Whole Foods - choose the products to place at the ends of aisles and at cash registers only partly because those are the ones that tempt customers to make impulse buys but also because the product manufacturers pay them to do so. Both halves of that motivation have to be there. But Amazon's business and reputation are built on being fiercely devoted to putting customers first. So what makes this story different is the - perhaps only very slight - change in the weighting given to customer welfare.

In this, Amazon is following a time-honored Silicon Valley tradition (despite being based 800 miles north, in Seattle). In 2017, the EU fined Google $2.7 billion for favoring its own services in its shopping search results.

Obviously, Amazon has done and is doing far worse things. Just a few days earlier, the company announced changes that will remove health benefits for nearly 2,000 part-time employees at Whole Foods. It seems capriciously cruel: the richest man in the world, who last year told Business Insider he couldn't think of anything to spend his money on other than space travel, is willing to actively harm (given the US health system) some of the most vulnerable people who work for him. Even if he can't see it himself, you'd think the company's PR department would.

And that's just the latest in the catalogue. The company's warehouse workers regularly tell horror stories about their grueling jobs - and have for years. It will pay no US federal taxes this year for the second year in a row.

Whether or not it's true, one reason the story is so plausible is that increasingly we have no idea how businesses make their money. We *assume* we know that Coca-Cola's primary business is selling soft drinks, airlines' is selling seats on planes, and Spotify's is the sort of combination of subscriptions and advertising that has sustained many different media for a century. But not so fast: in 2017, Bloomberg reported that actually airlines make more money selling miles than they do from selling seats. Maybe the miles can't exist without the seats, but motives go where the money is, so this business reality must have consequences. Spotify, it turns out, has been building itself up into the third-largest player in digital advertising, collaborating with the PR and advertising holding company WPP to mine the billions of data points collected daily from its users' playlists and giving advertisers a new meaning for the term "mood music".

In the simplest mental model, we might expect Amazon to profit more from items it sells itself than from those sold on its platform by marketplace sellers. In fact, Amazon noted in its 2008 annual report (PDF, see p32) that its profits were about the same either way. This year, however, the EU opened an investigation into whether the company is taking advantage of the data it collects about third-party sales to identify profitable products it can cherry-pick and make for itself. As Lina Khan wrote in 2017 in a discussion of the modern failings of US antitrust enforcement, no one valued the data Amazon collects from smaller sellers' transactions, not even in those annual reports. Revenue-neutral, indeed.

In fact, Amazon's biggest source of profits is not its retail division, whose profitability even The Motley Fool can't untangle. Amazon's biggest profit center is Amazon Web Services; *Netflix* was built on it. It may in fact be the case that the cloud business enables Amazon to act as an increasingly rapacious predator feasting on the rest of retail, a business model familiar from Uber (though it's far from the only one).

So Spotify is a music service in the same sense that Adobe and Oracle are software companies. Probably none of their original business plans focused on data exploitation, and their "pivot" (or bait and switch) into data passes us by while Facebook and Google get all the stick. Amazon may be the most problematic; it is, as Kashmir Hill discovered earlier this year, hard to do without Google but impossible to excise Amazon from your life. Finding alternatives for retail can still be done with enough diligence, but opting out of every business that depends on its cloud services can't be done.

Amazon was doing very well at escaping the negative scrutiny accruing to Facebook, Uber, and Google, all while becoming arguably the bigger threat, in part because we think of it as a nice company that sends us things. But if its retail customers are becoming just fungible piles of data to be optimized, that's a systemic failure the company can't reverse by restoring 2,000 people's health benefits, or paying taxes, or getting its owner to say, oh, yeah, space travel...what was I thinking?


Illustrations: Great white shark (via Sharkcrew at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 30, 2019

The Fregoli delusion

In biology, a monoculture is a bad thing. If there's only one type of banana, a fungus can wipe out the entire species instead of, as now, just the most popular one. If every restaurant depends on Yelp to find its customers, Yelp's decision to replace their phone number with one under its own control is a serious threat. And if, as we wrote here some years ago, everyone buys everything from Amazon, gets all their entertainment from Netflix, and gets all their mapping, email, and web browsing from Google, what difference does it make that you're iconoclastically running Ubuntu underneath?

The same should be true in the culture of software development. It ought to be obvious that a monoculture is as dangerous there as on a farm. Because: new ideas, robustness, and innovation all come from mixing. Plenty of business books even say this. It's why research divisions create public spaces, so people from different disciplines will cross-fertilize. It's why people and large businesses live in cities.

And yet, as the journalist Emily Chang documents in her 2018 book Brotopia: Breaking Up the Boys' Club of Silicon Valley, Silicon Valley technology companies have deliberately spent the last couple of decades progressively narrowing their culture. To a large extent, she blames the spreading influence of the Paypal Mafia. At Paypal's founding, she writes, this group, which includes Palantir founder Peter Thiel, LinkedIn founder Reid Hoffman, and Tesla supremo Elon Musk, adopted the basic principle that to make a startup lean, fast-moving, and efficient you needed a team who thought alike. Paypal's success and the diaspora of its early alumni disseminated a culture in which hiring people like you was a *strategy*. This is what #MeToo and fights for equality are up against.

Businesses are as prone to believing superstitions as any other group of people, and unicorn successes are unpredictable enough to fuel weird beliefs, especially in an already-insular place like Silicon Valley. Yet, Chang finds much earlier roots. In the mid-1960s, System Development Corporation hired psychologists William Cannon and Dallis Perry to create a profile to help it to identify recruits who would enjoy the new profession of computer programming. They interviewed 1,378 mostly male programmers, and found this common factor: "They don't like people." And so the idea that "antisocial" was a qualification was born, spreading outwards through increasingly popular "personality tests" and, because of the cultural differences in the way girls and boys are socialized, gradually and systematically excluding women.

Chang's focus is broad, surveying the landscape of companies and practices. For personal inside experiences, you might try Ellen Pao's Reset: My Fight for Inclusion and Lasting Change, which documents her experiences at Kleiner Perkins, which led her to bring a lawsuit, and at Reddit, where she was pilloried for trying to reduce some of the system's toxicity. Or, for a broader range, try Lean Out, a collection of personal stories edited by Elissa Shevinsky.

Chang finds that even Google, which began with an aggressive policy of hiring female engineers that netted it technology leaders Susan Wojcicki, CEO of YouTube, Marissa Mayer, who went on to try to rescue Yahoo, and Sheryl Sandberg, now COO of Facebook, failed in the long term. Today its male-female ratio is average for Silicon Valley. She cites Slack as a notable exception; founder Stewart Butterfield set out to build a different kind of workplace.

In that sense, Slack may be the opposite of Facebook. In Zucked: Waking Up to the Facebook Catastrophe, Roger McNamee tells the mea culpa story of his early mentorship of Mark Zuckerberg and the company's slow pivot into posing problems he believes are truly dangerous. What's interesting to read in tandem with Chang's book is his story of the way Silicon Valley hiring changed. Until around 2000, hiring rewarded skill and experience; the limitations on memory, storage, and processing power meant companies needed trained and experienced engineers. Facebook, however, came along at the moment when those limitations had vanished and as the dot-com bust finished playing out. Suddenly, products could be built and scaled up much faster; open source libraries and the arrival of cloud suppliers meant they could be developed by less experienced, less skilled, *younger*, much *cheaper* people; and products could be free, paid for by advertising. Couple this with 20 years of Reagan deregulation and the influence, which he also cites, of the Paypal Mafia, and you have the recipe for today's discontents. McNamee writes that he is unsure what the solution is; his best effort at the moment appears to be advising the Center for Humane Technology, led by former Google design ethicist Tristan Harris.

These books go a long way toward explaining the world Caroline Criado-Perez describes in 2019's Invisible Women: Data Bias in a World Designed for Men. Her discussion is not limited to Silicon Valley - crash test dummies, medical drugs and practices, and workplace design all appear - but her main point applies. If you think of one type of human as "default normal", you wind up with a world that's dangerous for everyone else.

You end up, as she doesn't say, with a monoculture as destructive to the world of ideas as those fungi are to Cavendish bananas. What Zucked and Brotopia explain is how we got there.


Illustrations: Still from Anomalisa (2015).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 9, 2019

Collision course

The walk from my house to the tube station has changed very little in 30 years. The houses and their front gardens look more or less the same, although at least two have been massively remodeled on the inside. More change is visible around the tube station, where shops have changed hands as their owners retired. The old fruit and vegetable shop now sells wine; the weird old shop that sold crystals and carved stones is now a chain drug store. One of the hardware stores is a (very good) restaurant and the other was subsumed into the locally-owned health food store. And so on.

In the tube station itself, the open platforms have been enclosed with ticket barriers and the second generation of machines has closed down the ticket office. It's imaginable that had the ID card proposed in the early 2000s made it through to adoption the experience of buying a ticket and getting on the tube could be quite different. Perhaps instead of an Oyster card or credit card tap, we'd be tapping in and out using a plastic ID smart card that would both ensure that only I could use my free tube pass and ensure that all my local travel could be tracked and tied to me. For our safety, of course - as we would doubtless be reminded via repetitive public announcements like the propaganda we hear every day about the watching eye of CCTV.

Of course, tracking still goes on via Oyster cards, credit cards, and, now, wifi, although I do believe Transport for London when it says its goal is to better understand traffic flows through stations in order to improve service. However, what new, more intrusive functions TfL may choose - or be forced - to add later will likely be invisible to us until an expert outsider closely studies the system.

In his recently published memoir, the veteran campaigner and Privacy International founder Simon Davies tells the stories of the ID cards he helped to kill: in Australia, in New Zealand, in Thailand, and, of course, in the UK. What strikes me now, though, is that what seemed like a win nine years ago, when the incoming Conservative-Liberal Democrat alliance killed the ID card, is gradually losing its force. (This is very similar to the early 1990s First Crypto Wars "win" against key escrow; the people who wanted it have simply found ways to bypass public and expert objections.)

As we wrote at the time, the ID card itself was always a brightly colored decoy. To be sure, those pushing the ID card played on British wartime associations to swear blind that no one would ever be required to carry the ID card and be forced to produce it. This was an important gambit because to much of the population at the time being forced to carry and show ID was the end of the freedoms two world wars were fought to protect. But it was always obvious to those who were watching technological development that what mattered was the database, because identity checks would be carried out online, on the spot, via wireless connections and handheld computers. All that was needed was a way of capturing a biometric that could be sent into the cloud to be checked. Facial recognition fits perfectly into that gap: no one has to ask you for papers - or a fingerprint, iris scan, or DNA sample. So even without the ID card we *are* now moving stealthily into the exact situation that would have prevailed if we had. Increasing numbers of police departments - South Wales, London, LA, India, and, notoriously, China - are deploying facial recognition, as Big Brother Watch has been documenting for the UK. There are many more remotely observable behaviors to be pressed into service, enhanced by AI, as the ACLU's Jay Stanley warns.

The threat now of these systems is that they are wildly inaccurate and discriminatory. The future threat of these systems is that they will become accurate and discriminatory, allowing much more precise targeting that may even come to seem reasonable *because* it only affects the bad people.

This train of thought occurred to me because this week Statewatch released a leaked document indicating that most of the EU would like to expand airline-style passenger data collection to trains and even roads. As Daniel Boffey explains at the Guardian (and as Edward Hasbrouck has long documented), the passenger name records (PNRs) airlines create for every journey include as many as 42 pieces of information: name, address, payment card details, itinerary, fellow travelers... This is information that gets mined in order to decide whether you're allowed to fly. So what this document suggests is that many EU countries would like to turn *all* international travel into a permission-based system.

What is astonishing about all of this is the timing. One of the key privacy-related objections to building mass surveillance systems is that you do not know who may be in a position to operate them in future or what their motivations will be. So at the very moment that many democratic countries are fretting about the rise of populism and the spread of extremism, those same democratic countries are proposing to put in place a system that extremists who get into power can operate in anti-democratic ways. How can they possibly not see this as a serious systemic risk?


Illustrations: The light of the oncoming train (via Andrew Gray at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 26, 2019

Hypothetical risks

"The problem isn't privacy," the cryptography pioneer Whitfield Diffie said recently. "It's corporate malfeasance."

This is obviously right. Viewed that way, when data profiteers claim that "privacy is no longer a social norm", as Facebook CEO Mark Zuckerberg did in 2010, the correct response is not to argue about privacy settings or plead with users to think again, but to find out if they've broken the law.

Diffie was not, but could have been, talking specifically about Facebook, which has blown up the news this week. The first case grabbed most of the headlines: the US Federal Trade Commission fined the company $5 billion. As critics complained, the fine was insignificant to a company whose Q2 2019 revenues were $16.9 billion and whose quarterly profits are approximately equal to the fine. Medium-term, such fines have done little to dent Facebook's share prices. Longer-term, as the cases continue to mount up...we'll see. Also this week, the US Department of Justice launched an antitrust investigation into Apple, Amazon, Alphabet (Google), and Facebook.

The FTC fine and ongoing restrictions have been a long time coming; EPIC executive director Marc Rotenberg has been arguing ever since the Cambridge Analytica scandal broke that Facebook had violated the terms of its 2011 settlement with the FTC.

If you needed background, this was also the week when Netflix released the documentary, The Great Hack, in which directors Karim Amer and Jehane Noujaim investigate the role Cambridge Analytica and Facebook played in the 2016 EU referendum and US presidential election votes. The documentary focuses primarily on three people: David Carroll, who mounted a legal action against Cambridge Analytica to obtain his data; Brittany Kaiser, a director of Cambridge Analytica who testified against the company; and Carole Cadwalladr, who broke the story. In his review at the Guardian, Peter Bradshaw notes that Carroll's experience shows it's harder to get your "voter profile" out of Cambridge Analytica than from the Stasi, as per Timothy Garton Ash. (Also worth viewing: the 2006 movie The Lives of Others.)

Cadwalladr asks in her own piece about The Great Hack and in her 2019 TED talk whether we can ever have free and fair elections again. It's a difficult question to answer because although it's clear from all these reports that the winning side of both the US and UK 2016 votes used Facebook and Cambridge Analytica's services, unless we can rerun these elections in a stack of alternative universes we can never pinpoint how much difference those services made. In a clip taken from the 2018 hearings on fake news, Damian Collins (Conservative, Folkestone and Hythe), the chair of the Digital, Culture, Media, and Sport Committee, asks Chris Wylie, a whistleblower who worked for Cambridge Analytica, that same question (The Great Hack, 00:25:51). Wylie's response: "When you're caught doping in the Olympics, there's not a debate about how much illegal drug you took or, well, he probably would have come in first, or, well, he only took half the amount, or - doesn't matter. If you're caught cheating, you lose your medal. Right? Because if we allow cheating in our democratic process, what about next time? What about the time after that? Right? You shouldn't win by cheating."

Later in the film (1:08:00), Kaiser, testifying to DCMS, sums up the problem this way: "The sole worth of Google and Facebook is the fact that they own and possess and hold and use the personal data from people all around the world." In this statement, she unknowingly confirms the prediction made by the veteran Australian privacy advocate Roger Clarke, who commented in a 2009 interview about his 2004 paper, Very Black "Little Black Books", warning about social networks and privacy: "The only logical business model is the value of consumers' data."

What he got wrong, he says now, was that he failed to appreciate the importance of micro-pricing, highlighted in 1999 by the economist Hal Varian. In his 2017 paper on the digital surveillance economy, Clarke explains the connection: large data profiles enable marketers to gauge the precise point at which buyers begin to resist and pitch their pricing just below it. With goods and services, this approach allows sellers to extract greater overall revenue from the market than pre-set pricing would; with politics, you're talking about a shift from public sector transparency to private sector black-box manipulation. Or, as someone puts it in The Great Hack, a "full-service propaganda machine". Load, aim at "persuadables", and set running.
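To make the mechanism concrete, here is a toy sketch in Python (all numbers invented for illustration; this is not Varian's or Clarke's actual model) comparing the revenue from a single posted price with what profile-driven micro-pricing extracts by charging each buyer just under their personal resistance point:

    # Toy illustration with invented numbers. Five buyers, each with a
    # private limit ("reservation price") above which they walk away.
    reservation_prices = [5.00, 8.00, 12.00, 20.00, 35.00]

    def fixed_price_revenue(price, buyers):
        # With one posted price, only buyers whose limit covers it purchase.
        return sum(price for limit in buyers if price <= limit)

    def profiled_revenue(buyers, margin=0.05):
        # A seller with detailed profiles pitches each buyer's price just
        # under that buyer's estimated limit.
        return sum(limit * (1 - margin) for limit in buyers)

    best_fixed = max(fixed_price_revenue(p, reservation_prices)
                     for p in reservation_prices)
    print(best_fixed)   # 40.0: the best single price (20.00) makes two sales
    print(round(profiled_revenue(reservation_prices), 2))   # 76.0

On these made-up figures, the profiled seller nearly doubles the take from the same five buyers, which is exactly the asymmetry Clarke describes.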

Less noticed than either of these is the Securities and Exchange Commission settlement with Facebook, also announced this week. While the fine is relatively modest - a mere $100 million - the SEC has nailed the company's conflicting statements. On Twitter, Jason Kint has helpfully highlighted the SEC's statements laying out the case that Facebook knew in 2016 that it had sold Cambridge Analytica some of the data underlying the 30 million personality profiles CA had compiled - and then "misled" both the US Congress and its own investors. Besides the fine, the SEC has permanently enjoined Facebook from further violations of the laws it broke in continuing to refer to actual risks as "hypothetical". The mills of trust have been grinding exceeding slow; they may yet grind exceeding small.


Illustrations: Data connections in The Great Hack.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 28, 2019

Failure to cooperate

In her 2015 Pulitzer Prize-winning play, Sweat, on display nightly in London's West End until mid-July, Lynn Nottage explores class and racial tensions in the impoverished, post-industrial town of Reading, PA. In scenes alternating between 2000 and 2008, she traces the personal-level effects of twin economic crashes, corporate outsourcing decisions, and tribalism: friends become opposing disputants; small disagreements become violent; and the prize for "winning" shrinks to scraps. Them who has, gets; and from them who have little, it is taken.

Throughout, you wish the characters would recognize their real enemies: the company whose steel tubing factory has employed them for decades, their short-sighted union, and a system that structurally short-changes them. The pain of the workers when they are locked out is that of an unwilling divorce, abruptly imposed.

The play's older characters, who would be in their mid-60s today, are of the age to have been taught that jobs were for life. They were promised pensions and could look forward to wage increases at a steady and predictable pace. None are wealthy, but in 2000 they are financially stable enough to plan vacations, and their children see summer jobs as a viable means of paying for college and climbing into a better future. The future, however, lies in the Spanish-language leaflets the company is distributing to frustrated immigrants the union has refused to admit and who will work for a quarter the price. Come 2008, the local bar is run by one of those immigrants, who of necessity caters to incoming hipsters. Next time you read an angry piece attacking Baby Boomers for wrecking the world, remember that it's a big demographic and only some were the destructors. *Some* Baby Boomers were born wreckage, some achieved it, and some had it thrust upon them.

We leave the characters there in 2008: hopeless, angry, and alienated. Nottage, who has a history of researching working class lives and the loss of heavy industry, does not go on to explore the inner workings of the "digital poorhouse" they're moving into. The phrase comes from Virginia Eubanks' 2018 book, Automating Inequality, which we unfortunately missed reviewing before now. If Nottage had pursued that line, she might have found what Eubanks finds: a punitive, intrusive, judgmental, and hostile benefits system. Those devastated factory workers must surely have done something wrong to deserve their plight.

Eubanks presents three case studies. In the first, struggling Indiana families navigate the state's new automated welfare system, a $1.3 billion, ten-year privatization effort led by IBM. Soon after its 2006 launch, it began sending tens of thousands of families notices of refusal on this Kafkaesque basis: "Failure to cooperate". Indiana eventually canceled IBM's contract, and the two have been suing each other ever since. Not represented in court is, as Eubanks says, the incalculable price paid in the lives of the humans the system spat out.

In the second, "coordinated entry" matches homeless Los Angelenos to available resources in order of vulnerability. The idea was that standardizing the intake process across all possible entryways would help the city reduce waste and become more efficient while reducing the numbers on Skid Row. The result, Eubanks finds, is an unpredictable system that mysteriously helps some and not others, and that ultimately fails to solve the underlying structural problem: there isn't enough affordable housing.

In the third, a Pennsylvania predictive system is intended to identify children at risk of abuse. Such systems are proliferating widely and controversially for varying purposes, and all raise concerns about fairness and transparency: custody decisions (Durham, England), gang membership and gun crime (Chicago and London), and identifying children who might be at risk (British local councils). All these systems gather and retain, perhaps permanently, huge amounts of highly intimate data about each family. The result in Pennsylvania was to deter families from asking for the help they're actually entitled to, lest they become targets to be watched. Some future day, those same records may pop when a hostile neighbor files a minor complaint, or haunt their now-grown children when raising their own children.

All these systems, Eubanks writes, could be designed to optimize access to benefits instead of optimizing for efficiency or detecting fraud. I'm less sanguine. In prior art, Danielle Citron has written about the difficulties of translating human law accurately into programming code, and the essayist Ellen Ullman warned in 1996 that even those with the best intentions eventually surrender to computer system imperatives of improving data quality, linking databases, and cross-checking, the bedrock of surveillance.

Eubanks repeatedly writes that middle class people would never put up with this level of intrusion. They may have no choice. As Sweat highlights, many people's options are shrinking. Refusal is only possible for those who can afford to buy their help, an option increasingly reserved for a privileged few. Poor people, Eubanks is frequently told, are the experimental models for surveillance that will eventually be applied to all of us.

In 2016, Cathy O'Neil argued in Weapons of Math Destruction that algorithmic systems can be designed for fairness. Eubanks' analysis suggests that view is overly optimistic: the underlying morality dates back centuries. Digitization has, however, exacerbated its effects, as Eubanks concludes. County poorhouse inmates at least had the community of shared experience. The digital poorhouse squashes and separates, leaving each individual to drink alone in that Reading bar.


Illustrations: Sweat's London production poster.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 17, 2019

Genomics snake oil

In 2011, as part of an investigation she conducted into the possible genetic origins of the streak of depression that ran through her family, the Danish neurobiologist Lone Frank had her genome sequenced and interviewed many participants in the newly-opening field of genomics that followed the first complete sequencing of the human genome. In her resulting book, My Beautiful Genome, she commented on the "Wild West" developing around retail genetic testing being offered to consumers over the web. Absurd claims such as using DNA testing to find your perfect mate or direct your child's education abounded.

This week, at an event organized by Breaking the Frame, New Zealand researcher Andelka M. Phillips presented the results of her ongoing study of the same landscape. The testing is just as unreliable, the claims even more absurd - choose your diet according to your DNA! find out what your superpower is! - and the number of companies she's collected has reached 289 while the cost of the tests has shrunk and the size of the databases has ballooned. Some of this stuff makes astrology look good.

To be perfectly clear: it's not, or not necessarily, the gene sequencing itself that's the problem. To be sure, the best lab cannot produce a reading that represents reality from poor-quality samples. And many samples are indeed poor, especially those snatched from bed sheets or excavated from garbage cans to send to sites promising surreptitious testing (I have verified these exist, but I refuse to link to them) to those who want to check whether their partner is unfaithful or whether their child is in fact a blood relative. But essentially, for health tests at least, everyone is using more or less the same technology for sequencing.

More crucial is the interpretation and analysis, as Helen Wallace, the executive director of GeneWatch UK, pointed out. For example, companies differ in how they identify geographical regions, how they frame populations, and the makeup of their databases of reference contributions. This is how a pair of identical Canadian twins got varying and non-matching test results from five companies, one Ashkenazi Jew got six different ancestry reports, and, according to one study, up to 40% of DNA results from consumer genetic tests are false positives. As I type, the UK Parliament is conducting an inquiry into commercial genomics.

Phillips makes the data available to anyone who wants to explore it. Meanwhile, so far she's examined the terms of service and privacy policies of 71 companies, and finds them filled with technology company-speak, not medical information. They do not explain these services' technical limitations or the risks involved. Yet it's so easy to think of disastrous scenarios: this week, an American gay couple reported that their second child's birthright citizenship is being denied under new State Department rules. A false DNA test could make a child stateless.

Breaking the Frame's organizer, Dave King, believes that a subtle consequence of the ancestry tests - the things everyone was quoting in 2018 that tell you that you're 13% German, 1% Somalian, and whatever else - is to reinforce the essentially racist notion that "Germanness" has a biological basis. He also particularly disliked the services claiming they can identify children's talents; these claim, as Phillips highlighted, that testing can save parents money they might otherwise waste on impossible dreams. That way lies Gattaca and generations of children who don't get to explore their own abilities because they've already been written off.

Even more disturbing questions surround what happens with these large databases of perfect identifiers. In the UK, last October the Department of Health and Social Care announced its ambition to sequence 5 million genomes. Included was the plan, beginning in 2019, to offer whole genome sequencing to all seriously ill children and adults with specific rare diseases or hard-to-treat cancers as part of their care. In other words, the most desperate people are being asked first, a prospect Phil Booth, coordinator of medConfidential, finds disquieting. As so much of this is still research, not medical care, he said, like the late, despised care.data, it "blurs the line around what is your data, and between what the NHS was and what some would like it to be". Exploitation of the nation's medical records as raw material for commercial purposes is not what anyone thought they were signing up for. And once you have that giant database of perfect identifiers...there's the Home Office, which has already been caught using the NHS to hunt illegal immigrants and DNA testing immigrants.

So Booth asked this: why now? Genetic sequencing is 20 years old, and to date it has yet to come close to being ready to produce the benefits predicted for it. We do not have personalized medicine, or, except in a very few cases (such as a percentage of breast cancer) drugs tailored to genetic makeup. "Why not wait until it's a better bet?" he asked. Instead of spending billions today - billions that, as an audience member pointed out, would produce better health more widely if spent on improving the environment, nutrition, and water - the proposal is to spend them on a technology that may still not be producing results 20 years from now. Why not wait, say, ten years and see if it's still worth doing?


Illustrations: DNA double helix (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 12, 2019

The Algernon problem

Last week we noted that it may be a sign of a maturing robotics industry that it's possible to have companies specializing in something as small as fingertips for a robot hand. This week, the workshop day kicking off this year's We Robot conference provides a different reason to think the same thing: more and more disciplines are finding their way to this cross-the-streams event. This year, joining engineers, computer scientists, lawyers, and the odd philosopher are sociologists, economists, and activists.

The result is oddly like a meeting of the Research Institute in Science of Cyber Security, where a large part of the point from the beginning has been that human factors and economics are as important to good security as technical knowledge. This was particularly true in the face-off between the economist Rob Seamans and the sociologist Beth Bechky, which pitted quantitative "things we can count" against qualitative "study the social structures" thinking. The range of disciplines needed to think about what used to be "computer" security keeps growing as the ways we use computers become more complex; robots are computer systems whose mechanical manifestations interact with humans. This move has to happen.

One sign is a change in language. Madeleine Elish, currently in the news for her newly published 2016 We Robot paper, Moral Crumple Zones, said she's trying to replace the term "deploying" with "integrating" for arriving technologies. "They are integrated into systems," she explained, "and when you say "integrate" it implies into what, with whom, and where." By contrast, "deployment" is military-speak, devoid of context. I like this idea, since by 2015 it was clear from a machine learning conference at the Royal Society that many had begun seeing robots as partners rather than replacements.

Later, three Japanese academics - the independent researcher Hideyuki Matsumi, Takayuki Kato, and Fumio Shimpo - tried to explain why Japanese people like robots so much - more, it seems, than "we" do (whoever "we" are). They suggested three theories: the influence of TV and manga; the influence of the mainstream Shinto religion, which sees a spirit in everything; and the Japanese government strategy to make the country a robotics powerhouse. The latter has produced a 356-page guideline for research development.

"Japanese people don't like to draw distinctions and place clear lines," Shinto said. "We think of AI as a friend, not an enemy, and we want to blur the lines." Shimpo had just said that even though he has two actual dogs he wants an Aibo. Kato dissented: "I personally don't like robots."

The MIT researcher Kate Darling, who studies human responses to robots, pointed to studies finding that autistic kids respond well to robots. "One theory is that they're social, but not too social." An experiment that placed these robots in homes for 30 days last summer had "stellar results". But: when the robots were removed at the end of the experiment, follow-up studies found that the kids were losing the skills the robots had brought them. The story evokes the 1959 Daniel Keyes story Flowers for Algernon, but then you have to ask: what were the skills? Did they matter to the children or just to the researchers, and how is "success" defined?

The opportunities anthropomorphization opens for manipulation are an issue everywhere. Woody Hartzog called the tendency to believe what the machine says "automation bias", but that understates the range of motivations: you may believe the machine because you like it, because it's manipulated you, or because you're working in a government benefits agency where you can't be sure you won't get fired if you defy the machine's decision. Would that everyone could see Bill Smart and Cindy Grimm follow up their presentation from last year to show: AI is just software; it doesn't "know" things; and it's the complexity that gets you. Smart hates the term "autonomous" for robots "because in robots it means deterministic software running on a computer. It's teleoperation via computer code."

This is the "fancy hammer" school of thinking about robots, and it can be quite valuable. Kevin Bankston soon demonstrated this: "Science fiction has trained us to worry about Skynet instead of housing discrimination, and expect individual saviors rather than communities working together to deal with community problems." AI is not taking our jobs; capitalists are using AI to take our jobs - a very different problem. As long as we see robots and AI as autonomous, we miss that instead they ares agents carrying out others' plans. This is a larger example of a pervasive problem with smartphones, social media sites, and platforms generally: they are designed to push us to forget the data-collecting, self-interested, manipulative behemoth behind them.

Returning to Elish's comment, we are one of the things robots integrate with. At the moment, this is taking the form of making random people research subjects: the pedestrian killed in Arizona by a supposedly self-driving car, the hapless prisoners whose parole is decided by it's-just-software, the people caught by the Metropolitan Police's staggeringly flawed facial recognition, the homeless people who feel threatened by security robots, the Caltrain passengers sharing a platform with an officious delivery robot. Did any of us ask to be experimented on?


Illustrations: Cliff Robertson in Charly, the movie version of "Flowers for Algernon".

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 15, 2019

Schrödinger's Brexit

"What's it like over there now?" American friends keep asking as the clock ticks down to midnight on March 29. Even American TV seems unusually interested: last week's Full Frontal with Samantha Bee had Amy Hoggart explain in detail; John Oliver made it a centerpiece two weeks ago, and US news outlets are giving it as much attention as if it were a US story. They're even - so cute! - trying to pronounce "Taoiseach". Everyone seems fascinated by the spectacle of the supposedly stoic, intellectual British holding meaningless "meaningful" votes and avoiding making any decisions that could cause anyone to lose face. So this is what it's like to live through a future line in the history books: other countries fret on your behalf while you're trying to get lunch.

In 14 days, Britain will either still be a member of the European Union or it won't. It will have a deal describing the future relationship or it won't. Ireland will be rediscovering civil war or it won't. In two months, we will be voting in the European Parliamentary elections as if nothing has happened, or we won't. All possible outcomes lead to protests in Parliament Square.

No one expects Britain to become Venezuela. But no one knows what will happen, either. We were more confident approaching Y2K. At least then you knew that thousands of people had put years of hard work into remediating the most important software that could fail. Here...in January, returning from CPDP and flowing seamlessly via Eurostar from Brussels to London, my exit into St Pancras station raised the question: is this the last time this journey will be so simple? Next trip, will there be Customs channels and visa checks? Where will they put them? There's no space.

A lot of the rhetoric both at the time of the 2016 vote and since has been around taking back control and sovereignty. That's not the Britain I remember from the 1970s, when the sense of a country recovering from the loss of its empire was palpable, middle class people had pay-as-you-go electric and gas meters, and the owner of a Glasgow fruit and vegetable shop stared at me when I asked for fresh garlic. In 1974, a British friend visiting an ordinary US town remarked, "You can tell there's a lot more money around in this country." And another, newly expatriate and struggling: "But at least we're eating real meat here." This is the pre-EU Britain I remember.

"I've worked for them, and I know how corrupt they are," a 70-something computer scientist said to me of the EU recently. She would, she said, "man the barriers" if withdrawal did not go through. We got interrupted before I could ask if she thought we were safer in the hands of the Parliament whose incompetence she had also just furiously condemned.

The country remains profoundly in disagreement. There may be as many definitions of "Brexit" as there are Leave voters. But the last three years have brought everyone together on one thing: no matter how they voted, where they're from, which party they support, or where they get their news, everyone thinks the political class has disgraced itself. Casually-met strangers laugh in disbelief at MPs' inability to put country before party or self-interest, or say things like "It's sickening". Even Wednesday's hair's-breadth vote taking No Deal off the table is absurd: the clock inexorably ticks toward exiting the EU with nothing unless someone takes positive action, either by revoking Article 50, or by asking for an extension, or by signing a deal. But action can get you killed politically. I've never cared for Theresa May, but she's prime minister because no one else was willing to take this on.

NB for the confused: in the UK "tabling a motion" means to put it up for discussion; in the US it means to drop it.

Quietly, people are making just-in-case preparations. One friend scheduled a doctor's appointment to ensure that he'd have in hand six months' worth of the medications he depends on. Others stockpile EU-sourced food items that may be scarce or massively more expensive. Anyone who can is applying for a passport from an EU country; many friends are scrambling to research their Irish grandparents and assemble documentation. So the people in the best position are the recent descendants of immigrants who would not now be welcome. It is unfair and ironic, and everyone knows it. A critical underlying issue, Danny Dorling and Sally Tomlinson write in their excellent and eye-opening Rule Britannia: Brexit and the End of Empire, is education that stresses the UK's "glorious" imperial past. Within the EU, they write, UK MEPs account for much of the extreme right, and the EU may be better off - more moderate, less prone to populism - without the UK, while British people may achieve a better understanding of their undistinguished place in the world. Ouch.

The EU has never seemed irrelevant to digital rights activists. Computers, freedom, and privacy - that is, "net.wars" - show the importance of the EU in our time, when the US refuses to regulate and the Internet is challenging national jurisdiction. International collaboration matters.

Just as I wrote that, Parliament finally voted to take the smallest possible action and ask the EU for a two-month extension. Schrödinger needs a bigger box.

Illustrations: "Big Ben" (Aldaron, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 8, 2019

Pivot

Would you buy a used social media platform from this man?

"As I think about the future of the internet, I believe a privacy-focused communications platform will become even more important than today's open platform," Mark Zuckerberg wrote this week at the Facebook blog, also summarized at the Guardian.

Zuckerberg goes on to compare Facebook and Instagram to "the digital equivalent of a town square".

So many errors, so little time. Neither Facebook nor Instagram is open. "Open information," Rufus Pollock explained last year in The Open Revolution, "...can be universally and freely used, built upon, and shared." By contrast, "In a Closed world information is exclusively 'owned' and controlled, its attendant wealth and power more and more concentrated".

The alphabet is open. I do not need a license from the Oxford English Dictionary to form words. The web is open (because Tim Berners-Lee made it so). One of the first social media, Usenet, is open. Particularly in the early 1990s, Usenet really was the Internet's town square.

*Facebook* is *closed*.

Sure, anyone can post - but only in the ways that Facebook permits. Running apps requires Facebook's authorization, and if Facebook makes changes, SOL. Had Zuckerberg said - as some have paraphrased him - "town hall", he'd still be wrong, but less so: even small town halls have metal detectors and guards to control what happens inside. However, they're publicly owned. Under the structure Zuckerberg devised when the company went public, even the shareholders have little control over Facebook's business decisions.

So, now: this week Zuckerberg announced a seeming change of direction for the service. Slate, the Guardian, and the Washington Post all find skepticism among privacy advocates that Facebook can change in any fundamental way, and they wonder about the impact on Facebook's business model of the shift to focusing on secure private messaging instead of the more public newsfeed. Facebook's former chief security officer Alex Stamos calls the announcement a "judo move" that both removes the privacy complaints (Facebook now can't read what you say to your friends) and allows the site to say that complaints about circulating fake news and terrorist content are outside its control (Facebook now can't read what you say to your friends *and* doesn't keep the data).

But here's the thing. Facebook is still proposing to unify the WhatsApp, Instagram, and Facebook user databases. Zuckerberg's stated intention is to build a single unified secure messaging system. In fact, as Alex Hern writes at the Guardian, that's the one concrete action Zuckerberg has committed to, and it was announced back in January, to immediate privacy queries from the EU.

The point that can't be stressed enough is that although Facebook is trading away the ability to look at the content of what people post, it will retain oversight of all the traffic data. We have known for decades that metadata is even more revealing than content; I remember the late Caspar Bowden explaining the issues in detail in 1999. Even if Facebook's promise to vape the messages extends to keeping no copies for itself (a stretch, given that we found out in 2013 that the company keeps every character you type), it will still be able to keep its insights into the connections between people and the conclusions it draws from them. Or, as Hern also writes, Zuckerberg "is offering privacy on Facebook, but not necessarily privacy from Facebook".
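
To make the metadata point concrete, here is a minimal sketch - hypothetical records and names, nothing to do with Facebook's actual systems. Even when every message body is unreadable, the traffic records alone - who wrote to whom, and when - expose the strength and rhythm of relationships:

```python
from collections import Counter
from datetime import datetime

# Hypothetical traffic records: sender, recipient, timestamp - no content at all.
records = [
    ("alice", "bob",   "2019-03-08 01:14"),
    ("alice", "bob",   "2019-03-09 01:02"),
    ("bob",   "alice", "2019-03-09 01:05"),
    ("alice", "carol", "2019-03-09 09:30"),
    ("alice", "bob",   "2019-03-10 00:58"),
]

# Message frequency per (unordered) pair already ranks the strongest ties.
pairs = Counter(frozenset((s, r)) for s, r, _ in records)

# Timing is metadata too: count messages exchanged in the small hours.
late_night = Counter(
    frozenset((s, r))
    for s, r, ts in records
    if datetime.strptime(ts, "%Y-%m-%d %H:%M").hour < 5
)

for pair, n in pairs.most_common():
    print(sorted(pair), "messages:", n, "late-night:", late_night[pair])
```

Nothing in this sketch reads a single message, yet it already suggests who matters most to whom and when they talk - the kind of inference Bowden spent years warning about.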

Siva Vaidhyanathan, author of Antisocial Media, seems to be the first to get this, and to point out that Facebook's supposed "pivot" is really just a decision to become more dominant, like China's WeChat. WeChat thoroughly dominates Chinese life: it provides messaging, payments, and a de facto identity system. This is where Vaidhyanathan believes Facebook wants to go, and if encrypting messages means it can't compete in China...well, WeChat already owns that market anyway. Let Google get the bad press.

Facebook is making a tradeoff. The merged database will give it the ability to inspect redundancy - are these two people connected on all three services or just one? - and therefore far greater certainty about which contacts really matter and to whom. The social graph that emerges from this exercise will be smaller because duplicates will have been merged, but far more accurate. The "pivot" does, however, look like it might enable Facebook to wriggle out from under some of its numerous problems - uh, "challenges". The calls for regulation and content moderation focus on the newsfeed. "We have no way to see the content people write privately to each other" ends both discussions, quite possibly along with any liability Facebook might have if the EU's copyright reform package passes with Article 11 (the "link tax") intact.
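
As a rough illustration of that redundancy check - hypothetical data and a deliberately crude merge, since Facebook's real pipeline is not public - combining per-service contact graphs and counting how many services each connection appears on separates the contacts that really matter from the one-offs:

```python
# Hypothetical per-service contact graphs, after identities have been matched.
graphs = {
    "facebook":  {("ann", "raj"), ("ann", "zoe")},
    "instagram": {("ann", "raj"), ("raj", "zoe")},
    "whatsapp":  {("ann", "raj")},
}

# Merge: key each connection by the pair, value by the services it appears on.
merged = {}
for service, edges in graphs.items():
    for a, b in edges:
        merged.setdefault(frozenset((a, b)), set()).add(service)

# A connection present on all three services is a far stronger signal than one.
for pair, services in sorted(merged.items(), key=lambda kv: -len(kv[1])):
    print(sorted(pair), "seen on:", sorted(services))
```

The merged graph here has three edges where the separate services held five entries between them: smaller, as argued above, but far more accurate about which relationships count.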

Even calls that the company should be broken up - appropriate enough, since the EU only approved Facebook's acquisition of WhatsApp when the company swore that merging the two databases was technically impossible - may founder against a unified database. Plus, as we know from this week's revelations, the politicians calling for regulation depend on it for re-election, and in private they accommodate it, as Carole Cadwalladr and Duncan Campbell write at the Guardian and Bill Goodwin writes at Computer Weekly.

Overall, then, no real change.


Illustrations: The international Parliamentary committee, with Mark Zuckerberg's empty seat.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

February 28, 2019

Systemic infection

"Can you keep a record of every key someone enters?"

This question brought author and essayist Ellen Ullman up short when she was still working as a software engineer and it was posed to her circa 1996. "Yes, there are ways to do that," she replied after a stunned pause.

In her 1997 book Close to the Machine, Ullman describes the incident as "the first time I saw a system infect its owner". After a little gentle probing, her questioner, the owner of a small insurance agency, explained that now that he had installed a new computer system he could find out what his assistant, who had worked for him for 26 years and had picked up his children from school when they were small, did all day. "The way I look at it," he explained, "I've just spent all this money on a system, and now I get to use it the way I'd like to."

Ullman appeared to have dissuaded this particular business owner on this particular occasion, but she went on to observe that over the years she saw the same pattern repeated many times. Sooner or later, someone always realizes that the systems they have commissioned for benign purposes can be turned to making checks and finding out things they couldn't know before. "There is something...in the formal logic of programs and data, that recreates the world in its own image," she concludes.

I was reminded of this recently when I saw a report at The Register that the US state of New Jersey, along with two dozen others, may soon require any contractor working on a contract worth more than $100,000 to install keylogging software to ensure that they're actually working all the hours - one imagines that eventually, it will be minutes - they bill for. Veteran reporter Thomas Claburn goes on to note that the text of the bill was provided by TransparentBusiness, a maker of remote-work management software - itself a growing trend.

Speaking as a taxpayer, I can see the point of ensuring that governments are getting full value for our money. But speaking as a freelance writer who occasionally has had to work on projects where I'm paid by the hour or day (a situation I've always tried to avoid by agreeing a rate for the whole job), the distrust inherent in such a system seems poisonous. Why are we hiring people we can't trust? Most of us who have taken on the risks of self-employment do so because one of the benefits is autonomy and a certain freedom from bosses. And now we're talking about the kind of intensive monitoring that in the past has been reserved for full-time employees - and that none of them have liked much either.

One of the first sectors that is already fighting its way through this kind of transition is trucking. In 2014, Cornell sociologist Karen Levy published the results of three years of research into the arrival of electronic monitoring into truckers' cabs as a response to safety concerns. For truckers, whose cabs are literally their part-time homes, electronic monitoring is highly intrusive; effectively, the trucking company is installing a camera and other sensors not just in their office but also in their living room and bedroom. Instead of using electronics to try to change unsafe practices, she argues, alter the economic incentives. In particular, she finds that the necessity of making a living at low per-mile rates pushes truckers to squeeze the unavoidable hours of unpaid work - waiting for loading and unloading, for example - into their statutory hours of "rest".

The result sounds like it would be familiar to Uber drivers or modern warehouse workers, even if Amazon never deploys the wristbands it patented in 2016. In an interview published this week, Data & Society Institute researcher Alex Rosenblat outlines the results of a four-year study of ride-hail drivers across the US and Canada. Forget the rhetoric that these drivers are entrepreneurs, she writes; they have a boss, and it's the company's algorithm, which dictates their on-the-job behavior and withholds the data they need to make informed decisions.

If we do nothing, this may be the future of all work. In a discussion last week, University of Leicester associate professor Phoebe Moore located "quantified work" at the intersection of two trends: first, the health-oriented quantified-self movement, and second, the succeeding waves of workplace management from industrialization through time and motion study, scientific management, and today's organizational culture, where, as Moore put it, we're supposed to "love our jobs and identify with our employer". The first of these has led to "wellness" programs that, particularly in the US, helped grant employers access to vastly more detailed personal data about their employees than has ever been available to them before.

Quantification, the combination of the two trends, Moore warns at Medium, will alter the workplace's social values by tending to pit workers against each other, racetrack style. Vendors now claim predictive power for AI: which prospective employees fit which jobs, or when staff may be about to quit or take sick leave. One can, as Moore does, easily imagine that, despite the improvements AI can bring, the AI-quantified workplace will be intensely worker-hostile. The infection continues to spread.


Illustrations: HAL, from 2001: A Space Odyssey (1968).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.