net.wars: October 2020 Archives


October 23, 2020

The Amazing

"It's my birthday tomorrow," I said to my friend Bill Steele in early 1982. "Let's do something new and different."

"I can't," he said. "I have to go to this lecture-demonstration on psychic surgery." He was writing about it for one of Cornell's publications.

"That sounds new and different. Can I come?"

The lecturer-demonstrator was, of course, the paranormal investigator James "The Amazing" Randi, who died Wednesday. He was 92, and had survived cancer, bypass surgery, and a stroke, but it still seems unbelievable that his enormous energy and enduring curiosity could be permanently quieted. That first sighting had probably 1,000 people packed into Cornell's Statler auditorium; the last time I saw him, in 2016, he was telling debunking stories to a tiny group of Florida skeptics near his home, with equal enthusiasm both times.

His curiosity was endless and comprehensive, even when he was mid-controversy over climate change. Do these two differently-shaped glasses hold the same volume of liquid? Why did his refrigerator light stay off every Saturday?

Countless people say that seeing Randi speak launched them into skepticism. I was certainly one of them; Randi permanently altered parts of how I see the world. Others, just at a quick glance: Chris French, Richard Wiseman, Edzard Ernst, and Penn Jillette.

It was Martin Gardner's presence among the founders of the Committee for Skeptical Inquiry that validated the skeptics for me, but Randi was skepticism's secret sauce, upending claims so you could see their fatal flaws and offering openings for "citizen science" long before it was fashionable. One of his most important lessons was the importance of a background in deception when approaching paranormal claims. "Extraordinary claims require extraordinary proof," he often said, and he proved over and over that scientists have trouble doubting "extraordinary" when it's shown to them in their own lab, where they think they're in charge and where their test tubes do not lie to them. In 1988, he teamed up with Nature editor John Maddox to examine an apparently successful homeopathy experiment.

Most of us skeptics didn't meet Randi until long after his exploits as an escape artist made him famous. Every so often, though, I'd get a sense of his earlier life building his career. In one story I remember, a magician friend touring on a shoestring (so like the folk scene) would call himself at his agent's number, person-to-person collect, to get his itinerary. The agent would answer with something like, "He's not here, but you can reach him at the . He'll be staying on the fourth floor." Free call! The floor number was really the fee. When, on one occasion, the magician asked if he couldn't get something higher up, the agent said, "You can tell the gentleman in question that he's very lucky to be above the first floor."

One of those reminiscences taught me that I probably encountered Randi much earlier than I'd realized. As an insomniac teenager in the late 1960s, I used to listen to the radio - Long John Nebel's overnight talk show on New York's WOR. Randi was a frequent caller, though the only guest I know for certain I heard is the folksinger Michael Cooney.

I also remember Randi saying he got on Johnny Carson so often because he'd call up around the time they'd be trying to fill the show, with a suggestion like, "How would you like to freeze me in a block of ice?" Carson's own background in magic made him insistent that anyone claiming psychic powers on his show had better be genuine, a principle Randi was happy to help enforce, as Uri Geller discovered. Challenging Geller, who for a time was the world's most famous psychic, became something of an obsession for Randi - and another book, The Truth About Uri Geller.

One of his most stunning Carson appearances was his 1986 expose of the televangelist faith healer Peter Popoff - also later a book, The Faith Healers. Popoff claimed that God spoke to him, directing him to call out specific audience members, their addresses, and their ailments. Randi enlisted Alexander Jason to find the radio frequency Popoff's wife was using from off-stage to feed him those details - which she had collected in pre-show chats - through his earpiece. That year Randi won a MacArthur award.

Every skeptic has their own motivations. Randi was driven both by a general love of truth and by a specific fury at seeing people cheated, deceived, and defrauded. The Popoff demonstration was great entertainment - but Popoff's operation was deadly: at the end of each show there would be a huge bag of life-saving medications to dispose of, thrown away by audience members at Popoff's urging. Randi's long-running challenge - first $1,000, then $10,000, finally $1 million - to anyone who could demonstrate paranormal abilities under proper observing conditions drew many hopefuls but no winners.

If the many obits are the first time you've encountered Randi, I'd suggest you start by reading Flim-Flam!. The James Randi Educational Foundation has a YouTube channel with many videos showing off different aspects of his and others' work. And the 2014 documentary An Honest Liar gives a full and entertaining account of his life.

As the news broke, the science writer Charles Arthur quipped on Twitter, "The worst part is that now he can't haunt Uri Geller, because he didn't believe in ghosts." True. Even Randi has his limits. Had. Damn it. But it turned out, even there he had a plan.


Illustrations: James Randi's home library (there's a hidden skeleton behind the glass case); Randi in his library in 2016 showing off a trick he'd invented.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 16, 2020

The rights stuff

It took a lake to show up the fatuousness of the idea of granting robots legal personality rights.

The story, which the AI policy expert Joanna Bryson highlighted on Twitter, goes like this: in February 2019 a small group of people, frustrated by their inability to reduce local water pollution, successfully spearheaded a proposition in Toledo, Ohio that created the Lake Erie Bill of Rights. Its history since has been rocky. In February 2020, a farmer sued the city and a US district judge invalidated the bill. This week, a three-judge panel from Ohio's Sixth District Court of Appeals ruled that the February judge had made a mistake. For now, the lake still has its rights. Just.

We will leave aside the question of whether giving rights to lakes and the other ecosystems listed in the above-linked Vox article is an effective means of environmental protection. But given that the idea of giving robots rights keeps coming up - the EU is toying with the possibility - it seems worth teasing out the difference.

In response to Bryson, Nicholas Bohm noted the difference between legal standing and personality rights. The General Data Protection Regulation, for example, grants legal standing in two new ways: collective action and civil society representing individuals seeking redress. Conversely, even the most-empowered human often lacks legal standing; my outrage that a brick fell on your head from the top of a nearby building does not give me the right to sue the building's owner on your behalf.

Rights as a person, however, would allow the brick to sue on its own behalf for the damage done to it by landing on a misplaced human. We award that type of legal personhood to quite a few things that aren't people - corporations, most notoriously. In India, idols have such rights, and Bohm cites a case in which the trustee of a temple, because the idol they represented had these rights in India, was allowed to join a case claiming improper removal in England.

Or, as Bohm put it more succinctly, "Legal personality is about what you are; standing is about what it's your business to mind."

So if lakes, rivers, forests, and idols, why not robots? The answer lies in what these things represent. The lakes, rivers, and forests on whose behalf people seek protection were not human-made; they are parts of the larger ecosystem that supports us all, and most intimately the people who live on their banks and verges. The Toledoans who proposed granting legal rights to Lake Erie were looking for a way to force municipal action over the lake's pollution, which was harming them and all the rest of the ecosystem the lake feeds. At the bottom of the lake's rights, in other words, are humans in existential distress. Granting the lake rights is a way of empowering the humans who depend on it. In that sense, even though the Indian idols are, like robots, human-made, giving them personality rights enables action to be taken on behalf of the human community for whom they have significance. Granting the rights does not require either the lake or the idol to possess any form of consciousness.

In a paper to which Bryson linked, S.G. Solaiman argues that animals don't qualify for rights, even though they have some consciousness, because a legal personality must be able to "enjoy rights and discharge duties". The Smithsonian National Zoo's giant panda, who has been diligently caring for her new cub for the last two months, is not doing so out of legal obligation.

Nothing like any of this can be said of rights for robots, certainly not now and most likely not for a long time into the future, if ever. Discussions such as David Gunkel's How to Survive a Robot Invasion, which compactly summarizes the pros and cons, generally assume that robots will only qualify for rights after a certain threshold of intelligent consciousness has been met. Giving robots rights in order to enable suffering humans to seek redress does not come up at all, even when the robots' owners hold funerals because the manufacturer has discontinued the product. Those discussions rightly focus on manufacturer liability.

In the 2015 British TV series Humans (a remake of the 2012 Swedish series Äkta människor), an elderly Alzheimer's patient (William Hurt) is enormously distressed when his old-model carer robot is removed, taking with it the only repository of his personal memories, which he can no longer recall unaided. It is not necessary to give the robot the right to sue to protect the human it serves, since family or health workers could act on his behalf. The problem in this case is an uncaring state.

The broader point, as Bryson wrote on Twitter, is that while lakes are unique and can be irreparably damaged, digital technology - including robots - "is typically built to be fungible and upgradeable". Right: a compassionate state merely needs to transfer George's memories into a new model. In a 2016 blog posting, Bryson also argues against another commonly raised point, which is whether the *robots* suffer: if designers can install suffering as a feature, they can take it out again.

So, the tl;dr: sorry, robots.


Illustrations: George (William Hurt) and his carer "synth", in Humans.


October 9, 2020

Incoming

This week saw the Antitrust Subcommittee of the (US) House Judiciary Committee release the 449-page report (PDF) on its 16-month investigation into Google, Apple, Facebook, and Amazon - GAFA, as we may know them. Or, if some of the recommendations in this report get implemented, *knew* them. The committee has yet to vote on the report, and the Republican members have yet to endorse it. So this is very much a Democrats' report...but depending how things go over the next month, come January that may be sufficient to ensure action.

At BIG, Matt Stoller has posted a useful and thorough summary. As he writes, the subcommittee focused on a relatively new idea of "gatekeeper power", which each of the four exercises in its own way (app stores, maps, search, phone operating systems, personal connections), and each of which is aided by its ability to surveil the entirety of the market and undermine current and potential rivals. It also attacks the agencies tasked with enforcing the antitrust laws for permitting the companies to make some 500 acquisitions. The resulting recommendations fall into three main categories: restoring competition in the digital economy, strengthening the antitrust laws, and reviving antitrust enforcement.

In a discussion while the report was still just a rumor, a group of industry old-timers seemed dismayed at the thought of breaking up these companies. A major concern was the impact on research. The three great American corporate labs of the 1950s to 1980s were AT&T's Bell Labs, Xerox PARC, and IBM's Watson. All did basic research, developing foundational ideas for decades to come but that might never provide profits for the company itself. The 1984 AT&T breakup effectively killed Bell Labs. Xerox famously lost out on the computer market. IBM redirected its research priorities toward product development. GAFA and Microsoft operate substantial research labs today, but they are more focused on the technologies, such as AI and robotics, that they envision as their own future.

The AT&T case is especially interesting. Would the Internet have disrupted AT&T's business even without the antitrust case, or would AT&T, kept whole, have been able to use its monopoly power to block the growth of the Internet? Around the same time, European countries were deliberately encouraging competition by ending the monopolies of their legacy state telcos. Without that - or with AT&T left intact - anyone wanting to use the arriving Internet would have been paying a small fortune to the telcos just to buy a modem to access it with. Even as it was, the telcos saw Voice over IP as a threat to their lucrative long distance business, and it was only network neutrality that kept them from suppressing it. Today, Zoom-like technology might be available, but likely out of reach for most of us.

The subcommittee's enlistment of Lina Khan as counsel suggests GAFA had this date from the beginning. Khan made waves while still a law student by writing a lengthy treatise on Amazon's monopoly power and its lessons for reforming antitrust law, back when most of us still thought Amazon was largely benign. One of her major points was that much opposition to antitrust enforcement in the technology industry is based on the idea that every large company is always precariously balanced because at any time, a couple of guys in a garage could be inventing the technology that will make it obsolete. Khan argued that this is no longer true, partly because those two garage guys were enabled by antitrust enforcement that largely ceased after the 1980s, and partly because GAFA are so powerful that few start-ups can find funding to compete with them directly, and rich enough to buy and absorb or shut down anyone who tries. The report, like the hearings, notes the fear of reprisal among business owners asked for their experiences, as well as the disdain with which these companies - particularly Facebook - have treated regulators. All four companies have been repeat offenders, apparently not inspired to change their behavior by even the largest fines.

Stoller thinks that we may now see real action because our norms have shifted. In 2011, admiration for monopolists was so widespread, he writes, that Occupy Wall Street honored Steve Jobs' death, whereas today US and EU politicians of all stripes are targeting monopoly power and intermediary liability. Stoller doesn't speculate about causes, but we can think of several: the rapid post-2010 escalation of social media and smartphones; Snowden's 2013 revelations; the Cambridge Analytica scandal, which broke in 2018; and the widespread recognition that, as Kashmir Hill found, it's incredibly difficult to extricate yourself from these systems once you are embedded in them. Other small things have added up, too, such as Mark Zuckerberg's refusal to appear in front of a grand committee assembled by nine nations.

Put more simply, ten years ago GAFA and other platforms and monopolists made the economy look good. Today, the costs they impose on the rest of society - precarious employment, lost privacy, a badly damaged media ecosystem, and the difficulty of containing not just misinformation but anti-science - are clearly visible. This, too, is a trend that the pandemic has accelerated and exposed. When the cost of your doing business is measured in human deaths, people start paying attention pretty quickly. You should have paid your taxes, guys.


Illustrations: The fifth element breaks up the approaching evil in The Fifth Element.


October 2, 2020

Searching for context

It's meant, I think, to be a horror movie. Unfortunately, Jeff Orlowski's The Social Dilemma comes across as too impressed with itself to scare as thoroughly as it would like.

The plot, such as it is: a group of Silicon Valley techies who have worked on Google, Facebook, Instagram, Palm (!), and so on present mea culpas. "I was co-inventor...of the Like button," Tristan Harris says by way of introduction. It seems such a small thing to include. I'm sure it wasn't that easy, but Slashdot was upvoting messages when Mark Zuckerberg was 14. The techies' thoughts are interspersed with those of outside critics. Intermittently, the film inserts illustrative scenarios using actors, a technique better handled in The Big Short. In these, Vincent Kartheiser plays a multiplicity of evil algorithmic masterminds doing their best to exploit their target, a fictional teenage boy (Skyler Gisondo) who has accepted the challenge of giving up his phone for a week with the predictable results of an addiction film. As he becomes paler and sweatier, you expect him to crash out in a grotty public toilet, like Julia Ormond's character in Traffik. Instead, he face-plants when the police arrest him at Charlottesville.

The first half of the movie is predominantly a compilation of favorite social media nightmares: teens are increasingly suffering from depression and other mental health issues; phone addiction is a serious problem; we are losing human connection; and so on. As so often, causality is unclear. The fact that these Silicon Valley types consciously sought to build addictive personal tracking and data crunching systems and change the world does not automatically tie every social problem to their products.

I say this because so much of this has a long history the movie needs for context. The too-much-screen-time of my childhood was TV, though my (older) parents worried far more about the intelligence-drainage perpetrated by comic books. Girls who now seek cosmetic surgery in order to look more like filter-enhanced Instagram images were preceded by girls who starved themselves to look like air-brushed, perfect models in teen magazines. Today's depressed girls could have been those profiled in Mary Pipher's 1994 Reviving Ophelia, and she, too, had forerunners. Claims about Internet addiction go back more than 20 years, and until very recently were focused on gaming. Finally, though data does show that teens are going out less, are less interested in learning to drive, and are having less sex and using fewer drugs, is social media the cause or the compensation for a coincidental overall loss of physical freedom? Even pre-covid they were growing up into a precarious job market and a badly damaged planet; depression might just be the sane response.

In the second half the film moves on to consider social media divisions as an assault on democracy. Here, it's on firmer ground, but really only because the much better film The Great Hack has already exposed how Facebook (in particular) was used to spark violence and sway elections even before 2016. And then it wraps up: people are trapped, the companies have no incentive to change, and (says Jaron Lanier) the planet will die. As solutions, the film's many spokespeople suggest familiar ideas: regulation, taxation, withdrawal. Shoshana Zuboff is the most radical: outlaw them. (Please don't take Twitter! I learn so much from Twitter!)

"We are allowing technologists to frame this as a problem that they are equipped to solve," says data scientist Cathy O'Neil. "That's a lie." She goes on to say that AI can't distinguish truth. Even if it could, truth is not part of the owners' business model.

Fair enough, but remove Facebook and YouTube, and you still have Fox News, OANN, and the Daily Mail inciting anger and division with expertise honed over a century of journalistic training - and amoral world leaders. This week, a study from Cornell University found that Donald Trump is implicated in 38% of the coronavirus misinformation circulating in online and traditional media. Knock out a few social media sites...and that still won't change, because his pulpit is too powerful.

Most of the film's speakers eventually close by recommending we delete our social media accounts. It seems a weak response, in part because the movie does a poor job of disentangling the dangers of algorithmic manipulation from the myriad different reasons why people use phones and social media: they listen to music, watch TV, connect with their friends, play games, take pictures, and navigate unfamiliar locations. It's absurd to ask them to give that up without suggesting alternatives for fulfilling those functions.

A better answer may be that offered this week by the 25-odd experts who have formed an independent Facebook oversight board (the actual oversight board Facebook announced months ago is still being set up and won't begin meeting until after the US presidential election). The expertise assembled is truly impressive, and I hope that, like the Independent SAGE group of scientists who have been pressuring the UK government into doing a better job on coronavirus, they will have a mind-focusing effect on our Facebook overlords, perhaps later to be copied for other sites. The problem - an aspect also omitted from The Social Dilemma - is that under the company's shareholder structure Zuckerberg is under no requirement to listen.


Illustrations: Skyler Gisondo as Ben, in The Social Dilemma.
