net.wars: January 2022 Archives


January 28, 2022

The user in charge

Last week, we learned that last October prosecutors in California filed two charges of vehicular manslaughter against the driver of a Tesla on Autopilot who ran a red light in 2019, hit another car, and killed two people.

As they say, we've had this date since the beginning.

Meanwhile, in the UK the Law Commission is trying to get ahead of the market by releasing a set of proposals covering liability for automated driving. Essentially, its report concludes that when automation is driving a car, liability for accidents and dangerous driving should shift to the "Authorized Self-Driving Entity" - that is, the company that was granted the authorization to operate on public roads. Perfectly logical; what's jarring is the report's linguistic shift that turns "drivers" into "users", with all the loss of agency that implies. Of course, that's always been the point, particularly for those who contend that automated driving will (eventually) be far safer. Still.

Ever since the first projects developing self-driving cars appeared, there have been questions about how liability would be assigned. The fantasy promulgated in science fiction and Silicon Valley is that someday cars will reach Level 5 automation, in which human intervention no longer exists. Ten years ago, when many people thought it was just a few years away, there were serious discussions about whether these future cars should have steering wheels when they could have bedroom-style accommodation instead. You also heard a lot about improving accessibility and inclusion for those currently barred from driving because of visual impairment, childhood, or blood alcohol level. And other benefits were mooted: less congestion, less pollution, better use of resources through sharing. And think, Clive Thompson wrote at Mother Jones in 2016, of all the urban parking space we could reclaim for cyclists, pedestrians, and parks.

In 2018, Christian Wolmar argued that a) driverless cars were far away, and b) they wouldn't deliver the benefits hypesters were predicting. Last year, he added that self-driving cars won't solve the problems people care about, like congestion and pollution, that drivers will become deskilled, and that shared use is not going to be a winning argument. I agree with most of this. For example, if we take out all the parking, won't congestion increase as the cars ferry themselves back home after dropping off their owners, to wait there for the end of the day?

So far, Wolmar appears to have been right. Several of the best-known initiatives have either closed down or been sold, and the big trend is consolidation into the hands of large companies that can afford to invest and wait. Full automation seems as far away as ever.

Instead, we are mired in what everyone eventually agreed would be the most dangerous period in the shift to automated driving: the years or decades of partial and inconsistent automation. As the Tesla crash shows, humans overestimating their cars' capabilities is one problem. A second is the predictability gap between humans and AIs. As humans ourselves, we're pretty good at guessing how other human drivers will likely behave. We all tend to put distance between ourselves and cars with lots of dents and damage or cars exhibiting erratic behavior, and pedestrians learn young to estimate the speed at which a car is approaching in order to decide whether it's safe to cross the street. We do not have the same insight into how a self-driving car is programmed to behave - and we do not appear predictable to its systems. One bit of complexity I imagine will increasingly matter is that the car's sensors will react to differences we can't perceive.

At the 2016 We Robot conference, Madeleine Clare Elish introduced the idea of moral crumple zones. In a hybrid AI-human system, she argued, the blame when anything goes wrong will be assigned to the human element. The Tesla Autopilot crash we began with is a perfect example, and inevitable under current US law: the US National Highway Traffic Safety Administration holds that the human in charge of the car is always responsible. Since a 2018 crash, Tesla has reportedly tried to make it clearer to customers that even its most sophisticated cars cannot drive themselves, and, according to the Associated Press, updated its software to make it "harder for drivers to abuse it".

Pause for bafflement. What does "abuse" mean in that sentence? That a driver expects something called "Autopilot" to...drive the car? It doesn't help the accuracy of people's perceptions of their car's capabilities that in December Tesla decided to add a gaming console to its in-car display. Following an announcement by the US National Highway Traffic Safety Administration that it would investigate, Tesla is updating the software so that the gaming feature locks when the car is moving. Shouldn't the distraction potential have been obvious? That's Theranos-level recklessness.

This is where the Law Commission's report makes a lot of sense. It pins the liability squarely on the ASDE for things like misleading marketing, and it sets requirements for handling transitions to human drivers, the difficulty of which was so elegantly explored in Dexter Palmer's Version Control. The user in charge is still responsible for things like insurance and getting kids into seatbelts. The proposals will now be considered by the UK's national governments.

Illustrations: Dominic Wilcox's concept driverless car.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 21, 2022

Power plays

We are still catching up on updates and trends.

Two days before the self-imposed deadline, someone blinked in the game of financial chicken between Amazon UK and Visa. We don't know which one it was, but on January 17 Amazon said it wouldn't stop accepting Visa credit cards after all. Negotiations are reportedly ongoing.

Ostensibly, the dispute was about the size of Visa's transaction fees. At Quartz, Ananya Bhattacharya quotes Banked.com's Ben Goodall's alternative explanation: the dispute allowed Amazon to suck up a load of new data that will help it build "the super checkout for the future". For Visa, she concludes, resolving the dispute has relatively little value beyond PR: Amazon accounts for only 1% of its UK credit card volume. For the rest of us, it remains disturbing that our interests matter so little. If you want proof of market dominance, look no further.

In June 2021, the Federal Trade Commission tried to bring an antitrust suit against Facebook, and failed when the court ruled that in its complaint the FTC had failed to prove its most basic assumption: that Facebook had a dominant market position. Facebook was awarded the dismissal it requested. This week, however, the same judge ruled that the FTC's amended complaint, which was filed in August, will be allowed to go ahead, though he suggests in his opinion that the FTC will struggle to substantiate some of its claims. Essentially, the FTC accuses Facebook of a "buy or bury" policy when faced with a new and innovative competitor, and says the company needed to make up for its own inability to adapt to the mobile world.

We will know if Facebook (or its newly-renamed holding company owner, Meta) is worried if it starts claiming that damaging the company is bad for America. This approach began as satire, Robert Heller explained in his 1994 book The Fate of IBM. Heller cites a 1990 PC Magazine column by William E. Zachmann, who used it as the last step in an escalating list of how the "IBMpire" would respond to antitrust allegations.

This week, Google came close to a real-life copy in a blog posting opposing an amendment to the antitrust bill currently going through the US Congress. The goal behind the bill is to make it easier for smaller companies to compete by prohibiting the major platforms from advantaging their own products and services. Google argues, however, that if the bill goes through Americans might get worse service from Google's products, American technology companies could be placed at a competitive disadvantage, and America's national security could be threatened. Instead of suggesting ways to improve the bill, however, Google concludes with the advice that Congress should delay the whole thing.

To be fair, Google isn't the only one that dislikes the bill. Apple argues its provisions might make it harder for users to opt out of unwanted monitoring. Free Press Action argues that it will make it harder to combat online misinformation and hate speech by banning the platforms from "discriminating" against "similarly situated businesses" (the bill's language), competitor or not. EFF, on the other hand, thinks copyright is a bigger competition issue. All better points than Google's.

A secondary concern is the fact that these US actions are likely to leave the technology companies untouched in the rest of the world. In Africa, Nesrine Malik writes at the Guardian, Facebook is indispensable and the only Internet most people know because its zero-rating allows its free use outside of (expensive) data plans. Most African Internet users are mobile-only, and most data users are on pay-as-you-go plans. So while Westerners deleting their accounts is a real threat to the company's future - not least because, as Frances Haugen testified, they produce the most revenue - the company owns the market in Africa. There, it is literally the only game in town for both businesses and individuals. Twenty-five years ago, we thought the Internet would be a vehicle for exporting the First Amendment. Instead...

Much of the discussion about online misinformation focuses on content moderation. In a new report the Royal Society asks how to create a better information environment. Despite its harm, the report comes down against simply removing scientific misinformation. Like Charles Arthur in his 2021 book Social Warming, the report's authors argue for slowing the spread by various methods - adding friction to social media sharing, reconfiguring algorithms, in a few cases de-platforming superspreaders. I like the scientists' conclusion that simple removal doesn't work; in science you must show your work, and deletion fuels conspiracy theories. During this pandemic, Twitter has been spectacular at making it possible to watch scientists grapple with uncertainty in real time.

The report also disputes some of our longstanding ideas about how online interaction works. A literature review finds that the filter bubbles and echo chambers Eli Pariser posited in 2011 are less important than we generally think. Instead most people have "relatively diverse media diets" and the minority who "inhabit politically partisan online news echo chambers" is about 6% to 8% of users.

Keeping it that way, however, depends on having choices, which leads back to these antitrust cases. The bigger and more powerful the platforms are, the less we - as both individuals and societies - matter to them.

Illustrations: The Thames at an unusually quiet moment, in January 2022.


January 14, 2022

The visible computer

I have a friend I would like to lend to anyone who thinks computers have gotten easier in the last 30 years.

The other evening, he asked how to host a Zoom conference. At the time, we were *in* a Zoom call, and I've seen him on many others, so he seemed competent enough.

"Do you have a Zoom account?" I said.

"How do I get that?"

I directed him to the website. No, not the window with our faces; that's the client. "Open up - what web browser do you use?"

"Er...Windows 10?"

"That's the computer's operating system. What do you use to go to a website?"


Did he know how to press ALT-TAB to see the open windows on his system? He did not. Not even after instruction.

But eventually he found the browser, Zoom's website, and the "Join" menu item. He created a password. The password didn't work. (No idea.) He tried to reset the password. More trouble. He decided to finish it later...

To be fair, computers *have* gotten easier. On a 1992 computer, I would have had to write my friend a list of commands to install the software, and he'd have had to type them perfectly every time and learn new commands for each program's individual interface. But the comparative ease of use of today's machines is more than offset by the increased complexity of what we're doing with them. It would never have occurred to my friend even two years ago that he could garnish his computer with a webcam and host video chats around the world.

I was reminded of this during a talk on new threats to privacy that touched on ubiquitous computing and referenced the 1991 paper The Computer for the 21st Century, by Mark Weiser, then head of the famed Xerox PARC research lab.

Weiser imagined the computer would become invisible, a theme also picked up by Donald Norman in his 1998 book, The Invisible Computer. "Invisible" here means we stop seeing it, even though it's everywhere around us. Both Weiser and Norman cited electric motors, which began as large power devices to which you attached things, and then disappeared inside thousands of small and large appliances. When computers are everywhere, they will stop commanding our attention (except when they go wrong, of course). Out of sight, out of mind - but in constant sight also means out of mind because our brains filter out normal background conditions to focus on the exceptional.

Weiser's group built three examples, which they called tabs (inch-scale), pads (foot-scale), and boards (yard-scale). His tabs sound rather like today's tracking tags. Like the Active Badges at Olivetti Research in Cambridge they copied (the privacy implications of which horrified the press at the time), they could be used to track people and things, direct calls, automate diary-keeping, and make presentations and research portable throughout the networked area. In 2013, when British journalist Simon Bisson revisited this same paper, he read them more broadly as sensors and effectuators. Pads, in Weiser's conception, were computerized sheets of "scrap" paper to be grabbed and used anywhere and left behind for the next person. Weiser called them an "antidote to windows", in that instead of cramming all programs into a window you could spread dozens of pads across a full-sized desk (or floor) to work with. Boards were displays, more like bulletin boards, that could be written on with electronic "chalk" and shared across rooms.

"The real power of the concept comes not from any one of these devices; it emerges from the interaction of all of them," Weiser wrote.

In 2013, Bisson suggested Weiser's "embodied virtuality" was taking shape around us as sensors began enabling the Internet of Things and smartphones became the dominant interface to the Internet. But I like Weiser's imagined 21st century computing better than what we actually have. While cloud services can make our devices more or less interchangeable as long as we have the right credentials, that only works if broadband is uninterruptedly reliable. But even then, has anyone lost awareness of the computer - phone - in their hand or the laptop on their desk? Compare today to what Weiser thought would be the case 20 years later - which would have been 2011:

Most important, ubiquitous computers will help overcome the problem of information overload. There is more information available at our fingertips during a walk in the woods than in any computer system, yet people find a walk among trees relaxing and computers frustrating. Machines that fit the human environment, instead of forcing humans to enter theirs, will make using a computer as refreshing as taking a walk in the woods.

Who feels like that? Certainly not the friend we began with. Even my computer expert friends seem one and all convinced that their computers hate them. People in search of relaxation watch TV (granted, maybe on a computer), play guitar (even if badly), have a drink, hang with friends and family, play a game (again, maybe on a computer), work out, take a bath. In fact, the first thing people do when they want to relax is flee their computers and the prying interests that use them to spy on us. Worse, we no longer aspire to anything better. Those aspirations have all been lost to A/B testing to identify the most profitable design.

Illustrations: Windows XP's hillside wallpaper (via Wikimedia).


January 7, 2022


We start 2022 with some catch-ups.

On Tuesday, the verdict came down in the trial of Theranos founder Elizabeth Holmes: guilty on four counts of wire fraud, acquitted on four counts, jury hung on three. The judge said he would call a mistrial on those three, but given that Holmes will already go to prison, expectations are that there will be no retrial.

The sad fact is that the counts on which Holmes was acquitted were those regarding fraud against patients. While investment fraud should be punished, the patients were the people most harmed by Theranos' false claims to be able to perform multiple accurate tests on very small blood samples. The investors whose losses saw Holmes found guilty could by and large afford them (though that's no justification). I know the $350 million collectively lost by Trump education secretary Betsy DeVos, Rupert Murdoch, and the Cox family is a lot of money, but it's a vanishingly tiny percentage of their overall wealth (which may help explain DeVos family investment manager Lisa Peterson's startlingly casual approach to research). By contrast, for a woman who's already had three miscarriages, the distress of being told she's losing a fourth, despite the eventual happy ending, is vastly more significant.

I don't think this case by itself will make a massive difference in Silicon Valley's culture, despite Holmes's prison sentence - how much did bankers change after the 2008 financial crisis? Yet we really do need the case to make a substantial difference in how regulators approach diagnostic devices, as well as other cyber-physical hybrid offerings, so that future patients don't become experimental subjects for the unscrupulous.


On New Year's Eve, Mozilla, the most important browser that only 3% of the market uses, reminded people it accepts donations in cryptocurrencies through Bitpay. The message set off an immediate storm, not least among two of the organization's co-founders, one of whom, Jamie Zawinski, tweeted that everyone involved in the decision should be "witheringly ashamed". At The Register, Liam Proven points out that it's not new for Mozilla to accept cryptocurrencies; it's just changed payment providers.

One reason to pay attention to this little fiasco: Mozilla (like other Internet-related non-profits and open software projects) appeals greatly to people who care about the environment, believe that cryptocurrency mining is wasteful and energy-intensive, and deplore the anti-government rhetoric of its most vocal libertarian promoters. Yet the richest people willing to donate to such projects are often those libertarians. Trying to keep both onside is going to become increasingly difficult. Mozilla has now suspended its acceptance of cryptocurrencies to consider its position.


In 2010, fatally frustrated with Google, I went looking for a replacement search engine and found DuckDuckGo. It took me a little while to get the hang of formulating successful queries, but both it and I got better. It's a long time since I needed to direct a search elsewhere.

At the time, a lot of people thought it was bananas for a small startup to try to compete against Google. In an interview, founder Gabriel Weinberg explained that the decision had been driven by his own frustration with Google's results. Weinberg talked most about getting to the source you want more efficiently.

Even at that early stage, embracing privacy was part of his strategy. Nearly 12 years on from the company's founding, its 35.3 billion searches last year - up 46% from 2020 - remain a rounding error compared to Google's many hundreds of billions per year. But the company continues to offer things I actually want. I have its browser on my phone, and (despite still having a personal email server) have signed up for one of its email addresses because it promises to strip out the extensive tracking inserted into many email newsletters. And all without having to buy into Apple's ecosystem.

Privacy has long been a harder sell than most privacy advocates would like to admit, usually because it involves giving up a lot of convenience to get it. In this case...it's easy. So far.


Never doubt that tennis is where cultural clashes come home to roost. Tennis had the first transgender athlete; it was at the forefront of second wave feminism; and now, as even people who *aren't* interested in tennis have seen, it is the foremost venue for the clash between vaccine mandates and anti-vaxx refuseniks. Result: the men's world number one, Serbian player Novak Djokovic (and, a day later, doubles specialist Renata Voracova), was diverted to a government quarantine hotel room like any non-famous immigrant awaiting deportation.

Every tennis watcher saw this coming months ago. On one side, Australian rules; on the other, a tennis tournament that apparently believed it could accommodate a star's balking at an immigration requirement as unyieldingly binary as pregnancy or the Northern Ireland protocol.

Djokovic is making visible to the world a reality that privacy advocates have been fighting to expose: you have no rights at borders. If you think Djokovic, with all his unique resources, should be getting better treatment, then demand better treatment for everyone, legal or illegal, at all borders, not just Australia's.

Illustrations: Winnie the Pooh, discovering the North Pole, by Ernest Howard Shepard, finally in the public domain (via Wikimedia).
