net.wars: September 2022 Archives


September 30, 2022


They got what they wanted, and now they're screwing it up.

"They" in that sentence is entertainment industry rights holders, who campaigned for years in bad ways and worse ways to get rid of "piracy" - that is, unauthorized copying and digital distribution of their products. In pursuit of that ideal, they sued popular companies (Napster, MP3.com) out of existence; prosecuted users and demanded ISPs' help in doing so; applied digital rights management to everything from software and classic books to tractors and wheelchairs; and pursued national legislation and trade treaties to entrench their business model.

"What they wanted" was people paying for the cultural artefacts they finance. How they got it, in the end, was not through any of the above efforts. Instead, as many scholars and activists told them during those years it would be, the solution was legally authorized services for which people were willing to pay. And thus grew and flourished video services such as Netflix, YouTube, Hulu, and, latterly, Disney, Amazon Prime, Apple TV, and music services like Apple iTunes, Spotify, and Amazon Music. The industry began making money from digital downloads. So yay?

You would think. Instead, we're going backwards. The reality now is that paid services are becoming a chore to use: users complain the interfaces are frustrating, and that the thing they want to watch is always on some other service. Newspapers now track where to find popular older shows, and know only a sliver of the mass audience will be able to see some of the new material they review.

Result: pirate sites are back on top. You can find almost anything in one search, it's yours to watch any way you want within minutes, and any ads have been neatly excised. Like I said: they got what they wanted and then...

This tiny rant had two immediate provocations. The first was the release of Glyn Moody's new book, Walled Culture (available here as a freely downloadable PDF). The other was two Guardian stories by Jim Waterson about Buckingham Palace's wrangle with the UK's national broadcasters over the footage of the recent state funeral of Queen Elizabeth II. The BBC, ITV, and Channel 4 are allowed future use of just one hour's worth of clips; for anything else they must ask permission.

This was a state occasion, paid for by taxpayers, held on public streets and in public buildings, and the video recording was made by broadcasters, which are financed by universal license fees (BBC) and their own commercial activities (all of them). It's particularly bonkers because the entirety of the day's footage is readily available on torrent sites. The palace literally cannot control the footage as it could at the 1953 coronation - though it can limit broadcast. Waterson also reveals that behind the scenes during the various services palace staff and broadcasters shared a WhatsApp group in which the staffers sent a message every five minutes to approve or refuse the use of the previous video block. In our world of 2022, this power to micromanage how they are seen is more power than most people think the monarchy has. The palace is also claiming the right to veto the use of footage of the new monarch's ascension service. This is the rawest form of copyright as entrenched power.

In Walled Culture, Moody recounts the Internet's three decades of copyright wrangles, and the resulting shrinkage of public access to culture. It's a great romp through a legal regime that, as Jessica Litman said circa 1998, people would reject if they understood it. Moody begins with the shift from analogue to digital media, then goes through the lawsuits, the battle to make the results of publicly funded research open to the public, web blocking and other censorship, the EU's copyright directive, and the regulatory capture that, as Moody says, leaves impoverished the artists and creators copyright law was originally designed to benefit.

My favorite chapter, however, is the one on copyright absurdities. Half of the commercial movies ever made are unavailable to view. Because of the way streaming is licensed, Netflix 2022 has a library perhaps a tenth the size of Netflix 2012 - or 2002, when the rental service's copy of a DVD could not be withdrawn. Yet digital media have a notoriously short life before they must be migrated to newer media and formats. Copyright is even the reason statisticians continue to use suboptimal statistical analysis: in the 1920s Karl Pearson refused fellow statistician Ronald A. Fisher permission to use his statistical tables.

As Moody shows, the impact of copyright law is widely felt, and its abuse even more so. Bear in mind that the original purpose was to balance the public interest (as opposed to the public's interest) in its own culture against the desirability of encouraging creators and artists to go on creating new works by giving them a relatively brief period of exclusivity in which to exploit their work. For that reason, a world in which piracy is the best option for accessing culture is not a good world. Moody proposes numerous fixes that roll back the worst elements and change the power imbalance. We do want to pay artists and creators, especially those whose voices have largely gone unheard in the past. Rights holders should not be - ahem - kings.

Illustrations: Queen Elizabeth II's funeral procession (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 23, 2022

Insert a human

Robots have stopped being robots. This is a good thing.

This is my biggest impression of this year's We Robot conference: we have moved from the yay! robots! of the first year, 2012, through the depressed doldrums of "AI" systems that make the already-vulnerable more vulnerable circa 2018 to this year, when the phrase that kept twanging was "sociotechnical systems". For someone with my dilettantish conference-hopping habit, this seems like the necessary culmination of a long-running trend away from robots as autonomous mobile machines to robots/AI as human-machine partnerships. We Robot has never talked much about robot rights, instead focusing on considering the policy challenges that arise as robots and AI become embedded in our lives. This is realism; as We Robot co-founder Michael Froomkin writes, we're a long, long way from a self-aware and sentient machine.

The framing of sociotechnical systems is a good thing in part because so much of what passes for modern "artificial intelligence" is humans all the way down, as Mary L. Gray and Siddharth Suri documented in their book, Ghost Work. Even the companies that make self-driving cars, which a few years ago were supposed to be filling the streets by now, are admitting that full automation is a long way off. "Admitting" as in consolidating or being investigated for reckless hyping.

If this was the emerging theme, it started with the first discussion, of a paper on humans in the loop, by Margot Kaminski, Nicholson Price, and Rebecca Crootof. Too often, the proposed policy solution to problems with decision-making systems is to insert a human, a "solution" they called the "MABA-MABA trap", for "Machines Are Better At / Men Are Better At". While obviously humans and machines have differing capabilities - people are creative and flexible, machines don't get bored - just dropping in a human without considering what role that human is going to fill doesn't necessarily take advantage of the best capabilities of either. Hybrid systems are of necessity more complex - this is why cybersecurity keeps getting harder - but policy makers may not take this into account or think clearly about what the human's purpose is going to be.

At this conference in 2016, Madeleine Claire Elish foresaw that the human would become a moral crumple zone or liability sponge, absorbing blame without necessarily being at fault. No one will admit that this is the human's real role - but it seems an apt description of the "safety driver" watching the road, trying to stay alert in case the software driving the car needs backup or the poorly-paid human given a scoring system and tasked with awarding welfare benefits. What matters, as Andrew Selbst said in discussing this paper, is the *loop*, not the human - and that may include humans with invisible control, such as someone who can massage the data they enter into a benefits system in order to help a particularly vulnerable child, or who have wide discretion, such as a judge who is ultimately responsible for parole decisions no matter what the risk assessment system says.

This is not the moment to ask what constitutes a human.

It might be, however, the moment to note the commentator who said that a lot of the problems people are suggesting robots/AI can solve have other, less technological solutions. As they said, if you are putting a pipeline through a community without its consent, is the solution to deploy police drones to protect the pipeline and the people working on it - or is it to put the pipeline somewhere else (or to move to renewables and not have a pipeline at all)? Change the relationship with the community and maybe you can partly disarm the police.

One unwelcome forthcoming issue, discussed in a paper by Kate Darling and Daniella DiPaola, is the threat that merging automation and social marketing poses to consumer protection. A truly disturbing note came from DiPaola, who investigated manipulation and deception with personal robots and 75 children. The children had three options: no ads, ads allowed only if they are explicitly disclosed to be ads, or advertising through casual conversation. The kids chose casual conversation because they felt it showed the robot *knew* them. They chose this even though they knew the robot was intentionally designed to be a "friend". Oy. In a world where this attitude spreads widely and persists into adulthood, no amount of "media literacy" or learning to identify deception will save us; these programmed emotional relationships will overwhelm all that. As DiPaola said, "The whole premise of robots is building a social relationship. We see over and over again that it works better if it is more deceptive."

There was much more fun to be had - steamboat regulation as a source of lessons for regulating AI (Bhargavi Ganesh and Shannon Vallor), police use of canid robots (Carolin Kemper and Michael Kolain), and - a new topic - planning for the end of life of algorithmic and robot systems (Elin Björling and Laurel Riek). The robots won't care, but the humans will be devastated.

Illustrations: Hanging out at We Robot with Boston Dynamics' "Spot".


September 16, 2022

Coding ethics

Why is robotics hard?

This was Bill Smart's kickoff on the first (workshop) day of this year's We Robot. It makes sense: We Robot is 11 years old, and if robots were easy we'd have them by now. The basic engineering difficulties are things he's covered in previous such workshops: 2021, 2019, 2018, 2016.

More to the point for this cross-the-technicians-with-the-lawyers event: why is making robots "ethical" hard? Ultimately, because the policy has to be translated into computer code, and as Smart and others explain, the translation demands an order of precision humans don't often recognize. Wednesday's workshops explored the gap between what a policy says and what a computer can be programmed to do. For many years, Smart has liked to dramatize this gap by using four people to represent a "robot" and assigning a simple task. Just try picking up a ball with no direct visual input by asking yes/no questions of a voltage-measuring sensor.

This year, in a role-playing breakout group, we were asked to redesign a delivery robot to resolve complaints in a fictional city roughly the size of Seattle. Injuries to pedestrians have risen since delivery robots arrived; the residents of a retirement community are complaining that the robots' occupation of the sidewalks interferes with their daily walks; and one company sends its delivery robot down the street past a restaurant while playing ads for its across-the-street competitor.

It's not difficult to come up with ideas for ways to constrain these robots. Ban them from displaying ads. Limit them to human walking speed (which you'll need to specify precisely). Limit the time or space they're allowed to occupy. Eliminate cars and reallocate road space to create zones for pedestrians, cyclists, public transport, and robots. Require lights and sound to warn people of the robots' movements. Let people ride on the robots. (Actually, not sure how that solves any of the problems presented, but it sounds like fun.)
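
That "specify precisely" parenthesis is the whole problem in miniature: the moment you try to write the constraint as code, the vagueness has nowhere to hide. A minimal sketch - every threshold, window, and function name here is invented for illustration, not from the workshop:

```python
# Hypothetical speed-limit check for a delivery robot.
# "Human walking speed" is vague; code forces precise choices:
# which number? measured over what interval? what happens before
# there is enough data to decide?

MAX_SPEED_M_PER_S = 1.4   # one arbitrary choice of "walking speed"
WINDOW_S = 2.0            # averaging window, so momentary sensor noise
                          # doesn't trigger false violations

def speed_ok(distance_m: float, elapsed_s: float) -> bool:
    """Return True if average speed over the window is within the limit."""
    if elapsed_s < WINDOW_S:
        return True       # not enough data yet -- itself a policy choice
    return (distance_m / elapsed_s) <= MAX_SPEED_M_PER_S

print(speed_ok(2.0, 2.0))   # 1.0 m/s -> True
print(speed_ok(6.0, 2.0))   # 3.0 m/s -> False
```

Even this toy version embeds three policy decisions (the number, the window, the fail-open default) that the English sentence never had to make.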

As you can see from the sample, many of the solutions that the group eventually proposed were only marginally about robot design. Few could be implemented without collaboration with the city, which would have to agree and pay for infrastructure changes or develop policies and regimes specifying robot functionality.

This reality was reinforced in a later exercise, in which Cindy Grimm, Ruth West, and Kristen Thomasen broke us into robot design teams and tasked us with designing a robot to resolve these complaints. Most of the proposals involved reorganizing public space (one group suggested sending package delivery robots through the sewer system rather than along public streets and sidewalks), sometimes at considerable expense. Our group, concerned about sustainability, wanted the eventual robot made out of 3D-printed engineered wood, but hit physical constraints when Grimm pointed out that our comprehensive array of sensors wouldn't fit on the small form factor we'd picked - and would be energy-intensive. No battery life.

The deeper problem we raised: why use robots for this at all? Unless you're a package delivery company seeking to cut labor costs, what's the benefit over current delivery systems? We couldn't think of one. With Canadian journalist Paris Marx's recent book on autonomous vehicles, Road to Nowhere, fresh in my mind, however, the threat to public ownership of the sidewalk seemed real.

The same sort of real problem surfaced in discussions of a different problem, based on Paige Tutosi's winning entry in a recent roboethics competition. In this exercise, we were given three short lists: rooms in a house, people who live in the house, and objects around the house. The idea was to come up with rules for sending the objects to individuals that could be implemented in computer code for a robot servant. In an example ruleset, no one can order the robot to send a beer to the baby or chocolate to the dog.

My breakout group quickly got stuck in contemplating the possible power dynamics and relationships in the house. Was the "mother" the superuser who operated in God mode? Or was she an elderly dementia patient who lived with her superuser daughter, her daughter's boyfriend, and their baby? Then someone asked the killer question: "Who is paying for the robot?" People whose benefits payments arrive on prepay credit cards with government-designed constraints on their use could relate.

The summary reports from the other groups revealed a significant split between those who sought to build a set of rules that specified what was forbidden (comparable to English or American law) and those who sought to build a set of rules that specified what was permitted (more like German law).

For the English approach, you have to think ahead of time of all the things that could go wrong and create rules to prevent them. The German approach is by far the easier to code, and safer for robot manufacturers seeking to limit their liability: robots' capabilities default to the strictly limited and "known-safe".
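
The split is easy to make concrete. A sketch, not any team's actual ruleset - the item/recipient pairs are invented from the exercise's beer-and-baby examples:

```python
# Two ways to encode the same household-robot delivery policy.

# English-style deny-list: everything is allowed unless forbidden.
FORBIDDEN = {("beer", "baby"), ("chocolate", "dog")}

def deny_list_ok(item: str, recipient: str) -> bool:
    return (item, recipient) not in FORBIDDEN

# German-style allow-list: everything is forbidden unless permitted.
PERMITTED = {("beer", "mother"), ("chocolate", "mother")}

def allow_list_ok(item: str, recipient: str) -> bool:
    return (item, recipient) in PERMITTED

# The deny-list fails open: a pairing no one thought of slips through.
print(deny_list_ok("medication", "baby"))   # True -- oops
# The allow-list fails closed: the default is "known-safe" only.
print(allow_list_ok("medication", "baby"))  # False
```

The asymmetry is the point: the deny-list's safety depends on the completeness of the rule-writers' imagination, while the allow-list's cost is that every legitimate new use needs an explicit rule added.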

The fact of this split suggested that at heart developing "robot ethics" is recapitulating all of legal history back to first principles. Viewed that way, robots are dangerous. Not because they are likely to attack us - but because they can be the vector for making moot, in stealth, by inches, and to benefit their empowered commissioners, our entire framework of human rights and freedoms.

Illustrations: Boston Dynamics' canine robot visits We Robot.


September 9, 2022

The lost penguin

One of the large, ignored problems of cybersecurity is that every site, every supplier, every software coder, every hardware manufacturer makes decisions as if theirs were the only rules you ever have to observe.

The last couple of weeks I've been renewing my adventures with Linux, which started in 2016, and continued later that year and again in 2018 and, undocumented, in 2020. The proximate cause this time was the release of Ubuntu 22.04. Through every version back to 14.04 I've had the same long-running issue: the displays occasionally freeze for no consistent reason, and the only way out is a cold boot. Would this time be the charm?

Of course, the first thing that happened was that trying to upgrade the system in place failed. This isn't my first rodeo (see 2016, part II), and so I know that unpicking and troubleshooting a failure often takes more time and trouble than doing a clean install. I had an empty hard drive at the ready...

All the good things I said about Ubuntu installation in 2018 are still true: Canonical and the open source community have done a very good job of building a computer-in-a-box. It installed and it worked, although I hate the Gnome desktop it ships with.

Everything is absolutely fine unless, as I whined in 2018, you want to connect to some Windows machines. For that, you must download and install Samba. When it doesn't work, Samba is horrible, and grappling with it revives all my memories of someone telling me, the first time I heard of Linux, that "Linux is as user-friendly as a cornered rat."

Last time round, I got the thing working by reading lots of web pages and adding more and more stuff to the config file until it worked. This was not necessarily a good thing, because in the process I opened more shares than I needed to, and because the process was so painful I never felt like going back to put in a few constraints. Why would I care? I'm one person with a very small (wired) computer network, and it's OK if the machines see more of each other's undergarments than is strictly necessary.

Since then, the powers that code have been diligently at work to make the system more secure. So to stop people from doing what I did, they have tweaked Samba so that by default it's not possible to share your Home directory. Their idea is that you'll have a Public directory that is the only thing you share, and any file that's in it is there because you made a conscious decision to put it there.

I get the thinking, but I don't want to do things their way, I want to do things my way. And my way is that I want to share three directories inside the Home directory. Needless to say, I am not the only recalcitrant person, and so people have published three workarounds. I did them all. Result: my Windows machines can now access the directories I wanted to share on the Ubuntu machine. And: the Ubuntu machine is less secure for a value of security that isn't necessarily helpful in a tiny wired home network.
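
For anyone in the same boat, the workarounds boil down to adding explicit share definitions to /etc/samba/smb.conf, one per directory. A minimal sketch - the share name, path, and username here are invented placeholders, not my actual setup:

```ini
; /etc/samba/smb.conf -- one explicit share for a directory under Home.
; Share name, path, and user are illustrative only.
[projects]
   path = /home/alice/projects
   browseable = yes
   read only = no
   valid users = alice
```

The user also has to exist in Samba's own password database (`sudo smbpasswd -a alice`) and the service needs restarting (`sudo systemctl restart smbd`) before Windows machines can see the share.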

That was only half the problem.

Ubuntu can see there's a Windows network, and it will even sometimes list the machines correctly, but ask it to access one of them, and it draws a blank. Almost literally a blank: it just hangs there going, "Opening <machine name>" until you give up and hit Cancel. Someone has wrapped a towel around its head, apparently thinking, like the Bugblatter Beast of Traal, that if it can't see you, you can't see it. I now see that this is exactly the same analogy, in almost the identical words, that I used in 2018. I swear I typed it all new this time.

That someone appears to be Microsoft. The *other* problem, it turns out, is that Microsoft also wanted to improve security, and so it's made it harder to open Windows 10 machines to networking with interlopers such as people who run Ubuntu. I forget now the incantation I had to wave over it to get it to cooperate, but the solution I found only worked to admit the Ubuntu shares, not open up the Windows ones.

Seems to me there's two problems here.

One is the widening gap between consumer products and expert computing. The reality of mass adoption confirms that consumer computing has in fact gotten much easier over time. But the systems we rely on are more sophisticated and complex, and they're meeting more sophisticated and complex needs - and doing anything outside that mainstream has accordingly become much harder, requiring a lot of knowledge, training, patience, and expertise. I fall right into that gap (which is why my website has no Javascript and I'm afraid to touch the blogging software that powers net.wars). In 2016, Samba just worked.

The other, though, is a problem I've touched on before: decisions about product security are made in silos without considering the wider ecosystem and differing contexts in which they're used. Microsoft's or Apple's answer to the sort of connection problem I have is "buy our stuff". The open source community's reaction isn't much different. Which leaves me... wanting to bang all their heads together.

Illustrations: Little penguin swimming (via Calistemon at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.