" /> net.wars: September 2021 Archives


September 24, 2021

Is the juice worth the squeeze?

Last week, Gikii speakers pondered whether regulating magic could suggest how to regulate AI. This week, Woody Hartzog led a session at We Robot pondering how to regulate robots, and my best analogy was...old cars.

Bill Smart was explaining that "robot kits" wouldn't become a thing because of the complexity. Even the hassock-sized Starship delivery robots spotted on a Caltrain platform and delivering groceries in Milton Keynes are far too complex for a home build. "Like a car. There are no car kits."

Oh, yes, there are: old cars, made before electronics, that can be taken to pieces and rebuilt; you just need the motor vehicle people to pass them as roadworthy. See also: Cuba.

Smart's main point stands, though: the Starship robots have ten cameras, eight ultrasonic sensors, GPS, and radar, and that's just the hardware (which one could imagine someone plugging together). The software includes neural nets, 3D mapping, and a system for curb climbing, plus facilities to allow remote human operation. And yet, even with all that, one drove straight into a canal last year.

"There's a tendency seen in We Robot to think about a robot as a *thing* and to write around that thing," Cindy Grimm observed. Instead, it's important to consider what task you want the robot to accomplish, what it's capable of, what it *can't* do, and what happens when someone decides to use it differently. Starship warns not to disturb its robots if they're sitting doing nothing. "It may just be having a rest."

A rest? To do what? Reorder the diodes all down its left side?

The discussion was part of a larger exercise in creating a law to govern delivery robots and trying to understand the tradeoffs. A physical device that interacts with the real world is, as Smart and Grimm have been saying all the way back to the first We Robot, in 2012, dramatically different from the devices we've sought to regulate so far. We tend, like the Starship people above, to attribute intentionality to things that can move, I believe as a matter of ancestral safety: things that can move autonomously can attack you. Your washing machine is more intelligent than your Roomba, but which one gets treated like a pet?

Really, though, Grimm said, "They're just a box of 1s and 0s."

So Hartzog began with a piece of proposed legislation. Posit: small delivery robots that use sidewalks, roads, and bike lanes. The hypothetical city council doesn't want to ban them outright, but the things can disrupt daily life and impede humans' use of public space. So it proposes a law: delivery robots must have a permit, must respect all city ordinances and the physical safety of all people, and are limited to 15 miles an hour. No contact with humans except the designated recipient. Robots must remain 12 feet apart and prioritize human mobility by moving away from assistive devices and making their presence known via audio signals. They are only allowed to collect data for core functions; may not collect data from inside homes without consent; and may not use facial recognition, only face detection for safety. What's missing?

Well, for one thing, 15 miles an hour is *dangerous* on a crowded sidewalk, and even in some bike lanes. For another, what capabilities does the robot need to recognize the intended recipient? Facial recognition? Fingerprint scanner? How much do permits cost and who can (and can't) afford them? Is it better to limit robot density rather than set a specific number? How does it recognize assistive devices? How much noise should we tolerate? Who has right of way if there's only one narrow path? If every robot's location must be known at all times, what are the implications of all that tracking? How and when do permits get revoked?

Hartzog left us with a final question: "Is the juice worth the squeeze?" Are there opportunity costs inherent in accepting the robots in the first place?

As Grimm said, nothing is for free; every new robot capability brings tradeoffs. Adding awareness, so the robot "knows" to move out of the way of strollers and wheelchairs, means adding data-gathering sensors, and with them privacy risk. Grimm's work with apple-picking robots has taught her that their success depends on pruning apple trees to make their task simpler. The job is a lot harder in her backyard, where this hasn't been done. So legal considerations must include how and whether we change the environment so it's safer for robots to be around people. Grimm calls this making a "tunnel" for the robot: narrow and simplify the task rather than making the robot "smarter".

Personally, I like the idea of barring the robots from weighing more than an average human can lift, so you can always pick the thing up and move it out of the way.

No such issues mar the cheery Starship promotional video linked above. This seems impossible; why should delivery robots be less of a nuisance than abandoned dockless scooters and bikes? In the more realistic view to be found in Anywhere But Westminster's 2019 visit to Milton Keynes, the robots still seem mostly inoffensive as they roll through an unpopulated park and wait to cross the empty street. Then the filmmakers encounter one broadcasting Boris Johnson speeches. Suddenly, ad-spewing sidewalk robots seem inescapable. Maybe instead hire the unemployed people the filmmakers find at the food bank?


Illustrations: Screenshot from Anywhere But Westminster, "We must deliver: Brexit, Johnson, and the robots of Milton Keynes".

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 17, 2021

Learning events

Federica Giovanella's iPhone kept interrupting her as she spoke. The problem was Siri, which interpreted her pronunciation of "theory" as a call to wake up and work. Siri is real, of course, but it was purely coincidental that the talk it was interrupting, "Product Liability and the Internet of (Stranger) Things", was focused on how law should deal with remote control and the Internet of Things, using a scene from Stranger Things as Exhibit A of the possibilities.

The concatenation was a fine demonstration of the essence of gikii, a small annual conference that mixes law, technology, and pop culture. Founded by Lilian Edwards and Andres Guadamuz, the event applies law to imaginary futures to understand how to manage prospective real ones; this year was its 16th iteration. (For discussions of previous years, see: 2020, 2019, 2018, 2016, 2014, 2013, and 2008.)

It can be difficult to disentangle truth from fiction at gikii. "Are we serious?" I asked Jon Crowcroft while developing our paper, Leaky by Design.

What should be fiction, but sadly is not, is Jiahong Chen and Lucas Miotto's discussion of real-time profiling, which is particularly creepy in light of this week's revelation of the UK government's use of sensitive data to influence the population's behavior.

Also sadly non-fictional is universities' pandemic-fuelled race to take advantage of the rise of remote learning to demand that instructors record every "learning event", as Guido Noto La Diega explained in Death Becomes HE. Obviously this is not the first time universities have demanded ownership of academic staff's intellectual property, but it's an unprecedented expansion. Noto La Diega noted that in a short time universities have shifted from opt-in (I will choose to record this lecture) to opt-out (I must specifically turn off recording for this lecture). At one prominent UK university, a commenter noted, although the union agreement specifies that staff can't be required to record lectures, the administration nonetheless makes failing to record them a disciplinary offense. The result, he said, is "strangling academic freedom".

Also non-fictional was Miranda Mowbray's study of the algorithm used to substitute for 2020 secondary school exam results; the results were so controversial they led to street protests. Mowbray finds that a key flashpoint rested on a false premise: the algorithm was not more class-biased than teachers' predicted grades. What future such systems need are many of the usual things: early consultation with stakeholders, explicit fairness requirements that can be communicated to the public, personalized explanations, and so on.

However, the advantage of looking at law through fiction is that you don't have to wait for real cases to do your analysis. Helpfully, there is a lot of AI fiction against which to examine the EU's publication, in April, of draft AI regulations. Lilian Edwards and Hannah Smethurst applied the EU's proposed risk-based analysis to regulating magic, using this year's series WandaVision as a source of example outcomes. Reuben Binns demonstrated how to audit an algorithm for weaknesses by interrogating it in character as Columbo. Marion Oswald's short story, based on GK Chesterton's Father Brown story The Mistake of the Machine, saw Father Brown (human psychology) and Sherlock Holmes (data from external observation) face off over a bit of domestic carnage. As in Chesterton's original, the crucial element was understanding humans.

And yet, AI itself is so often fiction, in reality the composite of vast numbers of hidden, human-performed microtasks, as Mary L. Gray and Siddharth Suri have shown. This is, Vanessa Hanschke and Yasmin Hanschke concluded via a hand-drawn animation showing the lives of "clickworkers", very much traditional exploitation and "not the AI labor revolution we were promised".

The fictional version of Adrian Aronsson-Storrier, who with Will Page produced this year's Best Paper, might agree. In a mock video seeking seed funding for this "disruptive industry player" in the "DeepFake sector", Page and Aronsson-Storrier explain their company, SyntheticLearn, a DeepFake service that underpays PhD students to prepare, write, and record customized lectures that are then delivered by digitally faked senior professors. Hence the company's slogan: "Fake it, don't make it." But good news: SyntheticLearn has signed up to no fewer than 20 AI and machine learning ethics pledges.

"Our DeepFake activities are entirely lawful," Aronsson-Storrier reassures prospective investors.

Oy. (Really: watch the video.)

If that weren't bad enough, "all our dystopias are converging," said Paul Bernal in a race through movie images from several decades: climate change, technological nightmares, the rise of extremism, racism, and authoritarianism, and disease and disaster are all crashing into one another. Proposed solutions often make each dystopia worse, particularly for privacy, because all of them lead to proposals that tracking people through apps will help.

And yet, despite all these dysfunctional narratives - I haven't yet mentioned Hjalte Osborn Frandsen fretting about space junk, for example, a potential future that hangs on my wall in the form of a 1984 lithograph by Edinburgh artist Peter Standen - overall this year's gikii seemed surprisingly not-pessimistic. Bernal's crashing dystopias may have something to do with it: anything that suggests there *is* a future, however difficult to navigate, seems optimistic.


Illustrations: A screenshot from Will Page's and Adrian Aronsson-Storrier's paper, SyntheticLearn (page down for the video).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 10, 2021

Globalizing Britain

Brexit really starts now. It was easy to forget, during the dramas that accompanied the passage of the Withdrawal Agreement and the disruption of the pandemic, that the really serious question had still not been answered: given full control, what would Britain do with it? What is a reshaped "independent global Britain" going to be when it grows up? Now is when we find out, as this government, which has a large enough majority to do almost anything it wants, pursues the policies it announced in the Queen's Speech last May.

Some of the agenda is depressingly cribbed from the current US Republican playbook. First and most obvious in this group is the Elections bill. The most contentious change is requiring voter ID at polling stations (even though there was a total of one conviction for voter fraud in 2019, the year of the last general election). What those in other countries may not realize is how many eligible voters in Britain lack any form of photo ID. The Guardian reports that 11 million people - a fifth of eligible voters - have neither driver's license nor passport. Naturally they are disproportionately from black and Asian backgrounds, older and disabled, and/or poor. The expected general effect, especially coupled with the additional proposal to remove the 15-year cap on expatriate voting, is to put a thumb on the electoral scale in favor of the Conservatives.

More nettishly, the government is gearing up for another attack on encryption, pulling out all the same old arguments, mixed with rhetoric copied from the FBI's "going dark" campaign. As Gareth Corfield explains at The Register, the current target is Facebook, which intends to roll out end-to-end encryption for messaging and other services.

This is also the moment when the Online Safety bill (previously "online harms") arrives. The push against encryption, which includes funding technical development, is part of that, because the bill makes service providers responsible for illegal content users post - and also, as Heather Burns points out at the Open Rights Group, for legal but harmful content. Burns also details the extensive scope of the bill's age verification plans.

These moves are not new or unexpected. Slightly more surprising was the announcement that the UK will review data protection law with an eye to diverging from the EU; the consultation opened today. This is, as many have pointed out, dangerous for UK businesses that rely on data transfers to the EU for survival. The EU's decision a few months ago to grant the UK an adequacy decision - that is, the EU's acceptance of the UK's data protection laws as providing equivalent protection - will last for four years. It seems unlikely the EU will revisit it before then, but even before any divergence Ian Brown and Douwe Korff have argued that the UK's data protection framework should be ruled inadequate. It *sounds* great when they say it will mean getting rid of the incessant cookie pop-ups, but at risk are privacy protections that have taken years to build. The consultation document wants to promise everything: an "even better data protection regime" and "unlocking the power of data" appear in the same paragraph, and the new regime will both be "pro-growth and innovation-friendly" and "maintain high data protection standards".

Recent moves have not made it easier to trust this government with personal data: first, the postponed-for-now medical data fiasco, and second, this week's revelation that the government is increasingly using our data and hiring third-party marketing firms to target ads and develop personalized campaigns to manipulate the country's behavior. This "influence government" is the work of the ten-year-old Behavioural Insights Team - the "nudge unit", whose thinking is summed up in its behavioral economy report.

Then there's the Police, Crime, Sentencing, and Courts bill currently making its way through Parliament. This one has been the subject of street protests across the UK because of provisions that permit police and Home Secretary Priti Patel to impose various limits on protests.

Patel's Home Office also features in another area of contention, the Nationality and Borders bill. This bill would make criminal offenses of arriving in the UK without permission and of helping an asylum seeker enter the UK. The latter raises many questions, and the Law Society lists many legal issues that need clarification. Accompanying this is this week's proposal to turn back migrant boats, which breaks maritime law.

A few more entertainments await: for one, the review of network neutrality announced by Ofcom, the communications regulator. At this stage, it's unclear what dangers lurk, but it's another thing to watch, along with the ongoing consultation on digital identity.

More expected, but no less alarming, this government also has an ongoing independent review of the 1998 Human Rights Act, which Conservatives such as former prime minister Theresa May have long wanted to scrap.

Human rights activists in this country aren't going to get much rest between now and (probably) 2024, when the next general election is due. Or maybe ever, looking at this list. This is the latest step in a long march, and it reminds us that underneath Britain's democracy lies its ancient feudalism.


Illustrations: Derbyshire stately home Chatsworth (via Trevor Rickards at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 3, 2021

The trial

The trial of Theranos founder and former CEO Elizabeth Holmes, which began jury selection this week, offers a rare opportunity to understand in depth how lawyers select from and frame the available evidence to build and present a court case. The opportunity arises both because investigative reporter John Carreyrou has the mountains of evidence he uncovered over the last seven years, and because true crime podcasts are now a thing. Most people facing the reality of the case, he observes, would have taken a plea deal. Not Holmes, or not yet.

The story of Theranos is well-known: Holmes dropped out of studying chemical engineering at Stanford at 19 and used her tuition money as seed funding to pursue the idea of developing diagnostic tests based on much smaller amounts of blood than was then possible - a finger stick rather than a venous blood draw, with many tests conducted at once on those few drops. Expert medical professors told her it was impossible. She persisted nonetheless.

Holmes's path through medicine and business seemed charmed. She populated the Theranos board with famous names: Henry Kissinger and former secretary of state George Shultz (who responded angrily when his Theranos employee grandson tried to warn him). She raised hundreds of millions of dollars from the Walton family, of Walmart ($150 million), Rupert Murdoch ($125 million), Trump administration education secretary Betsy DeVos ($100 million), and the Cox family ($100 million). Then-boyfriend Sunny Balwani joined as chief operating officer. Theranos won contracts with Walgreens and Safeway, both anxious about remaining competitive. By 2014 she was everywhere on TV shows and magazine covers wearing a Steve Jobs-like all-black outfit of turtleneck and trousers, famous as the world's youngest self-made female billionaire.

And then, in 2015, Wall Street Journal reporter John Carreyrou began blowing it all up with a series of investigative articles that eventually underpinned his 2018 book, Bad Blood: Secrets and Lies in a Silicon Valley Startup. The Securities and Exchange Commission charged Holmes and Theranos with fraud; Holmes settled the case by paying $500,000, giving up her voting control over the company and surrendering her 18.9 million shares. She was barred from serving as an officer or director of a public company for ten years, and she and Balwani were indicted on criminal fraud charges. This is the trial that began this week; Balwani will be tried later.

Twitter reports suggest that it hasn't been easy to find jurors in Santa Clara County, California, where the trial is taking place, who haven't encountered at least some of the extensive media coverage, read Carreyrou's book, or seen Alex Gibney's HBO documentary The Inventor: Out for Blood in Silicon Valley. Holmes remains a media magnet as a prospective felon.

With the case approaching, Carreyrou has released the first three of a planned dozen episodes of Bad Blood: The Final Chapter. These cover, in order: Holmes's trial strategy as revealed by the papers her lawyers have filed; Theranos' foray into testing for Ebola and Zika during those epidemics; and Holmes' relationship with Balwani. There is enough new material to make the podcast worth your time (though it's difficult not to wince when Carreyrou damages his credibility by delivering the requisite podcast ads for dubious health drinks and hair loss remedies, and endorses meal kits).

What makes this stand out is the near real-time critique of the case's construction. When Carreyrou thinks, for example, that the "Svengali defense" Holmes's lawyers have filed - Holmes apparently intends to claim that Balwani's abuse and manipulation robbed her of personal choice - is a long shot, it's because he's seen extensive text messages between Holmes and Balwani (a selection of which is read out by actors). More speculative are his comments on the effect on the jury of Holmes's new persona: the Steve Jobs costume and stylized hair and makeup are replaced by a more natural look as a married woman and new mother. Carreyrou revisits Holmes and Balwani's relationship in more detail in the third episode.

The second episode offers a horrifying inside look at medical malfeasance. As explained here by microbiologist and former Theranos lab worker Lina Castro, neither Holmes nor Balwani understood the safety protocols necessary for handling infectious and lethal pathogens. Castro and Aaron Richardson, the scientist who led the effort to develop a test for Ebola, conclude that even if Theranos' "miniLab" testing device had worked, the company's culture was too dysfunctional to be able to create a successful Ebola test.

At the Washington Post, Rachel Lerman argues that the case puts Silicon Valley's culture on trial. Others argue that Theranos isn't *really* Silicon Valley at all, since neither its board nor its list of investors included Silicon Valley names. In fact, Theranos was a PR-friendly Silicon Valley copy: the eccentric but unvarying clothing (see also: Zuckerberg's hoodie), the emotive origin story (the beloved uncle who died too soon), and the enthusiastic promotion of vaporware until a real product can be demoed. In the days of pure software, bullshit could sort of work. But it doesn't work in the medical context, where careful validation and clinical testing are essential, and it won't work in the future of hybrid cyber-physical systems, where safety and real-world function matter.

"First they call you crazy, then they fight you, and then you change the world," Holmes frequently said in defending her company against Carreyrou's reporting. Only if you have the facts on your side.

Illustrations: Elizabeth Holmes at TechCrunch Disrupt in 2014 (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.