net.wars: May 2022 Archives


May 27, 2022

Well may the bogeyman come

It's only an accident of covid that this year's Computers, Privacy, and Data Protection conference - delayed from late January - coincided with the fourth anniversary of the EU's General Data Protection Regulation. Yet GDPR's failures and frustrations were on everyone's mind as they considered the new legislation forthcoming from the EU: the Digital Services Act, the Digital Markets Act, and, especially, the AI Act.

There were two main frustrations: despite GDPR, privacy invasions continue to expand, and, relatedly, enforcement has been extremely limited. The first is obvious to everyone here. As for the second: as Max Schrems explained in a panel on GDPR enforcement, none of the cross-border cases his NGO, noyb, filed on May 25, 2018, the day GDPR came into force, have been decided, and even decisions on simpler cases have failed to deal with broader questions.

In one of his examples, Spain rejected a complaint because it wasn't handling historic cases, and Austria claimed a case was solved because the organization involved had changed its procedures. "But my rights were violated then," Schrems said. There was no redress.

Schrems is the data protection bogeyman; because legal actions he has brought have twice struck down US-EU agreements to enable data flows, the possibility of "Schrems III" if the next version gets it wrong is frequently mentioned. This particular panel highlighted numerous barriers that block effective action.

Other speakers highlighted the numerous gaps between countries that impede cross-border complaints: some authorities have tight deadlines that expire while other authorities are working to more leisurely schedules; there are many conflicts between national procedural laws; each data protection authority has its own approach and requirements; and every cross-border complaint must be laboriously translated into English, even when both relevant authorities speak, say, German. "Getting an answer to a two-minute question takes four months," Nina Herbort said, highlighting the common underlying problem: underresourcing.

"Weren't they designed to fail?" Finn Myrstad asked.

Even successful enforcement has largely been limited to levying fines - and despite some of the eye-watering numbers, fines are still just a cost of doing business to the major technology platforms.

"We have the tools for structural sanctions," Johnny Ryan said in a discussion on judicial actions. Some of that is beginning to happen. A day earlier, the UK's Information Commissioner's Office fined Clearview AI £7.5 million and ordered it to delete the images it holds of UK residents. In February, Canada issued a similar order; a few weeks ago, Illinois permanently banned the company from selling its database to most private actors and businesses nationwide, and barred it from selling its service to any entity within Illinois for five years. Sanctions like these hurt more than fines, as does requiring companies to delete the algorithms they have built on illegally acquired data.

Other suggestions included building sovereignty by ensuring that public procurement does not default to off-the-shelf products from a few foreign companies but is built on local expertise, advocated by Jan-Philipp Albrecht, the former MEP, who told a panel on the impact of Schrems II that he is now building up cloud providers using locally-built hardware and open source software for the province of Schleswig-Holstein. Quang-Minh Lepescheux suggested requiring transparency in how people are trained to use automated decision-making systems and forcing technology providers to accept third-party testing. Cristina Caffarra, probably the only antitrust lawyer in sight, wants privacy advocates and antitrust lawyers to work together; the economists inside competition authorities insist that more data means better products, so it's good for consumers. Rebecca Slaughter wants to give companies the clarity they say they want (until they get it): clear, regularly updated rules banning a list of practices, plus a catchall. Ryan also noted that some sanctions can vastly improve enforcement efficiency: there's nothing to investigate after banning a company from making acquisitions. Enforcing purpose limitation and banning the single "OK to everything" consent is more complicated, but, "Purpose limitation is Kryptonite to Big Tech when it's misusing data."

Any and all of these are valuable. But new kinds of thinking are also needed. The more complex issue - and another major theme - was the limitations of focusing on personal data and individual rights. This was long predicted as a particular problem for genetic data; the former science journalist Tom Wilkie was first to point out the implications, sounding a warning in his book Perilous Knowledge, published in 1994, at the beginning of the Human Genome Project. Singling out individuals who have been harmed can easily obfuscate collective damage. The obvious example is Cambridge Analytica and Facebook: the damage to national elections can't be captured one Friends list at a time; controls on the increasing use of aggregated data require protection at scale; and, perversely, monitoring for bias and discrimination itself requires data collection.

In response to a panel on harmful patterns in recent privacy proposals, an audience member suggested the African philosophy of ubuntu as a useful source of ideas for thinking about collective and, even more important, *interdependent* data. This is where we need to go. Many forms of data - including both genetic data and financial data - cannot be thought of any other way.

Illustrations: The Norwegian Consumer Council receives EPIC's International Privacy Champion award at CPDP 2022.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 20, 2022

Mona Lisa smile

A few weeks ago, Zoom announced that it intends to add emotion detection technology to its platform. According to Mark DeGeurin at Gizmodo, in response, 27 human rights groups from across the world, led by Fight for the Future, have sent an open letter demanding that the company abandon this little plan, calling the software "invasive" and "inherently biased". On Twitter, I've seen it called "modern phrenology" - a deep insult for those who remember the pseudoscience of studying the bumps on people's heads to predict their personalities.

It's an insult, but it's not really wrong. In 2019, Angela Chen at MIT Technology Review highlighted a study showing that facial expressions on their own are a poor guide to what someone is feeling. Culture, context, and personal style all affect how we present ourselves, and the posed faces AI developers use to train machine learning systems are even worse indicators, since few of us really know how our faces look under the influence of different emotions. In 2021, Kate Crawford, author of Atlas of AI, used the same study to argue in The Atlantic that the evidence that these systems work at all is "shaky".

Nonetheless, Crawford goes on to report, this technology is being deployed in hiring systems and added into facial recognition. A few weeks ago, Kate Kaye reported at Protocol that Intel and virtual school software provider Classroom Technologies are teaming up to offer a version that runs on top of Zoom.

Cue for a bit of nostalgia: I remember the first time I heard of someone proposing computer emotion detection over the Internet. It was the late 1990s, and the source - or the perpetrator, depending on your point of view - was Rosalind Picard at the MIT Media Lab. Her book on the subject, Affective Computing, came out in 1997.

Picard's main idea was that to be truly intelligent - or at least, seem that way to us - computers would have to learn to recognize emotions and produce appropriate responses. One of the potential applications I remember hearing about was online classrooms, where the software could monitor students' expressions for signs of boredom, confusion, or distress and alert the teacher - exactly what Intel and Classroom Technologies want to do now. I remember being dubious: shouldn't teachers be dialed in on that sort of thing? Shouldn't they know their students well enough to notice? OK, remote, over a screen, maybe dozens or hundreds of students at a time...not so easy.... (Of course, the expensive schools offer mass online education schemes to exploit their "brands", but they still keep the small, in-person classes that create those "brands" by churning out prime ministers and Silicon Valley dropouts.)

That wasn't Picard's main point, of course. In a recent podcast interview, she explains her original groundbreaking insight: that computers need to have emotional intelligence in order to make them less frustrating for us to deal with. If computers can capture the facial expressions we choose to show, the changes in our vocal tones, our gestures and muscle tension, perhaps they can respond more appropriately - or help humans to do so. Twenty-five years later, the ideas in Picard's work are now in use in media companies, ad agencies, and call centers - places where computer-human communication happens.

It seems a doubtful proposition. Humans learn from birth to read faces, and even we have argued for centuries over the meaning of the expression on the face of the Mona Lisa.

In 1997, Picard did not foresee the creepiness and giant technology exploiters. It's hard to know whether to be more alarmed about the technology's inaccuracy or its potential improvement. While it's inaccurate and biased, the dangers are the consequences of mistakes in interpretation; a student marked "inattentive", for example, may be penalized in their grade. But improving and debiasing the technology opens the way for fine-tuned manipulation and far more pervasive and intimate surveillance as it becomes embedded in every company, every conference, every government agency, every doctor's office, all of law enforcement. Meanwhile, the technological imperative of improving the system will require the collection of more and more data: body movements, heart rates, muscle tension, posture, gestures, surroundings.

I'd like to think that by this time we are smarter about how technology can be abused. I'm sure many of Zoom's corporate clients want emotion recognition technology; as in so many other cases, we are pawns because we're largely not the ones paying the bills or making the choice of platform. There's an analogy here to Elon Musk's negotiations with Twitter shareholders; the millions who use the service every day and find it valuable have no say in what will happen to it. If Zoom adopts emotion recognition, how long before law enforcement starts asking for user data in order to feed it into predictive policing systems? One of this week's more startling revelations was Aaron Gordon's report at Vice that San Francisco police are using driverless cars as mobile surveillance cameras, taking advantage of the fact that they are continuously recording their surroundings.

Sometimes the only way to block abuse of technology is to retire the idea entirely. If you really want to know what I'm thinking and feeling, just ask. I promise I'll tell you.

Illustrations: The emotional enigma that is the Mona Lisa.


May 13, 2022

False economy

This week, every cryptocurrency was unhappy in its own way. It has not been a good year for cryptocurrency speculators in general, but Wednesday was a disaster: almost all "major" cryptocurrencies crashed by about 25%, and even venerable bitcoin dropped by 14% (although it is still twice its 2017 peak). Which sounds great until you realize that on November 10, 2021 people *bought* bitcoin at $68,789 and El Salvador has been "buying the dip" all year.

Especially notable were the losses among cryptocurrencies intended to stay pegged to the US dollar - "stablecoins" - which fell off a cliff, pricewise. One previously unfamiliar coin, Luna, dropped 99.7%, leading some posters in the Terraluna subreddit to post suicide helpline numbers.

Do not gloat. Heed Hamilton Nolan's warning at In These Times about the dangers when a class of young (mostly) men who hate government become angry, bitter, and hopeless.

First: what happened? You can value a company, as Warren Buffett does, by studying it: its business, market sector, competitors, financial stability, and prospects. There's always some element of uncertainty. New managers could derail the company (Boeing), new, well-funded competitors could enter the field (Netflix), new technology could overrun its business model, or it could be lying about its revenues - er, painting a rosier picture than is actually merited by the facts. If you have the mad skillz of Buffett (and his professor, Benjamin Graham), thinking through all that should lead you to a reasonable purchase price, and not overpaying allows you to profit from your investment at relatively modest risk.

However, a cryptocurrency is not a business, and it has no real-world usefulness. Like gold, which Buffett has never liked, it costs money to hold, it produces nothing, and, "You can fondle it, but it will not respond". But at least gold has some industrial uses. Cryptocurrencies have none; they are the currency equivalent of being famous for being famous, held aloft only through fear, greed, and mythology. In any crisis, toilet paper, chocolate, cigarettes, booze, or toothpaste are all more useful currencies.

Luna is the most interesting. Here's how Coindesk describes its collapse: "A change in market dynamics caused Luna prices to snap at a breakneck pace. Luna plummeted through several support levels as terraUSD (UST), a Terra-issued stablecoin that's meant to be priced 1:1 to the U.S. dollar, lost its peg."

Let's pick this apart. "Market dynamics" could simply mean "interest rates are going up", which drives money away from the riskiest assets, which sets off a cycle of selling.

"Support levels" is a term from a tealeaves-reading approach to stock market pricing called technical analysis. Proponents believe that the shapes of price charts over time have significance in and of themselves. It has nothing to do with underlying value. Effectively, the fundamental claim is that past performance predicts future results - the exact opposite of what every financial product is required to tell prospective buyers. It would be complete nonsense, *except* that so many people believe in it that those patterns really do move markets, at least short-term. So "breaking support levels" becomes "let's panic and sell, ferchrissake!"

HowToGeek tells us that UST is the stablecoin on the Terra blockchain. Terra is a company providing "programmable money for the Internet", and its blockchain "brings DeFi to the masses". DeFi is short for decentralized finance, and its appearance means we're entering web3 territory - the folks who want to reclaim the Internet through redecentralization. Let's leave that part aside for today.

Traditionally (!) what makes a stablecoin stable is that for every coin (for example, Tether, which also slipped, to $0.95) its issuer holds an actual $1 in its reserves. However, it turns out there is a *second* type of stablecoin, which is backed by an algorithm rather than an asset representing some government's full faith and credit.

So the UST "stablecoin" is pegged not to a reserve of dollars but to Terra's own free-floating coin, Luna, and the idea is that an algorithm - a smart contract - keeps UST at a value of about $1 by buying, selling, and converting the two coins as needed. That is the theory.
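To make the theory concrete, here is a hypothetical sketch of the mint-and-burn arbitrage such a mechanism relies on; the function, names, and numbers are mine for illustration, not Terra's actual contract code.

```python
# A hypothetical sketch of a mint/burn peg mechanism of the kind described
# above. Everything here is illustrative, not Terra's real implementation.

def arbitrage_action(ust_price: float) -> str:
    """Return the trade that should push UST back toward its $1 peg.

    The protocol always treats 1 UST as redeemable for $1 worth of Luna,
    so traders profit from any gap -- provided Luna itself retains value.
    """
    if ust_price < 1.0:
        # UST trades cheap: buy 1 UST, burn it, mint $1 worth of Luna.
        # Burning shrinks UST supply, pushing its price back up toward $1.
        return f"burn UST, mint Luna (~${1.0 - ust_price:.2f} profit per UST)"
    if ust_price > 1.0:
        # UST trades rich: burn $1 worth of Luna, mint 1 UST, sell it.
        return f"burn Luna, mint UST (~${ust_price - 1.0:.2f} profit per UST)"
    return "pegged: no action"

# The May 2022 failure mode: when confidence collapsed, redemptions minted
# torrents of new Luna, crashing its price, so "$1 worth of Luna" was worth
# less and less -- the death spiral the next paragraph describes.
```

The design only holds while people believe the free-floating coin has value, which is why it resembles the folie à deux described below.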

It *sounds* like a folie à deux - that is, a shared delusion in which the partners reinforce each other's belief but neither leads the other closer to any form of outside reality. Apparently enough people distrust governments so much that algorithm! seems appealing, and five weeks ago Luna's market cap was $39 billion more than it is now. Yes, money is flowing away from stock market risk, too, but more slowly, for the reasons outlined above. A chart at the Motley Fool shows clearly that cryptocurrencies aren't a useful hedge against this.

Bottom line: algorithms do not make a coin stable, and if you don't understand what you're buying, don't buy it.

None of this means cryptocurrencies are finished. It doesn't make them good "investments" to "buy on the dip", either. It's just one more piece of mess in an ongoing, expanding experiment that has been highly profitable for a few people, and rife with fraud and market manipulation for many more. Just say no.

Illustrations: Wile E. Coyote makes the mistake of looking down as he runs off the edge of a cliff.


May 6, 2022


Three months ago, for a book Cybersalon is producing, called Twenty-Two Ideas About the Future, I wrote a provocation about a woman living in Heartbeat Act Texas who discovers she's pregnant. When she forgets to disable its chip, her "smart" home pregnancy test uploads the news to the state's health agency, which promptly shares it far and wide. Under the 2021 law's sanctions on intermediaries, payment services, travel companies, and supermarkets all fear being sued, and so they block her from doing anything that might lead to liability, like buying alcohol, cigarettes, or a bus ticket to the state line, or paying a website for abortion pills.

It wasn't supposed to come true, and certainly not so soon.

As anyone who's seen any form of news this week will know, in a leaked draft of the US Supreme Court's decision in Dobbs v. Jackson Women's Health Organization, author Justice Samuel Alito argues that its 1973 decision in Roe v. Wade was "wrongly decided". This is not the place to defend the right to choose or deplore the dangers of valuing the potential life of a fetus over the actual life of the person carrying it (Louisiana legislators have advanced a bill classifying abortion as homicide). But it is the place to consider the privacy loss if the decision proceeds as indicated, and not just in the approximately half of US states predicted to jump at the opportunity to adopt forced-childbirth policies.

On my shelf is Alan E. Nourse's 1965 book Intern, by Doctor X, an extraordinarily frank diary Nourse kept throughout his 1956 internship. Here he is during his OB/GYN rotation: "I don't know who the OB men have to answer to around here when they get back suspicious pathology reports...somebody must be watching them." In an update, he says the hospital's Tissue Committee reviewed pathology reports on all dilation and curettage procedures; the first "suspicious" report attracted a private warning, the second a censure, and the third permanent expulsion from the hospital staff.

I first read that when I was 12, and I did not understand that he was talking about abortion. Although D&Cs were and are routine, necessary procedures, in that time and place each one was also treated as suspect, like travelers boarding a plane today. Every miscarriage had to be cleared of suspicion, a process unlikely to help any of the estimated 1 million per year who grieve pregnancy loss. Elsewhere, he notes the number of patients labeled "NO INFORMATION"; they were giving their babies up for adoption. Then, it was sufficient to criminalize the doctors.

Part of Alito's argument is that abortion is not mentioned in either the Constitution or the First, Fourth, Fifth, Ninth, or Fourteenth Amendments Roe cited. Neither, he says, is privacy; that casual little aside is the Easter egg pointing to future human rights rollbacks.

The US has insufficient privacy law, even in the health sector. Worse, the data collected by period trackers, fitness gizmos, sleep monitoring apps, and the rest is not classed as health data to be protected under HIPAA. In 2015, employers' access to such data through "wellness" programs began raising privacy concerns; all types of employee monitoring have expanded since the pandemic began. Finally, as Johana Bhuiyan reported at the Guardian last month, US law enforcement has easy access to the consumer data we trustingly provide to companies like Apple and Meta. And even when we don't provide it, others do: in 2016, anti-choice activists were caught snapping pictures of women entering clinics, noting license plate numbers, and surveilling their smartphones via geofencing to target those deemed to be "abortion-minded".

"Leaving it to the states" - Alito writes of states' rights, not of women's rights - means that any woman of child-bearing age at risk of living under a prohibitive regime dares not confide in any of these technologies. Also dangerous: insurance companies, support groups for pregnancy loss or for cancer patients whose treatment is incompatible with continuing a pregnancy, centers for health information, GPS-enabled smartphones, even search engines. Heterosexual men can look forward to diminished sex lives dominated by fear of pregnancy (although note that no one's threatening to criminalize ejaculating inside a vagina), and women may struggle to find doctors willing to treat them at all.

My character struggled to travel out of state. This was based on 1980s Ireland, where ending a pregnancy required a trip to England; in 1992 courts famously barred a raped 14-year-old from traveling. At New York Magazine, Irin Carmon finds that some Republican politicians are indeed thinking about this.

Encryption, VPNs, Tor - women will need the same tools that aid dissidents in authoritarian countries. The company SafeGraph, Joseph Cox reports at Vice, sells location data showing who has visited abortion clinics. In response, SafeGraph promised to stop. By then Cox had found another one.

At Gizmodo, Shoshana Wodinsky has the advice on privacy protection my fictional character needed. She dares not confide in anyone she knows lest she put them at risk of becoming an attackable intermediary, yet everyone she *doesn't* know has already been informed.

This is the exact near-future Parmy Olson outlines at Bloomberg, quoting US senator Ron Wyden (D-OR): "...every digital record - from web searches, to phone records and app data - will be weaponized in Republican states as a way to control women's bodies."

Illustrations: Map of the US states with "trigger laws" waiting to come into force if Roe v. Wade is overturned (via M. Bitton at Wikimedia).
