net.wars: March 2022 Archives


March 25, 2022

Dangerous corner

If there is one thing the Western world has near-universally agreed on in the last month, it's that in the Russian invasion of Ukraine, the Ukrainians are the injured party. The good guys.

If there's one thing that privacy advocates and much of the public agree on, it's that Clearview AI, which has amassed a database of (it claims) 10 billion facial images by scraping publicly accessible social media without the subjects' consent and sells access to it to myriad law enforcement organizations, is one of the world's creepiest companies. This assessment is exacerbated by the fact that the company and its CEO refuse to see anything wrong about their unconsented repurposing of other people's photos; it's out there for the scraping, innit?

Last week, Reuters reported that Clearview AI was offering Ukraine free access to its technology. Clearview's suggested uses: vetting people at checkpoints; debunking misinformation on social media; reuniting separated family members; and identifying the dead. Clearview's CEO, Hoan Ton-That, told Reuters that the company has 2 billion images of Russians scraped from the Russian Facebook clone VKontakte.

This week, it's widely reported that Ukraine is accepting the offer. At Forbes, Tom Brewster reports that Ukraine is using the technology to identify the dead.

Clearview AI has been controversial ever since January 2020, when Kashmir Hill reported its existence in the New York Times, calling it "the secretive company that might end privacy as we know it". Social media sites LinkedIn, Twitter, and YouTube all promptly sent cease-and-desist notices. A month later, Kim Lyons reported at The Verge that its 2,200 customers included the FBI, Interpol, the US Department of Justice, Immigration and Customs Enforcement, a UAE sovereign wealth fund, the Royal Canadian Mounted Police, and college campus police departments.

In May 2021, Privacy International filed complaints in five countries. In response, Canada, Australia, the UK, France, and Italy have all found Clearview to be in breach of data protection laws and ordered it to delete all the photos of people that it has collected in their territories. Sweden, Belgium, and Canada have declared law enforcement use of Clearview's technology to be illegal.

Ukraine is its first known use in a war zone. In a scathing blog posting, Privacy International says, "...the use of Clearview's database by authorities is a considerable expansion of the realm of surveillance, with very real potential for abuse."

Brewster cites critics, who lay out familiar privacy issues. Misidentification in a war zone could lead to death if a live soldier's nationality is wrongly assessed (errors are especially common when the person is non-white) and to unnecessary heartbreak for dead soldiers' families. Facial recognition can't distinguish civilians from combatants. In addition, the use of facial recognition by the "good guys" in a war zone might legitimize the technology. This last seems to me unlikely; we all recognize the difference between what's acceptable in peacetime and in an extreme context. The issue here is the *company*, not the technology, as PI accurately pinpoints: "...it seems no human tragedy is off-limits to surveillance companies looking to sanitize their image."

Jack McDonald, a senior lecturer in war studies at King's College London who researches the relationship between ethics, law, technology, and war, sees the situation differently.

Some of the fears Brewster cites, for example, are far-fetched. "They're probably not going to be executing people at checkpoints." If facial recognition finds a match in those situations, they'll more likely make an arrest and do a search. "If that helps them to do this, there's a very good case for it, because Russia does appear to be flooding the country with saboteurs." Cases of misidentification will be important, he agrees, but consider the scale of harm in the conflict itself.

McDonald notes, however, that the use of biometrics to identify refugees is an entirely different matter and poses huge problems. "They're two different contexts, even though they're happening in the same space."

That leaves the use Ukraine appears to be most interested in: identifying dead bodies. This, McDonald explains, represents a profound change from established norms, which are embedded in social and institutional structures and have typically been closely guarded. Facial recognition offers the possibility of doing identification at scale, even though its standard of certainty is much lower. Either way, the people making the identification typically have to rely on photographs taken elsewhere in other contexts, along with dental records and, if all else fails, public postings.

The reality of social media is already changing the norms. In this first month of the war, Twitter users posting pictures of captured Russian soldiers are typically reminded that it is technically against the Geneva Convention to do so. The extensive documentation - video clips, images, first-person reports - that is being posted from the conflict zones on services like TikTok and Twitter is a second front in its own right. In the information war, using facial recognition to identify the dead is strategic.

This is particularly true because of censorship in Russia, where independent media have almost entirely shut down and citizens have only very limited access to foreign news. Dead bodies are among the only incontrovertible sources of information that can break through the official denials. The risk that inaccurate identification could fuel Russian propaganda remains, however.

Clearview remains an awful idea. But if I thought it would help save my country from being destroyed, would I care?

Illustrations: War damage in Mariupol, Ukraine (Ministry of Internal Affairs of Ukraine, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 18, 2022

There may be trouble ahead...

One of the first things the magician and paranormal investigator James Randi taught all of us in the skeptical movement was the importance of consulting the right kind of expert.

Randi made this point with respect to tests of paranormal phenomena such as telekinesis and ESP. At the time - the 1970s and 1980s - there was a vogue for sending psychic claimants to physicists for testing. A fair amount of embarrassment ensued. As Randi liked to say, physicists, like many other scientists, are not experienced in the art of deception. Instead, they are trained to assume that things in their lab do not lie to them.

Not a safe assumption when they're trying to figure out how a former magician has moved an empty plastic film can a few millimeters, apparently with just the power of their mind. Put in a magician who knows how to set up the experiment so the claimant can't cheat, and *then* if the effect still occurs you know something genuinely weird is going on.

I was reminded of this reading this quote from Fabio Urbina, Filippa Lentzos, Cédric Invernizzi, and Sean Ekins, writing in Nature Machine Intelligence: "When we think of drug discovery, we normally do not consider technology misuse potential. We are not trained to consider it, and it is not even required for machine learning research."

The article itself is scary enough that one friend reacted to it with, "This is the apocalypse". The researchers undertook a "thought experiment" after the Swiss Federal Institute for NBC Protection (Spiez Laboratory) asked their company, Collaborations Pharmaceuticals Inc, to give a presentation on how their AI technology could be misused in drug discovery at its biennial conference on new technologies and their implications for the Chemical and Biological Weapons conventions. They work, they write, in an entirely virtual world; their molecules exist only in their computers. It had never previously occurred to them to wonder whether the machine learning models they were building to help design new molecules that could be developed into new, life-saving drugs could instead be turned to generating toxins. Asked to consider it, they quickly discovered that it was disturbingly easy to generate prospective lethal neurotoxins. The reason: generating potentially helpful molecules required creating models to *avoid* toxicity - which meant being able to predict its appearance.
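The inversion the researchers describe can be caricatured in a few lines. This is a toy sketch with entirely made-up scoring functions - nothing here resembles their models or real chemistry - but it shows the mechanism: flipping the sign on a toxicity penalty turns a search that avoids toxic candidates into one that seeks them out.

```python
import random

random.seed(0)

# Toy stand-in: a "molecule" is just a number m, and both scores
# are invented functions of it.
def efficacy(m: float) -> float:
    return -(m - 3.0) ** 2          # peaks at m = 3

def predicted_toxicity(m: float) -> float:
    return max(0.0, m - 4.0)        # rises past m = 4

def search(weight: float, steps: int = 20000) -> float:
    """Random search scoring candidates as efficacy plus a weighted
    toxicity term. weight = -1 penalizes toxicity (the intended use);
    weight = +1 rewards it (the misuse)."""
    best, best_score = None, float("-inf")
    for _ in range(steps):
        m = random.uniform(0.0, 10.0)
        score = efficacy(m) + weight * 10.0 * predicted_toxicity(m)
        if score > best_score:
            best, best_score = m, score
    return best

safe = search(weight=-1.0)   # lands near the efficacy peak, m ~ 3
toxic = search(weight=+1.0)  # driven into the toxic region, m > 4
```

The only change between the benign and malicious runs is the sign of one weight; the toxicity *predictor* - the hard-won part - is identical, which is exactly why the authors found the misuse so easy.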

As they go on to say, our general discussions of the potential harms AI can enable are really very limited. The biggest headlines go to putting people out of work; the rest is privacy, discrimination, fairness, and so on. Partly, that's because those are the ways AI has generally been most visible: automation that deskills or displaces humans, or algorithms that make decisions about government benefits, employment, education, content recommendations, or criminal justice outcomes. But it's also because the researchers working on this technology blinker their imaginations, seeing only how they want their new idea to work.

The demands of marketing don't help. Anyone pursuing any form of research, whether funded by industry or government grant, has to make the case for why they should be given the money. So of course in describing their work they focus on the benefits. Those working on self-driving cars are all about how they'll be safer than human drivers, not scary possibilities like widespread hundred-car pileups if hackers were to find a way to exploit unexpected software bugs to make them all go haywire at the same time.

Sadly, many technology journalists pick up only the happy side. On Wednesday, as one tiny example, the Washington Post published a cheery article about ElliQ, an Alexa-like AI device "designed for empathy" and meant to keep lonely older people company. The commenters saw more of the dark side than the writer did: an ongoing $30 subscription, data collection and potential privacy invasion, and, especially, the potential for emotional manipulation as the robot tells its renter what it (not she, as per writer Steven Zeitchik) calculates they want to hear.

It's not like this is the first such discovery. Malicious Generative Adversarial Networks (GANs) are the basis of DeepFakes. If you can use some new technology for good, why *wouldn't* you be able to use it for evil? Cars drive sick kids to hospitals and help thieves escape. Computer programmers write word processors and viruses, the Internet connects us directly to medical experts and sends us misinformation, cryptography protects both good and bad secrets, robots help us and collect our data. Why should AI be different?

I'd like to think that this paper will succeed where decades of prior experience have failed, and make future researchers think more imaginatively about how their work can be abused. Sadly, it seems a forlorn hope.

In Gemma Milne's 2020 book examining how hype interferes with our ability to make good decisions about new technology, Smoke and Mirrors, she warns that hype keeps us from asking the crucial question: Is this new technology worth its cost? Potential abuse is part of that cost-benefit assessment. We need researchers to think about what can go wrong a lot earlier in the development cycle - and we need them to add experts in the art of forecasting trouble (science fiction writers, perhaps?) to their teams. Even technology that looks like magic...isn't.

Illustrations: ElliQ (company PR photo).


March 11, 2022

The rhetoric meets the road

On February 28, at the same time as he called for blocking Russia's Internet connections, Ukrainian minister of digital transformation Mykhailo Fedorov called for cryptocurrency exchanges to block the addresses of Russian users as well as addresses officially tied to Russia and Belarus. Fedorov was not the only one: European Central Bank president Christine Lagarde called for regulations to stop cryptocurrencies from being used to bypass the economic sanctions being jointly applied against Russia by numerous countries, as did Estonian prime minister Kaja Kallas.

Their concern echoes the rhetoric that formed cryptocurrencies' origin story. Bitcoin's founding paper begins by saying that the system's main benefit is eliminating the need for financial institutions or trusted third parties because the blockchain replaces trust with cryptography and transparency. Eliminating governments' ability to interfere in financial transactions was definitely part of the plan. I can't help thinking that Satoshi's threat model was governments taken singly, not dozens of them acting in concert. Also, this was before Sarah Meiklejohn showed that bitcoin addresses are not anonymous.
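The "replaces trust with cryptography" claim rests on a simple mechanism: each block commits to the hash of its predecessor, so rewriting any past record invalidates every block after it. A minimal sketch - toy records and no proof-of-work, nothing like Bitcoin's actual block format - makes the idea concrete:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """SHA-256 over a canonical JSON encoding of the block."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, payload: str) -> None:
    """Append a block that commits to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "payload": payload})

def verify(chain: list) -> bool:
    """Valid only if every block's 'prev' matches the actual hash
    of its predecessor."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
add_block(chain, "alice pays bob 1")
add_block(chain, "bob pays carol 1")
assert verify(chain)

# Tampering with an earlier record breaks every later link.
chain[0]["payload"] = "alice pays mallory 1"
assert not verify(chain)
```

Transparency is the other half: anyone holding a copy of the chain can run the verification themselves, which is also why, as Meiklejohn showed, the same public ledger makes transactions traceable rather than anonymous.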

The notion that cryptocurrencies can build an independent global financial system outside of government regulation is even more overblown than 1999's claims that governments would not be able to control the Internet. Information can achieve an effect simply by transmitting from one individual to another. Money can't, at least not at current levels of non-adoption; if you want your bitcoin stash to be of use to buy stuff, you have to connect it to state-backed currencies. Even stablecoins won't buy me groceries at the local shop. And that's the point where government regulation steps in - as, for example, this week, when the UK's Financial Conduct Authority ordered the shutdown of all 81 of the UK's bitcoin ATMs because they need to be registered and comply with anti-money-laundering regulations.

The responses to the above developments have exposed the extent to which the original bitcoin/blockchain design has been thwarted by centralization. As we've said before, any time something is complicated there's a business model for a third-party intermediary to make it simple. And so we have cryptocurrency exchanges like Coinbase, which make buying and transferring cryptocurrencies easy but far more controllable for governments. And indeed: a few days after sanctions were imposed, Coinbase had blocked 25,000 cryptocurrency addresses linked to Russian people or entities.

With the Moscow stock exchange closed for two weeks and counting, shares in Russian companies plummeting to zero on international exchanges, and the ruble collapsing, the motivations for individuals to use cryptocurrencies are inarguable. But an entire trillion-dollar economy?

Says Dave Birch, author of The Currency Cold War, "Cryptocurrency people think cryptocurrencies are more important than they actually are."

Changpeng Zhao, the founder of Binance, which in 2021 was investigated for money laundering by the US and ordered to cease operations in the UK, quickly refused to sanction Russians, arguing that cryptocurrencies are too small for Russian needs. Zhao estimated the value of all cryptocurrencies at less than 0.3% of global net worth - plus, they're too traceable to be useful for illicit activities. Coin Telegraph reports that Russians were estimated to hold more than $200 billion in cryptocurrencies as of February 2022; the country is Binance's second-biggest market after Turkey.

Many experts agree with Zhao. At last week's State of the Net conference, Bill Rockwood, the executive director of the Future Forum caucus in the US House of Representatives, argued that the unalterability of the blockchain creates truth an authoritarian state can't hide, making it unsuitable for a country trying to stealthily evade international sanctions. At the Atlantic Council, senior fellow JP Schnapper-Casteras agrees, pointing out that Russian authorities have considered either banning or regulating cryptocurrencies for the precise reason that they cannot be easily centrally controlled. In any case, Schnapper-Casteras adds, US-based cryptocurrency exchanges must legally comply with all US law, including sanctions, and law enforcement skills at tracing transactions on public blockchains have improved greatly, as the recent Bitfinex arrests showed. Plus, only two cryptocurrencies are big enough to help, and purchases of the necessary size would lead to unaffordable price spikes. Like many other countries, Russia intends to develop its own central bank digital currency - but that will take years.

In a Twitter thread, the Blockchain Association's head of policy, Jake Chervinsky, explains all that in more detail, and also points out that in the years Russian president Vladimir Putin has spent building up his war chest, cryptocurrencies formed no part of the plan, as the New York Times has reported.

The more obvious use is for individual Russians to buy cryptocurrencies (using their own systems and hardware wallets, avoiding the exchanges) as a way of hedging against further collapse in the ruble. Bloomberg, however, finds that this isn't really happening much either. As of March 3, blockchain data showed that Russian purchases had actually halved since February and were running at less than a fifth of their peak in May 2021. Also, we're talking millions, not the billions the war is costing Russia every day.

The more important cryptocurrency threat we should be considering, Reuters reports, is cyber attacks on cryptocurrency exchanges. If you have a bunch of cryptocurrency reposing in an online software wallet...buyer beware.

Illustrations: Bitcoin logo.


March 4, 2022

Sovereign stack

But first, a note: Cliff Stanford, who in 1993 founded the first Internet Service Provider to offer access to consumers in the UK, died this week. Stanford's Demon Internet was my first ISP, and I well remember having to visit their office so they could personally debug my connection, which required users to precisely configure a bit of code designed for packet radio (imagine getting that sort of service now!). Simon Rockman has a far better-informed obit than I could ever write.


On Monday, four days after Russia invaded Ukraine, the Ukrainian minister for digital transformation, Mykhailo Fedorov, sent a letter (PDF) to the Internet Corporation for Assigned Names and Numbers asking it to shut down Russian country code domains such as .ru, .рф, and .su. Quick background: ICANN manages the Internet's domain name system, the infrastructure that turns the human-readable name for a website or email address into the routing numbers computers actually use to get your communications where you want them to go. Fedorov also asked ICANN to shut down the DNS root servers located in Russia, and planned a separate letter requesting the revocation of all numbered Internet addresses in use by Russian members of RIPE NCC, the registry that allocates Internet numbers in Europe and West Asia.
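That name-to-number step is visible from any machine. As a sketch, Python's standard library exposes the system resolver directly; here the lookup is for localhost, which is answered locally, whereas a name like icann.org would be resolved out through the root and top-level-domain servers ICANN coordinates:

```python
import socket

def resolve(hostname: str) -> list[str]:
    """Ask the system resolver which IP addresses a name maps to --
    the human-readable-name-to-routing-number step described above."""
    infos = socket.getaddrinfo(hostname, None, proto=socket.IPPROTO_TCP)
    # Each entry is (family, type, proto, canonname, sockaddr);
    # sockaddr[0] is the IP address itself.
    return sorted({info[4][0] for info in infos})

# "localhost" is answered locally, so this works even offline.
print(resolve("localhost"))
```

Pulling a country's domains out of the root would break that first step for everyone trying to reach them, which is the scale of what Fedorov was asking.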

Shorn of the alphabet soup, what Fedorov is asking ICANN to do is sanction Russia by using technical means to block both incoming (we can't get to their domains) and outgoing (they can't get to ours) Internet access, on the basis that Russia uses the Internet to spread propaganda, disinformation, hate speech and the promotion of violence.

ICANN's refusal (PDF) came quickly. For numerous reasons, ICANN is right to refuse, as the Internet Society, Access Now, and others have all said.

Internet old-timers would say that ICANN's job is management, not governance. This is a long-running argument going all the way back to 1998, when ICANN was created to take over from the previous management, the University of Southern California computer scientist Jon Postel. Among other things, Postel set up much of the domain name system, selecting among submitted proposals to run registries for both generic top-level domains (such as .com and .net) and country code domains (such as .uk and .ru). Especially in its early years, digital rights groups watched ICANN with distrust, concerned that it would stray into censorship at the behest of one or another government instead of focusing on its actual job, ensuring the stability and security of the network's operation.

For much of its history ICANN was accountable to the US National Telecommunications and Information Administration, part of the Department of Commerce. It became formally independent as a multistakeholder organization in 2016, after much wrangling over how to construct the new model.

This history matters because the alternative to ICANN was transitioning its functions to the International Telecommunications Union, an agency of the United Nations, a solution the Internet community generally opposed, then and now. Just a couple of weeks ago, Russia and China began a joint push towards greater state control, which they intended to present this week to the ITU's World Telecommunication Standardization Assembly. Their goal is to redesign the Internet to make it more amenable to government control, exactly the outcome everyone from Internet pioneers to modern human rights activists seeks to avoid.

So, now. Shutting down the DNS at the request of one country would put ICANN exactly where it shouldn't be: making value judgments about who should have access.

More to the specific situation, shutting off Russian access would be counterproductive. The state shut down the last remaining opposition TV outlet on Thursday, along with the last independent radio station. Many of the remaining independent journalists are leaving the country. Recognizing this, the BBC is turning its short-wave radio service back on. But other than that, the Internet is the only remaining possibility most Russians have of accessing independent news sources - and Russia's censorship bureau is already threatening to block Wikipedia if it doesn't cover the Ukraine invasion to its satisfaction.

In fact, Russia has long been working towards a totally-controlled national network that can function independently of the rest of the Internet, like the one China already has. As The Economist writes, China is way ahead; it has 25 years of investment in its Great Firewall, and owns its entire national "stack". That is, it has domestic companies that make chips, write software, and provide services. Russia is far more dependent on foreign companies to provide many of the pieces necessary to fill out the "sovereign stack" it mandated in 2019 legislation. In July 2021, Russia tested disconnecting its nascent "Runet" from the Internet, though little is known about the results.

There are other, more appropriate channels for achieving Fedorov's goal. The most obvious are the usual social media suspects and their ability to delete fake accounts and bots and label or remove misinformation. Facebook, Google, and Twitter all moved quickly to block Russian state media from running ads on their platforms or, in Facebook's case, monetizing content. Since then, Google has paused all ad sales in Russia. The economic sanctions enacted by many countries and the crash in the ruble should shut down Russians' access to most Western ecommerce. Many countries are kicking Russia's state-media channels off the air.

This war is a week old. It will end - sometime. It will not pay in the long term (assuming we have one) to lock Russian citizens, many of whom oppose the war, into a state media-controlled echo chamber. Our best hope is to stay connected and find ways to remediate the damage, as painful as that is.

Illustrations: Sunflowers under a blue sky (by Inna Radetskaya at Wikimedia).
