net.wars: February 2019 Archives


February 14, 2019


Just a couple of weeks ago it looked like the EU's proposed reform of the Copyright Directive, last updated in 2001, was going to run out of time. In the last three days, it's revived, and it's heading straight for us. As Joe McNamee, the outgoing director of European Digital Rights (EDRi), said last year, the EU seems bent on regulating Facebook and Google by creating an Internet in which *only* Facebook and Google can operate.

We'll start with copyright. As previously noted, the EU's proposed reforms include two particularly contentious clauses: Article 11, the "link tax", which would require anyone using more than one or two words to link to a news article elsewhere to get a license, and Article 13, the "upload filter", which requires any site older than three years *or* earning more than €10,000,000 a year in revenue to ensure that no user posts anything that violates copyright, and sites that allow user-generated content must make "best efforts" to buy licenses for anything their users might post. So even a tiny site - like net.wars, which is 13 years old - that hosted comments would logically be required to license all copyrighted content in the known universe, just in case. In reviewing the situation at Techdirt, Mike Masnick writes, "If this becomes law, I'm not sure Techdirt can continue publishing in the EU." Article 13, he continues, makes hosting comments impossible, and Article 11 makes their own posts untenable. What's left?

To these known evils, the German Pirate Party MEP Julia Reda finds that the final text adds two more: limitations on text and data mining that allow rights holders to opt out under most circumstances, and - wouldn't you know it? - the removal of provisions that would have granted authors the right to proportionate remuneration (that is, royalties) instead of continuing to allow all-rights buy-out contracts. Many younger writers, particularly in journalism, now have no idea that as recently as 1990 limited contracts were the norm; the ability to resell and exploit their own past work was one reason the writers of the mid-20th century made much better livings than their counterparts do now. Communia, an association of digital rights organizations, writes that at least this final text can't get any *worse*.

Well, I can hear Brexiteers cry, what do you care? We'll be out soon. No, we won't - at least, we won't be out from under the Copyright Directive. For one thing, the final plenary vote is expected in March or April - before the May European Parliament general election. The good side of this is that UK MEPs will have a vote, and can be lobbied to use that vote wisely; from all accounts the present agreed final text settled differences between France and Germany, against which the UK could provide some balance. The bad side is that the UK, which relies heavily on exports of intellectual property, has rarely shown any signs of favoring either Internet users or creators against the demands of rights holders. The ugly side is that presuming this thing is passed before the UK brexits - assuming that happens - it will be the law of the land until or unless the British Parliament can be persuaded to amend it. And the direction of travel in copyright law for the last 50 years has very much been toward "harmonization".

Plus, the UK never seems to be satisfied with the amount of material its various systems are blocking, as the Open Rights Group documented this week. If the blocks in place weren't enough, Rebecca Hill writes at the Register: under the just-passed Counter-Terrorism and Border Security Act, clicking on a link to information likely to be useful to a person committing or preparing an act of terrorism is now an offense. It seems to me that could be almost anything - automotive listings on eBay, chemistry textbooks, a *dictionary*.

What's infuriating about the copyright situation in particular is that no one appears to be asking the question that really matters, which is: what is the problem we're trying to solve? If the problem is how the news media will survive, this week's Cairncross Review, intended to study that exact problem, makes some suggestions. Like them or loathe them, they involve oversight and funding; none involve changing copyright law or closing down the Internet.

Similarly, if the problem is market dominance, try anti-competition law. If the problem is the increasing difficulty of making a living as an author or creator, improve their rights under contract law - the very provisions that Reda notes have been removed. And, finally, if the problem is the future of democracy in a world where two companies are responsible for poisoning politics, then delving into campaign finances, voter rights, and systemic social inequality pays dividends. None of the many problems we have with Facebook and Google are actually issues that tightening copyright law solves - nor is their role in spreading anti-science, such as this, just in from Twitter, anti-vaccination ads targeted at pregnant women.

All of those are problems we really do need to work on. Instead, the only problem copyright reform appears to be trying to solve is, "How can we make rights holders happier?" That may be *a* problem, but it's not nearly the one most worth solving.

Illustrations: Anti-copyright symbol (via Wikimedia); Julia Reda MEP in 2016.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

February 8, 2019

Doing without

Over at Gizmodo, Kashmir Hill has conducted a fascinating experiment: cutting out, in turn, Amazon, Facebook, Google, Microsoft, and Apple, culminating with a week without all of them. Unlike the many fatuous articles in which privileged folks boast about disconnecting, Hill is investigating a serious question: how deeply have these companies penetrated into our lives? As we'll see, this question encompasses the entire modern world.

For that reason, it's important. Besides, as Hill writes, it's wrong to answer objections to GAFAM's business practices - or their privacy policies - with, "Well, don't use them, then." It may be possible to buy from smaller sites and local suppliers, delete Facebook, run Linux, switch to AskJeeves and OpenStreetMap, and dump the iPhone, but doing so requires a substantial rethink of many tasks. As regulators consider curbing GAFAM's power, Hill's experiment shows where to direct our attention.

Online, Amazon is the hardest to avoid. As Lina M. Khan has documented, Amazon underpins an ever-increasing amount of Internet infrastructure. Netflix, Signal, the WELL, and Gizmodo itself all run on top of Amazon's cloud services, AWS. To ensure she blocked all of them, Hill got a technical expert to set up a VPN that blocked all IP addresses owned by each company and monitored attempted connections. Even that, however, was complicated by the use of content delivery networks, which mask the origin of network traffic.
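For a sense of what that blocking involves: the core operation is checking whether a destination address falls inside any of a company's published network ranges (AWS, for instance, publishes its IP ranges as a JSON file). A minimal sketch in Python, using a couple of hard-coded illustrative ranges rather than the real, much longer lists:

```python
import ipaddress

# A few illustrative CIDR blocks. Real blocklists - e.g. the ranges AWS
# publishes in its ip-ranges.json - contain thousands of entries per company.
BLOCKED_RANGES = [ipaddress.ip_network(cidr) for cidr in [
    "52.95.0.0/16",    # illustrative AWS-style range (example only)
    "157.240.0.0/16",  # illustrative Facebook-style range (example only)
]]

def is_blocked(ip: str) -> bool:
    """Return True if the address falls inside any blocked range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in BLOCKED_RANGES)

print(is_blocked("52.95.1.1"))  # inside the first range -> True
print(is_blocked("8.8.8.8"))    # outside every range -> False
```

The hard part, as Hill found, isn't the membership test - it's that CDNs and shared cloud hosting mean an address's owner tells you little about whose content you're actually fetching.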

Barring Facebook also means dumping Instagram and WhatsApp, and, as Hill notes, changing the signin procedure for any website where you've used your Facebook ID. Even if you are a privacy-conscious net.wars reader who would never grant Facebook that pole position, the social media buttons on most websites and ubiquitous trackers also have to go.

For Hill, blocking Apple - which seems easy to us non-Apple users - was "devastating". But this is largely a matter of habit, and habits can be re-educated. The killer was the apps: because iMessage reroutes texts to its own system, some of Hill's correspondents' replies never arrive, and she can't FaceTime her friends. Her conclusion: "It's harder to get out of Apple's ecosystem than Google's." However, once out she found it easy to stay that way - as long as she could resist her friends pulling her back in.

Google proved easier than expected despite her dependence on its services - Maps, calendar, browser. Here the big problem was email. The amount of stored information made it impossible to simply move and delete the account; now we know why Google provides so much "free" storage space. As with Amazon, the bigger issue was all the services Google underpins - trackers, analytics, and, especially, Maps, on which Uber, Lyft, and Yelp depend. Hill should be grateful she didn't have a Nest thermostat and doesn't live in Minnesota. The most surprising bit is that so many sites load Google *fonts*. Also, like Facebook, Google has spread logins across the web, and Hill had to find an alternative to Dropbox, which uses Google to verify users.

In our minds, Microsoft is like Apple. Don't like Windows? Get a Mac or use Linux. Ah, but: I have seen the Windows Blue Screen of Death on scheduling systems on both the London Underground and Philadelphia's SEPTA. How many businesses that I interact with depend on Microsoft products? PCs, Office, and Windows servers and point of sale systems are everywhere. A VPN can block LinkedIn, Skype, and (sadly) GitHub - but it can't block any of those embedded systems, or the back office systems at your bank. You can sell your Xbox, but even the local film society shows movies using VLC on Windows.

Hill's final episode, in which she eliminates all five simultaneously, posted just last night. As expected, she struggles to find alternative ways to accomplish many tasks she hasn't had to think about before. Ironically, this is easier if you're an Old Net Curmudgeon: as soon as she says large file, can't email, I go, "FTP!" while various web services all turn out to be hosted on AWS, and she eventually lands on "command line". It's a definite advantage if you remember how you did stuff *before* the Internet - you can pay the babysitter in cash (or write a check!), and old laptops can be repurposed to run Linux. Even so, complete avoidance is really only realistic for a US Congressman. The hardest for me personally would be giving up my constant companion, DuckDuckGo, which is hosted on...AWS.

Several things need to happen to change this - and we *should* change it because otherwise we're letting them pwn us, as in Dave Eggers' The Circle. The first is making the tradeoffs visible, so that we understand who we're really benefiting and harming with our clicks. The second is also regulatory: Lina Khan described in 2017 how to rethink antitrust law to curb Amazon. Facebook, as Marc Rotenberg told CNBC last week, should be required to divest Instagram and WhatsApp. Both Facebook and Google should spin off or discontinue their identity verification and web-wide login systems into separate companies. Third, we should encourage alternatives by using them.

But the last thing is the hardest: we must convince all our friends that it's worth putting up with some inconvenience. As a lifelong non-drinker living in pub-culture Britain, I can only say: good luck with that.

Illustrations: Kashmir Hill and her new technology.


February 1, 2019

Beyond data protection

For the group assembled this week in Brussels for Computers, Privacy, and Data Protection, the General Data Protection Regulation that came into force in May 2018 represented the culmination of years of effort. The mood, however, is not so much self-congratulatory as "what's next?".

The first answer is a lot of complaints. An early panel featured a number of these. Max Schrems, never one to shirk, celebrated GDPR day in 2018 by joining with La Quadrature du Net to file two complaints against Google, WhatsApp, Instagram, and Facebook over "forced consent". Last week, he filed eight more complaints against Amazon, Apple, Spotify, Netflix, YouTube, SoundCloud, DAZN, and Flimmit regarding their implementation of subject access rights. A day or so later, the news broke: the French data protection regulator, CNIL, has fined Google €50 million (PDF) on the basis of their complaint - the biggest fine so far under the new regime that sets the limit at 4% of global turnover. Google is considering an appeal.

It's a start. We won't know for probably five years whether GDPR will have the intended effect of changing the balance of power between citizens and data-driven companies (even though one site is already happy to call it a failure). Meanwhile, one interesting new development is Apple's crackdown on first Facebook and then Google for abusing its enterprise app system to collect comprehensive data on end users. While Apple is certainly far less dependent on data collection than the rest of GAFA/FAANG, this action is a little like those types of malware that download anti-virus software to clean your system of the competition.

The second - more typical of a conference - is to stop and think: what doesn't GDPR cover? The answers are coming fast: AI, automated decision-making, household or personal use of data, and (oh, lord) blockchain. And, a questioner asked late on Wednesday, "Is data protection privacy, data, or fairness?"

Several of these areas are interlinked: automated decision-making is currently what we mean when we say "AI", and we talk a lot about the historical bias stored in data and the discrimination that algorithms derive from training data and bake into their results. Discussions of this problem, Ansgar Koene noted, tend to portray accuracy and fairness as a tradeoff, with accuracy presented as a scientifically neutral reality and fairness as a fuzzy human wish. Instead, he argued, accuracy depends on values we choose to judge it by. Why shouldn't fairness just be one of those values?

A bigger limitation - which we've written about here since 2015 - is that privacy law tends to focus on the individual. Seda Gürses noted that focusing on the algorithm - how to improve it and reduce its bias - similarly ignores the wider context and network externalities. Optimize the Waze algorithm so each driver can reach their destination in record time, and the small communities whose roads were not built for speedy cut-throughs bear the costs of the extra traffic, noise, and pollution it generates. Next-generation privacy will have to reflect that wider context; as Dennis Hirsch put it, social protection rather than individual control. As Schrems' and others' complaints show, individual control is rarely ours on today's web in any case.

Privacy is not the only regulation that suffers from that problem. At Tuesday's pre-conference Privacy Camp, several speakers deplored the present climate in which platforms' success in removing hate speech, terrorist content, and unauthorized copyright material is measured solely in numbers: how many pieces, how fast. Such a regime does not foster thoughtful consideration, nuance, respect for human rights, or the creation of a robust system of redress for the wrongly accused. "We must move away from the idea that illegal content can be perfectly suppressed and that companies are not trying hard enough if they aren't doing it," Mozilla Internet policy manager Owen Bennett said, going on to advocate for a wider harm reduction approach.

The good news, in a way, is that privacy law has fellow warriors: competition, liability, and consumer protection law. The first two of those, said Mireille Hildebrandt, need to be rethought, in part because some problems will leave us no choice. She cited, for example, the energy market: as we are forced to move to renewables, both supply and demand will fluctuate enormously. "Without predictive technology I don't see how we can solve it." Continuously predicting the energy use of each household will, she wrote in a paper in 2013 (PDF), pose new threats to privacy, data protection, non-discrimination, and due process.

One of the more interesting new (to me, at least) players on this scene is Algorithm Watch, which has just released a report on algorithmic decision-making in the EU that recommends looking at other laws that are relevant to specific types of decisions, such as applying equal pay legislation to the gig economy. Data protection law doesn't have to do it all.

Some problems may not be amenable to law at all. Paul Nemitz posed this question: given that machine learning training data is always historical, and that therefore the machines are always perforce backward-looking, how do we as humans retain the drive to improve if we leave all our decisions to machines? No data protection law in the world can solve that.

Illustrations: The CPDP 2019 welcome sign in Brussels.
