" /> net.wars: November 2018 Archives


November 30, 2018

Digital rights management

"I think we would distinguish between the Internet and Facebook. They're not the same thing." With this, the MP Damian Collins (Conservative, Folkestone and Hythe) closed Tuesday's hearing on fake news, in which representatives of nine countries, combined population 400 million, posed questions to Facebook VP for policy Richard Allan, proxying for non-appearing CEO Mark Zuckerberg.

Collins was correct when you're talking about the countries present: UK, Ireland, France, Belgium, Latvia, Canada, Argentina, Brazil, and Singapore. However, the distinction is without a difference in numerous countries where poverty and no-cost access to Facebook or its WhatsApp subsidiary keeps the population within their boundaries. Foreseeing this probable outcome, India's regulator banned Facebook's Free Basics on network neutrality grounds.

Much less noticed, the nine also signed a set of principles for governing the Internet. Probably the most salient point is the last one, which says technology companies "must demonstrate their accountability to users by making themselves fully answerable to national legislatures and other organs of representative democracy". They could just as well have phrased it, "Hey, Zuckerberg: start showing up."

This was, they said, the first time multiple parliaments have joined together in the House of Commons since 1933, and the first time ever that so many nations assembled - and even that wasn't enough to get Zuckerberg on a plane. Even if Allan was the person best-placed to answer the committee's questions, it looks bad, like you think your company is above governments.

The difficulty that has faced would-be Internet regulators from the beginning is this: how do you get 200-odd disparate cultures to agree? China would openly argue for censorship; many other countries would openly embrace freedom of expression while happening to continue expanding web blocking, filtering, and other restrictions. We've seen the national disparities in cultural sensitivities played out for decades in movie ratings and TV broadcasting rules. So what's striking about this declaration is that nine countries from three continents have found some things they can agree on - and that is that libertarian billionaires running the largest and most influential technology companies should accept the authority of national governments. Hence, the group's first stated principle: "The internet is global and law relating to it must derive from globally agreed principles". It took 22 years, but at last governments are responding to John Perry Barlow's 1996 Declaration of the Independence of Cyberspace: "Not bloody likely."

Even Allan, a member of the House of Lords and a former MP (LibDem, Sheffield Hallam), admitted, when Collins asked how he thought it looked that Zuckerberg had sent a proxy to testify, "Not great!"

The governments' principles, however, are a statement of authority, not a bill of rights for *us*, a tougher proposition that many have tried to meet. In 2010-2012, there was a flurry of attempts. Then-US president Barack Obama published a list of privacy principles; the 2010 Computers, Freedom, and Privacy conference, led by co-chair Jon Pincus, brainstormed a bill of rights mostly aimed at social media; UK deputy Labour leader Tom Watson ran for his seat on a platform of digital rights (now gone from his website); and US Congressman Darrell Issa (R-CA) had a try.

Then a couple of years ago, Cybersalon began an effort to build on all these attempts to draft a bill of rights hoping it would become a bill in Parliament. Labour drew on it for its Digital Democracy Manifesto (PDF) in 2016 - though this hasn't stopped the party from supporting the Investigatory Powers Act.

The latest attempt came a few weeks ago, when Tim Berners-Lee launched a contract for the web, which has been signed by numerous organizations and individuals. There is little to object to: universal access, respect for privacy, free expression and human rights, and civil discourse. Granted, the contract is, like the Bishop of Oxford's ten commandments for artificial intelligence, aspirational more than practically prescriptive. The civil discourse element is reminiscent of Tim O'Reilly's 2007 Code of Conduct, which many, net.wars included, felt was unworkable.

The reality is that it's unlikely that O'Reilly's code of conduct or any of its antecedents and successors will ever work without rigorous human moderatorial intervention. There's a similar problem with the government pledges: is China likely to abandon censorship? Next year half the world will be online - but alongside the Contract a Web Foundation study finds that the rate at which people are getting online has fallen sharply since 2015. Particularly excluded are women and the rural poor, and getting them online will require significant investment in not only broadband but education - in other words, commitments from both companies and governments.

Popular Mechanics calls the proposal 30 years too late; a writer on Medium calls it communist; and Bloomberg, among others, argues that the only entities that can rein in the big technology companies are governments. Yet the need for them to do this appears nowhere in the manifesto. "...The web is long past attempts at self-regulation and voluntary ethics codes," Bloomberg concludes.

Sadly, this is true. The big design error in creating both the Internet and the web was omitting human psychology and business behavior. Changing today's situation requires very big gorillas. As we've seen this week, even nine governments together need more weight.


Illustrations: Zuckerberg's empty chair in the House of Commons.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 23, 2018

Phished

I regularly get Friend requests on Facebook from things I doubt are real people. They are always male and, at a guess, 40-something, have no Friends in common with me, and don't bother to write a message explaining how I know them. If I take the trouble to click through to their profiles, their Friends lists are empty. This week's request, from "Smith Thomson", is muscled, middle-aged, and slightly brooding. He lists his workplace as a US Army base and his birthplace as Houston. His effort is laughably minimal: zero Friends and the only profile content is the cover photograph plus a second photo with a family in front of a Disney castle, probably Photoshopped. I have a nasty, suspicious mind, and do not accept the request.

One of the most interesting projects under the umbrella of the Research Institute for Science of Cyber Security is Detecting and Preventing Mass-Marketing Fraud, led from the University of Warwick by Monica Whitty, and explained here. We tend to think of romance scams in particular, less so advance-fee fraud, as one-to-one rip-offs. Instead, the reality behind them is highly organized criminals operating at scale.

This is a billion-dollar industry with numerous victims. On Monday, the BBC news show Panorama offered a carefully worked example. The journalists followed the trail of these "catfish" by setting up a fake profile and awaiting contact, which quickly arrived. Following clues and payment instructions led the journalists to the scammer himself, in Lagos, Nigeria. One of the victims in particular displays reactions Whitty has seen in her work, too: even when you explain the fraud, some victims still don't recognize the same pattern when they are victimized again. Panorama's saddest moment is an older man who was clearly being retargeted after having already been fleeced of £100,000, his life savings. The new scammer was using exactly the same methodology, and yet he justified sending his new "girlfriend" £500 on the basis that it was comparatively modest, though at least he sounded disinclined to send more. He explained his thinking this way: "They reckon that drink and drugs are big killers. Yeah, they are, but loneliness is a bigger killer than any of them, and trying to not be lonely is what I do every day."

I doubt Panorama had to look very hard to find victims. They pop up a lot at security events, where everyone seems to know someone who's been had: the relative whose computer they had to clean after they'd been taken in by a tech support scam, the friend they'd had to stop from sending money. Last year, one friend spent several months seeking restitution for her mother, who was at least saved from the worst by an alert bank teller at her local branch. The loss of those backstops - people in local bank branches and other businesses who knew you and could spot when you were doing something odd - is a largely unnoticed piece of why these scams work.

In a 2016 survey, Microsoft found that two-thirds of US consumers had been exposed to a tech support scam in the previous year. In the UK in 2016, a report by the US Better Business Bureau (PDF) says, there were more than 34,000 complaints about this type of fraud alone - and it's known that fewer than 10% of victims complain. Each scam has its preferred demographic. Tech support fraud doesn't typically catch older people, who have life experience and have seen other scams even if not this particular one. The biggest victims of this type of scam are millennials aged 18 to 34 - with no gender difference.

DAPM's meeting mostly focused on dating scams, a particular interest of Whitty's because the emotional damage, on top of the financial damage, is so fierce. From her work, I've learned that the military connection "Smith Thomson" claimed is a common pattern. Apparently some people are more inclined to trust a military background, and claiming that they're located on a military base makes it easy for scammers to dodge questions about exactly what they're doing and where they are and resist pressure to schedule a real-life meeting.

Whitty and her fellow researchers have already discovered that the standard advice we give people doesn't work. "If something looks too good to be true it usually is" is only meaningful at the beginning - and that's not when the "too good to be true" manifests itself. Fraudsters know to establish trust before ratcheting up the emotions and starting to ask - always urgently - for money. By then, requests that would raise alarm flags at the beginning seem like merely the natural next steps in a developed relationship. Being scammed once gets you onto a "suckers list", ripe for retargeting - like Panorama's victim. These, too, are not new; they have been passed around among fraudsters for at least a century.

The point of DAPM's research is to develop interventions. They've had some statistically significant success with instructions teaching people to recognize scams. However, this method requires imparting a lot of information, which means the real conundrum is how you motivate people to participate when most believe they're too smart to get caught. The situation is very like the paranormal claims The Skeptic deals with: no matter how smart you are or how highly educated, you, too, can be fooled. And, unlike in other crimes, DAPM finds, 52% of these victims blame themselves.


Illustrations: Cupid's Message (via Missouri Historical Society).


November 16, 2018

Septet

This week catches up on some things we've overlooked. Among them, in response to a Twitter comment: two weeks ago, on November 2, net.wars started its 18th unbroken year of Fridays.

Last year, the writer and documentary filmmaker Astra Taylor coined the term "fauxtomation" to describe things that are hyped as AI but that actually rely on the low-paid labor of numerous humans. In The Automation Charade she examines the consequences: undervaluing human labor and making it both invisible and insecure. Along these lines, it was fascinating to read that in Kenya, workers drawn from one of the poorest places in the world are paid to draw outlines around every object in an image in order to help train AI systems for self-driving cars. How many of us look at a self-driving car and see someone tracing every pixel?

***

Last Friday, Index on Censorship launched Demonising the media: Threats to journalists in Europe, which documents journalists' diminishing safety in western democracies. Italy takes the EU prize, with 83 verified physical assaults, followed by Spain with 38 and France with 36. Overall, the report found 437 verified incidents of arrest or detention and 697 verified incidents of intimidation. It's tempting - as in the White House dispute with CNN's Jim Acosta - to hope for solidarity in response, but it's equally likely that years of politicization have left whole sectors of the press as divided as any bullying politician could wish.

***

We utterly missed the UK Supreme Court's June decision in the dispute pitting ISPs against "luxury" brands including Cartier, Mont Blanc, and International Watch Company. The goods manufacturers wanted to force BT, EE, and the three other original defendants, which jointly provide 90% of Britain's consumer Internet access, to block more than 46,000 websites that were marketing and selling counterfeits. In 2014, the High Court ordered the blocks. In 2016, the Court of Appeal upheld that on the basis that without ISPs no one could access those websites. The final appeal was solely about who pays for these blocks. The Court of Appeal had said: ISPs. The Supreme Court decided instead that under English law innocent bystanders shouldn't pay for solving other people's problems, especially when solving them benefits only those others. This seems a good deal for the rest of us, too: being required to pay may constrain blocking demands to reasonable levels. It's particularly welcome after years of expanded blocking for everything from copyright, hate speech, and libel to data retention and interception that neither we nor ISPs much want in the first place.

***

For the first time the Information Commissioner's Office has used the Computer Misuse Act rather than data protection law in a prosecution. Mustafa Kasim, who worked for Nationwide Accident Repair Services, will serve six months in prison for using former colleagues' logins to access thousands of customer records and spam the owners with nuisance calls. While the case reminds us that the CMA still catches only the small fry, we see the ICO's point.

***

In finally catching up with Douglas Rushkoff's Throwing Rocks at the Google Bus, the section on cashless societies and local currencies reminded us that in the 1960s and 1970s, New Yorkers considered it acceptable to tip with subway tokens, even in the best restaurants. Who now would leave a Metro Card? Currencies may be local or national; cashlessness is global. It may be great for those who don't need to think about how much they spend, but it means all transactions are intermediated, with a percentage skimmed off the top for the middlefolk. The costs of cash have been invisible to us, as Dave Birch says, but it is public infrastructure. Cashlessness privatizes that without any debate about the social benefits or costs. How centralized will this new infrastructure become? What happens to sectors that aren't commercially valuable? When do those commissions start to rise? What power will we have to push back? Even on-the-brink Sweden is reportedly rethinking its approach for just these reasons. In a survey, only 25% wanted a fully cashless society.

***

Incredibly, 18 years after chad hung and people disposed in Bush versus Gore, ballots are still being designed in ways that confuse voters, even in Broward County, which should have learned better. The Washington Post tells us that in both New York and Florida ballot designs left people confused (seeing them, we can see why). For UK voters accustomed to a bit of paper with big names and boxes to check with a stubby pencil, it's baffling. Granted, the multiple federal races, state races, local officers, judges, referendums, and propositions in an average US election make ballot design a far more complex problem. There is advice available from the US Election Assistance Commission, which publishes design best practices, but I'm reliably told it's nonetheless difficult to do well. On Twitter, Dana Chisnell provides a series of links that taken together explain some background. Among them is this one from the Center for Civic Design, which explains why voting in the US is *hard* - and not just because of the ballots.

***

Finally, a word of advice. No matter how cool it sounds, you do not want a solar-powered, radio-controlled watch. Especially not for travel. TMOT.

Illustrations: Chad 2000.


November 9, 2018

Escape from model land

"Models are best for understanding, but they are inherently wrong," Helen Dacre said, evoking robotics engineer Bill Smart on sensors. Dacre was presenting a tool that combines weather forecasts, air quality measurements, and other data to help airlines and other stakeholders quickly assess the risk of flying after a volcanic eruption. In April 2010, when Iceland's Eyjafjallajökull blew its top, European airspace shut down for six days at an estimated overall cost of £1.1 billion. Since then, engine manufacturers have studied the effect of atmospheric volcanic ash on aircraft engines, and are finding that a brief excursion through peak levels of concentration is less damaging than prolonged exposure at lower levels. So, do you fly?

This was one of the projects presented at this week's conference of the two-year-old network Challenging Radical Uncertainty in Science, Society and the Environment (CRUISSE). To understand "radical uncertainty", start with Frank Knight, who in 1921 differentiated between "risk", where the outcomes are unknown but the probabilities are known, and uncertainty, where even the probabilities are unknown. Timo Ehrig summed this up as "I know what I don't know" versus "I don't know what I don't know", evoking Donald Rumsfeld's "unknown unknowns". In radical uncertainty decisions, existing knowledge is not relevant because the problems are new: the discovery of metal fatigue in airline jets; the 2008 financial crisis; social media; climate change. The prior art, if any, is of questionable relevance. And you're playing with live ammunition - real people's lives. By the million, maybe.

How should you change the planning system to increase the stock of affordable housing? How do you prepare for unforeseen cybersecurity threats? What should we do to alleviate the impact of climate change? These are some of the questions that interested CRUISSE founders Leonard Smith and David Tuckett. Such decisions are high-impact, high-visibility, with complex interactions whose consequences are hard to foresee.

It's the process of making them that most interests CRUISSE. Smith likes to divide uncertainty problems into weather and climate. With "weather" problems, you make many similar decisions based on changing input; with "climate" problems your decisions are either a one-off or the next one is massively different. Either way, with climate problems you can't learn from your mistakes: radical uncertainty. You can't reuse the decisions; but you *could* reuse the process by which you made the decision. They are trying to understand - and improve - those processes.

This is where models come in. This field has been somewhat overrun by a specific type of thinking they call OCF, for "optimum choice framework". The idea there is that you build a model, stick in some variables, and tweak them to find the sweet spot. For risks, where the probabilities are known, that can provide useful results - think cost-benefit analysis. In radical uncertainty...see above. But decision makers are tempted to build a model anyway. Smith said, "You pretend the simulation reflects reality in some way, and you walk away from decision making as if you have solved the problem." In his hand-drawn graphic, this is falling off the "cliff of subjectivity" into the "sea of self-delusion".

Uncertainty can come from anywhere. Kris de Meyer is studying what happens if the UK's entire national electrical grid crashes. Fun fact: it would take seven days to come back up. *That* is not uncertain. Nor are the consequences: nothing functioning, dark streets, no heat, no water after a few hours for anyone dependent on pumping. Soon, no phones unless you still have copper wire. You'll need a battery or solar-powered radio to hear the national emergency broadcast.

The uncertainty is this: how would 65 million modern people react in an unprecedented situation where all the essentials of life are disrupted? And, the key question for the policy makers funding the project, what should government say? *Don't* fill your bathtub with water so no one else has any? *Don't* go to the hospital, which has its own generators, to charge your phone?

"It's a difficult question because of the intention-behavior gap," de Meyer said. De Meyer is studying this via "playable theater", an effort that starts with a story premise that groups can discuss - in this case, stories of people who lived through the blackout. He is conducting trials for this and other similar projects around the country.

In another project, Catherine Tilley is investigating the claim that machines will take all our jobs. Tilley finds two dominant narratives. In one, jobs will change, not disappear, and automation will bring more of them, along with enhanced productivity and new wealth. In the other, we will be retired...or unemployed. The numbers in these predictions are very large, but conflicting, so they can't all be right. What do we plan for education and industrial policy? What investments do we make? Should we prepare for mass unemployment, and if so, how?

Tilley identified two common assumptions: tasks that can be automated will be; automation will be used to replace human labor. But interviews with ten senior managers who had made decisions about automation found otherwise. Tl;dr: sectoral, national, and local contexts matter, and the global estimates are highly uncertain. Everyone agrees education is a partial solution - "but for others, not for themselves".

Here's the thing: machines are models. They live in model land. Our future depends on escaping.


Illustrations: David Tuckett and Lenny Smith.


November 2, 2018

The Brother proliferation

There's this about having one or two big threats: they distract attention from the copycat threats forming behind them. Unnoticed by most of us - the notable exception being Jeff Chester and his Center for Digital Democracy - the landscape of data brokers is both consolidating and expanding in new and alarming ways. Facebook and Google remain the biggest data hogs, but lining up behind them are scores of others embracing the business model of surveillance capitalism. For many, it's an attempt to refresh their aging business models; no one wants to become an unexciting, solid business.

The most obvious group is the telephone companies - we could call them "legacy creepy". We've previously noted their moves into TV. For today's purposes, Exhibit A is Verizon's 2015 acquisition of AOL, which Fortune magazine attributed to AOL's collection of advertising platforms, particularly in video, as well as its more visible publishing sites (which include the Huffington Post, Engadget, and TechCrunch). Verizon's 2016 acquisition of Yahoo! and its 3 billion user accounts and long history also drew notice, most of it negative. Yahoo!, the reasoning went, was old and dying, plus: data breaches that were eventually found to have affected all 3 billion Yahoo! accounts. Oath, Verizon's name for the division that owns AOL and Yahoo!, also owns MapQuest and Tumblr. For our purposes, though, the notable factor is that with these content sites Verizon gets a huge historical pile of their users' data that it can combine with what it knows about its subscribers in truly disturbing ways. This is a company that only two years ago was fined $1.35 million for secretly tracking its customers.

Exhibit B is AT&T, which was barely finished swallowing Time-Warner (and presumably its customer database along with it) when it announced it would acquire the adtech company AppNexus, a deal Forrester's Joanna O'Connell calls a material alternative to Facebook and Google. Should you feel insufficiently disturbed by that prospect, in 2016 AT&T was caught profiting from handing off data to federal and local drug officials without a warrant. In 2015, the company also came up with the bright idea of charging its subscribers not to spy on them via deep packet inspection. For what it's worth, AT&T is also the longest-serving campaigner against network neutrality.

In 2017, Verizon and AT&T were among the biggest lobbyists seeking to up-end the Federal Communications Commission's privacy protections.

The move into data mining appears likely to be copied by legacy telcos internationally. As evidence, we can offer Exhibit C, Telenor, which in 2016 announced its entry into the data mining business by buying the marketing technology company Tapad.

Category number two - which we can call "you-thought-they-had-a-different-business-model creepy" - is a surprise, at least to me. Here, Exhibit A is Oracle, which is reinventing itself from enterprise software company to cloud and advertising platform supplier. Oracle's list of recent acquisitions is striking: the consumer spending tracker Datalogix, the "predictive intelligence" company DataFox, the cross-channel marketing company Responsys, the data management platform BlueKai, the cross-channel machine learning company Crosswise, and audience tracker AddThis. As a result, Oracle claims it can link consumers' activities across devices, online and offline, something just about everyone finds creepy except, apparently, the people who run the companies that do it. It may surprise you to find Adobe is also in this category.

Category number three - "newtech creepy" - includes data brokers like Acxiom, perhaps the best-known of the companies that have everyone's data but that no one's ever heard of. It, too, has been scooping up competitors and complementary companies, for example LiveRamp, which it acquired from fellow profiling company RapLeaf, and which is intended to help it link online and offline identities. The French company Criteo uses probabilistic matching to send ads following you around the web and into your email inbox. My favorite in this category is Quantcast, whose advertising and targeting activities include "consent management". In other words, they collect your consent or lack thereof to cookies and tracking at one website and then follow you around the web with it. Um...you have to opt into tracking to opt out?

Meanwhile, the older credit bureaus Experian and Equifax - "traditional creepy" - have been buying enhanced capabilities and expanded geographical reach and partnering with telcos. One of Equifax's acquisitions, TALX, gave the company employment and payroll information on 54 million Americans.

The detail amounts to this: big companies with large resources are moving into the business of identifying us across devices, linking our offline purchases to our online histories, and packaging us into audience segments to sell to advertisers. They're all competing for the same zircon ring: our attention and our money. Doesn't that make you feel like a valued member of society?

At the 2000 Computers, Freedom, and Privacy conference, the science fiction writer Neal Stephenson presciently warned that focusing solely on the threat of Big Brother was leaving us open to invasion by dozens of Little Brothers. It was good advice. Now, Very Large Brothers are proliferating all around us. GDPR is supposed to redress this imbalance of power, but it only works when you know who's watching you so you can mount a challenge.


Illustrations: "Security Monitoring Centre" (via Wikimedia).
