net.wars: October 2016 Archives


October 28, 2016

Killer apps

Until recently, it made sense to talk about the offline world and the online world as separate things. In the mid 1990s, online pioneers often talked about the way life on the screen had given them a much greater appreciation of the physical world.

As I've written before, that made more sense when there was little or no overlap between the people in your offline and online lives, and when you had to dial up and wait to get to the latter. For some time, it's been clear that cyberspace was beginning to colonize the physical world, and a couple of weeks ago we saw the consequences that security experts have been predicting: the first distributed denial-of-service attack on the internet to be mounted by devices we haven't normally thought of as computers: digital video recorders, cameras, and baby monitors.

This was actually the second such attack. In late September, the site of security journalist Brian Krebs was knocked offline with such an enormous flood that even the content delivery network company Akamai struggled to contain it. Krebs had to temporarily shut down, and then move, his site.

Krebs reported, shortly afterwards, that the source code of the Mirai botnet malware used to attack his site had been posted to GitHub. Within weeks, Level 3 Threat Research was finding an uptick in enslaved devices. Flashpoint, which tracks the progress of such things, suggests that the Dyn attack, which hobbled connections to sites like Amazon, Twitter, and Netflix, was the work of script kiddies - copycats, basically. Script kiddies are why we still need to run anti-virus software to trap very old malware even though it can't detect the most recent, sophisticated attacks. The sole bit of sophistication in the Dyn attack may have been in picking on Dyn, a key intermediary that most people had never heard of and whose function even fewer understood.

Krebs has written a series of reports on the attacks: the attack itself, details on the devices used, and the news that the manufacturer he named is recalling products and threatening a libel suit.

BBC News this week asked me two questions: what can users do to protect themselves? Is your data at home at risk?

In these particular cases, users had few options. The devices in question had a web interface that was protected by a user name and password. The answer there is easy: change the default user name and password. But underneath, the devices, which run a trimmed-down version of Linux, also have a text-based access interface using the standard protocols Telnet and SSH. That feature was not documented, and the default passwords were hard-coded, so users had no ability to change them. According to Krebs, this was the vector by which the Mirai malware infected the devices. A technically capable home user could and should configure their home router to block incoming traffic on all but the ports they need to use; in this case, blocking ports 23 (Telnet) and 48101 would have closed off the attack.
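For the technically capable home user, a minimal sketch of how to check whether a device on your own network exposes those ports - the IP address shown is a placeholder for a camera or DVR on your network, and the port list is just the two mentioned above:

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Ports associated with the Mirai infection vector and its traffic.
SUSPECT_PORTS = [23, 48101]  # Telnet, and the port Mirai used

def audit_device(host: str) -> dict:
    """Check a single device for exposed suspect ports."""
    return {port: port_open(host, port) for port in SUSPECT_PORTS}

if __name__ == "__main__":
    # "192.168.1.50" is a placeholder address; substitute your
    # device's actual IP on your own network.
    print(audit_device("192.168.1.50"))
```

If either port reports open from outside your network, the device is a candidate for exactly the kind of enslavement described above.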

Beyond that, says Graham Cluley, turn off UPnP, which can help attackers take control. Always change default user names and passwords for all devices; this is especially important for home routers, since someone who seizes control of your router has control of your entire network and can mount spoofing attacks on all your internet activities. Fortunately, router manufacturers recognized this issue some time ago, and most home routers now provide better security by default, shipping with individual, randomly generated user names and passwords rather than a single universal pair that users must know to change.

Cluley also recommends checking regularly for vendor firmware updates and patching devices. This is where the whole Internet of Things enterprise is going to founder. Anyone who's ever bricked a device through a failed firmware update is simply not going to risk it when the device to be updated is a car, refrigerator, or other expensive appliance. For vendors, patching software is a fairly expensive effort. It makes sense when your products produce a steady stream of revenue, but none at all for inexpensive items like light bulbs or temperature sensors that may remain in place for years while generating no further income. Protecting these will have to depend on a secure router or gateway.

But even if you had followed Cluley's recommendations, it wouldn't have helped prevent the Dyn attack. There, the only solution was not to buy the devices in the first place or yank their internet connections. In the interests of being an intelligent customer, for any potential purchase insert the manufacturer, model number, and the word "security" into a search engine to see if any known flaws pop up. Similarly, read the product manual before buying, both to see what it says about security and to find any other annoying habits the product might have. Don't buy anything with standardized, hard-coded login credentials you can't change. Look for unexpected hidden channels (like Telnet and SSH) and make sure you can change those credentials, too. Finally, ask yourself: do you really need this device to be connected to the internet (YouTube)? If not, either buy something else or disable the connection. Think of it as social responsibility: it's not just your own security at risk. Today, these devices are being corralled to attack internet intermediaries; tomorrow, critical infrastructure.


Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 21, 2016

Joined-up thinking

On Monday, following up on the recent age verification demonstration event, a bunch of us, led by Myles Jackman and Pandora Blake, staged a protest in Old Palace Yard (YouTube) to explain the problems with the policy and its implementation.

If only over-18s were adept at recognizing the storefronts and street signs that Google's captcha system is currently obsessed with, implementation would be no problem. Instead, debate could be limited to deeper social questions: "Who should have access to what kind of material and where do we draw the boundaries?" Unfortunately, since the gap between a five-year-old and an adult human is many orders of magnitude smaller than the gap between human and spambot, we don't have a simple way to differentiate. British Board of Film Classification head David Austin, whose organization is the incoming regulator, has said the system will be interested solely in a yes/no answer to the over-18 question. The BBFC is, I'm sure, well-intentioned and supplied with advice, but its long history of rating and classifying film content and, latterly, video games, has given it no known expertise in salient issues such as privacy, computer system design, or cybersecurity.

The Open Rights Group argues, and Alec Muffett's technical assessment of proposed mechanisms underscores, that the draft bill contains no requirements for protecting privacy or security. Muffett has suggested that any data collected in age checks should be subject to at least the payment industry's PCI DSS standard. Even if legislative drafters want to avoid specifying a current technical standard that could soon be outmoded, they could still find a way to specify a minimum level of security.

In this situation, the government is visibly schizophrenic. With one hand, the government is pouring money via GCHQ and various research councils into improving the nation's cybersecurity. With the other, it's legislating policies that could put much of the population at risk of fraud (fake Age Gates will be everywhere) or blackmail (as data breaches continue to escalate). As ORG says, it's wrong to assume, as the draft law apparently does, that data protection law is enough on its own. Data protection law is intended to block abuse like repurposing, selling, or sharing data that's been collected. The new version, GDPR, does establish security baselines and creates a new requirement for breach notification, but it gives very few specifics about how to evaluate the need for or implement data security. And: will it be UK law?

All of this leads me to propose a required Security Impact Assessment for new legislation, similar to the now-familiar privacy impact assessment. The world's governments are still legislating with the mentality that computer networks and data are the exception rather than the norm, and that they occupy a sector separate from all others. The reality is the opposite: computers, networking, and data practices are the means by which laws are implemented in *every* sector. They are part of the critical infrastructure in all aspects of transport - even individual cyclists are frequently dependent on GPS directions fed to an earpiece. Energy. Water. Health and social care, especially. Retail. Immigration management. Voting. It is self-destructive and backward to continue to enact laws as though "cybersecurity" is a luxury add-on that need not be considered until the last stage of deployment.

Here are some of the questions an SIA might have asked about age verification:

- How sensitive is the data that could be collected? In this case, set the marker up to 11.

- How valuable would it be, and to whom? Again, 11, and: marketers, site owners, advertisers, criminals, hacktivists, blackmailers, unscrupulous journalists...

- What security and privacy standards and practices are relevant?

- What known security issues already exist in this area?

- What network externalities might apply? For example, given that people often reuse IDs and passwords, can these be reidentified against dumps from data breaches? Should sites be required to issue random character strings?

- How can the size of the target and/or attack surface be reduced? Since everyone agrees the data should not be collected and retained, an SIA would conclude the law should specify this.

- What other legislative mandates create technical conflicts or risky network externalities? For example, if one policy requires the creation of a large database of sensitive information kept securely using encryption to protect the data both in transit and at rest, and you simultaneously pass a law that requires that government be able to access all encrypted data...you haven't got a secure system for this ultra-sensitive data. One or the other policy must change to reflect this.

- Are there wider security impacts such as teaching people to do dangerous things? One type of age verification mechanism requires you to allow the verifier to build a probability score by examining your profile on Facebook or PayPal. What are the consequences of habituating people to do this? What's the plan when fake sites spring up to take advantage?

- Has this been tried before? If so, what went wrong? How should the requirements be adapted to reflect that?

Granted, everyone connected with age verification seems to have recognized correctly from the beginning that data minimization is essential here. But it's not (yet) reflected in the law. An SIA would be a vehicle for making sure that happens.
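On the password-reuse question in the checklist above: a minimal sketch, using Python's standard `secrets` module, of what site-issued random credentials might look like. The length and alphabet here are arbitrary illustrative choices, not a recommendation from any standard:

```python
import secrets
import string

# Letters and digits only, to keep the example simple; a real
# deployment would choose its alphabet and length deliberately.
ALPHABET = string.ascii_letters + string.digits

def random_credential(length: int = 20) -> str:
    """Generate a cryptographically strong random string.

    Because the string is machine-generated per site, it cannot be
    matched against credentials reused from a breached service.
    """
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(random_credential())
```

The point of such a scheme is exactly the network externality noted above: a credential the user never chose cannot be reidentified against dumps from other sites' data breaches.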


October 14, 2016

Coffee pots with benefits

"I always remember to cover the TV now," a female attendee said at the September 30 Gikii conference. She had entered her living room naked one morning, and the TV had greeted her by name.

How to control the Internet of Things was much on everyone's minds at this year's Gikii conference. One of the most interesting ideas came from Philip Howard, who wants to create more of a civic good and less of a surveillance-with-everything monster by requiring such devices to identify the ultimate beneficiary of the data. A quick search tells me that Howard, a recent arrival at the Oxford Internet Institute, has had this notion for a while. It sounds great, not least because so many of today's websites and mobile phone apps (which of course will be used to control many Internet of Things devices) are designed to lull users into a sense of intimacy, obfuscating the data collector behind the curtain revealed in the privacy policy.

The question is: how could Howard's idea be implemented? Data has a habit of slipping the surly bounds of its metadata, and any one instance of data collection may have hundreds of beneficiaries. First and foremost, presumably, is you yourself, and that's the one most technology companies encourage you to focus on when you're making purchasing decisions. If you don't follow the data's travels up the chain of sales, transfers, and profits all the way to shareholders and bankers, the rule is meaningless. If you do, how do you provide the information so that it helps people make better-informed decisions rather than becoming another instance of "alert fatigue", like the cookie directive? If the information isn't provided in context, it's too easy to forget; if it is, it interferes. Famously, Mark Rittman spent 11 hours this week getting his wifi kettle to work; how much longer would it have taken him if he had had to keep breaking off to study and approve data flows? Would it be worth it to him if he could examine the list of beneficiaries and add his favorite Fair Trade coffee supplier? Configuring firewalls will also be...let's call it "interesting" in such a scenario.
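To make the implementation question concrete, here is a sketch of what a machine-readable "beneficiary manifest" for a smart device might look like. The schema, field names, and values are entirely hypothetical - invented for illustration, not drawn from Howard's proposal or any real device:

```python
import json

# A hypothetical manifest a device could publish alongside its
# privacy policy, naming everyone who profits from its data.
manifest = {
    "device": "wifi-kettle",
    "data_collected": ["usage_times", "network_metadata"],
    "beneficiaries": [
        {"name": "device owner", "role": "primary"},
        {"name": "manufacturer", "role": "analytics"},
        {"name": "third-party ad network", "role": "marketing"},
    ],
}

def beneficiaries(m: dict) -> list:
    """Flatten the manifest into a list of beneficiary names."""
    return [b["name"] for b in m["beneficiaries"]]

print(json.dumps(manifest, indent=2))
print(beneficiaries(manifest))
```

Even this toy version surfaces the problem described above: the list is only as honest as the vendor that publishes it, and following the data past "third-party ad network" to its ultimate beneficiaries is exactly the part no manifest can enforce.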

Part of Howard's argument was that the way today's first Internet of Things devices use data impedes existing social arrangements and conventions. Howard cited two examples. The Ukrainian government used mobile phones to geolocate people at a Kiev protest and blast them a text message warning they had been registered as participants in a mass riot. In Kansas City, MO, dealerships can use a disabler wired to a car's ignition to stop delinquent payers from starting their cars - "repo" at a distance. In an example that Andrea Matwyshyn brought up earlier this year, you might think you should own your own heartbeat. But the manufacturer of the pacemaker implanted in your chest begs to differ.

Identifying the ultimate beneficiary in those various cases would have different effects. Governments likely won't be listed at all: it's entirely likely they won't have the data until they issue a subpoena or order, and possibly not even then, if the phone company simply executes the order. In the car case, there are a few more choices: buy the car for cash, switch to bicycling, or take the bus. The heart patient has no choice unless the doctor is willing to substitute an equally effective device made by a different manufacturer that operates a better data policy.

The government problem is an example of the difficulty in capturing mission creep. Governments would want the right to benefit secretly through deals such as the Yahoo email scanning recently revealed by Reuters. Today, we have both overt and covert CCTV cameras, depending on whether the owner is more interested in deterring or identifying miscreants. Would the democratized version of such deals require me to add GCHQ to the list of data beneficiaries for the smart light bulb outside my door - or, more likely, install a pre-configured bulb they send me - which not-so-coincidentally can watch my neighbors' movements? Can I turn the bulb into a warrant canary by accidentally breaking it and not replacing it?

And all this doesn't include the potential for spoofing, deceit, cybercrime, and so on.

Now multiply this by thousands, because that's at least the number of "smart" things we will encounter every day. Don't get me wrong: I want Howard's idea to work. But I think it will only do so if we *also* up-end how data is collected and stored, as groups like Mydex and Hatdex are trying to do by giving users back control over their data: the ability to see at any time who it's been shared with and to revoke permission. How any of these things will scale is an open question; but it, along with other aspects of security, must be solved. Otherwise, the recent record botnet attack on security journalist Brian Krebs's website will become the established norm for every home. The software that powered that attack is, as they say, out there, waiting.




October 7, 2016

Smut

The pole on the stage behind the speaker was a nice touch, serving as a reminder of most people's image of the pornography industry: straight men watching largely nude women under controlled circumstances. A few minutes after arriving at yesterday's Age Verification Demonstration meeting, Pandora Blake appeared with news: the ladies' room was behind the stage. Of course. That's where the women are, right?

The meeting was there to present a series of mechanisms for implementing age verification, one of the more contentious clauses in the UK's 2016-2017 Digital Economy bill (not to be confused with the 2010 Digital Economy Act). Last year, I attended a couple of related meetings of the Digital Policy Alliance, which was seeking to develop a standard for age checking technologies. The BSI's draft PAS 1296 code of practice (requires free login) is the result. A side note: the reason I attended no further meetings is that rather expensive (for a small civil society organization) membership was required; I also note that the DPA's list of registered observers does not include civil society or consumer protection organizations, a strange and disturbing omission for a system intended for nationwide use.

I'll summarize, hoping that Alec Muffett will do a more thorough technical job later. Several of the ideas presented depended on using credit cards as proxies for age, an idea payment providers will probably resist. Some proposed to use credit profiles - Experian, Equifax. One audience member asked incredulously: does Experian seriously want its logo on porn sites? Apparently so: it sponsored the event. A third proposal involves mining social media and PayPal. In this scenario, you-the-punter grant the age verifier (which may be a bought-in third-party service) the right to rummage around in your Facebook/Twitter/PayPal account to establish the probability that you're over 18. The site doesn't get your password, but it's unclear whether it or the supplier copies and retains your data. A fourth, from MindGeek, the biggest online pornography company, establishes your age classification once and gives you a federated token you can reuse across all the company's sites. Can we say profiling and a full view of each individual's preferences? Can we say that although MindGeek swears now it will never sell or seek to monetize this data, someday down the road the temptation may prove too strong? No: "We're not going to repeat Ashley Madison." I'm fairly sure Ashley Madison didn't intend to *be* Ashley Madison either. All of these ideas raise questions about whether people's personal information may become linked to the list of porn sites they visit.
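The federated-token idea can be sketched as a signed claim. This is a hypothetical illustration using Python's standard `hmac` module - it is emphatically not the company's actual design, and every name and key below is invented:

```python
import hmac
import hashlib

# Placeholder signing key; a real verifier would manage keys properly.
SECRET = b"verifier-signing-key"

def issue_token(user_id: str, over_18: bool) -> str:
    """Sign a minimal age claim so sites can accept it without
    re-checking the user's documents."""
    claim = f"{user_id}:{'over18' if over_18 else 'under18'}"
    sig = hmac.new(SECRET, claim.encode(), hashlib.sha256).hexdigest()
    return claim + "." + sig

def verify_token(token: str) -> bool:
    """Return True only for an untampered 'over18' claim."""
    claim, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, claim.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and claim.endswith(":over18")
```

Note what even this toy makes visible: the token carries a persistent user identifier, and a persistent identifier accepted across many sites is precisely the mechanism that enables the cross-site profiling worried about above.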

My personal favorite was .xxx domain owner ICM Registry. I had been waiting to play Magic Technology Bingo, in which emerging technology buzzwords would be sprinkled on this actually rather difficult problem like magic fairy dust (see also online voting). We'd already had "machine learning" to do all that probability stuff. Here, ICM Registry has an idea for an e-wallet, usable by anyone serving content into the UK market and loaded with virtual currencies. Micropayments! Struggling pornography producers can monetize snippets of content they couldn't charge for before! It should be needless to point out to net.wars readers that this idea has been going to save publishers for at least 25 years. This system would also support blacklisting (bad commercial producers that don't do age checks) and whitelisting. Government-approved porn, right there in your e-wallet!

The techiest solution is Yoti, which involves taking a selfie and sending it, along with an image of a government-issued document, to a fourth party that verifies the photo is live, matches it against the document, and checks the age. Thereafter, visiting a site requires taking a new selfie and scanning the site's QR code.

Finally, Chris Ratcliff, from Television X owner Portland TV, reviewed his company's history with age-checking and compared various options. The dropout rate with credit card verification, he said, is 70%, so it's important to support other methods such as checking against government-issued documents, electoral rolls, and so on. He got wistful: "It would be great if we could have access to the [Driver and Vehicle Licensing Agency's] datasets." Among the other silos he'd like to see opened up for use in age checking: the passport database. Er...

Big questions remain for all these guys about security. How much of the data used for verification will be kept? How will the data be protected? To what standard?

It took Pandora Blake to apply some sense. Instead of these basically dubious ideas for age verification, she said, why not try to change the bill, which reaches the committee stage next Tuesday (October 11)? Blake has written extensively about the problems she sees in the bill. Her key economic point: the bill will require UK-based commercial (which means what, exactly?) pornography producers to age-verify all their customers, but will only require overseas producers to age-verify UK customers, meaning that UK producers will operate at a disadvantage (or will rapidly become overseas producers). The burden will fall most heavily on small, independent producers who produce niche material - which means the ones who aren't producing airbrushed females for consumption by straight men but other things, like Blake's own feminist BDSM. What will disappear, she said, is healthy diversity, education, and "safety from monopolization".

Cue Tom Lehrer (YouTube).

