Entirely preventable
This week, the US House of Representatives Committee on Oversight and Government Reform used this phrase to describe the massive 2017 Equifax data breach: "Entirely preventable." The ensuing recommendations are all sensible and valuable - improve consumers' ability to check their records, reduce the use of Social Security numbers as unique identifiers, improve oversight of credit reporting agencies, increase transparency and accountability, hold federal contractors liable, and modernize IT security - but it's not clear they will really prevent another similar breach from taking place. A key element was a bit of unpatched software that left open a vulnerability the attackers used to gain a foothold - in part, the report says, because the legacy IT systems made patching difficult. Making it easier to do the right thing is part of the point of the recommendation to modernize the IT estate.
How closely is it feasible to micromanage companies the size and complexity of Equifax? What protection against fraud will we have otherwise?
The massive frustration is that none of this is new information or radical advice. On the consumer rights side, the committee is merely recommending practices that have been mandated in the EU for more than 20 years in data protection law. Privacy advocates have been saying for more than *30* years that the SSN is a prime example of how a unique identifier should *not* be used. Patching software is so basic that you can pick any random top-ten list of security tips and find it in the top three. We sort of make excuses for small businesses because their limited resources mean they don't have dedicated security personnel, but what excuse can there possibly be for a company the size of Equifax, which holds the financial frailty of hundreds of millions of people in its grasp?
The company can correctly say this: we are not its customers. It is not its job to care about us. Its actual customers - banks, financial services, employers, governments - are all well served. What's our problem? Zeynep Tufekci summed it up correctly on Twitter when she commented that we are not Equifax's customers but its victims. Until there are proportionate consequences for neglect and underinvestment in security, she said later, the companies and their departing-with-bonuses CEOs will continue scrimping on security, even though the smallest infraction by a consumer means years of struggle to reclaim their credit rating.
If Facebook and Google should be regulated as public utilities, the same is even more true for the three largest credit reporting agencies, Equifax, Experian, and TransUnion, which hold far more power over us and are far less accountable. We have no opt-out to exercise.
But even the punish-the-bastards approach merely smooths over and repaints the outside of a very ugly tangle of amyloid plaques. Real change would mean, as Mydex CEO David Alexander is fond of arguing, adopting a completely different approach that puts each of us in charge of our own data and avoids creating these giant attacker-magnet databases in the first place. See also data brokers, which are invisible to most people.
Meanwhile, in contrast to the committee, other parts of the Five Eyes governments seem set on undermining whatever improvements to our privacy and security we can muster. Last week the Australian parliament voted to require companies to back-door their encryption when presented with a warrant. While the bill stops short of requiring technology companies to build in such backdoors as a permanent fixture - it says the government cannot require companies to introduce a "systemic weakness" or "systemic vulnerability" - the reality is that being able to break encryption on demand *is* a systemic weakness. Math is like that: either you can prove a theorem or you can't. New information can overturn existing knowledge in other sciences, but math is built on proven bedrock. The potential for a hole is still a hole, with no way to ensure that only "good guys" can use it - even if you can agree on who the good guys are.
In the UK, GCHQ has notified the intelligence and security committee that it will expand its use of "bulk equipment interference". In other words, having been granted the power to hack the world's computers - everything from phones and desktops to routers, cars, toys, and thermostats - when the 2016 Investigatory Powers Act was being debated, GCHQ now intends to break its promise to use that power sparingly.
As I wrote in a submission to the consultation, bulk hacking is truly dangerous. The best hackers make mistakes, and it's all too easy to imagine a hacking error becoming the cause of a 100-car pile-up. As smart meters roll out, albeit delayed, and the smart grid takes shape, these, too, will be "computers" GCHQ has the power to hack. You, too, can torture someone in their own home just by controlling their thermostat. Fun! And important for national security. So let's do more of it.
In a time when attacks on IT infrastructure are growing in sophistication, scale, and complexity, the most knowledgeable people in government, whose job it is to protect us, are deliberately advocating weakening it. The consequences that are doubtless going to follow the inevitable abuse of these powers - because humans are humans and the mindset inside law enforcement is to assume the worst of all of us - will be entirely preventable.
Illustrations: GCHQ listening post at dawn (via Wikimedia).
Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.