On Monday, following up on the recent age verification demonstration event, a bunch of us, led by Myles Jackman and Pandora Blake, staged a protest in Old Palace Yard (YouTube) to explain the problems with the policy and its implementation.
If only over-18s were adept at recognizing the storefronts and street signs that Google's captcha system is currently obsessed with, implementation would be no problem. Instead, debate could be limited to deeper social questions: "Who should have access to what kind of material, and where do we draw the boundaries?" Unfortunately, since the gap between a five-year-old and an adult human is many orders of magnitude smaller than the gap between human and spambot, we don't have a simple way to differentiate. British Board of Film Classification head David Austin, the incoming regulator, has said the system will be interested solely in a yes/no answer to the over-18 question. The BBFC is, I'm sure, well-intentioned and well supplied with advice, but its long history of rating and classifying film content and, latterly, video games, has given it no known expertise in salient issues such as privacy, computer system design, or cybersecurity.
The Open Rights Group argues, and Alec Muffett's technical assessment of proposed mechanisms underscores, that the draft bill contains no requirements for protecting privacy or security. Muffett has suggested that any data collected in age checks should be subject to at least the payment industry's PCI DSS standard. Even if legislative drafters want to avoid specifying a current technical standard that could soon be outmoded, they could still find a way to specify a minimum level of security.
In this situation, the government is visibly schizophrenic. With one hand, the government is pouring money via GCHQ and various research councils into improving the nation's cybersecurity. With the other, it's legislating policies that could put much of the population at risk of fraud (fake Age Gates will be everywhere) or blackmail (as data breaches continue to escalate). As ORG says, it's wrong to assume, as the draft law apparently does, that data protection law is enough on its own. Data protection law is intended to block abuse like repurposing, selling, or sharing data that's been collected. The new version, GDPR, does establish security baselines and creates a new requirement for breach notification, but it gives very few specifics about how to evaluate the need for or implement data security. And: will it be UK law?
All of this leads me to propose a required Security Impact Assessment for new legislation, similar to the now-familiar privacy impact assessment. The world's governments are still legislating with the mentality that computer networks and data are the exception rather than the norm, and that they occupy a sector separate from all others. The reality is the opposite: computers, networking, and data practices are the means by which laws are implemented in *every* sector. They are part of the critical infrastructure in all aspects of transport - even individual cyclists are frequently dependent on GPS directions fed to an earpiece. Energy. Water. Health and social care, especially. Retail. Immigration management. Voting. It is self-destructive and backward to continue to legislate as though "cybersecurity" is a luxury add-on that need not be considered until the last stage of deployment.
Here are some of the questions an SIA might have asked about age verification:
- How sensitive is the data that could be collected? In this case, turn the dial up to 11.
- How valuable would it be, and to whom? Again, 11, and: marketers, site owners, advertisers, criminals, hacktivists, blackmailers, unscrupulous journalists...
- What security and privacy standards and practices are relevant?
- What known security issues already exist in this area?
- What network externalities might apply? For example, given that people often reuse IDs and passwords, could the credentials people choose be reidentified by matching them against dumps from past data breaches? Should sites instead be required to issue random character strings?
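The reuse risk in that question can be made concrete. If people bring their own email/password pairs, those pairs can be cross-referenced against existing breach dumps; a random, server-issued token has nothing to match. A minimal sketch in Python - the users, passwords, and breach data here are all invented for illustration:

```python
import hashlib
import secrets

def fingerprint(email: str, password: str) -> str:
    # A simple fingerprint of a credential pair. Real attackers work from
    # plaintext or cracked hashes in breach dumps; the principle is the same.
    return hashlib.sha256(f"{email}:{password}".encode()).hexdigest()

# Hypothetical credentials collected by an age-verification site.
av_site_users = [("alice@example.com", "hunter2"),
                 ("bob@example.com", "correct horse")]

# Hypothetical dump from an earlier, unrelated breach.
breach_dump = {fingerprint("alice@example.com", "hunter2"),
               fingerprint("carol@example.com", "p4ssw0rd")}

# Because people reuse credentials, intersecting the two sets
# reidentifies them across otherwise unconnected services.
reidentified = [email for email, pw in av_site_users
                if fingerprint(email, pw) in breach_dump]
print(reidentified)  # ['alice@example.com']

# By contrast, a random site-issued token is unique to this service,
# so there is nothing in any other dataset to cross-reference.
token = secrets.token_urlsafe(16)
print(token in breach_dump)  # False
```

The point of the sketch is the asymmetry: user-chosen credentials link datasets together, while random issued strings do not - which is why an SIA would weigh that design choice before the data is ever collected.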
- How can the size of the target and/or attack surface be reduced? Since everyone agrees the data should not be collected and retained, an SIA would conclude the law should specify this.
- What other legislative mandates create technical conflicts or risky network externalities? For example, suppose one policy requires the creation of a large database of sensitive information, kept secure with encryption protecting the data both in transit and at rest, while another law simultaneously requires that government be able to access all encrypted data. You haven't got a secure system for this ultra-sensitive data; one policy or the other must change to reflect that.
- Are there wider security impacts, such as teaching people to do dangerous things? One type of age verification mechanism requires you to allow the verifier to build a probability score by examining your profile on Facebook or PayPal. What are the consequences of habituating people to do this? What's the plan when fake sites spring up to take advantage?
- Has this been tried before? If so, what went wrong? How should the requirements be adapted to reflect that?
Granted, everyone connected with age verification seems to have recognized correctly from the beginning that data minimization is essential here. But it's not (yet) reflected in the law. An SIA would be a vehicle for making sure that happens.
Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.