In the summer of 2020, as the murder of George Floyd sparked a reckoning over racial injustice, IBM, Microsoft and Amazon stopped selling facial recognition technology to law enforcement agencies. The moratoriums acknowledged what critics of the technology had long argued: that facial recognition is dangerously unreliable, leads to the over-policing of marginalised people and lacks proper legal protections.
More than 20 American states, including California, the home of the tech industry, took steps to restrict the use of facial recognition. But as crime rates in several major US cities rise, those restrictions are gradually being eroded. In the UK, meanwhile, the application of facial recognition in policing has faced very little resistance.
After the Court of Appeal ruled that South Wales Police had breached privacy and equalities laws by using the technology, the force indicated that it would press ahead regardless. “This is a judgement we can work with,” the force said, before making minor adjustments to how the technology was used. London’s Metropolitan Police appears similarly unfazed by the ruling, having deployed live facial recognition several times since.
But while legal challenges have failed to diminish the appetite among police forces for facial recognition, concerns over the technology remain. This week, the Ada Lovelace Institute – a research body funded by the Nuffield Foundation – has become the latest British organisation to call for an end to the use of live facial recognition.
Matthew Ryder, a high-profile criminal barrister, was commissioned by the institute to produce one of the most wide-ranging analyses of biometric surveillance legislation to date. It warns that the “current legal framework is not fit for purpose, has not kept pace with technological advances and does not make clear when and how biometrics can be used, or the processes that should be followed”.
Arguably the most significant concern is that the technology exacerbates discriminatory policing. This typically happens in two ways. Firstly, several studies have shown that live facial recognition systems misidentify people of colour at higher rates, leading to wrongful arrests. An independent University of Essex review of the Met’s trials found that the technology accurately identified individuals in less than one in five cases.
Secondly, if black people are already disproportionately targeted by police forces for committing petty crime, they will be over-represented in mugshot databases. This creates a feedback loop in which previous episodes of unfair targeting lead to discriminatory policing in the future.
Ryder acknowledged to me that “there may be progress being made in terms of the nature of the biases within the products that are carrying out facial recognition, or attempts to rectify some of the systemic problems in the application of the technology”. But he said that “those problems have not been ironed out”.
The call for the moratorium is “by no means a radical step”, Ryder continued, because it “follows in light of the Court of Appeal decision” relating to South Wales Police’s usage and “the way the companies themselves have raised concerns about the products that they’ve made”.
Ryder called for the creation of a “technologically neutral, statutory framework” that outlines what considerations public and private bodies must take before using biometric technology. Many legal and policy experts have called for a framework like this before, but Ryder’s review goes further.
Legislation should, he argued, cover not only the use of biometrics for identification – as existing privacy laws, such as the General Data Protection Regulation (GDPR), already do – but also for classification. In doing so, the proposals would bring emotional recognition tech into the scope of the law for the first time. The market for such systems is expanding and is expected to grow rapidly in the coming years. Job recruiters are already using the technology to categorise interview candidates based on their facial expressions, while schools are buying software to assess whether children are paying attention during lessons.
The other key proposal is the creation of a biometrics ethics board. Similar ideas already exist at a regional level in London and the West Midlands, but this would be a national committee overseeing every police usage of facial recognition in the country. The board would serve in an advisory role, and its decisions would not be binding, but police forces that go against its guidance would have to publicly explain why they were doing so.
The proposals will likely be welcomed by civil liberties campaigners, who have long warned that the patchwork of laws covering facial recognition is inadequate. But ministers could be more resistant to them. The government has indicated that it wants to “simplify” the legal framework underpinning biometric data in law enforcement, while taking a lighter, pro-innovation approach to data governance following Brexit.
Ryder’s proposals, by contrast, don’t just go beyond GDPR, but are also stronger than early drafts of the EU’s upcoming Artificial Intelligence Act. The latter legislation will treat emotional recognition systems as “limited risk”, meaning that, in most cases, the only obligation on those deploying the technology will be to inform the public that they are being surveilled (unless it is being used in high-risk settings such as law enforcement).
Ryder, however, said it would be wrong to see the report as anti-innovation. “Clear regulation gives those who are trying to innovate confidence about the four corners in which they can innovate,” he said. “Obviously, it’s a discussion, but we think that the trope that regulation necessarily stifles innovation just isn’t borne out by practice.”