Why the Metropolitan Police needs to stop playing with facial recognition tech

It’s not clear whether this technology is more dangerous if it works or if it doesn’t.


If the NHS, a local council, or a benefits agency persisted with a scheme with a failure rate of between 98 and 100 per cent, we would expect a rapid political outcry – led by Conservatives and the right-wing media, decrying our failing public sector.

And yet the Metropolitan Police is doing just that, to near-absolute silence from politicians and the media alike. The reason? The scheme concerned is facial recognition surveillance technology.

The latest “trial” of the technology saw it deployed yesterday – and again today – in three Christmas shopping spots in central London, where crowds are sure to be dense. It has previously been deployed at protests, the Notting Hill Carnival, and other occasions.

The most obvious issue with the technology’s use comes from evidence unearthed through the Freedom of Information Act by the campaign group Big Brother Watch. This initially showed that trials of facial recognition had a 98 per cent false positive rate.

This means that for every 50 supposed “matches” the system generated, 49 were innocent people wrongly identified as persons of interest, potentially leading to their being further monitored, or even stopped, searched, or questioned by police.

A later FOI request found that 100 per cent of the supposed matches identified by the system were false positives – meaning the deployment hadn’t successfully identified a single person of interest, but had created a situation in which other people could blamelessly find themselves the subject of police interest.
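The arithmetic behind those figures can be sketched in a few lines. The match count below is illustrative only – the FOI responses report the rates, not these exact numbers:

```python
# Illustrative sketch of what a 98 per cent false positive rate means.
# The number of "matches" here is a hypothetical round figure, not FOI data.
matches = 50                # hypothetical number of alerts the system generated
false_positive_rate = 0.98  # rate reported via Big Brother Watch's FOI request

false_positives = round(matches * false_positive_rate)  # innocent people flagged
true_positives = matches - false_positives              # genuine persons of interest

print(false_positives, true_positives)  # 49 innocent flags for every 1 genuine one
```

At the 100 per cent rate found by the later FOI request, the second figure drops to zero: every alert points at an innocent person.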

As research picked up by the specialist technology site The Register notes, facial recognition technology particularly struggles in situations with large crowds, and in situations with low light – making the case for deployment on Oxford Street during the week of the winter solstice questionable, to say the least. And that’s before we consider that, given the UK’s weather in winter, there’ll be no shortage of hats and scarves being worn too.

When Taylor Swift decides to deploy facial recognition to ID potential stalkers in her crowds, she can easily be forgiven for putting too much faith in early-stage and largely unproven tech – it’s hardly her field of expertise.

The same cannot be said for the Met Police, which appears to have once again tried to deploy a form of mass surveillance in the hope that it will make its job easier. This is in the same vein as intelligence agencies spending billions on often ineffective bulk email and phone log interception – which have provided little evidence that they keep us safer – and diverting effort and resources from targeted policing and intelligence, which have proven results.

In both instances, they are allowed to take these risks with our money – and potentially our safety – because our politics and media are reluctant to ever risk appearing soft on crime. But by refusing to tackle possibly ineffective, not to mention illiberal, technologies and tactics used by police and security services, they become soft on crime. They let the appearance of toughness make them weak.

What we do know is that these technologies are intrusive; what we don’t know is the full extent of what is stored, how it is used, or how any of it has worked in practice. It is not hard to imagine how such technologies, if normalised, would be useful to despots and dictators. Just because we can normalise the use of facial recognition doesn’t mean we should.

Even at the UK level, it is not clear we should be advancing these technologies. At present, their ineffectiveness is itself a problem: innocent people face police attention, while guilty people who might have been picked up by more conventional tactics get away with it thanks to the distraction.

But we should also think about what this system might look like if it works. It’s often said that if they really want to, police can find a reason to charge almost anyone with something. If crowds at protests are monitored by this technology, anyone with a previous rap sheet suddenly has – in effect – a bright beacon on them, flagging them for special attention and monitoring throughout, even as those around them face far less scrutiny.

These people become easy targets for arrest for minor wrongdoing that others would get away with – which would then seem to prove the effectiveness of the technology, creating a self-reinforcing loop.

This would be bad even if the people who currently come to police attention were a reflection of wider society, but they are not. People from low-income families, and especially people from BME families, are far more likely to come to the attention of the police, and then enter databases which keep them on the system – to then, potentially, constantly be tracked by facial recognition.

The use of this technology by the Met Police is bad enough now, when it doesn’t work. It could be so much worse when – if – it works as intended. It shouldn’t be allowed to happen without a long and serious public debate first.

James Ball is an award-winning freelance journalist who has previously worked at the Guardian and BuzzFeed. He tweets @jamesrbuk