The quiet and creeping normalisation of facial recognition technology

A new app allows users to forgo carrying their ID by scanning their face with a phone. 

At face value it’s remarkably convenient – and really, really cool. If you live in Bournemouth and fancy a night out, you no longer have to worry about squeezing your passport in and out of your pocket just to get through the door of a club, pub, or bar. Instead of relying on traditional forms of ID to verify your age, you can now use Yoti – an app that uses facial recognition to prove that you are you.

Five nightclubs and bars in Bournemouth are now accepting the app as ID. After downloading, all you have to do is scan both your face and your passport (or driving licence), and then take your phone on a night out. Yoti verifies that your face and the picture on your ID match, and when you get to the club, the bouncer will present you with a QR code which you can scan with the app. Some establishments might also insist you take a selfie, which the app then verifies as extra proof.

If you prefer buying a bottle of wine at the supermarket to going on a night out, Yoti will soon make your life easier too. In November 2017, it announced that two supermarkets will trial the tech for people looking to buy cigarettes, alcohol, and knives. It’s the fun and funky future – and it’s here now.

“This airport-style security simply to get into a club is utterly misguided,” says Silkie Carlo, Director of Big Brother Watch, a non-profit privacy campaign group. “Facial recognition is a biometric identification tool, much like DNA and iris scanning… This level of identity checking is not characteristic of a free society and we urge caution against such casual use of biometric technology.”

That’s not to say Yoti is especially bad. The app has a transparent online privacy policy, and promises not to sell your information to third parties (although it will pass it over if legally required). But the celebration of the app is part of the creeping normalisation of facial recognition technology in the UK. Fine, right? Why not normalise futuristic face scans?

In August 2017, the Metropolitan Police used facial recognition software at Notting Hill carnival. The practice saw the police scanning the faces of thousands of carnival-goers to see if any matched with photos of people who had previously been arrested. This wasn’t against the law – mostly because technology is now moving faster than legislation – but it was widely condemned by civil liberties groups.

“Such a difficult balance between public benefit and individual privacy should not be decided by the police but is best decided by Parliament through informed debate and legislation,” wrote Paul Wiles, the UK’s Biometrics Commissioner, after the trial. The use of DNA and fingerprint databases is presently overseen by independent bodies under dedicated legislation – this isn’t yet the case for facial recognition tech.

Further problems arise when facial recognition fails. Not only did the technology’s use at Notting Hill carnival unfairly target black people at London’s main African-Caribbean celebration, but experts have also warned that much facial recognition technology is significantly more likely to misidentify black people and women. And even if the technology worked perfectly and only correctly identified people who had been arrested in the past – does that mean they deserve to be singled out? As Big Brother Watch notes in its campaign to end the retention of innocent people’s mugshots: “Being arrested does not make a person guilty of a crime. Often people who are arrested are found to have done nothing wrong.”

Despite being a relatively new technology, facial recognition already has a rap sheet of past offences. In 2016, Floridian Willie Lynch was convicted of drug dealing after police used facial recognition technology to identify him. In court, the accuracy of the technology was not called into question, despite the fact that the prosecution offered no evidence of how it had been used. In 2012, German data protection officials ordered Facebook to destroy its database of human faces. Five years later, in December 2017, Facebook announced it would alert you when you show up in a photo, even when you haven’t been tagged. Privacy is being violated, but there are additional social problems. Apple was accused of racism after its iPhone X’s Face ID technology failed to distinguish between Chinese users, while fears already abound about how marketers could use the iPhone’s tech to check whether you really are watching their ads.

The lack of a backlash is surprising given previous responses to less sophisticated forms of identification. In 2006, the government attempted to issue everyone with a national identity card linked to a register, but the scheme was ultimately abandoned over human rights and privacy concerns. Richard Thomas, then Information Commissioner, expressed concerns that the UK could “sleepwalk into a surveillance society”. And yet Carlo, of Big Brother Watch, believes the current normalisation of facial recognition technology “turns each of us into a walking ID card”.

“Facial recognition software creates a biometric map of every face it sees in order to make an identity check,” she explains. “The notion of putting biometric checkpoints on our streets [as with Notting Hill carnival] is unprecedented and immeasurably more intrusive than past governments’ failed plans for national ID cards. The public simply will not accept it.”

And yet, the public might. In China, facial recognition is increasingly used by people to make payments or enter buildings, while the police use the technology to track criminals. China’s Resident Identity Cards mean its citizens are more used to being part of a centralised database than we are, but apps like Yoti may go some way to normalising facial recognition tech in the UK. When this technology offers us extreme convenience, it’s important to remember that surveillance is part of the package deal.

Amelia Tait is a freelance journalist, and was previously the New Statesman's tech and digital culture writer. She tweets at @ameliargh