Privacy
28 January 2020

Facial recognition cameras are turning privacy into an elite commodity

The Metropolitan Police's own trials have shown the technology to be remarkably inaccurate – and ethnic minorities are at greatest risk.

A CCTV camera in Pancras Square near King's Cross Station on 16 August 2019. Getty Images

After several years of trials, the Metropolitan Police announced last Friday that it would be deploying live facial recognition cameras across London for the first time. But the technology raises more questions than it can ever hope to solve.

According to the Met, live facial recognition will only be used in specific instances and for specific purposes; examples cited include finding missing people and those on a watchlist for violent crime. The technology, which uses the Japanese company NEC's NeoFace system, creates a "faceprint" – a unique map of each individual's facial features – which is then compared to the faceprints of wanted individuals. Human oversight is still involved: an officer on the scene decides whether to speak to an individual. The Met says this will help it catch suspects more quickly and protect the most vulnerable people in society.

What makes facial recognition so appealing to technology companies, police forces and governments alike is how difficult it is to avoid. At the risk of stating the obvious, it is hard to walk around in public without your face. Attempts at disguise – face paint, sunglasses, hats, scarves, masks, headwear – often merely draw more attention to the wearer, particularly if they're from a group that is disproportionately surveilled. Biometric measures such as face ID and fingerprint sensors on smartphones are increasingly common in the private sector, introduced by companies with the aim of bringing your physical and virtual identities together.
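The matching step described above – comparing a live faceprint against a watchlist – can be sketched as a similarity comparison between feature vectors. The sketch below is illustrative only: the four-dimensional vectors, the 0.9 threshold and the cosine-similarity measure are assumptions for demonstration, not NEC NeoFace's actual representation or settings.

```python
import math

# A "faceprint" is modelled here as a fixed-length feature vector.
# Real systems use much higher-dimensional vectors; these values and
# the 0.9 threshold are illustrative assumptions, not NeoFace's.

def cosine_similarity(a, b):
    """Similarity between two faceprints, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_against_watchlist(probe, watchlist, threshold=0.9):
    """Return the best-matching watchlist entry above threshold, or None."""
    best_name, best_score = None, threshold
    for name, faceprint in watchlist.items():
        score = cosine_similarity(probe, faceprint)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Hypothetical watchlist of enrolled faceprints.
watchlist = {"suspect_a": [0.2, 0.8, 0.1, 0.5],
             "suspect_b": [0.9, 0.1, 0.4, 0.3]}
probe = [0.21, 0.79, 0.12, 0.5]   # faceprint captured from a live camera
print(match_against_watchlist(probe, watchlist))  # → suspect_a
```

The threshold is the crux: set it low and the system flags many innocent passers-by (the false positives discussed below); set it high and it misses genuine matches. Either way, a human officer still has to act on whatever the comparison returns.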
Facial recognition technology, particularly as used by law enforcement, is controversial with good reason: for a start, it has already proven to be remarkably inaccurate. The Met ran six trials of facial recognition in London, and the system was found to be accurate just 19 per cent of the time. Such issues are exacerbated for black and minority ethnic groups – as Moya Lothian-McLean observed in gal-dem, individuals from demographics already seen as suspect will be subject to double surveillance.

There are numerous other issues, too, such as data collection and storage, and the possibility of facial recognition in public spaces creating a virtual identity parade. National security interests, alongside concerns over crime and violence, are already used to justify intrusions into civil liberties such as CCTV. The UK has a particular relationship with the latter technology, boasting as many as 5.9 million cameras – more per person than any country in the world except China. In a High Court ruling on facial recognition in Cardiff last year, the court asserted that "the use of automated facial recognition was no more intrusive than the use of CCTV in the streets". In the Baffler, Rachel Connolly noted that much of the current approach towards facial recognition "creates an 'everything the light touches' style of regulation, with some elements of facial recognition up for scrutiny and others arbitrarily protected".

The Cardiff court ruled that the use of automated facial recognition technology was legal because it was limited in scope – both in the space it covered and in the amount of time it could be used for. The Met emphasises that its uses of live facial recognition will likewise be specific and targeted, but in practice this could lead to significant problems. If someone is identified as a suspect in a robbery in one borough, where should the police train their cameras?
In the same borough, in the area where they were last known to live, or in the area where they're known to have associates?

The Met has also published a series of documents detailing guidance for the use of live facial recognition, including its legal mandate. One section states: "The use of live facial recognition may also be more expected in areas which have high rates of crime where police action would be more expected" – in essence, areas that are already heavily policed and that aren't white or middle class, which leads to increased police presence, leading in turn to further surveillance. Another section suggests that the technology may be used during assemblies and demonstrations, and acknowledges that the people assembled may be unhappy about this – but argues that emphasis should be placed on helping people understand that live facial recognition will be used to keep them safe. Given the police and the government's vast definition of criminality, that should be cause for concern.

Privacy is increasingly being turned into a fiction, something it is unrealistic to expect – and so it comes to be viewed as a rare commodity, reserved for a select few. Part of the reason face ID, fingerprinting and other biometric measures have become so pervasive, and why we consent to them, is that they are often positioned as a way to make our lives easier – to somehow gain back the time we would otherwise spend fumbling around with a card or a passcode. This notion of reduced friction – increasing convenience, saving time and effort – is also central to the use of facial recognition. But at what cost?

Sanjana Varghese was previously a Wellcome scholar at the New Statesman. She writes about science and technology.