In November 2019, the New York Times journalist Kashmir Hill received a tip that seemed “too outrageous to be true”. Freddy Martinez, an analyst at Open the Government, a pro-transparency non-profit, had passed her a legal memo he had unearthed about a company called Clearview AI that claimed it could identify almost anyone based on a snapshot of their face. Clearview had already sold its services to hundreds of police departments around the US but had tried to keep its existence secret.
Clearview might have flown under the radar a little longer had this memo landed in the lap of a less dogged reporter, but Hill – who describes her beat as “the looming tech dystopia, and how we can try to avoid it” – isn’t one to give up. The first part of her book, Your Face Belongs to Us, is a gripping account of how she uncovered the identity of Clearview’s founders and confirmed that its claims weren’t just hype. The company’s facial recognition abilities were alarmingly advanced and could indeed undermine privacy as we know it. A despot could use Clearview to identify protesters in a crowd. A stalker could take a photo of a woman at a bar and use Clearview to instantly discover her name, her social media accounts, and quite possibly her place of work or home address.
When Hill checked the Clearview website in November 2019 it revealed only a simple logo, the tagline “Artificial intelligence for a better world”, and a form to request access, which she completed to no avail. The physical address listed at the bottom of the website did not exist. On LinkedIn she found one Clearview employee, a sales manager named “John Good” who had a skimpy CV and only two connections. He never replied to her messages. She received no response from the lawyer who had written the memo, and tracing and contacting Clearview’s investors, among them Peter Thiel, got her nowhere. In one of several twists that make the book read like a thriller, an acquaintance on Facebook – someone she had added as a friend years earlier – messaged her to say he had heard she was trying to find out about Clearview: could he help? When she replied asking for his number, he ghosted her. Eventually, she managed to get one of Clearview’s investors to talk when she doorstepped him, mostly because he felt guilty about sending her away while she was heavily pregnant.
Hill also tried approaching police detectives who used Clearview at work. They were enthusiastic about the tech, which had helped them identify suspects, but a funny thing happened each time she asked them to run her photograph through the programme to show her how well it worked: they went cold on her. In Texas, she finally figured out why. A detective had agreed to assist her investigation if she didn’t use his name, and he uploaded her photograph to Clearview’s system. Within seconds, he received a phone call from Clearview tech support to ask him why he had uploaded a New York Times reporter’s photograph.
Eventually, having sought advice from the renowned crisis communications expert Lisa Linden, Clearview’s co-founder, Hoan Ton-That, agreed to sit down for an interview with Hill. Silicon Valley giants such as Google, Amazon and Facebook had refrained from releasing this kind of facial recognition technology because it was deemed too dangerous and easily abused, but it quickly became clear that Clearview had no such qualms. “Clearview AI represents our worst fears, but it also offers, at long last, the opportunity to confront them,” Hill writes in Your Face Belongs to Us, showing a degree of optimism I wish I could share. How do you effectively confront a technology over which you have so little control?
People sometimes argue that technology is neutral: it’s like a knife, and what matters is the user’s intention, whether they want to slice bread or stab someone. Hill rightly disagrees: AI technology is much more complicated than a knife, and those who design and control it wield huge power over how it will be used. This is why it matters that Clearview’s founders had links to the alt-right and Donald Trump, and were interested in toxic ideologies such as physiognomy, the idea that you could determine someone’s character, IQ or propensity to criminality by assessing their facial features. The founding trio were Charles Carlisle Johnson, an alt-right troll and Holocaust denier; Richard Schwartz, a New York politician with ties to Republicans such as Rudy Giuliani; and Ton-That.
Ton-That was a self-taught coder who moved from Australia to Silicon Valley as a teenager and couch-surfed as he tried to establish himself by building Facebook apps. He was best known for creating a website named ViddyHo that tricked users into sharing access to their Gmail accounts. After this phishing scandal broke, the gossip website Valleywag identified Ton-That – “an anarcho-transsexual Afro-Chicano” (as he described himself) – as being behind the scam, and suggested that building ViddyHo was “the first truly original thing he’s done”. After moving to New York, Ton-That fell in with a new crowd and started styling his trademark flamboyant suits with a MAGA hat. Before Clearview, Ton-That, Johnson and Schwartz had collaborated to build similar facial recognition software called Smartcheckr, which was used in January 2017 at the DeploraBall Trump inauguration party to prevent liberals from entering. Smartcheckr also tried to sell its services to Viktor Orbán in Hungary.
Hill documents the many, often unsuccessful, efforts to ban Clearview and others, on privacy grounds, from using facial recognition technology. Since its existence was made public, Clearview has weathered multiple legal storms; it has incurred fines totalling over $70m from several countries, including the UK. In these countries, the courts determined that Clearview should seek users’ permission to develop their faceprints – the algorithmic map that the technology uses to make its matches. The lawsuits seem to be having some impact on tech firms: in 2021, for instance, Facebook said it was no longer using facial recognition technology and was deleting the faceprints it had collected of over a billion people, having thrust automatic photo tagging on users around a decade earlier. Alongside the legal challenges, some activists are trying to target facial recognition companies’ dependency on large photo databases, either by blocking their access to the archives or by flooding them with unusable data.
But Clearview seems to be ploughing on mostly undeterred: it hopes to combine its technology with internet-connected “smart” glasses, and recently granted Ukraine access to its tech to help it identify Russian spies. According to news reports, Ukrainians are now using Clearview to identify the bodies of Russian soldiers and find their relatives online.
Once a technology has been discovered, it’s hard to row back. Other companies have now released similar facial recognition software that is available to the general public: Hill interviews one man who tells her he has developed a secret obsession with using the website PimEyes to discover the real identities of porn actresses. China has integrated facial recognition technology with its network of surveillance cameras to create an all-seeing police state. In the UK, the Met has deployed mobile facial recognition vans in busy areas of London in the hope of randomly identifying suspects on the street, while in the US facial recognition technology has led to miscarriages of justice and new forms of discrimination. At Madison Square Garden in New York, for instance, facial recognition technology has been used to bar entry to lawyers working at firms involved in legal action against the arena’s parent entertainment group. As Hill observes, people could easily be targeted on the basis of their political affiliation or views.
At the end of the book, Hill trials a prototype for smart glasses with Clearview installed: when the wearer looks at someone, a green circle appears around that person’s face and their online presence is instantly available. “It didn’t frighten me, though I knew it should,” she writes, because she was tasting what is called “technical sweetness”, the thrill of the new that drives people to push forward with dangerous inventions – such as the atom bomb – and worry about the consequences later. It’s a shame that while she writes with great clarity about the dangers of facial recognition technology, she offers few conclusions on how to fight back. What can ordinary citizens do to retain their anonymity if, one day soon, their faces will betray almost everything about them? One hopes the answer isn’t simply: nothing.
Your Face Belongs to Us: The Secretive Startup Dismantling Your Privacy
Simon & Schuster, 352pp, £20
This article appears in the 27 Sep 2023 issue of the New Statesman, The Right Power List