When the iPhone X was unveiled this week, its facial recognition feature immediately made headlines. Owners of the £1,000 phone can bypass the pesky password and just use their face to unlock their device. Aside from the obvious pitfalls (even the Apple demonstration didn’t work seamlessly), it’s evident that a world where facial ID software becomes commonplace is one that could spiral out of control.
For researchers Michal Kosinski and Yilun Wang, the dangers of this kind of technology are paramount. To make their point, they shared the results of a study in which they had trained a machine learning programme to determine, from a series of photographs, whether the person pictured was gay or straight.
The study itself had some methodological idiosyncrasies, but despite the simplistic headlines it generated, the researchers' general concerns resonated. In a later interview with the Guardian, Kosinski said that he had been carrying out preliminary work on whether facial features could give away an individual's political views. So far, the initial results indicated they could. Predictably, people on various social media networks and in the comments sections were more than a little perturbed.
Many of the narratives around artificial intelligence paint it as a nebulous, amorphous entity that will soon change the fabric of reality. AI, according to these theories, will take our jobs, drive our cars, and in some distant dystopian hellscape, control our lives after overpowering human intelligence. This slow creep of automation into the most complex aspects of our lives is more than a little unnerving.
Forget the finer points of Kosinski and Wang’s algorithm. The mere fact that they were able to create it from software, technology and information that are publicly available is what’s terrifying.
After all, facial recognition may capture the imagination, but it is hardly necessary for working out who we are. Glued to our hands, on our bedside tables, in our pockets, our smartphones already carry much of the information anyone would want to know about us. We don't need to welcome our robot overlords before our private information is made public; we have already, often willingly, shared that information ourselves.
If you – like the majority – are reading this on a smart device, the object you hold in your hand is far more useful to a government or company than your face. Social networks, auto-fill forms and browsing histories build up a profile of our personalities without anyone necessarily needing to look us in the eye.
We download dating apps like Grindr and Tinder and let them access our locations even as we roam. We put our debit card details into our phones to make buying concert tickets easier. We create a digital footprint of the food we buy, and we let mapping apps know where our home is so we can return to it with fewer taps of our fingers.
Meanwhile, large corporations are making a killing by uploading our lives – our holiday pictures, our music tastes, what we secretly find funny – onto their own servers. Companies such as Facebook, Twitter and Instagram already sell access to that data to third parties, such as law enforcement agencies.
Publicly available data has been used in the past to crack down on dissent. Facial recognition software has long been a hindrance for activists and protesters – new artificial intelligence programmes can recognise protesters' faces even if they are covered.
In the early days of the internet, we were quick to denounce programs, software and companies that asked for too many details of our lives – my dad still maintains a deep distrust of putting his card details into online forms. As time has worn on, and as the likes of Amazon's Alexa, one-click shopping and autofill forms have made the little inconveniences disappear, we are collectively forgetting the parts of our privacy we sacrifice. The iPhone X may recognise your face, but it isn't really that much more dangerous than the models that came before.
In the last decade, as artificial intelligence technology has accelerated, we have made a trade-off between privacy and a more connected world. It's a trade-off we have to make, but we can be smart about it – and remember that some of these predictions around privacy have already come true.