Artificial intelligence has a gender problem. Chatbots associate men with doctors and programmers, and women with homemaking. AI-generated portraits of women come out with inflated chests and tiny waists. Machine-learning algorithms are bound to be sexist – they’re trained by humans, on data sets of human outputs. But what if there’s another gender problem around AI that we’ve overlooked?
At the AI safety summit this week, national leaders and Big Tech met to tackle the supposed apocalyptic threat from general artificial intelligence. To underline that this was a good vs evil sort of situation, they gathered at Bletchley Park, where the Allies successfully cracked the Nazi Enigma code during the Second World War. The most hyped event was its host Rishi Sunak in conversation with Elon Musk on Thursday night (2 November). Musk has long been in the game of AI doomerism – he claimed to have invested in Google-owned DeepMind nine years ago to “keep an eye on” AI.
Sunak’s conversion, however, is a little more recent – it was only this May, in a humble Airbus A330 on the way to the G7, that he declared himself and the UK natural leaders to implement worldwide AI “guardrails”. His desire to show AI who’s boss reportedly arose from contact with AI labs with ties to the effective altruism (EA) philosophical movement.
The – largely male – EA world is preoccupied by the idea of AI turning on its creators. Its utilitarian reasoning is that even if it’s an improbable scenario, the sheer magnitude of wiping out all of humanity, including all potential future generations, means this threat must outweigh all other concerns. Musk has long been connected with EA. He described a book on “long-termism” written by William MacAskill, a thinker who is the closest thing EA has to a leader, as “a close match for my philosophy”. Musk also held talks with the high-profile EA donor and disgraced cryptocurrency trader Sam Bankman-Fried over the latter’s potential involvement in Musk’s Twitter acquisition deal – though they came to nothing.
It’s easy to forget that many figures, such as the president of Microsoft and Meta’s chief AI scientist, have spoken of how AI’s existential risk has been overhyped. Meanwhile, private AI companies such as OpenAI, which is behind ChatGPT, might have an interest in exaggerating AI’s danger because it would strengthen the case for keeping their algorithms private (closed source) in the name of public safety – and boost share prices.
But even if AI doomerism is legitimate, it currently involves a lot of toothless grandstanding. The “Bletchley Declaration”, signed earlier this week, only binds countries to woolly commitments to “cooperate” on and “research” AI risk. The key summit initiative was supposedly a global register of large AI models – pointless given that major players, such as Chinese and US tech companies, look very unlikely to join it.
What explains the fixation of these men on squaring up to AI? Overwhelmingly, you sense a desire to play superhero – to be associated with defeating whatever could destroy humanity. Sunak’s obsession with AI has grown in step with his waning popularity as Prime Minister, a far cry from his popular spell as chancellor, when he was depicted in blue tights and a red cape by one unsubtle BBC cartoonist. As his polling reached record lows in February, Sunak formed a “Department of Cool” (for Science, Innovation and Technology) in the hope of making the UK the next Silicon Valley. As his ratings have dropped further since, he has banged the drum louder about the risks of AI.
Musk – who has consistently had the career of a man still trying to prove something, to someone, somewhere – told Joe Rogan earlier this week that he bought Twitter to defeat the “zombie apocalypse” that heralds the “end of civilisation”. Musk’s current focus is building a human colony on Mars to escape a potential extinction event on Earth.
It doesn’t matter that this is all so theoretical as to be fantasy – that’s actually the point. The superhero complex is something many geeky men acquire. They thirst for social recognition unattainable in real life. Sunak is a self-described “nerd” who has repeatedly talked about his love of Star Wars and has a collection of lightsabers. Musk has leaned into comparisons made of him with Iron Man – a billionaire inventor who engineers an armoured suit to fight evil. Sunak joked about the prospect of a cameo in Star Wars; Musk actually appeared in Iron Man 2 for ten seconds.
The superhero fantasy is perhaps best personified by Nick Bostrom, an EA philosopher kept up at night by the thought of AI killing us all. A poem on his own website describes how he sees the lives of those at the Future of Humanity Institute at Oxford, where he works: “daytime a tweedy don/at dark a superhero/flying off into the night/cape a-fluttering/to intercept villains and stop catastrophes”.
There’s a temptation to view blokes playing the superhero as endearing or benign. In reality, their play-fighting acts as a distraction. There are genuine AI threats we must grapple with now: the economic challenges from automation, and how computer biases affect who gets picked for a job, or who goes to jail. As for effective altruism, the causes it used to focus on, such as infectious diseases and famine, have been deprioritised in favour of the AI threat – much to the chagrin of disillusioned former members such as Carla Cremer and Luke Kemp.
There is a familiar fable here: betting on something improbable purely because the returns could be so big. Sam Bankman-Fried, the fallen angel of the EA movement, who was convicted this week of defrauding investors in his cryptocurrency exchange FTX, figured that if he just borrowed more of his customers’ money, he might make back his losses – and perhaps continue saving the world by funnelling money into charitable causes. Instead, hubris has landed him in a detention centre in Brooklyn. That is the reality of how a superhero complex plays out – not a Marvel movie.