In April 2018 Mark Zuckerberg appeared before the US Senate to answer questions on the Cambridge Analytica scandal, Russian interference in the 2016 US presidential election, and whether Facebook now represented a threat to the free world. Traditionally, for any business to be described by its country's legislators as “a weapon against democracy” would be a bad thing.
But for Facebook the opposite was true. As the world watched a pale and contrite Zuckerberg account for whether he had subverted democracy itself, Facebook’s stock rose by 4.5 per cent, its best day’s trading in more than two years. In a single day Zuckerberg’s personal wealth rose by $2.3bn, equivalent to 70,000 years of work on the median American salary.
Something similar happened on Tuesday (16 May), when Sam Altman, CEO of OpenAI (the company behind ChatGPT), made his own appearance before the Senate’s Judiciary Committee. When discussing the precedents for regulating OpenAI’s technology, one senator, Richard Blumenthal, drew an analogy to the development of atomic weapons. Altman readily agreed that he and his colleagues were “anxious” about how their product “could change the way we live” and that his company, and the artificial intelligence industry, could “cause significant harm to the world”.
[See also: We are finally living in the age of AI. What took so long?]
OpenAI is a private company but it works in partnership with (and is to a great extent funded by) Microsoft. Again, the admission that Microsoft was involved in the production of something that could lay waste to swathes of jobs – maybe even civilisation itself – was greeted positively by the market: Microsoft’s stock price rose 1.5 per cent from the previous day’s close, adding roughly $34bn to the company’s market capitalisation.
Big Tech companies are clearly aware of the need to communicate that they, too, are racing to build what may or may not be a doomsday device. Even Elon Musk, who has described AI as “our biggest existential threat” and called for a pause on AI development, has started his own AI company. The best thing a company can do to build shareholder value today is to create a terrifying and unpredictable new threat to humanity – or to say that’s what it has done.
Again, Facebook is a good example of why this works. After the shock of the 2016 US presidential election and the Brexit vote, Zuckerberg could not do enough to apologise. The more he listened, the more time he spent touring the US, solemnly acknowledging his company’s part in the rise of populism, the more he cemented the idea that Facebook had the power to decide elections. Facebook is an advertising company, and this idea was very compelling to its clients – its earnings have more than quadrupled since 2016 – and also, therefore, to investors.
Facebook hadn’t swung those votes, but for the political class it was much easier to agree that these populist rebellions were brought about by a computer trick than to acknowledge that they themselves had created a society of brutal inequality that serves a handful of billionaires at the expense of the working poor.
Today, it is much easier for politicians to speculate about the possible existential risks that new technologies could pose to humanity than it is for them to regulate the very real existential risk posed by technology that has been running for decades. Climate change is here, now, and it is accelerating: the world will pass the 1.5C threshold within the next five years. The global fossil fuel industry has done “significant harm to the world” and continues to do so. (Investors love this apocalypse too; fossil fuel investment grew by $214bn last year.)
We don’t know how useful or dangerous AI really is, and it’s going to be very difficult to find out in advance, because people like Sam Altman have a financial incentive to pretend it’s exceptionally powerful, and the people who are supposed to regulate him have a political incentive to agree.
[See also: There is no chance the government will regulate AI]