The birth of AI nationalism

The race to master artificial intelligence increasingly shapes rivalry between the great powers. 

Visions of a dystopian future owing to the rapid development of artificial intelligence (AI) are common. Tesla’s chief executive, Elon Musk, for instance, last year raised the spectre of a robot dictatorship that could permanently rule over humanity. “At least when there’s an evil dictator, that human is going to die,” he said. “But for an AI, there will be no death – it would live forever. And then you would have an immortal dictator from which we could never escape.”

Such fatalistic “visions”, however, are of little help. The more urgent risk is the mishandling of AI, which may precipitate human-led destruction before such doomsday scenarios are ever reached. Still more neglected are AI’s implications for foreign policy and the global order.

Analysis to date has largely focused on AI’s potential to create new military capabilities (such as “robot soldiers”) and enhance state power. AI will improve resource allocation and increase the speed of decision-making. It will present an opportunity for smaller powers to punch above their weight.

Such opportunities are not without historical precedent. In the 19th century, resource-poor Prussia used telegraphs and railroads to concentrate its military forces and prevail against larger rivals.

There is scope for AI to have an even more transformative effect. Autonomous submarines, capable of locating, tracking and potentially disarming nuclear weapons, are one of many developments threatening to undermine a core tenet of nuclear deterrence: reliable second-strike capability.

AI may create new dimensions to warfare, too. Algorithms can be used to anticipate enemy movements and adjust tactics accordingly. AI could also soon be able to predict social unrest. Such digital clairvoyance raises new legal, ethical and strategic questions.

If these developments have been underappreciated in public discourse, they have not been neglected at state level. AI increasingly shapes rivalry between the great powers. Kai-Fu Lee, a former head of Google China, argues that the US and China are locked in an AI arms race. Both are ratcheting up investment in AI. Both are increasingly wary of foreign interference. In November 2018, the US accused a Chinese state-owned company of trying to steal trade secrets from an American firm producing semiconductors – a crucial technology for AI development and an area where China trails the US.

Other states, including Canada, France and India, have developed national strategies for AI. Venture capitalist Ian Hogarth has predicted the emergence of “AI nationalism” – where states shape national policies to the AI landscape and compete for supremacy. This does not appear to be an exaggeration. Russian president Vladimir Putin has proclaimed that the country which achieves AI dominance “will be the ruler of the world”. The need for high-quality and high-quantity data has created a new currency of state power. Just as oil shaped the international diplomacy of the past, so new alliances will increasingly overlap with co-operation in the digital sphere.

The challenge for all countries is reconciling geopolitical ambitions with the social and political tensions created by AI. In this regard, China’s state capitalist economy may enjoy an advantage. While Google recently refused to renew a contract with the US Pentagon following extensive employee protest at the use of AI for military purposes, Baidu (Google’s Chinese equivalent) welcomes military partnerships. China’s ability to amass large amounts of sophisticated data on its citizens (through the social credit system), compared to the more privacy-sensitive West, is its greatest asset.

Meanwhile, far from the state retreating, as libertarians hoped, the US-China AI arms race suggests that “big government” is likely to remain pivotal. Breakthroughs in AI, such as Google’s pioneering AlphaGo algorithm, may be driven by commercial investment. But the broader harnessing of AI will require the kind of resources and legitimacy that only the state can provide. As governments seek to exploit private sector capabilities, they will become stronger before they become weaker.

Will our understanding of the state, however, remain the same? It seems plausible, as some have argued, that AI will lead to economic, social and political transformations comparable to those of the Industrial Revolution. 

Throughout history, similar changes undermined traditional notions of governance. While shrewd statesmen were capable of adapting, some regimes were rendered obsolete. The decline of the Habsburg and Ottoman empires was arguably due to their inability to accommodate rapid economic modernisation and growing nationalism.

The AI revolution may wreak similar disruption. And it is unclear whether democracy (through pluralism and liberalism), authoritarianism (through control and stability), or perhaps an entirely new model of government will prove best placed to reconcile the consequences of AI with the demands of statecraft.

Alternatively, AI may simply be the latest instance of a technology that reshapes, rather than shatters, the international system. In 1945, the atomic bombings of Hiroshima and Nagasaki – “Einstein’s monsters” – stunned the world with their destructive power. The proliferation and deployment of nuclear weapons threatened to annihilate human civilisation. Yet this prediction remains unfulfilled, largely due to the efforts of governments to adapt to a nuclear world.

AI merits just as much, if not more, urgent reassessment. As we confront the overwhelming potential of this technology, resisting facile, dystopian visions of robot-led uprisings is vital. We should not ignore the less apocalyptic, yet far more tangible, exploitation of AI for political power.

The tragic reality is that we are still more likely to destroy each other before technology destroys us. This, perhaps, should be our primary focus.

Sebastian Spence is a London-based strategy consultant 

This article appears in the 12 April 2019 issue of the New Statesman, System failure