Will the robots of the future be able to replicate human thought? Most engineers assume so with a casual fatalism: the rate of advance in artificial intelligence (AI) is so rapid that it is only a matter of time before robots indistinguishable from human beings are built. Will the robots of the future surpass and then subordinate their creators? Some of the initiated believe so. This impending apocalypse has a name – “the singularity” – and is confidently expected in some quarters as soon as 2045.
Most existing AI systems have a narrow remit. They are task-focused, designed to perform some specific function – recognising speech, say, or diagnosing melanoma. Driven by the aggregation of huge data sets and explosive increases in computer processing power, “machine learning” – a newly effective technique for designing algorithms – is facilitating impressive advances in the capabilities of these systems. Algorithms trained by machine learning now outperform human specialists across a range of applications. But artificial general intelligence (AGI) systems that can imitate human thought, like the computer HAL in Stanley Kubrick’s 2001: A Space Odyssey – which experts have been claiming are “just around the corner” since about 1956 – remain the stuff of science fiction.
Jamie Susskind does not think the distinction between the two forms of artificial intelligence – the specific AI and the human-like AGI – matters much. Specific AIs will be hard enough to handle, creating “vast new opportunities and risks worthy of careful attention in their own right”. If you integrate enough specific AIs into a single interface you might well end up with something so good at giving “the impression of general intelligence” that it is functionally indistinguishable from the AGIs of the engineering imagination. Whether or not AGI fictions like HAL become reality does not matter, Susskind insists. We need to stop debating such distinctions and knuckle down to the hard work of adapting our politics to a world reshaped by AI.
Susskind believes that a whole new domain of existence – he calls it the “digital lifeworld” – is materialising around us through the proliferation of specific AI systems ordering and streamlining our lives. Organising the digital lifeworld is going to be challenging. Engineers are busy translating existing rules and norms into the rigid terminologies of code. But in the process something important is often lost – some of the slack or give that lends these rules tensile strength.
In the real world it is possible to drive too fast. One risks punishment, injury and social condemnation – but those are risks one is free to run. Once engineers code speed limits into self-driving cars, speeding will simply be impossible. In this example, a recognised rule is formalised through AI in such a way as to preclude its violation. For Susskind – an author, barrister and past fellow of Harvard University’s Berkman Center for Internet and Society – this is a diminution of liberty. It represents a “colonising” by the authorities of a “precious hinterland of naughtiness”.
Moreover, in the process of translating the rules and norms that govern the real world into code, engineers sometimes come upon gaps. The process of filling these gaps further curtails liberty. Consider the scenario in which a child strays into the path of a moving car. Should the driver swerve into oncoming traffic to save the child, risking multiple fatalities in doing so, or hold her course? Reasonable minds will respond to this grim scenario differently – some may prefer to preserve multiple lives; others will protect the child’s. As things stand we have the autonomy to make a terrible choice. In the digital lifeworld that discretion will be forfeit: the automated car’s code will have a decision programmed into it in advance.
As well as truncating freedom in these ways, construction of the digital lifeworld will give rise to new forms of power that will be difficult to fathom and hold in check using our current conceptual resources. The ability to force us to comply with the law will be complemented by new powers of scrutiny and surveillance, persuading people that even minor transgressions cannot escape official notice. Artificial intelligence will also give governments and businesses new resources to control how we perceive the world around us. News feeds and search results can be manipulated to define what a user sees: state censorship in China illustrates the growing reach of these technologies.
A related power to control how we are seen by our peers, neighbours and trading partners is also emerging. The Chinese authorities plan to create a comprehensive social credit system, adding private financial data on creditworthiness to information on crime and civic infringements, building on systems already in place in some provinces. Their intention is to put a new premium on trust by making histories of malfeasance harder to mask. There are already systems at work across the world using credit scoring and online user reviews – the Chinese scheme takes them to new extremes, suggesting the kind of future we might all have in store for us.
These dangers to liberty (“unprecedented”, in Susskind’s estimation) expose shortcomings in our intellectual resources. Political theory as it stood at the end of the 20th century is inadequate to the task of grappling with the distinctive conditions of politics in the digital lifeworld: this is Susskind’s central contention. If we want to measure and mitigate the risks to liberty posed by AI’s new forms of force, scrutiny and perception control we will look in vain to John Stuart Mill’s On Liberty, Isaiah Berlin’s “Two Concepts of Liberty” or Robert Nozick’s Anarchy, State, and Utopia: they simply did not see the new forms of power coming.
Susskind believes that our concepts of property and democracy are similarly outmoded. Electoral systems will be compromised where new powers of perception control run unchecked. Meanwhile, AI will prompt revision of the parameters of democracy as we currently understand it. We already have apps that ask voters about their preferences and then indicate how to vote. These might be scaled up and decision-making authority handed over to AI systems purporting to understand where a person’s interests lie better than they can. More frequent and exhaustive polling becomes conceivable – governments could make inquiries, Susskind writes, “thousands of times each day”.
Our structures of property will also be rendered unstable. Concentrations of capital in the hands of the wealthy few will be exacerbated by the growing scarcity of paid work as advances in technology cause unemployment. Whereas it has long been possible for those without capital to work at accumulating a nest egg, automation’s unemployed will only be able to invest what little they can save of the subsidies proffered to them by the state. Applying Thomas Piketty’s law that the return on capital will continue to exceed the rate of growth, in Susskind’s digital lifeworld inequality between the rich and the rest will rise dramatically.
For the convenience of travelling with hands off the wheel we trade some of the freedoms that make being out on the open road a pleasure. But is this kind of threat to liberty really, as Susskind argues, “unprecedented”? Is it not an extension of the processes of rationalisation and bureaucratisation that Nietzsche and Max Weber lamented a century and more ago? Anti-chivalrous cars that prohibit drivers from veering into oncoming traffic to save infant lives, on the utilitarian basis that the death of one innocent is preferable to multiple adult fatalities, sound like the kind of congealments of the human spirit that Weber abhorred. Do we need wholly new conceptual resources to think about them?
Corporate behemoths such as Google and Facebook wield influence over how we think, see and are seen in the digital lifeworld, but their power goes unchecked because we fail to view them as political: Susskind argues we have succumbed to the “dangerous” notion “that a body has to be as powerful as a state before we start taking it seriously as an authentic political entity”.
Were our predecessors really so obtuse? It is true the great tradition of political philosophy from Hobbes onwards ascribes surpassing importance to the state. But any number of complementary theories enrich our understanding of how states and civil society groups such as trade unions and firms co-operate to organise market societies.
The complex interaction between individuals, governments and powerful firms has also been a topic of rigorous analysis in political economy. Discussion of the big tech companies consistently distinguishes their business models from those of the 20th-century monopolists that John Hobson, Lenin and Joseph Schumpeter attacked and that anti-trust regimes dismantled. Commentators insist that the kind of monopoly these new corporations represent is different, although their business practices – such as buying out entrepreneurs who threaten the status quo – suggest otherwise.
And if the resolve to curb these firms’ power could be summoned, old-fashioned competition policy might be appropriate after all. Susskind’s own treatment of the “data deal” that the likes of Google and Facebook offer their users suggests as much. If the information garnered through use of their products is valuable, Google and Facebook emerge as the only buyers in markets with many sellers, suppressing the price of data to maximise their own profits. This is monopsony, a practice regulators are experienced (if not always successful) at policing.
Received wisdom might also prove illuminating on the broader fate of labour in the automation era. Susskind recognises that the value of work is not simply material. Some kind of universal basic income could mitigate the loss of a wage or salary, but what about the rest? Susskind proposes that “political theorists, economists, social psychologists and the like” get to work precipitating a paradigm shift, elevating some kind of activity other than work to the apex of social esteem. But presumably this would involve revisiting earlier efforts in the same vein.
The future of work has been foremost in the minds of critics of automation since the 1950s. Some of them did not think the relationship between automation and unemployment one of simple causation. In The Human Condition (1958), for instance, Hannah Arendt argued that work was growing scarce in part because it had become unfulfilling. The model of achievement in the modern age had been Homo faber, the craftsman, engineer or inventor who worked to fabricate a human habitat – making wine jars out of clay, building cities of stone and towers of glass and steel. But the environment Homo faber built had ceased to feel hospitable. What Arendt called “world-alienation” had made the human artifice seem more like a trap than a home. The space race enacted a desire for escape: finally, an end to “man’s imprisonment on Earth” was within reach. Homo faber’s new task was to get us out of here. Automation then was not so much the driver of a dwindling supply of work as a consequence of a broader disillusionment. The “work paradigm” was in crisis less because work had grown scarce than because work had ceased to be rewarding.
However, if low morale as much as technological progress lies behind the replacement of humans by robotic systems, then automation – and particularly the AI advances initiated by new machine-learning techniques – could reverse the trend and make work interesting again. Part of the promise of these systems is that by taking over data-crunching drudgery they offer interesting problems for humans to solve.
In medicine, for example, diagnostic algorithms for certain cancers free up research and development time for improving other treatments. In fashion, systems that expedite online shopping by profiling and then advising customers are built not by supplanting human sales assistants but by equipping human staff to supervise the machine-learning process – improving the operation of auto-select algorithms by reviewing and correcting their output. In law, the graduate weeks spent in litigation war-rooms hand-coding emails for their relevance to issues in dispute are being shortened by machine learning-powered algorithms. So far, this seems to mean not fewer jobs for lawyers, but more rewarding work for them to do. In research science, machine learning is dramatically expediting data processing, liberating scientists to formulate and explore more far-reaching hypotheses: the detection of the Higgs boson at CERN exemplifies the kind of discovery that will happen more quickly now.
The growing use of robots in these fields is redirecting human workers to tasks more likely to engage their distinctively human faculties. Decades of rationalisation and depersonalisation that broke work down into empty increments may be shifting into reverse. The most exciting AI technologies (so-called “learning apprentice” or “human-in-the-loop” systems) give humans a pivotal role in carrying out astonishing feats of engineering. The steady improvement of interfaces means that coding expertise is no longer a prerequisite for these roles. New software is superseding some jobs but creating others – and not the menial repetition of the production line, which robots now handle, but innovative, constructive, meaningful activity, shaping and refining the human artifice.
Not everyone is going to end up at the centre of a creative storm of machine learning – making adjustments, solving tightly framed problems, moving enterprises forward in satisfying leaps and bounds. But other kinds of work that look certain to survive the great automation – teaching, nursing, care work, hairdressing – might also be raised higher in the general esteem by virtue of their very survival. The jobs most likely to live on are those the robots cannot manage, requiring feats of tact, dexterity and emotional intelligence that only humans command. These are things we have devalued through the long ascendancy of utilitarianism. Automation might help us to re-evaluate these careers, restoring their dignity and earning power.
It was not the appropriation of humanity by robots but the dehumanisation of people that caused greater concern to 20th-century critics, and that concern should be with us still, not least because recalling it at this juncture might actually lead us towards a more positive appraisal of AI and its implications. Susskind thinks the distinction between specific and general AI is immaterial. But it’s arguable that it is everything. For as long as AI remains specific and applied, confining AGI to works of fiction, we have good reason to believe that AI systems will augment rather than diminish our humanity. In reaching the limits of their own capabilities, specific AI systems will help to remind us how impressive seemingly prosaic processes of human cognition (such as appreciating context and sensing latent sentiments) really are – even as they furnish means to engage those processes to still more inspiring ends.
If we are looking for ways to revitalise politics for the new century, these renewed appreciations of human capability seem more promising starting points than the summary abandonment of our efforts to make sense of politics to date.
Tim Rogan is author of “The Moral Economists: RH Tawney, Karl Polanyi, EP Thompson, and the Critique of Capitalism” (Princeton University Press)
Future Politics: Living Together in a World Transformed by Tech
Jamie Susskind
Oxford University Press, 544pp, £20