
6 June 2019 (updated 8 September 2021)

Do coders need a code of conduct?

The power of data should not be underestimated, which is why AI technologies need to be regulated responsibly.

By Rainer Kattel

Are public organisations ready for artificial intelligence? Recent tragic accidents involving the Boeing 737 MAX suggest they are not. While modern aviation, including the rules-based software it relies on, is overwhelmingly safe, the slippery slope of industry self-regulation shows the limits of human-machine interactions.

The step towards probabilistic software, such as that used in self-driving vehicles, is not only technologically demanding and, at this point, uncertain; it also puts public regulators in a uniquely complicated position. Namely, there are things we may want to regulate – for example, search algorithms that exclude competitors, or social media feeds that incite violence – that will change as they are being monitored. The object of regulation is dynamic. And the complexity only increases as we apply AI to less technological areas, such as health or education, while trying to understand how these complexities affect people, systems and society.

The European Commission has fined Google €8.2bn over two years for abusing its monopoly power in online advertising, shopping and its Android operating system. This is a highly commendable action. Yet it is unlikely to change Google’s behaviour, mainly because such rulings misunderstand the source of Google’s ability to dominate. The market power of big data companies does not rely on their means to pressure websites into using their advertising tools. Instead, it comes from the combination of a seemingly endless amount of data about users and clients with code and algorithms that continuously learn about those users and clients.

Code is not only law; code is also learning. Historically, learning – and in particular its tacit aspects, such as the ability of teams to work well together – has been fundamental to innovation. The question, then, is how competition authorities should curtail the ability to learn within big data companies. Fining them for external anticompetitive behaviour merely forces them to come up with better internal processes – better code – to circumvent not only competitors but also regulators.

Breaking up companies like Facebook, as recently suggested by one of its co-founders, Chris Hughes, would probably not change the underlying dynamics. Radical open-source solutions, such as the data sovereignty approach taken by cities like Barcelona, are a much more promising alternative.

Put bluntly, AI turns code and algorithms into economic and political agents, and our economic and policy frameworks have limited tools for dealing with such non-human actors, whose primary goal is to circumvent human agency. This poses the critical question for AI and public policy: what is the purpose of public policy, such as competition policy or seamless public services, and who defines this purpose – and how?

In truth, 20th-century public organisations were never designed to have the capabilities to check, question and redefine the purpose of public policy. In this age of super-wicked problems, the temptation to cut humans out of decision-making processes will only increase.

AI will eventually be taught to learn from millions of cases of purpose, of policy choices – and it will decide. Thus, the question we really need to ask is not how to create seamless public services, or how to diminish big tech’s market power, but rather: who will teach machines what counts as innovation, and what is a political and policy choice?

Rainer Kattel is professor of innovation and public governance at the UCL Institute for Innovation and Public Purpose.