
Spotlight

6 June 2019, updated 8 September 2021

Do coders need a code of conduct?

The power of data should not be underestimated, which is why AI technologies need to be regulated responsibly.

By Rainer Kattel

Are public organisations ready for artificial intelligence? Recent tragic accidents involving the Boeing 737 MAX suggest they are not. While modern aviation, including the rules-based software it relies on, is overwhelmingly safe, the slippery slope of industry self-regulation shows the limits of human-machine interactions.

The step towards probabilistic software, such as that used in self-driving vehicles, is not only technologically demanding and uncertain at this point; it also puts public regulators in a uniquely complicated situation. Namely, there are things we may want to regulate – for example, search algorithms that exclude competitors, or social media feeds that incite violence – that will change as they are being monitored. The object of regulation is dynamic. And the complexity only increases as we apply AI to less technological areas, such as health or education, while trying to understand how these complexities affect people, systems and society.

The European Commission has fined Google €8.2bn over two years for abusing its monopoly power in online advertising, shopping and its Android operating system. This is a highly commendable action. Yet it is unlikely that this will change Google’s behaviour, mainly because such rulings misunderstand the source of Google’s ability to dominate. The market power of big data companies does not rest on their means to pressure websites into using their advertising tools. Instead, it comes from the combination of a seemingly endless amount of data about users and clients, and code and algorithms that continuously learn about those users and clients.

Code is not only law; code is also learning. Historically, learning – and in particular its tacit aspects, such as the ability of teams to work well together – has been fundamental to innovation. Thus, the question is how competition authorities should curtail the ability to learn within big data companies. Fining them for external anticompetitive behaviour merely forces them to come up with better internal processes – better code – to circumvent not only competitors but also regulators.

Breaking up companies like Facebook, as recently suggested by one of its co-founders, Chris Hughes, would probably not change the underlying dynamics. Radical open-source solutions, such as the data sovereignty approach taken by cities like Barcelona, are a much more promising alternative.


Put bluntly, AI makes code and algorithms into economic and political agents, and our economic and policy frameworks have limited tools to deal with such non-human actors whose primary goal is to circumvent human agency. This poses the critical question for AI and public policy: what is the purpose of public policy, such as competition policy or seamless public services, and who defines this purpose – and how?


In truth, 20th-century public organisations were not designed to have the capabilities to check, question and redefine the purpose of public policy. In this age of super-wicked problems, the temptation to cut humans out of decision-making processes will only increase.

AI will eventually be taught to learn from millions of cases of purpose, of policy choices – and it will decide. Thus, the question we really need to ask is not how to create seamless public services, or how to diminish big tech’s market power, but rather: who will teach machines what counts as innovation, and what remains a political and policy choice?

Rainer Kattel is professor of innovation and public governance at the UCL Institute for Innovation and Public Purpose.
