When it comes to industry regulation, the story normally goes that after years of self-regulation or no regulation at all, a scandal kicks regulators into shape. With artificial intelligence, that is not an option. So far regulators have been caught between a rock and a hard place. Act too soon, and ill-formed policies stifle innovation. Act too late, and you fail to protect society and industry. Citizens around the world face a split choice: should they look to the state to regulate AI, or to business?
No country yet has a coherent approach to regulating AI, and proposed regulations quickly morph into principles and investment strategies. Take the 2019 OECD Principles on Artificial Intelligence, the first universal standards agreed to by major nations to promote AI technologies that respect human rights and democracy. In July, the European Commission published its AI High-Level Expert Group Assessment List, a checklist for businesses to self-assess the trustworthiness of their AI systems. While such voluntary, non-binding guidelines encourage the responsible development of AI, they carry no regulatory clout.
Setting standards is an important first step; the next is to translate these principles into national and international law. Much of the talk about regulating AI is also positioned alongside driving inward investment. Singapore’s AI Governance framework lists AI policy principles alongside investment targets. This is a sign of how global competition for AI, and the promise of economic reward, has pushed AI regulation to the sidelines. Yet effective regulation will do more to help AI reach its full economic potential than to suppress it.
Look at the top three tech hubs – the United States, China and the United Kingdom – and each is taking a very different approach. The US free-market model has championed corporate self-regulation. China’s authoritarian approach, outlined in its 2017 New Generation AI Development Plan, promotes a national push to become the world’s “leading AI power by 2030” and sets the country in competition with the rest of the world. The UK is taking the middle ground, working with both the public and private sectors.
When it comes to the responsible and ethical development of AI, the UK leads the pack. It is uniquely placed to capitalise on this and to develop policies and laws grounded in a pragmatic understanding of how the technology works and what its implications are. The 2018 AI Sector Deal and the flurry of newly formed bodies since – the Centre for Data Ethics and Innovation, the Regulatory Horizons Council and the UK government’s AI Council – show immense promise to bridge the gap between industry and government, and to advocate for AI regulation that supports, rather than stifles, innovation.
Elliot Wellsteed-Crook is head of partnerships and public relations at London Tech Week.