The MP David Amess was murdered last week at the hands of a person who is thought to have “self-radicalised” online. Last night (18 October), the BBC’s Panorama aired a report on how the online abuse of women can become similarly self-reinforcing. Part of the Panorama investigation involved creating an account on five social media platforms and interacting with misogynistic posts; on Facebook and Instagram, the recommendation software appears to have decided to serve up a greater intake of hatred and violence.
There are already too many cases to list in which software has made immoral and disturbing decisions. Amazon’s recruitment software was ditched after it was found to be biased against women; Facebook’s automatic moderation has been accused of racial prejudice; the New York Times described how YouTube’s algorithm recommended videos of children to paedophiles.
The reasoning behind such decisions is becoming more important in terms of public safety and legal accountability. But the UK government is now looking at unpicking the current legal protection we have against automated decision-making.
In February this year, three Tory MPs – Iain Duncan Smith, Theresa Villiers and George Freeman – combined to form an entity called “TIGRR”: the Taskforce on Innovation, Growth and Regulatory Reform, accountable directly to the Prime Minister. The mission of the taskforce was to find ways for British businesses to “scale up unencumbered by any unnecessary administrative burdens”.
The 130-page report delivered by TIGRR to the PM in May marked a number of pieces of red tape for removal. Among these is Article 22 of the EU’s General Data Protection Regulation (GDPR, copied into UK law when we left the EU), which states that you “have the right not to be subject to a decision based solely on automated processing”, and that if you do have a decision made about you solely by software, you have the right “to obtain human intervention… and to contest the decision”.
The report proposes removing the existing protection UK citizens have against automated decision-making, arguing that Article 22 “makes it burdensome, costly and impractical for organisations to use AI to automate routine processes”. The fact that some AI systems are better at making decisions than humans, it argues, shows that human review is unnecessary. “Article 22 of GDPR should be removed,” it concludes.
The Department for Digital, Culture, Media and Sport (DCMS) is now building on these proposals in a consultation called “Data: a new direction”. BCS, the chartered institute for IT, said last week that it supported the consultation but that “the right to human review of decisions made fully by computers should not be removed”.
Sam de Silva, chair of the BCS Specialist Law Group, agreed that Article 22 might not be the best tool for the job – as part of the GDPR it is concerned with the processing of personal data, but software can make a “life-changing decision” about someone without using their data. “If we think the protection is important enough, it should not go into the GDPR,” he said.
The EU is already drawing up policy that will place transparency obligations on some AI systems, as are legislators in the US, but the DCMS description of a Britain that will “seize opportunities with its new regulatory freedoms” shows there are some in government who see fewer rules as attractive to business.
In the past, lighter regulation was certainly attractive to business, but that approach is now old-fashioned. If the largest markets for social media and search giants impose accountability requirements in the US and EU, that’s what the UK will get too. British businesses won’t suddenly be better able to compete with Facebook if we don’t regulate its algorithms.
There’s a strong business and social argument for heading in the other direction, however, and imposing a higher standard of protections for businesses and consumers in the UK against the automated decision-making of global companies. Good regulation in this area could well improve confidence in the services created and competition among those offering them.
The unsettling truth about automated decision-making is that even when researchers have the time and resources to analyse why software made a given decision, they often aren’t able to do so because artificial neural networks are so complicated and ambiguous. Ofcom’s guidance on online content moderation by AI concedes that “it is extremely difficult to develop truly transparent, explainable models”. This only makes good regulation more urgent; such decisions may be made in a mysterious box, but they can have devastating consequences in the outside world.