7 June 2019 (updated 8 September 2021)

How AI risks replicating the prejudices of the past

Artificial intelligence has a responsibility to modernise alongside the society it serves.    

By Kriti Sharma

The biggest issue that artificial intelligence faces at the moment is not a problem of technical advancement – there are leaps being made all the time. It is about designing systems and products that humans can trust. And trust comes from transparency, responsibility, and ethical design. If an algorithm tells you to do something, you won’t do it if you don’t have confidence in its motives.

Perhaps the most important hurdle that AI needs to get over is the issue of bias. The algorithms in AI systems are trained using large datasets, and if those underlying datasets are biased, the output is likely to be biased as well.

This creates problems when AI is used to make decisions such as who gets a mortgage or a credit card, or who gets invited to a job interview. If there are historic patterns in the data, such as a higher concentration of men in senior leadership roles, then the AI is going to base its decisions on those patterns.
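
To make the mechanism concrete, here is a minimal sketch in Python. The data and the function name are invented for illustration – nothing here comes from any real hiring system – but it shows how a model trained on historical interview decisions simply reproduces the pattern it finds:

```python
from collections import defaultdict

# Hypothetical historical records: (gender, was_invited_to_interview).
history = [
    ("male", True), ("male", True), ("male", True), ("male", False),
    ("female", True), ("female", False), ("female", False), ("female", False),
]

# "Training" here is just counting: learn each group's historical invite rate.
invited = defaultdict(int)
total = defaultdict(int)
for gender, was_invited in history:
    total[gender] += 1
    invited[gender] += was_invited  # True counts as 1, False as 0

def predict_invite(gender: str) -> bool:
    # Invite a new candidate if their group was historically invited
    # more often than not - merit never enters the calculation.
    return invited[gender] / total[gender] > 0.5

print(predict_invite("male"))    # True: 3 of 4 were invited historically
print(predict_invite("female"))  # False: only 1 of 4 were invited
```

A real system would use a far more sophisticated model than this toy counter, but the failure mode is the same: if the pattern is in the data, the model will learn it.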

Bias can also be introduced to AI by the people working on it. I don’t believe it’s malicious, but if teams are not diverse then bias emerges, at a very low level, during the design process, and this affects the end product. The statistics on gender equality in AI are fairly depressing – women make up about 12 per cent of the workforce.

Take the growing number of voice assistant devices powered by AI. Alexa, Siri, Cortana and Google Assistant all have female voices or feminine personalities, and they do the mundane tasks: switching your lights on and off, ordering your shopping, playing your favourite music. The “male” AIs, such as IBM Watson and Salesforce Einstein, are the ones designed to make important business decisions.

It’s not just a question of gender, either. There is evidence that facial recognition systems are biased against ethnic minorities and women, because the algorithms were trained on a narrow range of faces. Background makes a difference too: people’s social mobility and education level shape the kinds of problems they are interested in solving with AI, which in turn affects who the technology is being designed for. It would certainly be a shame if the greatest technological advancement of our times wasn’t used for social good – improving healthcare for all, providing high-quality education, reducing inequality and so on.
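
One common way this kind of bias is detected is to measure a system’s error rate separately for each group rather than only in aggregate. The sketch below uses invented numbers, not results from any real facial recognition system, to show how a healthy-looking overall accuracy can hide a much worse figure for one group:

```python
from collections import defaultdict

# Hypothetical evaluation records: (demographic_group, prediction_correct).
results = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", True),
]

correct = defaultdict(int)
seen = defaultdict(int)
for group, ok in results:
    seen[group] += 1
    correct[group] += ok

# The aggregate number looks respectable...
overall = sum(correct.values()) / sum(seen.values())
print(f"overall accuracy: {overall:.0%}")  # 75%

# ...but disaggregating reveals one group is served far worse.
for group in sorted(seen):
    print(f"{group} accuracy: {correct[group] / seen[group]:.0%}")  # 100% vs 50%
```

An audit that only reports the aggregate figure would pass this system; the disaggregated view is what exposes the problem.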

I’m optimistic, because policymakers and legislators are now deeply interested in this topic. I find that very encouraging and refreshing. But businesses need to take more responsibility too. I genuinely believe that the future of our society should not be designed just by geeks like me. We need a wider range of people – people concerned with law, ethics, anthropology and the humanities – to take part in the AI movement, rather than simply being the people it happens to.


Kriti Sharma is the founder of AI for Good and technology advisor to the UK government and the United Nations.
