If you are looking to get back into work or change career paths, your CV could now be read and screened by an AI system.
Increasing numbers of firms, including Vodafone, PwC and Unilever, are using AI technology to filter applications in the search for the perfect candidate. However, a law proposed by the European Commission could prove troublesome for those looking to adopt new, smart methods of hiring.
Under the Artificial Intelligence Act, all AI systems in the EU would be categorised in terms of their risk to citizens’ privacy, livelihoods and rights.
Any system determined to pose an “unacceptable risk” would be banned outright, while those deemed “high risk” would be subject to strict obligations before being put on the market.
Those developing AI-based recruitment tools would be required to conduct risk assessments, include “appropriate” human oversight measures, maintain high levels of security and use high-quality datasets.
Why would recruitment technologies be considered high risk? Some HR systems discriminate against applicants based on their ethnic, socio-economic or religious background, gender, age or abilities, according to Natalia Modjeska, AI research director at analysts Omdia.
Modjeska, one of the key speakers at the AI Summit London, says biased systems “perpetuate structural inequalities, violate fundamental human rights, break laws, and cause significant suffering to people from already marginalised communities”.
Such tools could also harm the businesses using them, with high-performing candidates potentially left out. “Let’s not forget about the reputational damage biased AI systems inflict,” Modjeska adds. “Millennials and zoomers value diversity, inclusion and social responsibility, while trust is the fundamental prerequisite that underlies all relationships in business.”
The law would also cover freelancers, since it refers to “persons in work-related contractual relationships”, says Shireen Shaikh, a lawyer at Taylor Wessing.
To avoid falling foul of the prospective law, Shaikh says developers should embrace transparency in how their AI makes decisions about candidates.
“The machine must not be left in charge, meaning the system’s intelligence should be capable of human oversight,” Shaikh continues. “It will be for the provider to identify what ‘human oversight measures’ have been taken when designing the product and also which are available when operating it.”
Juggle Jobs is one platform that would get a “high-risk” tag under the proposed law. The company, which “helps organisations find and manage experienced professionals on a flexible basis”, says it supports “well thought-through oversight when done correctly”.
Its CEO, Romanie Thomas, notes that AI-based hiring tools shorten the average time to shortlist applicants, adding that over 65 per cent of the platform’s interviewed candidates were female and 30 per cent were non-white.
It remains to be seen how much the proposed law will affect companies. But one thing is certain: the automation and digitisation of recruitment is only going to increase, and biases are expected to follow, no matter what measures are taken to hide them.
Ben Wodecki is summit series lead correspondent, with a contribution from Jackson Szabo, AI and IoT events and editorial director.