AI recruitment software is 'automated pseudoscience', Cambridge study finds
HR diversity claims via software are rot, according to boffins
Claims that AI-powered recruitment software can boost diversity of new hires at a workplace were debunked in a study published this week.
Advocates of machine learning algorithms trained to analyze body language and predict the emotional intelligence of candidates believe the software provides a fairer way to assess workers if it doesn't consider gender and race. They argue the new tools could remove human biases and help companies meet their diversity, equity, and inclusion goals by hiring more people from underrepresented groups.
A paper published in the journal Philosophy and Technology by a pair of researchers at the University of Cambridge, however, demonstrates that the software is little more than "automated pseudoscience". Six computer science undergraduates replicated a commercial model used in industry to examine how AI recruitment software predicts people's personalities from images of their faces.
Dubbed the "Personality Machine", the system looks for the "big five" personality traits: extroversion, agreeableness, openness, conscientiousness, and neuroticism. The researchers found the software's predictions were swayed by changes in people's facial expressions, lighting, and backgrounds, as well as by their choice of clothing. These features have nothing to do with a jobseeker's abilities, the pair argue, which makes using AI for recruitment fundamentally flawed.
"The fact that changes to light and saturation and contrast affect your personality score is proof of this," Kerry Mackereth, a postdoctoral research associate at the University of Cambridge's Centre for Gender Studies, told The Register. The paper's results are backed up by previous studies, which have shown how wearing glasses and a headscarf in a video interview or adding in a bookshelf in the background can decrease a candidate's scores for conscientiousness and neuroticism, she noted.
Mackereth also explained that these tools are likely trained to look for attributes associated with previously successful candidates, and are therefore more likely to recruit similar-looking people rather than promote diversity.
"Machine learning models are understood as predictive; however, since they are trained on past data, they are re-iterating decisions made in the past, not the future. As the tools learn from this pre-existing data set a feedback loop is created between what the companies perceive to be an ideal employee and the criteria used by automated recruitment tools to select candidates," she said.
The researchers believe the technology needs to be regulated more strictly. "We are concerned that some vendors are wrapping 'snake oil' products in a shiny package and selling them to unsuspecting customers," said co-author Eleanor Drage, a postdoctoral research associate also at the Centre for Gender Studies.
"While companies may not be acting in bad faith, there is little accountability for how these products are built or tested. As such, this technology, and the way it is marketed, could end up as dangerous sources of misinformation about how recruitment can be 'de-biased' and made fairer," she added.
Mackereth said that although the European Union AI Act classifies such recruitment software as "high risk," it's unclear what rules are being enforced to reduce those risks. "We think that there needs to be much more serious scrutiny of these tools and the marketing claims which are made about these products, and that the regulation of AI-powered HR tools should play a much more prominent role in the AI policy agenda."
"While the harms of AI-powered hiring tools appear to be far more latent and insidious than more high-profile instances of algorithmic discrimination, they possess the potential to have long-lasting effects on employment and socioeconomic mobility," she concluded. ®