There's nothing AI and automation can't solve – except bias and inequality in the workplace, says report
Nope, it just makes them worse
RoTM AI and automation in the workplace risk creating new forms of bias and unfairness, worsening inequalities in the world of work, according to a UK think tank report published today.
The result of two years of research, the Fabian Society paper "Sharing the future: workers and technology in the 2020s" said that automating technologies create heightened risks for historically disadvantaged groups.
Among other evidence, it cited machine-learning algorithms that inform recruitment decisions based on outdated and discriminatory data.
"Algorithms and AI are being used to make life-changing decisions about recruitment and progression in the workplace, replicating the kinds of biases that plague human decision-making," the authors said.
The report went on to mention the well-known case in which Amazon was forced to abandon its AI recruitment software because it had learned from past hiring data to reject women coders.
"But similar commercial packages are being used more and more," the report said. "These algorithms are told to exclude information about sex, race and other characteristics covered by equality laws, but we heard how they use supposedly unrelated data that are actually correlated, such as where someone lives."
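The proxy effect the report describes can be sketched with synthetic data. In this hypothetical illustration (the names, weights, and rates are invented for the sketch, not taken from the report), historical hiring decisions were biased against one group, and residential segregation makes postcode correlate with group membership. A model that dutifully excludes the protected attribute and scores candidates only by postcode still reproduces the disparity:

```python
import random

random.seed(0)

# Invented illustration: "postcode" stands in for any feature
# correlated with a protected characteristic.
def make_history(n=10_000):
    rows = []
    for _ in range(n):
        group = random.choice(["A", "B"])  # protected characteristic
        # Residential segregation: postcode correlates with group.
        weights = [0.8, 0.2] if group == "A" else [0.2, 0.8]
        postcode = random.choices(["N1", "S2"], weights=weights)[0]
        # Historical decisions were biased against group B.
        hired = random.random() < (0.6 if group == "A" else 0.3)
        rows.append({"group": group, "postcode": postcode, "hired": hired})
    return rows

history = make_history()

# "Fair" model: the protected attribute is excluded; only the
# historical hire rate for the candidate's postcode is used.
def postcode_score(postcode):
    subset = [r for r in history if r["postcode"] == postcode]
    return sum(r["hired"] for r in subset) / len(subset)

# Average model score still differs by group, because postcode
# acts as a proxy for the excluded attribute.
for g in ("A", "B"):
    members = [r for r in history if r["group"] == g]
    avg = sum(postcode_score(r["postcode"]) for r in members) / len(members)
    print(g, round(avg, 2))
```

The group label never enters the scoring function, yet group A candidates receive systematically higher scores than group B candidates — which is the pattern the report's witnesses described.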
The report, supported by the union Community, said Unilever and Vodafone were among firms that have said they use facial-recognition technology to compare interviewees' physical responses with traits supposedly linked to success at work. But other experts argue human facial expressions display too much variety, especially across cultures and among some disabled people, for these techniques to be accurate and non-discriminatory.
"Without intervention, biased technology risks locking disadvantaged groups out of the changing labour market; ensuring that, in the near term, they face additional employment barriers through the COVID-19 recession and, in the longer term, they do not see the benefits of innovation," said the report, which followed an inquiry led by Labour MP and chair of the Home Affairs Select Committee Yvette Cooper.
Bias in ML algorithms
The report comes after experts have repeatedly asserted that the datasets used to train many of the ML models behind image-recognition and AI camera software are skewed towards white faces, making them prone to bias and more likely to discriminate against people of colour.
Anima Anandkumar, a professor of computer science at Caltech and director of Nvidia's machine-learning research group, also pointed out gender bias issues in computer vision, for example noting that "image cropping on Twitter and other platforms like Google News often focuses on women's torsos rather than their heads."
In June, PULSE, a face-upsampling tool built on StyleGAN (which was trained on 70,000 images scraped from Flickr), was found to demonstrate racial bias, tending to generate images of white people.
In July this year, Detroit Police reportedly made two wrongful facial-recognition-based arrests after the suspects were misidentified by software.
In November, a UK government review into bias in algorithmic decision-making found that it was "well established that there is a risk that algorithmic systems can lead to biased decisions, with perhaps the largest underlying cause being the encoding of existing human biases into algorithmic systems".
It recommended more transparency in how the models are created, as well as a holding to account of the businesses that build the models. It said, specifically, that more guidance is needed on ensuring that recruitment tools, for example, "do not unintentionally discriminate against groups of people, particularly when trained on historic or current employment data".
Get humans involved
More broadly, the Fabian Society determined that the adoption of automation in the workplace was likely to disproportionately affect disadvantaged groups, and argued that these effects are being exacerbated by the COVID-19 pandemic.
The report proposed a series of solutions including investment in training and skills. It also said employers should embrace "workplace partnership" and involve workers and trade unions in technology-related decisions.
Despite the disproportionate impact of automation, workers do welcome new technologies in the workplace, the report said, pointing out that IT has in many cases helped companies keep working during the pandemic.
But, as many experienced IT professionals know only too well, the design of technology and the way it is introduced is critical.
"People resent having to operate poorly designed technology that is difficult to use, breaks down or makes errors. Workers appreciate new technology when they see it as 'right for the job' and dislike it when it is dysfunctional, unsuitable or misunderstood by managers," the report concluded. ®