California suggests taking aim at AI-powered hiring software
Automated HR in the cross-hairs over discrimination law
A newly proposed amendment to California's hiring discrimination laws would make AI-powered employment decision-making software a source of legal liability.
The proposal would make it illegal for businesses and employment agencies to use automated-decision systems to screen out applicants who belong to a class protected by the California Department of Fair Employment and Housing. The broad language, however, means the law could easily be applied to "applications or systems that may only be tangentially related to employment decisions," lawyers Brent Hamilton and Jeffrey Bosley of Davis Wright Tremaine wrote.
Automated-decision systems and algorithms, both fundamental to the law, are broadly defined in the draft, Hamilton and Bosley said. The lack of specificity means that technologies designed to aid human decision-making in small, subtle ways could end up being lumped together with hiring software, as could third-party vendors who provide the code.
The proposed law also includes strict record-keeping requirements, doubling the record retention period from two to four years and requiring anyone using automated-decision systems to retain all machine-learning data generated during their operation and training.
Vendors are on the hook for their training datasets, too: "Any person who engages in the advertisement, sale, provision, or use of a selection tool, including but not limited to an automated-decision system, to an employer or other covered entity must maintain records of the assessment criteria used by the automated-decision system," the proposed text says. It also specifically requires vendors to maintain records for each customer they train models for.
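For a sense of what that might look like in practice, below is a minimal, hypothetical sketch of per-customer record retention: a vendor logs the assessment criteria and a pointer to the training data for each customer, and only purges records once the retention window has elapsed. Everything here, from the `ScreeningAuditLog` name to the four-year window expressed in days, is an illustrative assumption rather than anything the draft text prescribes.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Hypothetical sketch only: the draft requires vendors to retain records of
# assessment criteria per customer; this is one illustrative shape for that,
# not a structure the proposal actually prescribes.

RETENTION = timedelta(days=4 * 365)  # the proposed four-year retention period


@dataclass
class ScreeningRecord:
    customer: str            # the employer the model was trained for
    criteria: list[str]      # assessment criteria used by the system
    training_data_ref: str   # pointer to the retained training dataset
    created: datetime = field(default_factory=datetime.utcnow)


class ScreeningAuditLog:
    def __init__(self) -> None:
        self._records: list[ScreeningRecord] = []

    def record(self, customer: str, criteria: list[str], ref: str) -> None:
        self._records.append(ScreeningRecord(customer, criteria, ref))

    def purgeable(self, now: datetime) -> list[ScreeningRecord]:
        # Records may only be discarded once the retention window has passed.
        return [r for r in self._records if now - r.created > RETENTION]
```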
A big target
Applicant tracking systems (ATS) and recruiting management systems (RMS) are nearly universal: one 2021 study found that more than 90 per cent of businesses use such software to rank and filter candidates.
That same study suggests HR software of the kind covered by the proposed California law is also one of the reasons employers are struggling to fill roles. It concluded that data points on a CV often serve as proxies for traits an employer wants to filter out, but those proxies map imperfectly onto the traits themselves, leading to the exclusion of viable candidates.
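To make that proxy problem concrete, here is a deliberately naive, hypothetical screening filter of the sort the study describes: exact keyword matches and an employment-gap cutoff stand in for suitability. The keywords, the six-month threshold, and the candidates are all invented for illustration; no real product is being quoted.

```python
# Hypothetical illustration of the proxy problem: exact keyword matches and an
# employment-gap cutoff stand in for "suitability", so viable candidates fall
# through. The criteria and thresholds below are invented for illustration.

REQUIRED_KEYWORDS = {"python", "sql"}  # invented screening criteria
MAX_GAP_MONTHS = 6                     # employment gaps as a crude proxy


def passes_screen(cv_keywords: set[str], gap_months: int) -> bool:
    # A rigid filter: any missing keyword or a long gap means rejection,
    # regardless of why the gap exists or how skills are worded on the CV.
    return REQUIRED_KEYWORDS <= cv_keywords and gap_months <= MAX_GAP_MONTHS


# A candidate who writes "postgres" instead of "sql" and took parental leave
# is screened out, even though a human reviewer might well advance them.
print(passes_screen({"python", "postgres"}, 9))  # False: excluded by proxies
print(passes_screen({"python", "sql"}, 0))       # True: passes the screen
```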
That sort of inadvertent over-filtering isn't what the newly proposed California law targets: it focuses on the ways software can discriminate against protected classes, whether intentionally or not.
AI and automation tools have struggled with bias for years. California's newly proposed law offers no fix for that underlying problem, which could leave businesses in the state grappling with how to react, if at all.
Hamilton and Bosley suggest that California employers review their ATS and RMS software for conformance with the proposal, deepen their understanding of how the algorithms they use work, be prepared to demonstrate that the results of their processes are fair, and speak with vendors to make sure they are doing what is needed to comply.
The 45-day public comment period for the proposed changes is not yet open, meaning there's no timetable yet for the changes to be reviewed, amended, and submitted for passage. ®