Algorithm used to predict sepsis in hundreds of US hospitals isn’t as good as maker claims — study
Developer hits back at researchers’ ‘hypothetical approach’
An algorithm used by hundreds of US hospitals to predict whether patients with infections have developed sepsis is less accurate than its maker claims, according to a newly published study.
Sepsis is a leading cause of death in American hospitals. So Epic Systems, a major healthcare software provider whose products are used in the majority of US hospitals, developed a tool to identify whether a patient is at risk of sepsis.
The idea is that if hospitals can foresee cases of sepsis they’ll be able to provide care for patients before their condition gets worse. But not only does the algorithm tend to forecast patients will suffer from sepsis when they don’t, it’s less accurate than its maker claims at correctly identifying people who develop the life-threatening complication, according to peer-reviewed research.
Epic chose to use billing codes to define the outcome of sepsis
Epic reckons its algorithm is accurate up to 83 per cent of the time, yet a paper published in the Journal of the American Medical Association this week concluded the software is only right in about 63 per cent of cases.
The issue lies with how the model comes up with its predictions. “Epic chose to use billing codes to define the outcome of sepsis,” Karandeep Singh, co-author of the study and an associate professor focused on machine learning and healthcare at the University of Michigan, explained to The Register. The model analyses which drugs or medical procedures a patient is being charged for to determine whether that person is at risk of sepsis, we’re told.
It’s often not very helpful, therefore, in cases where antibiotics are already being administered to a patient to tackle sepsis. “In essence, they developed the model to predict sepsis that was recognized by clinicians at the time it was recognized by clinicians. However, we know that clinicians miss sepsis,” Singh added.
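The concern Singh describes can be sketched in a few lines. The snippet below is a toy illustration of labelling an outcome from billing codes, not Epic's actual method: the function name, the code set, and the logic are all assumptions made for the example. The point is that a billing-code label only comes into existence once clinicians have already recognised sepsis and billed for it, so a model trained on such labels learns to predict what clinicians have already spotted.

```python
# Toy illustration of deriving a sepsis outcome label from billing codes.
# Illustrative only -- this is not Epic's code or its real code set.

# ICD-10 codes for sepsis (A41.9: sepsis, unspecified organism;
# R65.20: severe sepsis without septic shock) -- a deliberately tiny set.
SEPSIS_BILLING_CODES = {"A41.9", "R65.20"}

def label_from_billing(billed_codes: set[str]) -> bool:
    """Label a hospitalization as a sepsis case if any sepsis code was
    billed. Such a code is typically entered only *after* clinicians
    have recognised and treated the condition, which is the leakage
    problem the researchers describe."""
    return bool(SEPSIS_BILLING_CODES & billed_codes)
```

Under this kind of labelling, any sepsis case clinicians missed never gets a positive label at all, so the model cannot learn to catch it.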
Singh said he and his colleagues raised their concerns with Epic in April. Epic, however, disagrees with the academics, and said the scientists had not fine-tuned the application prior to testing.
“The authors used a hypothetical approach,” a company spokesperson told El Reg. “They did not take into account the analysis and required tuning that needs to occur prior to real-world deployment to get optimal results. In order to predict who might become septic, the model is trained on past patients who had a clinical diagnosis of sepsis.”
The academics tested the model on 27,697 patients who had been hospitalized 38,455 times. Most did not go on to develop sepsis during their stay. Of the 2,552 patients who did develop sepsis, Epic’s algorithm was only able to predict seven per cent of them would do so before they were administered antibiotics, and it had a false positive rate of 18 per cent, the researchers said.
A better approach, it would seem, would be a model built on clinical criteria defined by health agencies, such as the US Centers for Disease Control and Prevention, rather than one relying on billing codes alone.
“This is not typically how sepsis is defined for the purposes of quality measurement or model development,” Singh told us. “There are several consensus criteria that exist. Either of these would have been preferable to simply using billing codes.” ®