US regulators crack down on AI playing doctor in healthcare

Code might get things wrong for patients but we must think of the corporate profits

AI algorithms used to determine eligibility for US government healthcare coverage are increasingly verboten, the federal agency Centers for Medicare & Medicaid Services (CMS) told health insurance companies in a memo this week. 

The 14-page memo touches on a wide variety of issues related to the Department of Health and Human Services subsidiary's Medicare Advantage rules published in April last year.

Passages in the memo about algorithms being used to make healthcare decisions, however, seem tailor-made to address controversy over the use of such software in denying Medicare Advantage coverage, which has led to multiple lawsuits. Medicare Advantage is a privately run alternative to the US federal government's standard Medicare offerings.

"An algorithm that determines coverage based on a larger data set instead of the individual patient's medical history, the physician's recommendations, or clinical notes would not be compliant [with Medicare rules enacted in April]," CMS said in the memo. The rule affects Medicare coverage from the start of 2024.

UnitedHealthcare, which offers Medicare Advantage plans, was sued in November by the estates of two elderly men who accused the company of using a faulty AI system to deny care to patients, including reducing the length of hospital recovery stays.

The healthcare AI model used by the company, nH Predict, has been accused of relying on generic healthcare data that doesn't account for the needs of individual patients, rather than on human decisions based on each patient's particular care requirements.

"Under Medicare Advantage Plans, patients who have a three-day hospital stay are typically entitled to up to 100 days in a nursing home," the lawsuit argued. "With the use of the nH Predict AI Model … patients rarely stay in a nursing home more than 14 days before they start receiving payment denials."

nH Predict has also been criticized for a high rate of inaccurate decisions. According to the lawsuit, 90 percent of nH Predict's determinations ended up being reversed on appeal.

Health insurance firm Humana was sued on the same grounds in December, and the CMS memo calls out the exact practice raised in the lawsuits – improperly denying inpatient care – as unlawful.

"An algorithm or software tool can be used to assist providers or [Medicare Advantage] plans in predicting a potential length of stay, but that prediction alone cannot be used as the basis to terminate post-acute care services," CMS said, adding that AI algorithms can't be used as the basis to deny or downgrade inpatient admissions either.

"Because publicly posted coverage criteria are static and unchanging, artificial intelligence cannot be used to shift the coverage criteria over time," CMS added. "Predictive algorithms or software tools cannot apply other internal coverage criteria that have not been explicitly made public and adopted in compliance with the evidentiary standard."

Choices such as those, of course, end up saving Medicare Advantage providers money – just like any other kind of denial-of-service in the US' somewhat bizarre healthcare system. 

CMS said it's also concerned that healthcare decision algorithms can exacerbate discrimination and bias, and warned that the US Affordable Care Act prohibits any such technology from limiting access to healthcare – rules the agency said it can enforce.

Whether the memo could affect the UnitedHealthcare and Humana lawsuits is unclear – CMS declined to comment. ®
