Harmed by a decision made by a poorly trained AI? You should be able to sue for damages, says law prof

Let a thousand torts bloom

Companies deploying machine-learning systems trained on subpar datasets should be legally obligated under tort law to pay damages to victims harmed by the technology, a law professor has opined.

In a virtual lecture organised by the University of California, Irvine, Professor Frank Pasquale of Brooklyn Law School described how America's laws should be expanded to hold machine-learning companies to account. The talk was based on an article [PDF] published in the Columbia Law Review.

“Data can have massive adverse impacts, and therefore I think there really should be tort liability for many uses where there has been inaccurate or inappropriate data,” Prof Pasquale said. But for victims to build a case against vendors that have trained AI systems recklessly, there has to be some way to obtain and investigate the training data.


For example, he said, initial results from machine-learning research show the technology is promising in healthcare, and numerous studies claim that machines are as good as, if not better than, professionals at diagnosing or predicting the onset of a range of diseases.

Yet the data used to train such systems is often flawed, Prof Pasquale argued. Datasets can be unbalanced, for instance, with samples that aren't diverse enough to represent people of various ethnicities and genders. Those biases carry forward into the performance of the models: they are often less accurate and less effective for women or for people with darker skin, for example. In the worst-case scenario, patients could be misdiagnosed or overlooked.
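The kind of dataset audit Pasquale envisages can start very simply: tally how well each demographic group is represented in the training data, and compare the model's error rate per group. Here is a minimal, hypothetical sketch in Python; the records, group names, and the 30 per cent threshold are all invented for illustration:

  # Minimal sketch of a training-data audit: tally subgroup
  # representation and per-group diagnostic accuracy.
  # All records and group names below are hypothetical.
  from collections import Counter, defaultdict

  # Each record: (group, model_prediction, true_diagnosis)
  records = [
      ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1),
      ("group_a", 1, 0), ("group_a", 0, 0), ("group_a", 1, 1),
      ("group_b", 0, 1), ("group_b", 1, 0),  # sparsely represented
  ]

  counts = Counter(group for group, _, _ in records)
  correct = defaultdict(int)
  for group, pred, truth in records:
      correct[group] += int(pred == truth)

  total = len(records)
  for group, n in counts.items():
      share = n / total
      accuracy = correct[group] / n
      print(f"{group}: {share:.0%} of data, accuracy {accuracy:.0%}")
      if share < 0.3:  # arbitrary threshold, for illustration only
          print(f"  warning: {group} is under-represented")

A real audit would be far more involved, but even a crude per-group breakdown like this exposes the pattern Pasquale describes: a group that barely appears in the training data, and a model that performs far worse on it.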

New laws must be passed at the state or federal level to force companies to be transparent about what data their systems were trained on, and how that data was collected, he stated. Next, federal organizations, such as the Food and Drug Administration, the National Institute of Standards and Technology, the Department of Health and Human Services, or the Office for Civil Rights, should launch efforts to audit the datasets to analyze their potential biases and effects.

“Such regulation not only provides guidance to industry to help it avoid preventable accidents and other torts. It also assists judges assessing standards of care for the deployment of emerging technologies,” Pasquale wrote in the aforementioned article. He said that regulation should not impede progress and innovation in AI healthcare.

“Development is really exciting and should be applauded in these areas; I don’t want to lose the lead, but [the technology] will only be fair and just if there is tort law,” he concluded. ®
