Supreme Court supremo ponders AI-powered judges, concludes he's not out of a job yet
Justice Roberts thinks ML can help in legal cases, if humans keep their hands on the tiller
US Supreme Court Chief Justice John Roberts believes that artificial intelligence will play an increasingly important role in the legal process, but he expects "human judges will be around for a while."
Roberts made that observation in his 2023 Year-End Report on the Federal Judiciary [PDF], an annual report series that had not previously touched on the topic.
"AI obviously has great potential to dramatically increase access to key information for lawyers and non-lawyers alike," Roberts wrote. "But just as obviously it risks invading privacy interests and dehumanizing the law."
Roberts cited the potential value of AI systems to help people who cannot afford legal representation by allowing them to prepare court filings on their own. At the same time, he cautioned that AI applications have made headlines for their tendency to hallucinate, "which caused the lawyers using the application to submit briefs with citations to non-existent cases. (Always a bad idea.)"
As if to underscore that concern, documents unsealed last week revealed that Michael Cohen, the attorney who previously handled legal affairs for former President Donald Trump, had given his own lawyer fake legal citations generated by Google Bard. He did so in support of a motion seeking an early end to his court-ordered supervision, which followed his 2018 admission of campaign finance violations.
Roberts also argued that machines cannot currently match a human judge's ability to assess the sincerity of a defendant's speech. "Nuance matters: Much can turn on a shaking hand, a quivering voice, a change of inflection, a bead of sweat, a moment’s hesitation, a fleeting break in eye contact," he wrote.
And he went on to observe that, in criminal cases where AI is used to assess flight risk, recidivism or other predictive decisions, there's ongoing controversy about due process, reliability, and biases that such systems may contain.
"At least at present, studies show a persistent public perception of a 'human-AI fairness gap,' reflecting the view that human adjudications, for all of their flaws, are fairer than whatever the machine spits out," Roberts wrote.
That perception was challenged in a September paper published through the National Bureau of Economic Research by Harvard academics Victoria Angelova, Will Dobbie, and Crystal Yang. The paper, "Algorithmic Recommendations and Human Discretion," finds that when human judges override algorithmic recommendations on whether to release or detain a defendant on bail, 90 percent of them underperform the algorithm in spotting potential recidivists.
"This finding indicates that the typical judge in our setting is less skilled at predicting misconduct than the algorithm and that we could substantially decrease misconduct rates by automating release decisions," the authors state in their paper.
At the same time, 10 percent of judges manage to outperform the algorithm when overriding its recommendations, proving better at anticipating misconduct by defendants. The common factors among these "high-skill judges" are that they're less likely to have worked previously in law enforcement, and they're better at using private information not available to the algorithm.
The paper says that low-skill judges pay more attention to demographic factors like race, while high-skill judges focus more on non-demographic issues like mental health, substance abuse, and financial resources.
Human judges no doubt will be around for a while, Roberts opines. And for the underperforming majority, it may be that AI can help make them better, at least in the context of pretrial decision-making. ®