Google AI chatbot more empathetic than real doctors in tests
No, that does not mean machines can replace primary care physicians
An AI chatbot was better at diagnosing medical ailments and communicating results than human physicians in text-based conversations, a research paper from Google claims.
The system, named Articulate Medical Intelligence Explorer (AMIE), is a large language model trained to collect medical information and conduct clinical conversations. AMIE was designed to analyze symptoms described by users, ask questions, and predict diagnoses.
In a randomized test, 20 mock patients presenting with fabricated illnesses conversed via text with either AMIE or one of 20 professional primary care physicians recruited to provide the human comparison.
The patients didn't know whether they were conversing with AMIE or a real physician, and were asked to rate the quality of their interactions without that knowledge.
The results showed that most of the mock patients preferred chatting with AMIE over the real doctors across the 149 case scenarios tested in the trial. The participants said the AI chatbot was better at understanding their concerns, and was more empathetic, clear, and professional in its replies. That's not too surprising given that an AI chatbot's persona and tone can be programmed so that it behaves more consistently, without pesky human problems like being tired or distracted.
Interestingly, AMIE seemed more accurate at diagnosing medical issues too. But does this mean that AI chatbots are better than doctors at providing medical care? Not at all, Google explains.
Although the results seem promising, primary care physicians and patients normally interact in person and can build a relationship over time. Clinicians also draw on far more than text descriptions when they make diagnoses, so the experiment doesn't reflect real clinical practice – as Google acknowledges.
"Our research has several limitations and should be interpreted with appropriate caution," the Google researchers admitted.
"Firstly, our evaluation technique likely underestimates the real-world value of human conversations, as the clinicians in our study were limited to an unfamiliar text-chat interface, which permits large-scale LLM–patient interactions but is not representative of usual clinical practice."
The goal isn't to replace primary care physicians. Instead, Google believes AI chatbots could be useful tools to support patients who might not have access to healthcare. But deploying such a system in the real world is risky, and using it responsibly will require more work, the researchers said.
"Translating from this limited scope of experimental simulated history-taking and diagnostic dialogue, towards real-world tools for people and those who provide care for them, requires significant additional research and development to ensure the safety, reliability, fairness, efficacy, and privacy of the technology," the team concluded in their paper.
"If successful, we believe AI systems such as AMIE can be at the core of next generation learning health systems that help scale world class healthcare to everyone." ®