People have empathy with AI… as long as they think it's human
Study finds emotional support from chatbots is more readily accepted if participants don't know it's an AI
A study of AI chat sessions has shown people tend to have more empathy with a chatbot if they think it is human.
Nine studies involving 6,282 participants found that people are more likely to reject emotional support from LLM-based chatbots unless they believe they are talking to a human.
Research published in Nature Human Behaviour said that some AI models "show remarkable abilities to engage with social–emotional situations and to mimic expressions of interpersonal emotional reactions, such as empathy, compassion and care."
It points out that AI may help with social interaction and provide emotional support, since earlier studies have shown that LLM-powered tools can determine a human's emotional state and that their responses can be perceived as empathetic.
Anat Perry, associate professor at Jerusalem's Hebrew University, and her colleagues studied participants' reactions to AI-generated conversation. Participants were told the responses were written either by a human or by an AI chatbot, even though all of them came from an AI.
Although subjects felt some empathy with all the responses, they tended to have more empathy with responses they falsely believed came from a human. They were also willing to wait longer for responses they thought were written by a person.
"Human-attributed responses were rated as more empathic and supportive, and elicited more positive and fewer negative emotions, than AI-attributed ones. Moreover, participants' own uninstructed belief that AI had aided the human-attributed responses reduced perceived empathy and support. These effects were replicated across varying response lengths, delays, iterations and large language models and were primarily driven by responses emphasizing emotional sharing and care," the researchers said.
"Additionally, people consistently chose human interaction over AI when seeking emotional engagement. These findings advance our general understanding of empathy, and specifically human–AI empathic interactions."
The research paper argues this is an interesting result, particularly because, at face value, LLMs would seem to have the edge over humans when it comes to expressing empathy.
It cited studies showing empathy is hard work. People often fail to grasp another person's perspective, and struggle to empathize when fatigued or burned out. They may also avoid empathizing altogether because it is taxing.
"In contrast to humans, LLMs can accurately capture our emotional situation by analyzing patterns and associations of phrases, and they do so instantly, without tiring or being preoccupied. These models show impressive abilities to infer emotional states," the paper said.
Other research has suggested it might be a two-way street: LLMs can show signs of apparent emotional stress. Earlier this year, a study published in Nature found that when GPT-4 was subjected to traumatic narratives and then asked to respond to questions from the State-Trait Anxiety Inventory, its anxiety score "rose significantly" from a baseline of no/low anxiety to a consistently highly anxious state.
While some skepticism about the inferred emotional states of chatbots is justified, it's worth remembering that people perform badly at telling AI-generated poetry apart from the work of human bards.
A study published in November last year suggested readers mistake the complexity of human-written verse for incoherence created by AI and underestimate how human-like generative AI can appear. ®