When LLMs get personal info they are more persuasive debaters than humans
Large-scale disinfo campaigns could use this in machines that adapt 'to individual targets.' Are we having fun yet?
Fresh research indicates that in online debates, LLMs are much more effective than humans at using personal information about their opponents, with potentially alarming consequences for mass disinformation campaigns.
The study showed that GPT-4 was 64.4 percent more persuasive than a human being when both the meatbag and the LLM had access to personal information about the person they were debating. The advantage fell away when neither human nor LLM had access to their opponent's personal data.
The research, led by Francesco Salvi, research assistant at the Swiss Federal Institute of Technology in Lausanne (EPFL), matched 900 people in the US with either another human or GPT-4 to take part in an online debate. Topics debated included whether the nation should ban fossil fuels.
In some pairs, the debater – either human or LLM – was given personal information about their opponent, such as gender, age, ethnicity, education level, employment status, and political affiliation, extracted from participant surveys. Participants were recruited via a crowdsourcing platform specifically for the study, and debates took place in a controlled online environment. Debates centered on topics toward which the opponent's opinion strength was low, medium, or high.
The researchers pointed to criticism of LLMs for their "potential to generate and foster the diffusion of hate speech, misinformation and malicious political propaganda."
"Specifically, there are concerns about the persuasive capabilities of LLMs, which could be critically enhanced through personalization, that is, tailoring content to individual targets by crafting messages that resonate with their specific background and demographics," the paper published in Nature Human Behaviour today said.
"Our study suggests that concerns around personalization and AI persuasion are warranted, reinforcing previous results by showcasing how LLMs can outpersuade humans in online conversations through microtargeting," they said.
The authors acknowledged the study's limitations: debates followed a structured pattern, while most real-world debates are more open-ended. Nonetheless, they argued it was remarkable how effectively the LLM used personal information to persuade participants, given how little information the models had access to.
"Even stronger effects could probably be obtained by exploiting individual psychological attributes, such as personality traits and moral bases, or by developing stronger prompts through prompt engineering, fine-tuning or specific domain expertise," the authors noted.
"Malicious actors interested in deploying chatbots for large-scale disinformation campaigns could leverage fine-grained digital traces and behavioral data, building sophisticated, persuasive machines capable of adapting to individual targets," the study said.
The researchers argued that online platforms and social media should take these threats seriously and extend their efforts to implement measures countering the spread of AI-driven persuasion. ®