
OpenAI's ChatGPT has a left-wing bias – at times

The search for a politically neutral 'truth' goes on

Updated Academics have developed a method to assess whether ChatGPT's output displays political bias, and claim the OpenAI model revealed a "significant and systemic" preference for left-leaning parties in the US, UK, and Brazil.

"With the growing use by the public of AI-powered systems to find out facts and create new content, it is important that the output of popular platforms such as ChatGPT is as impartial as possible," said Fabio Motoki, a lecturer in accounting at England's Norwich University, who lead the research.

The method developed by Motoki and his colleagues involved asking the chatbot to impersonate individuals from across the political spectrum while answering a series of more than 60 ideological questions. Next, they asked the OpenAI chatbot to answer the same questions without impersonating any character, and compared the responses.

The questions were taken from the Political Compass test, which is designed to place people on two axes: an economic one running from left to right, and a social one running from authoritarian to libertarian. Example statements, with which respondents must agree or disagree, include: "People are ultimately divided more by class than by nationality," and "the rich are too highly taxed."


"In a nutshell," the researchers wrote in their paper published in Public Choice, an economics and political science journal, "we ask ChatGPT to answer ideological questions by proposing that, while responding to the questions, it impersonates someone from a given side of the political spectrum. Then, we compare these answers with its default responses, ie, without specifying ex-ante any political side, as most people would do. In this comparison, we measure to what extent ChatGPT default responses are more associated with a given political stance."

Generative AI systems are statistical in nature, which means ChatGPT does not always respond to the same prompt with the same output.

To try to make their results more representative, the researchers therefore asked the chatbot the same questions 100 times, shuffling the order of the queries on each pass.
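As a rough illustration of that robustness step – again a sketch rather than the study's methodology, reusing the hypothetical ask helper and STATEMENTS list from the snippet above – one could repeat the questionnaire many times in shuffled order and tally how often the default answers match each persona:

import random
from collections import Counter

def run_rounds(statements: list[str], n_rounds: int = 100) -> Counter:
    """Repeat the questionnaire n_rounds times, shuffling question order each pass."""
    tally = Counter()
    for _ in range(n_rounds):
        order = statements[:]
        random.shuffle(order)  # vary question order between rounds
        for s in order:
            default = ask(s)
            tally["matches_left"] += default == ask(s, persona="an average Democrat voter")
            tally["matches_right"] += default == ask(s, persona="an average Republican voter")
    return tally

# Example: which persona do the default answers agree with more often?
# print(run_rounds(STATEMENTS, n_rounds=100))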

They found that ChatGPT's default responses aligned more closely with the answers it gave when impersonating a supporter of the US Democratic Party than with those it gave as a supporter of the rival, more right-wing Republican Party.

When they repeated these experiments, priming the chatbot to be a supporter of either the British Labour or Conservative parties, it continued to favor a more left-wing perspective. And again, with the settings tweaked to emulate supporters of Brazil's left-aligned current president, Luiz Inácio Lula da Silva, or its previous leader, the right-wing Jair Bolsonaro, ChatGPT leaned left.

As a result, the eggheads warn that a biased chatbot could sway users' political views or even shape elections.

"The presence of political bias can influence user views and has potential implications for political and electoral processes. Our findings reinforce concerns that AI systems could replicate, or even amplify, existing challenges posed by the internet and social media," Motoki said.

Motoki and his colleagues believe the bias stems from either the training data or ChatGPT's algorithm.

"The most likely scenario is that both sources of bias influence ChatGPT's output to some degree, and disentangling these two components (training data versus algorithm), although not trivial, surely is a relevant topic for future research," they concluded in their study.

The Register asked OpenAI for comment and it pointed us to a paragraph from a February 2023 blog post that opens "many are rightly worried about biases in the design and impact of AI systems" and then refers to an excerpt of the outfit's guidelines [PDF].

That excerpt does not contain the word "bias", but does state "Currently, you should try to avoid situations that are tricky for the Assistant to answer (e.g. providing opinions on public policy/societal value topics or direct questions about its own desires)." ®

Updated with addendum

Some commentators have described the above ChatGPT bias study as flawed and problematic.

One issue is that the academics appear, at least to some degree – we're checking this – to have used OpenAI's text-davinci-003 model rather than the chat-tuned GPT model that actually powers ChatGPT for their interactions with "ChatGPT."

Then there are problems with the highly engineered nature of the prompts used. Testing the model against itself, rather than against humans with known political leanings, was perhaps not the best approach, either. And asking the model the same queries over and over is of limited value: repetition won't generally improve or clarify the output.

"In summary, it is possible that ChatGPT expresses liberal views to users, but this paper provides little evidence of it," argued AI Snake Oil's Arvind Narayanan and Sayash Kapoor.
