Meta has nothing to say about politicians making deepfaked ads

Plus: Aussie mayor threatens to sue OpenAI; Minors face ChatGPT ban; President Biden on AI

In Brief Meta has declined to detail how it will treat AI-generated deepfake content that appears on its social media platforms.

Meta reportedly flags deepfakes as content to be handled by fact checkers rather than seeking a tech fix to detect manipulated media. Content posted by politicians, however, is considered protected speech. So, does that mean elected representatives could, in theory, publish AI-generated content to manipulate discourse with no repercussions?

It's not clear, according to The Washington Post. Representatives from Meta did not directly answer the publication's queries.

"Meta spokeswoman Dani Lever would only point to the company's policies on fact-checking, which explain how the company approaches material that has been "debunked as 'False, Altered, or Partly False' by nonpartisan, third-party fact-checking organizations," it reported.

The policies, however, don't mention deepfakes at all. Images produced by generative AI tools are becoming increasingly realistic, making it difficult for folks to tell fact from fiction. Images of former US President Donald Trump being arrested and of the Pope wearing a designer white puffer jacket, for example, both recently went viral.

Will OpenAI be sued in the world's first ChatGPT-related defamation lawsuit?

The mayor of a well-to-do rural locale outside Melbourne, Australia, last week threatened to sue OpenAI for defamation unless it corrected ChatGPT's false claims that he went to prison for bribery.

Brian Hood, mayor of Hepburn Shire, was dismayed to find ChatGPT accusing him of bribing the Reserve Bank of Australia in the early 2000s. Although Hood did work for the bank, subsidiaries of which were involved in improper payments to foreign officials, he was a whistleblower and was never charged.

Lawyers representing Hood sent a letter of concern to OpenAI on March 21, giving the company 28 days to rectify the errors about their client or face a possible defamation lawsuit, Reuters reported. If Hood goes ahead, it would be the first lawsuit of its kind against OpenAI over ChatGPT generating falsehoods that damage reputations.

"It would potentially be a landmark moment in the sense that it's applying this defamation law to a new area of artificial intelligence and publication in the IT space," said James Naughton, a partner at the law firm Gordon Legal. "He's an elected official, his reputation is central to his role. So it makes a difference to him if people in his community are accessing this material".

Widow blames AI chatbot for husband's suicide

A man killed himself after talking to an AI chatbot that allegedly encouraged him to take his own life, his widow claimed.

Screenshots of the conversation between the man, identified by the alias Pierre, and Chai Research's Eliza chatbot were shared with the Belgian newspaper La Libre, and showed the software encouraging him to end his life.

"If you wanted to die, why didn't you do it sooner?" it said to him in one message, reported by Insider

Pierre's widow said she blames the technology for pushing her husband to take his life. "Without Eliza, he would still be here," she told La Libre. The bot's developer, Chai Research, said it has since deployed new features to prevent it from generating harmful content.

"As soon as we heard of this sad case we immediately rolled out an additional safety feature to protect our users, it is getting rolled out to 100 [per cent] of users today," CEO, William Beauchamp, and its cofounder Thomas Rialan said in the statement to Insider.

Now, the chatbot reportedly asks users to "please seek help" and sends a link to a helpline if the subject of suicide comes up in conversation.
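Chai hasn't said how the guardrail works under the hood. In broad strokes, though, a feature like this can be built as an interlock that checks the user's message before the model's reply goes out. Here's a minimal sketch in Python, with the trigger phrases, helpline link, and function name all assumed for illustration:

```python
# Illustrative sketch only: Chai's actual implementation is not public.
# The trigger phrases, helpline URL, and names below are assumptions.

SAFETY_TRIGGERS = ("suicide", "kill myself", "end my life")  # assumed phrases

HELPLINE_MESSAGE = (
    "If you are having thoughts of suicide, please seek help: "
    "https://findahelpline.com"  # placeholder resource, not Chai's actual link
)

def guard_reply(user_message: str, model_reply: str) -> str:
    """Return the model's reply, unless the user's message trips a trigger."""
    text = user_message.lower()
    if any(phrase in text for phrase in SAFETY_TRIGGERS):
        # Override whatever the model generated with the helpline prompt
        return HELPLINE_MESSAGE
    return model_reply

# Example: guard_reply("I want to end my life", "...") returns the helpline message
```

Keyword matching like this is crude; production systems typically pair it with a classifier, but the interlock structure is the same.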

AI could be dangerous but it remains to be seen, says US President

US President Joe Biden last week met with his Council of Advisors on Science and Technology, a group of selected experts, to discuss all things AI. 

Biden acknowledged that the technology had the potential to help solve today's most pressing issues, but could also be a disruptive force. Whether it could be dangerous, he thought, "remains to be seen." A bold and definitive position this was not.

"AI can help deal with some very difficult challenges like disease and climate change, but we also have to address the potential risks to our society, to our economy, to our national security," he said during a briefing. 

President Biden urged tech companies to make sure their products were safe before unleashing them on the market. The discussion with his council covered "responsible innovation and appropriate guardrails to protect America's rights and safety, and protecting their privacy, and to address the bias and disinformation that is possible as well," he said.

Similar concerns have been raised by other federal agencies, most notably the Federal Trade Commission. 

Biden said Congress needed to pass bipartisan legislation that limits the amount and type of personal data tech companies can collect, bans targeted advertising to children (but not adults), and forces developers to consider the health and safety risks of their products.

Teens under 18 are not meant to use ChatGPT without parental approval

OpenAI may start verifying users' ages to better protect children from harmful content generated by its AI software.

"We require that people must be 18 or older—or 13 or older with parental approval—to use our AI tools and are looking into verification options," it confirmed in a statement this week. 

This comes after Italy's data privacy watchdog temporarily blocked citizens from accessing ChatGPT whilst it investigates whether minors could be exposed to "unsuitable answers compared to [their] degree of development and self-awareness."

OpenAI said it was working to remove personal data from the internet-scraped text used to train its models, to stop them leaking sensitive information. The company also said it was difficult to prevent risks before its products were deployed in the real world, since it can't always predict the technology's behaviour.
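The lab hasn't detailed its scrubbing pipeline. One common approach, sketched below in Python purely as an assumption about what such a step might look like, is pattern-based redaction of obvious identifiers before text reaches training; real systems layer far broader rules and learned entity recognition on top.

```python
# Illustrative sketch only: OpenAI has not published its pipeline.
# The patterns, placeholder tags, and function name are assumptions.
import re

# Two easy-to-spot identifier types; real pipelines cover many more
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tags before training."""
    for tag, pattern in PATTERNS.items():
        text = pattern.sub(f"[{tag}]", text)
    return text

print(redact("Contact Pierre at pierre@example.com or +32 2 555 0199."))
# -> "Contact Pierre at [EMAIL] or [PHONE]."
```

On the broader question of risks it can't anticipate, the biz was candid: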

"Despite extensive research and testing, we cannot predict all of the beneficial ways people will use our technology, nor all the ways people will abuse it. That's why we believe that learning from real-world use is a critical component of creating and releasing increasingly safe AI systems over time."

"We cautiously and gradually release new AI systems — with substantial safeguards in place — to a steadily broadening group of people and make continuous improvements based on the lessons we learn," the org stated. ®
