OpenAI CEO warns that GPT-4 could be misused for nefarious purposes

ALSO: Discord quietly edited its privacy policy after rolling out new generative AI features, and more

In brief OpenAI's CEO Sam Altman admitted in a television interview that he's "a little bit scared" of the power of language models and the risks they pose to society.

Altman warned that their ability to automatically generate text, images, or code could be used to launch disinformation campaigns or cyber attacks. The technology could be abused by individuals, groups, or authoritarian governments.

"We've got to be careful here," he told ABC News. "I think people should be happy that we are a little bit scared of this."

OpenAI has been criticized for keeping technical details about its latest GPT-4 language model secret – it has not disclosed information on the model's size, architecture, training data, and more.

Some people, however, are confused by the startup's behavior. If the technology is as dangerous as OpenAI claims, why is it readily available to anyone willing to pay for it? Still, Altman added: "A thing that I do worry about is … we're not going to be the only creator of this technology. There will be other people who don't put some of the safety limits that we put on it."

You can watch the interview below.

YouTube Video

Discord briefly changed its data collection policy after announcing new AI tools

Instant messaging app Discord quietly removed language from its privacy policy promising not to store certain user data after it rolled out a series of new generative AI features, then added it back once users noticed the change.

Discord rolled out a chatbot named Clyde – powered by AI models developed by Stability AI and OpenAI – that is capable of producing text and images to generate memes, jokes, and more. 

When it added the new features to Clyde, two statements suddenly disappeared from its privacy policy: "We generally do not store the contents of video or voice calls or channels" and "We also don't store streaming content when you share your screen." Users grew concerned that the chat platform might collect and store their data to train future AI models.

Discord quietly added both rules back in after it was criticized, TechRadar reported. A spokesperson said: "We recognize that when we recently issued adjusted language in our Privacy Policy, we inadvertently caused confusion among our users. To be clear, nothing has changed and we have reinserted the language back into our Privacy Policy, along with some additional clarifying information."

Discord did, however, admit it may build features that will process voice and video content in the future. 

London nightclub plays AI-generated music for partygoers

Clubbers danced to music generated by AI software at a trendy London nightclub in the first event of its kind last month, Reuters reported this week. 

The Glove That Fits, a nightclub in East London known for playing electronic music, hosted "Algorhythm" – a night promoting music created using an app called Mubert that makes AI-generated tracks.

The DJ booth may have been empty, but the dance floor wasn't. A couple of partygoers even said the music wasn't too bad.

"It could be more complex," said Rose Cuthbertson, an AI master's student. "It doesn't have that knowledge of maybe other electronic genres that could make the music more interesting. But it's still fun to dance to."

Pietro Capece Galeota, a computer programmer, said the software had "been doing a pretty good job so far." 

Paul Zgordan, Mubert's CEO, said AI will create new jobs for artists and novel ways of producing music. "We want to save musicians' jobs, but in our own way. We want to give them this opportunity to earn money with the AI. We want to give people new (jobs)." ®
