OpenAI CEO warns that GPT-4 could be misused for nefarious purposes
In brief OpenAI CEO Sam Altman admitted in a television interview that he's "a little bit scared" of the power of language models and the risks they pose to society.
Altman warned that their ability to automatically generate text, images, or code could be used to launch disinformation campaigns or cyber attacks. The technology could be abused by individuals, groups, or authoritarian governments.
"We've got to be careful here," he told ABCNews. "I think people should be happy that we are a little bit scared of this."
OpenAI has been criticized for keeping technical details about its latest GPT-4 language model secret – it has not disclosed information on the model's size, architecture, training data, and more.
Some people, however, are confused by the startup's behavior. If the technology is as dangerous as OpenAI claims, why is it readily available to anyone willing to pay for it? Still, Altman added: "A thing that I do worry about is … we're not going to be the only creator of this technology. There will be other people who don't put some of the safety limits that we put on it."
Discord briefly changed its data collection policy after announcing new AI tools
Instant messaging app Discord quietly removed policies promising not to collect user data after it rolled out a series of new generative AI features, and added them back in after users noticed the change.
Discord rolled out a chatbot named Clyde – powered by AI models from Stability AI and OpenAI – that is capable of producing text and images to generate memes, jokes, and more.
Discord did, however, admit it may build features that will process voice and video content in the future.
- We read OpenAI's risk study. GPT-4 is not toxic ... if you add enough bleach
- AI-generated art can be copyrighted, say US officials – with a catch
- Microsoft's Copilot AI to pervade the whole 365 suite
London nightclub plays AI-generated music for partygoers
Clubbers danced to music generated by AI software at a trendy London venue last month, in what Reuters this week reported was the first event of its kind.
The Glove That Fits, a nightclub in East London known for playing electronic music, hosted "Algorhythm" – a night promoting music created using an app called Mubert that makes AI-generated tracks.
The DJ booth may have been empty, but the dance floor wasn't. A couple of partygoers even said the music wasn't too bad.
"It could be more complex," said Rose Cuthbertson, an AI master's student. "It doesn't have that knowledge of maybe other electronic genres that could make the music more interesting. But it's still fun to dance to."
Pietro Capece Galeota, a computer programmer, said the software had "been doing a pretty good job so far."
Paul Zgordan, Mubert's CEO, said AI will create new jobs for artists and novel ways of producing music. "We want to save musicians' jobs, but in our own way. We want to give them this opportunity to earn money with the AI. We want to give people new (jobs)." ®