OpenAI says Chinese gang tried to phish its staff

Claims its models aren't making threat actors more sophisticated – but they are helping debug their code

OpenAI has alleged it disrupted a spear-phishing campaign in which a China-based group targeted its employees through both their personal and corporate email addresses.

The group, which OpenAI says is called SweetSpecter, sent phishing emails containing a malicious attachment designed to deploy the SugarGh0st remote access trojan (RAT). The malware would have given the group control over a compromised machine, allowing it to execute arbitrary commands, take screenshots, and exfiltrate data.

OpenAI was tipped off to the campaign by what it called a “credible source” and banned the associated accounts. The emails were blocked by the company’s security systems before reaching the employees.

“Throughout this process, our collaboration with industry partners played a key role in identifying these failed attempts to compromise employee accounts,” stated [PDF] OpenAI. “This highlights the importance of threat intelligence sharing and collaboration in order to stay ahead of sophisticated adversaries in the age of AI.”

The company believes that SweetSpecter has also been using OpenAI’s services for offensive cyber operations, including reconnaissance, vulnerability research, and scripting support. The ChatGPT-maker downplayed that activity, writing that the threat actor’s use of its models did not help it develop novel capabilities it couldn't have sourced from public resources.

The China phishing allegation was raised in a document titled “Influence and cyber operations: an update” in which OpenAI also claimed it has “disrupted more than 20 operations and deceptive networks from around the world that attempted to use our models.”

The firm’s analysis of those efforts is that most “used our models to perform tasks in a specific, intermediate phase of activity – after they had acquired basic tools such as internet access, email addresses and social media accounts, but before they deployed ‘finished’ products such as social media posts or malware.”

“Activities ranged in complexity from simple requests for content generation, to complex, multi-stage efforts to analyze and reply to social media posts,” detailed OpenAI.

The document also notes that threat actors “continue to evolve and experiment with our models,” but OpenAI has not seen evidence that its tools enabled “meaningful breakthroughs in their ability to create substantially new malware or build viral audiences.”

But threat actors are finding other uses for OpenAI. One of them – an outfit named “STORM-0817” – used its tools to debug code. The AI outfit also “found and disrupted a cluster of ChatGPT accounts that were using the same infrastructure to try to answer questions and complete scripting and vulnerability research tasks.”

The model-maker has also observed attempts to use its tools to influence elections, usually by creating social media posts or news articles. OpenAI nipped some of those efforts in the bud, and none that it saw gained a substantial audience. ®
