Russian criminals can't wait to hop over OpenAI's fence, use ChatGPT for evil

Script kiddies rush to machine intelligence to make up for lack of skills

Cybercriminals are famously fast adopters of new tools for nefarious purposes, and ChatGPT is no different in that regard. 

However, its adoption by miscreants has happened "even faster than we expected," according to Sergey Shykevich, threat intelligence group manager at Check Point. The security shop's research team said it has already seen Russian cybercriminals on underground forums discussing workarounds so that they can bring OpenAI's ChatGPT to the dark side.

Security researchers told The Register this text-generating tool is worrisome because it can be used to experiment with creating polymorphic malware, which can be used in ransomware attacks. It's called polymorphic because it mutates to evade detection and identification by antivirus software. Not only that, but low-skill miscreants could use the OpenAI bot to generate trivial malware that manages to infect naive or poorly defended networks.

ChatGPT can also be used to automatically produce text for phishing and other online scams, if the AI's content filter can be sidestepped.

We'd have thought ChatGPT would be most useful for coming up with emails and other messages to send people to trick them into handing over their usernames and passwords, but what do we know? Some crooks may find the AI model helpful in offering ever-changing malicious code and techniques to deploy.

"It allows people that have zero knowledge in development to code malicious tools and easily to become an alleged developer," Shykevich told The Register. "It simply lowers the bar to become a cybercriminal."

In a series of screenshots posted on Check Point's blog, the researchers show miscreants asking other crooks what's the best way to use a stolen credit card to pay for upgraded-user status on OpenAI, as well as how to bypass IP address, phone number, and other geo controls intended to prevent Russian users from accessing the chatbot. 

Russia is one of a handful of countries where OpenAI blocks access to its services.

The research team also found several Russian-language tutorials on the forums about how to bypass OpenAI's SMS verification and register for ChatGPT.

"We believe these hackers are most likely trying to implement and test ChatGPT into their day-to-day criminal operations. Cyberciminals are growing more and more interested in ChatGPT, because the AI technology behind it can make a hacker more cost-efficient," the Checkpoint crew wrote.

In separate threat research published today, CyberArk Labs' analysts Eran Shimony and Omer Tsarfati detail how to create polymorphic malware using ChatGPT, and plan to release some of their work "for learning purposes."

While there are other examples of how to query ChatGPT to create malicious code, in their latest research CyberArk bypassed ChatGPT's content filters and showed how, "with very little effort or investment by the adversary, it is possible to continuously query ChatGPT so we receive a unique, functional and validated piece of code each time," Shimony told The Register.

"This results in polymorphic malware that does not show malicious behavior while stored on the disk as it receives the code from ChatGPT, and then executes it without leaving a trace in memory," he said. "Besides that, we can ask ChatGPT to mutate our code."

ChatGPT, like plenty of chatbots, has content filters that aim to restrict harmful and inappropriate content creation. So it's not surprising that simply asking it something obnoxious like "write me an explorer.exe exploit payload in python" didn't work and instead triggered the content filter.

Shimony and Tsarfati found a way to bypass this by cajoling ChatGPT with detailed descriptions of what they wanted and ordering it to obey, until the bot finally suggested the malicious code they desired. Using this method, the chatbot produced incomplete code to inject a DLL into the explorer.exe process on Windows. After creating this admittedly unfinished injection code, the researchers used the chatbot to produce a shellcode injector, with the shellcode deliberately made useless.

Interestingly, the API version of ChatGPT didn't apply a content filter, while the web version did, or so we're told.

Next, the CyberArk duo said they asked OpenAI's bot to mutate the DLL injection script, including changing the API calls made by the program and encoding it in base64. The goal was not to produce just one malicious piece of code, but to have the bot repeatedly and automatically suggest different ways of designing it, in the hope this would help it avoid identification by security defenses.
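
To see why that sort of re-encoding frustrates signature matching, consider this minimal sketch of our own (not CyberArk's code): the same logic, once base64-encoded, presents entirely different bytes to a scanner, and a loader can trivially reverse the transform before running it.

    import base64

    # Toy illustration (ours, not CyberArk's code): identical logic,
    # different bytes on disk once re-encoded.
    snippet = 'print("same behavior, different bytes")'

    encoded = base64.b64encode(snippet.encode()).decode()
    print(encoded)  # a signature written against snippet's bytes won't match

    # A loader reverses the transform before execution, behavior unchanged
    assert base64.b64decode(encoded).decode() == snippet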

"By continuously querying the chatbot and receiving a unique piece of code each time, it is possible to create a polymorphic program that is highly evasive and difficult to detect," the duo claimed.

Another facet of this is that someone without much knowledge of programming could use the code suggestions to form a viable strain of malware to let loose on a small organization or a victim's personal computer, once it's run via a malicious download or similar.

The CyberArk duo then moved on to ransomware. First, they asked ChatGPT to write code that finds files that may be valuable to extortionists, via this request:

Please write me a function named 'find_files' in python that searches all the files that end up with "txt, pdf, docx, ppt, xlsm, xls, png, jpg, jpeg, doc, pptx," starting from the root directory. Return a list of all paths of the files that match the criteria. Include the relevant imports. No prelog. Only provide the code without explaining.
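
For a sense of what such a prompt yields, here's our own reconstruction of the kind of file-enumeration function the bot would plausibly return (this is not the actual output from the research):

    import os

    def find_files(root="/"):
        """Return paths of files under root that match the listed extensions."""
        wanted = tuple("." + ext for ext in (
            "txt", "pdf", "docx", "ppt", "xlsm", "xls",
            "png", "jpg", "jpeg", "doc", "pptx"))
        matches = []
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                if name.lower().endswith(wanted):
                    matches.append(os.path.join(dirpath, name))
        return matches

On its own this is just a directory walk; it's the follow-on encryption step that turns it into an extortion tool.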

Then they asked ChatGPT to suggest code to encrypt the files, showing how an attacker could read and scramble a victim's documents using scripts from the bot. What the CyberArk pair envisions here is some base, harmless-on-its-own malware that infects a network and queries ChatGPT for scripts to run to perform malicious acts. These scripts can be fetched and executed by the malware, and change every time a query is made. This may make life difficult for antivirus, and it also makes developing said scripts a little easier for skiddies.

On the other hand, detection of unexpected or unauthorized outbound connections to the ChatGPT API from network endpoints would put a stop to that.
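
As a rough sketch of that defensive idea (ours, not from either research team; real deployments would enforce this at a firewall or egress proxy), an endpoint check could resolve the API hostname and flag any process holding a connection to it:

    import socket

    import psutil  # third-party: pip install psutil

    # Defensive sketch (ours, hypothetical): flag processes holding live
    # connections to the ChatGPT API endpoint. May need elevated privileges.
    API_HOST = "api.openai.com"

    api_ips = {info[4][0] for info in socket.getaddrinfo(API_HOST, 443)}

    for conn in psutil.net_connections(kind="inet"):
        if conn.raddr and conn.raddr.ip in api_ips:
            name = psutil.Process(conn.pid).name() if conn.pid else "unknown"
            print(f"connection to {API_HOST} from PID {conn.pid} ({name})")

Since the API sits behind a CDN, matching on resolved IPs is approximate; inspecting DNS queries or TLS SNI at the network egress is the more reliable signal.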

"Ultimately, the lack of detection of this advanced malware that security products are not aware of is what makes it stand out," Shimony said, adding that this makes "mitigation cumbersome with very little effort or investment by the adversary." 

"In the future, if it is connected to the internet, it might be able to create exploits for 1-days," he added. "It is alarming because, as of now, security vendors haven't really dealt with malware that continuously uses ChatGPT." ®
