Cybercrooks are telling ChatGPT to create malicious code
Chatbot might let unskilled criminals launch attacks, if the code works
Cybercriminals are beginning to use OpenAI's wildly popular ChatGPT technology, seemingly in the hope of quickly and easily developing malicious code.
A spin around underground hacking sites uncovered instances of miscreants trying to develop cyberthreat tools using the large language model (LLM) interface OpenAI unveiled in late November and opened up for public use, according to infosec outfit Check Point Research.
Similar to the rise of as-a-service models in the cybercrime world, ChatGPT opens up another avenue for less-skilled crooks to easily launch cyberattacks, the researchers claimed in a report Friday.
"As we suspected, some of the cases clearly showed that many cybercriminals using OpenAI have no development skills at all," they wrote. "Although the tools that we present in this report are pretty basic, it's only a matter of time until more sophisticated threat actors enhance the way they use AI-based tools for bad."
Let's not forget that ChatGPT is also notorious for generating buggy code: Stack Overflow has banned answers generated by the AI system because they are so often seriously flawed. But the technology is improving, and last month a Finnish government report warned that AI systems are already in use for social engineering and could drive a huge surge in attacks within five years.
ChatGPT's machine learning capabilities enable the text-based tool to interact in a conversational way: users type a question and receive an answer in a dialogue format. The technology can also handle follow-up questions and challenge incorrect premises.
The sophistication of OpenAI's offering has generated as much worry as enthusiasm, with educational institutions, conference organizers, and other groups moving to ban the use of ChatGPT for everything from school papers to research work.
The analysts in December demonstrated how ChatGPT can be used to create an entire infection flow, from phishing emails to running a reverse shell. They also used the chatbot to build backdoor malware that can dynamically run scripts created by the AI tool. At the same time, they showed how it can help cybersecurity pros in their work.
Now cybercriminals are testing it.
A thread titled "ChatGPT – Benefits of Malware" popped up on December 29 on a widely used underground hacking forum, started by a person who said they were experimenting with the interface to recreate common malware strains and techniques. The writer showed the code of a Python-based information stealer that searches for common file types, copies them, and uploads them to a hardcoded FTP server.
Check Point confirmed that the code was from a basic stealer malware.
In another sample, the writer used ChatGPT to create a simple Java snippet that downloads a common SSH and telnet client and runs it covertly on a system via PowerShell.
"This individual seems to be a tech-oriented threat actor, and the purpose of his posts is to show less technically capable cybercriminals how to utilize ChatGPT for malicious purposes, with real examples they can immediately use," the researchers wrote.
On December 21, a person calling themselves USDoD posted an encryption tool written in Python that performs various encryption, decryption, and signing operations. They wrote that OpenAI's technology gave them a "nice [helping] hand to finish the script with a nice scope."
The researchers wrote that USDoD has limited development skills but is active in the underground community with a history of selling access to compromised organizations and stolen databases.
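Check Point did not reproduce USDoD's script in full, but the operations it describes (encryption, decryption, and signing) fit in a few dozen lines of Python. Below is a minimal sketch built on the third-party cryptography package; the library choice, function names, and layout are our own illustration, not the forum post's actual code.

```python
# Minimal sketch of an encrypt/decrypt/sign utility of the kind described.
# Assumes the third-party "cryptography" package (pip install cryptography);
# the structure is illustrative, not USDoD's script.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Fernet: AES-128-CBC plus an HMAC, with a fresh IV per message
    return Fernet(key).encrypt(plaintext)

def decrypt(key: bytes, token: bytes) -> bytes:
    return Fernet(key).decrypt(token)

def sign(private_key: Ed25519PrivateKey, message: bytes) -> bytes:
    # Detached Ed25519 signature over the raw message bytes
    return private_key.sign(message)

if __name__ == "__main__":
    key = Fernet.generate_key()
    token = encrypt(key, b"secret payload")
    assert decrypt(key, token) == b"secret payload"

    signer = Ed25519PrivateKey.generate()
    sig = sign(signer, b"secret payload")
    # verify() raises InvalidSignature if the message or key doesn't match
    signer.public_key().verify(sig, b"secret payload")
    print("encrypt/decrypt round trip and signature check passed")
```

Nothing here is inherently malicious, which is the point Check Point makes: the same boilerplate crypto routines serve a password manager or, with small changes, a ransomware encryptor.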
Another discussion thread, published on a forum on New Year's Eve, discussed how easy it is to use ChatGPT to build a dark web marketplace for trading illegal wares such as malware and drugs, and stolen data such as account credentials and payment cards.
The thread's writer published some code created with ChatGPT that uses third-party APIs to fetch up-to-date prices for cryptocurrencies such as Bitcoin, Monero, and Ethereum, for use in the marketplace's payment system.
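The report doesn't include the forum post's code, but fetching live prices for those coins is a single call to a public market-data API. Here's a minimal sketch against CoinGecko's free simple-price endpoint; the choice of API is our assumption, since the third-party services used in the post weren't named.

```python
# Minimal sketch of a live crypto price lookup, the kind of snippet described
# in the forum post. Uses CoinGecko's public API as an example; the original
# post's choice of API and code are unknown.
import json
import urllib.request

COINS = ["bitcoin", "monero", "ethereum"]
URL = ("https://api.coingecko.com/api/v3/simple/price"
       f"?ids={','.join(COINS)}&vs_currencies=usd")

def fetch_prices() -> dict:
    # Returns a mapping like {"bitcoin": {"usd": 17234.0}, ...}
    with urllib.request.urlopen(URL, timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    for coin, quote in fetch_prices().items():
        print(f"{coin}: ${quote['usd']:,}")
```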
This week, miscreants on underground forums discussed other ways to leverage ChatGPT, including pairing it with OpenAI's Dall-E 2 technology to create art for sale on legitimate sites like Etsy, or generating an ebook or short chapter on a specific topic that can be sold online.
To get more information about how ChatGPT can be abused, the researchers asked ChatGPT itself. In its answer, the chatbot described how the technology could be used to create convincing phishing emails and social media posts that trick people into giving away personal information or clicking malicious links, and to generate video and audio for misinformation campaigns.
ChatGPT also defended its creator.
"It is important to note that OpenAI itself is not responsible for any abuse of its technology by third parties," the chatbot said. "The company takes steps to prevent its technology from being used for malicious purposes, such as requiring users to agree to terms of service that prohibit the use of its technology for illegal or harmful purposes." ®