ChatGPT fans need 'defensive mindset' to avoid scammers and malware
Palo Alto Networks spots suspicious activity spikes such as naughty domains, phishing, and worse
ChatGPT fans need to adopt a "defensive mindset" because scammers have started using multiple methods to trick the bot's users into downloading malware or sharing sensitive information.
Researchers with Unit 42 – Palo Alto Networks' threat intelligence unit – this week published a report that found a 910 percent increase in domain names related to ChatGPT between November 2022 and April 2023.
In the same period, the researchers spotted 17,818 percent growth in related squatting domains in DNS Security logs, and "up to" 118 daily detections of ChatGPT-related malicious URLs.
Those surges, the researchers assert, indicate that scammers want to lure ChatGPT users to seemingly related sites and fake chatbots that are designed to do harm.
"As OpenAI released its official API for ChatGPT on March 1, 2023, we've seen an increasing number of suspicious products using it," Unit 42 researchers Peng Peng, Zhanhao Chen, and Lucas Hu wrote in the report.
"While conducting our research, we observed multiple phishing URLs attempting to impersonate official OpenAI sites. Typically, scammers create a fake website that closely mimics the appearance of the ChatGPT official website, then trick users into downloading malware or sharing sensitive information."
"Additionally, scammers might use ChatGPT-related social engineering for identity theft or financial fraud," Palo Alto's researchers wrote. "Despite OpenAI giving users a free version of ChatGPT, scammers lead victims to fraudulent websites, claiming they need to pay for these services."
One site mentioned is designed to entice victims into providing such confidential information as credit card details and email addresses. Another used OpenAI's logo and Elon Musk's name and image to lure victims into a cryptocurrency fraud scheme.
The report also details multiple instances of miscreants registering and using squatting domains featuring "openai" and "chatgpt" in their names, among them openai.us and chatgpt.jobs.
As of earlier this month, these domains weren't hosting anything malicious, but because they're not controlled by OpenAI or legitimate domain management companies, they could well be abused in the future.
Registrations of such squatting domains had grown steadily since November, but spiked after Microsoft – the major investor in OpenAI, which is seeding the startup's technologies like GPT-4, Dall-E, and ChatGPT throughout its portfolio – announced a ChatGPT-powered version of its Bing search engine on February 7.
Shortly after that, more than 300 ChatGPT-related domains were registered. The number of ChatGPT squatting domains in the DNS Security logs jumped sharply on the days that OpenAI released the ChatGPT API and GPT-4.
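The kind of lookalike-domain spotting Unit 42 describes can be approximated with a simple keyword check against an allowlist of hostnames the real vendor controls. The sketch below is illustrative only – the allowlist is an assumption, not taken from the report – but it correctly flags the two squatting registrations the researchers named:

```python
# Minimal sketch: flag domains that embed "openai" or "chatgpt" but are
# not OpenAI-controlled hostnames. The allowlist is illustrative, not
# exhaustive, and real detection pipelines match far fuzzier variants.
OFFICIAL = {"openai.com"}
KEYWORDS = ("openai", "chatgpt")

def is_squatting_candidate(domain: str) -> bool:
    """Return True if the domain looks ChatGPT-related but is unofficial."""
    d = domain.lower().rstrip(".")
    # Official domains and their subdomains are not candidates.
    if d in OFFICIAL or any(d.endswith("." + off) for off in OFFICIAL):
        return False
    return any(kw in d for kw in KEYWORDS)

# The two squatting registrations mentioned in the report:
print(is_squatting_candidate("openai.us"))       # True
print(is_squatting_candidate("chatgpt.jobs"))    # True
print(is_squatting_candidate("chat.openai.com")) # False
```

Production detection, of course, also has to catch typo variants ("opneai", "chat-gpt") and homoglyphs, which is why DNS-security vendors use fuzzier matching than a substring test.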
Phishing with ChatGPT
- What does an ex-Pharma Bro do next? If it's Shkreli, it's an AI Dr bot
- Cybercrims hop geofences, clamor for stolen ChatGPT Plus accounts
- US cyber chiefs warn AI will help crooks, China develop nastier cyberattacks faster
- Europol warns ChatGPT already helping folks commit crimes
There is also a growing number of copycat AI chatbots, some of which have their own large language models and others that claim to offer ChatGPT services via OpenAI's public API. These chatbots can be a security risk, particularly in countries where ChatGPT is not available, the researchers warned.
"Before the release of the ChatGPT API, there were several open-source projects that allowed users to connect to ChatGPT via various automation tools," they wrote, noting that in such countries, "websites created with these automation tools or the API could attract a considerable number of users from these areas."
Most of the copycat bots are not as powerful as ChatGPT because they're based on GPT-3, which was released in June 2020, whereas ChatGPT is based on GPT-3.5 and GPT-4. The copycat services are also another way for threat groups to make money from the ChatGPT-curious, by collecting and stealing the information users give them.
In one case, the researchers downloaded an "AI ChatGPT" extension from a copycat chatbot and found it injects highly obfuscated JavaScript that runs in the background, calling the Facebook Graph API to steal the victim's account details, and may gain further access to the Facebook account.
Antivirus vendor Guardio in a recent report outlined a similar malicious browser extension scheme in which a Chrome extension was hijacking Facebook accounts and installing backdoors, including one that gave the miscreants super admin permissions.
As with much in cybersecurity, the best defense is the users themselves. They need to be wary of suspicious emails or links that are related to ChatGPT and access ChatGPT through OpenAI's website rather than using copycat chatbots, the researchers wrote. ®