Europol warns ChatGPT already helping folks commit crimes

There is no honor among chatbots

Criminals are already using ChatGPT to commit crimes, Europol said in a Monday report that details how AI language models can fuel fraud, cybercrime, and terrorism.

Built by OpenAI, ChatGPT was released in November 2022 and quickly became an internet sensation as netizens flocked to the site to have the chatbot generate essays, jokes, emails, programming code, and all manner of other text.

Now, the European Union's law enforcement agency, Europol, has detailed how the model can be misused for more nefarious purposes. In fact, people are already using it to carry out illegal activities, the cops claim.

"The impact these types of models might have on the work of law enforcement can already be anticipated," Europol stated in its report [PDF]. "Criminals are typically quick to exploit new technologies and were fast seen coming up with concrete criminal exploitations, providing the first practical examples mere weeks after the public release of ChatGPT."

Although ChatGPT is designed to refuse potentially harmful requests, users have found ways around OpenAI's content filter system. Some have made it spit out instructions on how to create a pipe bomb or crack cocaine, for example. Netizens can ask ChatGPT how to commit specific crimes and have it provide step-by-step guidance.

"If a potential criminal knows nothing about a particular crime area, ChatGPT can speed up the research process significantly by offering key information that can then be further explored in subsequent steps. As such, ChatGPT can be used to learn about a vast number of potential crime areas with no prior knowledge, ranging from how to break into a home, to terrorism, cybercrime and child sexual abuse," Europol warned.

The agency admitted that all of this information is already publicly available on the internet, but the model makes it easier to find and understand how to carry out specific crimes. Europol also highlighted that the model could be exploited to impersonate targets, facilitate fraud and phishing, or produce propaganda and disinformation to support terrorism. 

ChatGPT's ability to generate code - even malicious code - increases the risk of cybercrime by lowering the technical skills required to create malware.

"For a potential criminal with little technical knowledge, this is an invaluable resource. At the same time, a more advanced user can exploit these improved capabilities to further refine or even automate sophisticated cybercriminal modi operandi,"* the report said. 

Large language models (LLMs) are unsophisticated and still in their infancy, but they're rapidly improving as tech companies invest resources in developing the technology. OpenAI has already released GPT-4, a more powerful system, and these models are being increasingly integrated into products. Microsoft and Google have both added AI-powered chatbots to their search engines since the release of ChatGPT.

Europol said that as more companies roll out AI features and services, new avenues for using the technology in illegal activities will open up. The threats could come from "multimodal AI systems, which combine conversational chatbots with systems that can produce synthetic media, such as highly convincing deepfakes, or include sensory abilities, such as seeing and hearing," the law enforcement org's report suggested.

Clandestine versions of language models with no content filters and trained on harmful data could be hosted on the dark web, for example.

"Finally, there are uncertainties regarding how LLM services may process user data in the future – will conversations be stored and potentially expose sensitive personal information to unauthorised third parties? And if users are generating harmful content, should this be reported to law enforcement authorities?" Europol asked. ®

* That's the plural of modus operandi.
