What to do with a cloud intrusion toolkit in 2023? Slap a chat assistant on it, duh
Don't worry, this half-baked Python script is for educational purposes onl-hahaha
Infosec bods have detailed an underground cybersecurity tool dubbed Predator AI that not only can be used to compromise poorly secured cloud services and web apps, but has an optional chat-bot assistant that only kinda works.
The multi-purpose software is not entirely novel – there are others out there like it – and while it's supposedly offered for educational purposes only, it can be used by miscreants to attack other people's deployments. It could also be used by IT types to test their own infrastructure for holes.
Predator AI is apparently programmed to be able to exploit 30 kinds of misconfigured or poorly set-up web-based services and technologies, ranging from Amazon Web Services' Simple Email Service, Twilio, and WordPress to OpenCart, Magento, OneSignal, Stripe, and PayPal, SentinelLabs boffin Alex Delamotte explained on Wednesday.
Its optional chat-bot assistant – partially powered by OpenAI's ChatGPT – is likely only "somewhat functional" at the moment, Delamotte added. This particular feature, which allows you to ask the tool questions about its operation and potentially have it perform actions, is not yet advertised on the tool's primary Telegram channel. That said, it's under active development, we're told, with its makers posting videos of it in action and taking feature requests.
It wouldn't hurt to take a look over the software's capabilities and ensure your web apps and cloud infrastructure are fully secured against the tool's techniques. It may be that Predator uses code and methods found in other toolkits.
"Predator's web application attacks look for common weaknesses, misconfigurations or vulnerabilities in Cross Origin Resource Sharing (CORS), exposed Git configuration, PHPUnit Remote Code Execution (RCE), Structured Query Language (SQL), and Cross-Site Scripting (XSS)," Delamotte wrote.
Predator, written in Python, has more than 11,000 lines of code and provides a Tkinter-based graphical user interface that requires several JSON configuration files. The script defines 13 classes that correspond to the malware's various side features as well as its core malicious functionality.
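That class-per-feature, config-driven layout is a common one. Here's a stripped-down sketch of the pattern – a Tkinter window whose contents are driven by a JSON config – where the class name and settings.json file are our inventions, not Predator's:

```python
import json
import tkinter as tk
from pathlib import Path

class ToolWindow:
    """Toy stand-in for a Tkinter GUI driven by JSON configuration files."""
    def __init__(self, config_path: str):
        # Like Predator reportedly does, refuse to run without the config
        self.config = json.loads(Path(config_path).read_text())
        self.root = tk.Tk()
        self.root.title(self.config.get("title", "untitled"))
        for label in self.config.get("buttons", []):
            tk.Button(self.root, text=label).pack(fill="x", padx=8, pady=2)

    def run(self):
        self.root.mainloop()

# settings.json is hypothetical, e.g. {"title": "demo", "buttons": ["Scan"]}
ToolWindow("settings.json").run()
```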
These side features include building information-stealing Windows malware executables; using Windows commands to check whether the current user is running as an administrator; crafting fake error messages for testing XSS exploitation on Windows systems; and translating dialog boxes and menu items into Arabic, English, Japanese, Russian, and Spanish.
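The admin check, at least, is close to a one-liner on Windows. A minimal sketch of the two usual approaches – the shell32 API via ctypes, with a fallback to the net session command, which fails for non-elevated users – though we can't say which variant Predator actually uses:

```python
import ctypes
import subprocess

def is_admin() -> bool:
    """Return True if the current Windows user is elevated."""
    try:
        # IsUserAnAdmin is a documented (if deprecated) shell32 export
        return bool(ctypes.windll.shell32.IsUserAnAdmin())
    except AttributeError:
        # Fallback: 'net session' exits non-zero without admin rights
        return subprocess.run(["net", "session"],
                              capture_output=True).returncode == 0

print("Running as administrator:", is_admin())
```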
The configurable data-harvesting malware that Predator builds can use Discord or Telegram for command-and-control purposes, and a video posted by its developer last month claimed the code is "fully undetectable."
SentinelLabs, however, said it was "unable to successfully use this feature as the required configuration files were not available."
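Even without those configs, the mechanism is well understood: C2 over chat platforms usually boils down to ordinary HTTPS calls to their bot APIs, which is why it blends into normal traffic. A benign sketch of the pattern defenders should expect, using Telegram's documented sendMessage endpoint with placeholder credentials:

```python
import requests

# Placeholders – real values come from Telegram's BotFather, not hardcoded here
BOT_TOKEN = "123456:PLACEHOLDER"
CHAT_ID = "0000000"

def beacon(text: str) -> None:
    """Post a message via the Telegram Bot API – plain HTTPS to api.telegram.org."""
    requests.post(f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
                  json={"chat_id": CHAT_ID, "text": text},
                  timeout=10)

# Defenders can alert on unexpected POSTs to api.telegram.org from servers
beacon("host checked in")
```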
Then there's the GPTj class, which uses ChatGPT to provide a text-based assistant: queries are typed into the GUI, which displays the response. The script first tries to handle a user-submitted request internally – there are more than 100 use cases it can recognize and carry out itself or via a third-party service – before falling back to a remote ChatGPT API call to interpret the request.
This class contains "several partially implemented utilities related to AWS SES and Twilio, as well as utilities to get information about IP addresses and phone numbers," according to SentinelLabs. The extent of GPTj's capabilities is not entirely clear: it looks as though ChatGPT is used to handle basic questions about the tool, and actions are actually handled by the script itself when it recognizes requests from a hardcoded list.
We could be wrong, and this might change over time anyway. You'd think OpenAI would have guardrails in place to stop the thing from doing or saying anything too problematic. At the very least, it gives Predator's developers an excuse to slap AI on the name.
"The actor designed Predator AI to try to find a local solution first before querying the OpenAI API, which reduces the API consumption," Delamotte explained. "This class searches the user's input for strings associated with a known use case centered around one of Predator's web application and cloud service hacking tools."
To keep up the appearance of legitimacy, the code "has a disclaimer saying the tool is for educational purposes and the author does not condone any illegal use," Delamotte said. That'll do the trick in court. ®