Black Hat Here's perhaps a novel use of a neural network: proof-of-concept malware that uses AI to decide whether or not to attack a victim.
DeepLocker was developed by IBM eggheads, and is due to be presented at the Black Hat USA hacking conference in Las Vegas on Thursday. It uses a convolutional neural network to stay inert until the conditions are right to pounce.
When samples of software nasties are caught by security researchers, they can be reverse-engineered to see what makes them tick, and what activates their payload — the heart of the malicious code that spies on the infected user, steals their passwords, holds their files to ransom, and so on. These payloads can be triggered by all sorts of things, ranging from the country in which the computer is located to whether or not it is running in a virtual machine, or how long the machine has been idle.
This is all information that network defenders and antivirus tools can use to thwart or mitigate the spread and operation of the software. However, while it's possible to reverse-engineer simple heuristic checks within a malicious program, to figure out the trigger conditions, it's rather hard to work out what will make a trained neural network run a payload, just by studying its data structure.
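To see why a conventional trigger is so easy to defeat, here's a minimal, purely illustrative sketch of the kind of "if this, then that" check the researchers contrast against a neural net. The function name and conditions are hypothetical, not from DeepLocker — the point is that every condition sits in plain sight for a reverse engineer:

```python
# Illustrative only: a transparent heuristic trigger of the kind that is
# trivially reverse-engineered. Anyone disassembling the binary can read
# each condition (and therefore fake it in a sandbox to fire the payload).
def heuristic_trigger(locale: str, is_vm: bool, idle_minutes: int) -> bool:
    # All three checks are explicit in the code — nothing is hidden.
    return locale == "en-US" and not is_vm and idle_minutes > 30

print(heuristic_trigger("en-US", False, 45))   # conditions met
print(heuristic_trigger("en-US", True, 45))    # virtual machine detected
```

A defender can simply satisfy or spoof each condition to coax the payload out — exactly the analysis a trained neural network frustrates.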
Similarly, if the payload is encrypted, it's possible the decryption key can be figured out from the heuristic code that unlocks it. However, if the payload is encrypted using a key derived from a neural network's output, and you can't easily reverse-engineer the network, you'll have a hard time making it cough up the right key to decrypt the payload.
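The mechanism can be sketched in a few lines. In this hypothetical, stdlib-only illustration (the embedding values, helper names, and toy XOR cipher are ours, not IBM's), the key never appears in the binary — it only exists when the model emits the right output vector, which is then hashed into a key:

```python
import hashlib

def embedding_to_key(embedding):
    """Quantise a model's output vector and hash it into a 32-byte key.

    Hypothetical sketch: in a real attack the embedding would come from a
    trained network observing its environment (e.g. a camera frame).
    """
    quantised = bytes(int(round(x * 255)) & 0xFF for x in embedding)
    return hashlib.sha256(quantised).digest()

def xor_crypt(data, key):
    """Toy symmetric cipher (stand-in for real encryption such as AES)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# At build time: the target's embedding seals the payload; the key itself
# is never stored anywhere.
target_embedding = [0.12, 0.87, 0.05, 0.99]   # assumed model output for the victim
payload = b"stand-in for the malicious payload"
sealed = xor_crypt(payload, embedding_to_key(target_embedding))

# At run time: only an input that reproduces the target embedding unlocks it.
unlocked = xor_crypt(sealed, embedding_to_key(target_embedding))
garbage = xor_crypt(sealed, embedding_to_key([0.5, 0.5, 0.5, 0.5]))
```

Since the sealed blob and the network weights are all a researcher has, recovering the key means working out which input drives the network to the one output that hashes correctly — which is the hard inverse problem the IBM team is exploiting.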
To demonstrate this, IBM took a copy of the WannaCry ransomware, encrypted and hid it in a benign video-conference app, and wrapped machine-learning code around it that used a trained neural network to cough up the key to unlock and run the file-scrambling WannaCry payload.
That neural network was trained to recognize a particular victim's face from the computer's front-facing camera. When it spotted the right person in front of the PC, it provided the key needed to unlock the payload so it could be executed, and hold the system's documents to ransom.
The ingenious part is that it turns what many consider a major weakness of neural networks into a strength. Neural networks are frustratingly difficult to understand because they act like black boxes: it's hard to work out how they arrive at their final answer for a given input just by looking at how individual neurons in the system fire.
“A simple 'if this, then that' trigger condition is transformed into a deep convolutional network of the AI model that is very hard to decipher," Marc Stoecklin, a principal research scientist and manager of IBM's Cognitive Cybersecurity Intelligence group, explained. "In addition to that, it is able to convert the concealed trigger condition itself into a 'password' or 'key' that is required to unlock the attack payload."
Since it’s difficult to work out what triggers the payload, such a model would be very difficult to tackle, the researchers argued. Rest assured, however: the IBMers have not released any code, and there's no sign of any malware using this machine-learning technique in the wild.
“While a class of malware like DeepLocker has not been seen in the wild to date, these AI tools are publicly available, as are the malware techniques being employed — so it’s only a matter of time before we start seeing these tools combined by adversarial actors and cybercriminals," said Stoecklin.
"In fact, we would not be surprised if this type of attack were already being deployed. The security community needs to prepare to face a new level of AI-powered attacks. We can’t, as an industry, simply wait until the attacks are found in the wild to start preparing our defenses. To borrow an analogy from the medical field, we need to examine the virus to create the 'vaccine.'" ®