Air gaps: Happy gas for infosec or a noble but inert idea?
Spooks and boffins jump 'em, but real-world headwinds remain strong
Feature Last year Michael Sikorski of FireEye was sent a very unusual piece of malware.
The custom code had jumped an air gap at a defence client and infected what should have been a highly secure computer. Staff at the unnamed company plucked the malware off the machine and sent it to FireEye's FLARE team for analysis.
"This malware got its remote commands from removable devices," Sikorski said. "It actually searched for a specific formatted and hidden file that was encrypted, and would then decrypt it to access a series of commands that told it what to do next."
External network links are the lifeblood of most malware; here removable devices took their place, providing the means for malcode to be implanted on victim machines and serving as the command-and-control link over which stolen data could be shipped off to attackers and further infections seeded.
Sikorski's unnamed malware used employees to spread to other machines and distribute commands. Attackers hacked internet-enabled computers they knew staff with access to the air-gapped machine would use, turning any external storage device plugged in into a digital bridge.
Those bridged machines allowed Sikorski and his colleagues to retrieve the malware and establish that it was part of a wider attack on air-gapped machines.
Their analysis showed the malware could be told to conduct reconnaissance, seek out particular pieces of valuable information, list directories and execute new malware carried over on the staff thumb drives.
"Somebody would come by, plug in their stick, pull the drive out, and all the commands would have been run. The malware is still resident on the system so next time a drive is plugged in, it could receive more commands."
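The behaviour Sikorski describes — poll newly inserted drives for a hidden, specially formatted file, decrypt it, and execute the commands inside — can be sketched in outline. Everything below (the toy XOR "cipher", the key, the one-command-per-line format) is a hypothetical stand-in for illustration, not FLARE's actual findings:

```python
# Hypothetical sketch of the removable-media command channel described
# above. Cipher, key and file format are invented placeholders.

def xor_decrypt(blob: bytes, key: bytes) -> bytes:
    """Toy stand-in for whatever cipher the real malware used."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(blob))

def read_commands(blob: bytes, key: bytes) -> list[str]:
    """Decrypt a 'hidden' command file and return one command per line."""
    plaintext = xor_decrypt(blob, key)
    return [line for line in plaintext.decode().splitlines() if line]
```

The point for defenders: the same loop that lets the implant check each freshly inserted stick means every removable drive is a potential command-and-control hop.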
Into the lab
Such attacks are intriguing because it is often assumed that a few feet of air implies extra security: hackers need a network on which to operate, so air's non-conductive properties (for data) are seen as the last word in security. It therefore generates no shortage of intrigue when that theory is disproved and an isolated computer is breached.
(l-r) FireEye FLARE engineers Matthew Graeber, Richard Wartell, and Michael Sikorski.
But as Sikorski's tale proves, air gaps can be beaten.
And around the world, researchers are proving it's possible, sometimes with outlandish means of bypassing physical security such as sucking data out of monitors and speakers.
The hack Sikorski and pals identified came out of Israel's Ben-Gurion University, where four hackers at its cyber security lab had honed an attack through which already-infected air-gapped computers could exfiltrate data to passing mobile phones via FM radio signals emitted by video cards. The AirHopper technique, which in other forms has been around for decades and was, according to leaked Edward Snowden documents, popular with the NSA and other spy agencies, used off-the-shelf hardware to funnel information off infected systems at a distance of up to seven metres between machine and attacker.
"This kind of attack scenario assumes the air-gapped computer is already infected by malware by means of a USB stick or malicious files copied to the computer," says Dudu Mimran, chief technology officer of the Israeli university's security labs. "Such infection can take place at any time before, and can be very fast since it does not involve the actual data leakage, and as such can go unnoticed. Later on the malware can leak the data from the infected computer, either the current data being typed on the keyboard or existing documents from the computer."
Compartmentalising the attack and delaying data theft until after infection meant there was less chance the loss would be detected, Mimran says, and lets an attacker set a trap to steal credentials from staff who subsequently use the machine. He assumes, though cannot prove, that the attack is being used in the wild to bridge air gaps.
Others have found means to suck out data without first needing to infect an air-gapped machine. In December, Australian security governance boffin Ian Latter rocked the Kiwicon hacker confab by demonstrating how data can be exfiltrated through monitor pixels while bypassing known detection methods. Attackers would need physical access to machines to install a commercial HDMI recording device and an Arduino keyboard, leaving no traces of the attack for forensics to analyse, aside from perhaps closed-circuit television footage of the manipulation.
Latter, author of that ThruGlassXfer attack, also likes an attack developed in 2012 using a do-it-yourself USB human interface device in which data was hoovered off air-gapped systems via booby-trapped caps, num and scroll lock keys on a keyboard.
"I can't go past it for its sheer subtlety - is your organisation tracking the CAPS-lock status on every device?" Latter says, also giving a nod to Mimran's work. The attack was improved in 2013 to exfiltrate data at a faster rate of 10 Kbps and may have been used as early as 2008.
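The subtlety Latter admires comes from the fact that lock-key state is broadcast by the host to every attached keyboard, so a rogue USB HID device can read data out without ever typing. A toy encoder/decoder for such a channel is sketched below; the three-bits-per-update framing is an assumption for illustration, not the published tool's protocol:

```python
# Toy lock-key covert channel: pack payload bits into successive
# CAPS/NUM/SCROLL lock states that a rogue HID device would observe.

LOCKS = ("CAPS", "NUM", "SCROLL")

def encode(data: bytes) -> list[dict]:
    """Pack the payload into successive 3-bit lock-state updates."""
    bits = [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]
    while len(bits) % 3:
        bits.append(0)                      # pad to a whole update
    return [dict(zip(LOCKS, bits[i:i + 3])) for i in range(0, len(bits), 3)]

def decode(updates: list[dict], nbytes: int) -> bytes:
    """Rebuild the payload from observed lock-state updates."""
    bits = [u[k] for u in updates for k in LOCKS]
    return bytes(
        sum(bit << (7 - j) for j, bit in enumerate(bits[i:i + 8]))
        for i in range(0, nbytes * 8, 8)
    )
```

Three bits per LED update is slow, which is exactly why the channel is so hard to spot: nothing on the wire looks like data.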
Other air gap attacks are also intriguing. Consider BadBIOS, the bizarre case of malware reportedly capable of spreading over the airwaves, healing itself, and persisting. It withstood a tidal wave of doubt largely because the discoverer of the attack, who was also its victim, Dragos Ruiu, is a respected security researcher. The rootkit, detailed in late 2013, could reportedly hop air gaps, survive motherboard firmware rewrites and mess with a variety of operating systems.
Less than a month after Ruiu reported the malware messing with his personal computers, German geeks Michael Hanspach and Michael Goetz concocted an attack in which malware could slowly spread between nearby computers using only microphones and speakers. Their attack adapted techniques for robust covert communication to the near-ultrasonic frequency range.
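An acoustic channel of this kind can be sketched as on-off keying of a near-ultrasonic carrier. The 18kHz carrier and symbol timing below are assumptions chosen to sit above most adults' hearing while staying within consumer speaker range; they are not Hanspach and Goetz's actual parameters:

```python
# Minimal on-off-keyed near-ultrasonic carrier, synthesised as raw
# PCM samples. All frequencies and timings are assumed for illustration.
import math

RATE = 44100          # samples per second (standard audio rate)
CARRIER = 18000.0     # Hz, assumed near-ultrasonic carrier

def tone(duration: float, freq: float = CARRIER) -> list[float]:
    """Synthesise one carrier-on symbol as raw PCM samples."""
    n = int(RATE * duration)
    return [math.sin(2 * math.pi * freq * t / RATE) for t in range(n)]

def ook_modulate(bits: list[int], symbol_time: float = 0.05) -> list[float]:
    """On-off keying: carrier present = 1, silence = 0."""
    out = []
    for bit in bits:
        out += tone(symbol_time) if bit else [0.0] * int(RATE * symbol_time)
    return out
```

The receiving machine's microphone only needs to detect carrier presence per symbol slot, which tolerates the echo and noise of an office far better than trying to push audible-band data.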