
Meet Stuxnet's stealthier older sister: Super-bug turned Iran's nuke plants into pressure cookers

New report documents Mark I cyber-weapon build

Analysis Newly published research has shed light on super-malware Stuxnet's older sibling – which was also designed to wreck Iran's nuclear facilities, albeit in a different way.

The lesser-known elder strain of the worm, dubbed Stuxnet Mark I, dates from 2007 - three years before Stuxnet Mark II was discovered and well documented in 2010.

Writing in Foreign Policy magazine yesterday, top computer security researcher Ralph Langner claimed that the Mark I version of the weapons-grade malware infected the computers controlling Iran's sensitive scientific equipment, and carefully ramped up the pressure within high-speed rotating centrifuges: these machines are vital in Iran's uranium enrichment process as they separate out the uranium-235 isotope used in, say, nuclear power plants and atomic weapons.

Crucially, the malware did this by overriding gas valves attached to the equipment while hiding sensor readings of the abnormal activity from the plant's engineers and scientists. The end goal was to sabotage the cascade protection system that kept thousands of 1970s-era centrifuges operational.
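Stuxnet's real payload ran on Siemens industrial controllers, not a PC script, but the record-and-replay trick described above can be sketched in a few lines of toy Python. Everything here – the class, the pressure numbers, the valve model – is invented for illustration; the point is simply that the operators see pre-recorded "normal" readings while the actual pressure climbs:

```python
class CentrifugeStage:
    """Toy model of one cascade stage: pressure rises when the exit valve is shut."""
    def __init__(self):
        self.pressure = 1.0     # arbitrary units; 1.0 = nominal operating pressure
        self.valve_open = True

    def step(self):
        if self.valve_open:
            # Normal operation: process gas flows out, pressure decays to nominal
            self.pressure = max(1.0, self.pressure - 0.1)
        else:
            # Sabotage: gas accumulates behind the closed valve
            self.pressure += 0.2

def masked_overpressure(stage, recorded_normal, steps):
    """Drive pressure up while reporting pre-recorded 'normal' readings."""
    reported = []
    for i in range(steps):
        stage.valve_open = False                        # keep the exit valve shut
        stage.step()
        reported.append(recorded_normal[i % len(recorded_normal)])  # replay old data
    return reported

stage = CentrifugeStage()
baseline = [1.0, 1.01, 0.99, 1.02]   # readings captured earlier, during normal running
shown = masked_overpressure(stage, baseline, 10)
print(f"actual pressure: {stage.pressure:.1f}")   # well above nominal
print(f"operator sees:   {shown[:4]}")            # looks perfectly normal
```

The separation between what the plant does and what the plant reports is the whole game: as long as the replayed readings stay plausible, nothing on the operators' screens suggests anything is wrong.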

The 2010 version, by contrast, targeted the centrifuge drive systems: it quietly sped up and slowed down rotors connected to centrifuges until they reached breaking point, triggering an increased rate of failures as a result.
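The logic of that rotor attack – occasional short excursions rather than one fatal blow – can also be caricatured in toy Python. The fatigue model, safe band, and failure threshold below are invented; the extreme speed figures echo numbers reported in public analyses of the 2010 variant, but nothing here is the actual attack code:

```python
# Toy fatigue model: each excursion outside the safe speed band adds wear,
# and a rotor "fails" once accumulated wear crosses a threshold.
SAFE_LOW, SAFE_HIGH = 60_000, 65_000   # rpm band (illustrative numbers)
FAILURE_WEAR = 5

def run_rotor(speed_schedule):
    wear = 0
    for rpm in speed_schedule:
        if not (SAFE_LOW <= rpm <= SAFE_HIGH):
            wear += 1                  # each over- or under-speed excursion fatigues the rotor
        if wear >= FAILURE_WEAR:
            return "failed"
    return "ok"

normal = [63_000] * 20
# Mostly normal running, punctuated by brief spikes and crawls
attack = ([63_000] * 3 + [84_600] + [63_000] * 3 + [120]) * 4

print(run_rotor(normal))   # ok
print(run_rotor(attack))   # failed
```

Because the rotor spends most of its time at normal speed, each individual excursion is easy to dismiss as a glitch – the failures just keep piling up with no obvious cause, which is exactly the effect the article describes.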

Stuxnet Mark II famously hobbled high-speed centrifuges at Iran's uranium enrichment facility at Natanz in 2009 and 2010 after infecting computers connected to SCADA industrial control systems at the plant. This flavour of Stuxnet was allegedly developed as part of a wider US-Israeli cyber-warfare effort, codenamed Operation Olympic Games, that began under the presidency of George W Bush.

But prior to that, Stuxnet Mark I sabotaged the protection system the Iranians hacked together to keep their obsolete and unreliable IR-1 centrifuges safe, as Langner explained in detail in his 4,200-word article. Once installed on computers controlling the equipment, the subtle overpressure attack ultimately damaged the machinery beyond repair, forcing engineers to replace it. The malware took great care to closely monitor its effects, allowing its masters to carefully avoid any activity that may result in immediate, catastrophic destruction – because that would have led to a postmortem examination that could have exposed the stealthy sabotage.

Samples of the Mark I malware were submitted to online malware clearing house VirusTotal in 2007, but it was only recognised as such five years later in 2012.

The results of the overpressure attack are unknown, but whatever they were, Stuxnet Mark I's handlers decided to try something different in 2009, deploying the Mark II variant that became famous after it accidentally escaped into the wild in 2010. Langner reckons Stuxnet Mark II was "much simpler and much less stealthy than its predecessor" – the thinking being that a less complex yet more elegant Stuxnet could prove more effective and reliable than the convoluted Mark I.

The Mark I had to be installed on a computer connected to the industrial control system to carry out its sabotage, or otherwise infect a machine from a USB drive; it was probably installed by a human, either wittingly or unwittingly.

Later, the Mark II spread over local-area networks, exploited zero-day Microsoft Windows vulnerabilities to silently install itself, and was equipped with stolen digital certificates so its driver-level code appeared to be legitimately signed software. But this made the Mark II easy to recognise as malign by antivirus experts once it was discovered.

Langner, well known for his earlier Stuxnet analysis, reckons the Mark II escaped into the wild after it infected the Windows laptop of a sub-contractor who subsequently connected the PC to the wider web, contrary to the myth that the malware spread itself across the internet as the result of an internal software bug.

Having compromised industrial control systems at Iran's nuclear centre, Stuxnet's masters "were in a position where they could have broken the victim's neck, but they chose continuous periodical choking instead", according to Langner.

The Mark II's effect on Iran

He reckoned the 2010 build of Stuxnet set back the Iranian nuclear programme by two years: it subtly reduced the centrifuges' ability to reliably enrich uranium at volume, forcing the scientists to tear their hair out in frustration and chase a ghost in the machine. This was a far longer delay than if the software nasty had triggered the sudden catastrophic destruction of all operating centrifuges, because Iran would have been able to diagnose the problem and rebuild its processing plant using spares.

The effectiveness of the whole scheme is a matter of some dispute among foreign policy and security analysts, with some even arguing it ultimately galvanised Iran's nuclear efforts.

That issue aside, the stealth and disguise of the early version of Stuxnet came at the cost of vastly increasing the difficulty of creating the cyber-munition, according to Langner:

I estimate that well over 50 percent of Stuxnet's development cost went into efforts to hide the attack, with the bulk of that cost dedicated to the overpressure attack which represents the ultimate in disguise - at the cost of having to build a fully-functional mockup IR-1 centrifuge cascade operating with real uranium hexafluoride.

And while Stuxnet was clearly the work of a nation-state - requiring vast resources and considerable intelligence - future attacks on industrial control and other so-called "cyber-physical" systems may not be. Stuxnet was particularly costly because of the attackers' self-imposed constraints. Damage was to be disguised as reliability problems.

And unlike the Stuxnet attackers, these adversaries are also much more likely to go after civilian critical infrastructure. Not only are these systems more accessible, but they're standardised. In fact, all modern plants operate with standard industrial control system architectures and products from just a handful of vendors per industry, using similar or even identical configurations.

In other words, if you get control of one industrial control system, you can infiltrate dozens or even hundreds more of the same breed.

Langner's research adds a missing chapter to the already complex story of Stuxnet, which continues to interest both military strategists and security researchers because it showed that malware could be used to physically sabotage equipment even in closely guarded facilities.

"The Stuxnet revelation showed the world what cyberweapons could do in the hands of a superpower," Langner concluded. "It also saved America from embarrassment. If another country - maybe even an adversary - had been first in demonstrating proficiency in the digital domain, it would have been nothing short of another Sputnik moment in US history. So there were plenty of good reasons not to sacrifice mission success for fear of detection."

The publication of the article [PDF] coincides with the release of a white paper by Langner on Stuxnet, entitled To Kill a Centrifuge: A Technical Analysis of What Stuxnet’s Creators Tried to Achieve. The white paper combines results from reverse engineering the attack code with intelligence on the design of the attacked plant and background information on the attacked uranium enrichment process to provide what's billed as the most comprehensive research on the Stuxnet malware to date. ®
