Financial institutions with critical systems and cash on the line are reorganizing to deal with the closing gap between the hole and the patch.
Zero day exploits are upon us. Case in point: the 25 June Russian attacks that turned IIS servers into delivery platforms for identity-thieving Trojan keystroke loggers. The attacks relied on two Internet Explorer vulnerabilities that security researchers had first spotted only weeks earlier on a malicious, adware-implanting website. At the time of the attack, no patch was available.
ISPs were able to quickly contain the threat by shutting down traffic to the Russian host serving up the malware. But the episode proved that the zero day concern is more than hyperbole. "We believe zero day vulnerabilities are imminent," says Oliver Friedrichs, senior manager at Symantec's Security Response center. [Symantec publishes SecurityFocus]. "In this example, that was proven true."
As the window shrinks between the discovery of vulnerabilities and the exploits that follow, security patching - once an obscure and neglected chore - is taking on a more urgent role in some corners of the business world, say analysts and IT managers. Leading the way are organizations with mission-critical technology - chiefly financial institutions - which have managed to cut critical security patch times from weeks to days. "In some cases, it took 200 days to roll out a patch across 36,000 machines," says Robert Garigue, VP and CISO of the Bank of Montreal. "Now we can do that in less than a week."
The key, they say, is that they've moved patch management from their small security organizations into their network infrastructure management. It's a culture shift - a new way of working with network administration, says Mike Corby, director of META Group Consulting. In this model, security teams rate the criticality of each patch, but administrators manage the actual patching as part of their normal network and system management processes.
"This is part of the natural evolution of security," says Garique. "Patching is a change-management control activity. So the different system administration groups should do their own testing and patching as part of their overall system management."
At Bank of Montreal, this approach gets critical patches onto more than 30,000 devices in two to three days. The Bank of New York boasts similar deployment speeds for an equally large network. For non-critical patches, each bank folds the fixes into administrative updates on cycles of one week, three weeks or longer, depending on severity.
"They [administrators] own that infrastructure," Garique says. They have root access. And they're the ones held accountable for 99.9 per cent availability - not the security people. Once they're aware of their ownership of the problem, they're professionally accountable."
Avoiding the Chicken Little Syndrome
With network administrators handling patch management, IT security is free to take on more of an advisory, 9-1-1-operator role, sending alerts to the administrators assigned to patch each network segment. The security team takes on more responsibility for vetting the severity of patches and their impact on the organization - and that means fewer fire drills.
"About two years ago, awareness among the infrastructure people was an issue when we used to rely too much on the severity ratings provided by the vendors," says Eric Guerrino, senior vice president and head of information security for Bank of New York. "Then we started assessing things against our own institution, and the infrastructure areas learned that we're not telling them everything's critical anymore."
At Bank of New York, the infosec team takes alerts and reports from vendors, CERT, the Financial Services ISAC, vulnerability alerting services, the media and other sources of information. Infosec then assesses the alerts against the bank's internal environment - platforms, systems, configuration, probability of exploit and criticality of system, to name a few.
Based on these and other criteria, the IT security department issues patch requirements and levels of severity to the affected networking segments on the patch distribution list.
"Sometimes, especially on the network, most of the critical patches we're concerned with need to be rolled out at the edge devices but not necessarily the entire network. So we'll give it rating of high for servers in the DMZ, and a medium rating for everywhere else," says Guerrino.
Administrative groups also do their own impact assessments based on the devices and patch levels in their environments. Network administrators at both banks use vulnerability and asset management tools, along with network protocols and network management tools to keep track of devices, services, versions and patch levels.
"Network assets are always changing. The key is continuous assessment of your network devices, their versions, and their patch levels. And you need to assign asset value to those systems - for example a financial or health care database is more critical and sensitive than, say, your Web server," says Abraham Kleinfeld, president and CEO of nCircle, a vulnerability assessment vendor in San Francisco.
Buying Time with Firewalls
Guerrino and Garigue say that their security patching routines have become sane - nearly predictable, except when the occasional big one hits. Even then, the tested processes used by network and system administrators keep costs and countermeasures from spinning out of control. "I'm not saying I've discovered the silver bullet," Garigue says. "In some cases, patch management still has to be done by hand, and we are still going through continual improvement on system management, patch management and security tools."
META's Corby says most businesses haven't built such a process for patch management. During a presentation on patch management at NETSEC in San Francisco in June, students in his course expressed anger at vendors over software failures and hopelessness about the whole patching problem; they were clearly overwhelmed. "Patching is still ad hoc," he says. "In most cases, companies don't keep up with patches."
The concern, Garigue says, is that the tempo of new vulnerability releases has increased. And that, he says, has thrown a lot of organizations into disarray.
That tempo, the time between vulnerability discovery and exploit, has compressed 90 per cent during the past three years - the average being 11 days between discovery and exploit (well below the 23 days most enterprises need to patch), according to a June META research paper. "We're really close to the day where we have no time to test and patch before exploits happen," says Corby.
Symantec's Friedrichs believes that skilled hackers are already sitting on exploit code for unknown vulnerabilities, keeping the information close to the vest so only they can use it. And he predicts that it's only a matter of time before a Blaster-level worm exploits a hitherto unknown vulnerability.
Either way, patching will always be reactive. So layered protection remains the best defense, starting with policy-based, centrally managed desktop firewalls and anti-virus, experts say.
"We don't rely on patches to protect us," says Guerrino. "We've rolled out desktop firewall software because it buys us time. For example, Sasser was of minimal impact because we had firewalls and it blocked the ports that Sasser was trying to exploit."