Yeah, WannaCry hit Windows, but what about the WannaCry of apps?
Patching done proper
WannaCrypt crippled 230,000 Windows PCs internationally, hitting unpatched Windows 7 and Windows Server 2008 machines, as well as computers still running Microsoft's seriously old Windows XP (though the latter wasn't responsible for its spread).
The initial reaction was a predictable rush to patch Microsoft’s legacy desktop operating systems.
Laudable, if late, but slow down: while the impact of WannaCrypt was huge, it was also relatively exceptional. Windows 7 ranks as number 14 and XP as number 17 in the top 20 of software products with the most distinct vulnerabilities, according to CVE Details, which aggregates the US government-funded CVE list.
Putting aside the Linux kernel, which tops the CVE list this year (in previous years it struggled to make the top 10), what is instructive is the healthy presence of applications on that list versus operating systems like Windows: there are eight applications in the top 20. Indeed, over the years it's been Internet Explorer, Word and Adobe's Flash Player that have played an active part in throwing systems open and leaving IT pros rushing to patch.
If applications are nearly half the problem, what are our safeguards? Operating systems tend to be supported for a surprisingly long time – ten years or more for some Linux distributions. Can the same be said for applications? Let's look at a couple of staples.
Take Microsoft's SQL Server 2005, for instance – hugely popular and well known. It was released in … well, 2005. The end of mainstream support – so long as you'd kept up with your Service Pack installs – was in 2011, with extended support finishing in 2016.
An 11-year lifecycle for an application is really not that bad, and it means that if you’re so inclined you only have to think about the upheaval of a full version upgrade every decade or so. Assuming, that is, that you don’t feel that the effort of an upgrade is worth the performance benefits it might give you, as per the comment that SQL Server 2016 “just runs faster” than what came before.
And Microsoft isn’t alone in supporting its codebases for a fair while. The other obvious place to look is Oracle’s product lifecycle doc. Now, Oracle has jumped about a bit. Looking at its core database: version 8.1.7 had a lifetime for extended support of a tad over six years; 9.2 was eight years; 11.2 leapt to more than 11 years; and with 12.2 we’re back to eight (right now both 11.2 and 12.2 are within extended support).
But you need to patch
What we're seeing, then, is that at least for a small sample of well-known apps, there's not a huge worry about having to risk moving to a new version every few years. But it's very easy to get complacent (yes, that word again) and use this as an excuse not to perform updates. Particularly with applications that sit inside the corporate network and aren't accessible (directly, at least) from the internet, it's understandable that a company would do a risk assessment and decide that the inconvenience of an outage outweighed the benefit of installing a particular security patch. And that may be true, but patches and updates wouldn't exist if there were no need for them.
Let's pick a couple from Microsoft's SQL Server 2014 Service Pack 2 page: KB3172998, for example, is labelled "FIX: A severe error occurs when you use the sys.dm_db_uncontained_entities DMV in SQL Server 2014"; or there's KB3170043, which fixes "Poor performance when query contains anti-join on a complex predicate in SQL Server 2014 SP1". Patches fix stuff and/or make stuff better, so why wouldn't you want to partake?
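If you're wondering whether a given box already has fixes like these, the server will tell you its patch level. Here's a minimal sketch in Python using pyodbc; the connection string is a placeholder for your own environment, and the SP2 build number is an assumption you should verify against Microsoft's published build list:

```python
# Sketch: query a SQL Server instance for its version and service pack level.
# The connection details below are placeholders, not real values.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=your-sql-host;DATABASE=master;"
    "UID=your-user;PWD=your-password"
)
cursor = conn.cursor()

# SERVERPROPERTY is a documented built-in; ProductLevel reports e.g. 'SP1', 'SP2'.
# Casting avoids pyodbc having to guess at the sql_variant return type.
cursor.execute(
    "SELECT CAST(SERVERPROPERTY('ProductVersion') AS varchar(32)),"
    "       CAST(SERVERPROPERTY('ProductLevel')   AS varchar(32))"
)
version, level = cursor.fetchone()
print(f"Build {version}, patch level {level}")

# Assumed build number for SQL Server 2014 SP2 -- verify against Microsoft's list.
SP2_BUILD = (12, 0, 5000, 0)
installed = tuple(int(part) for part in version.split("."))
if installed < SP2_BUILD:
    print("Pre-SP2: fixes such as KB3172998/KB3170043 are likely missing.")
```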
Patching equals supported
If you don’t patch your systems, there’s also a decent chance that the vendor will stop supporting them. Moving away for a moment from applications, I vividly remember working for a company that had a big pile of kit from a particular manufacturer. If something broke, the support call went like this:
Us: “Hi, we have a fault on a server.”
Vendor: “OK, please send us a [diagnostics] log.”
Us: “Here you go.”
Vendor: “Ah, some of your firmware is out of date; please upgrade it all and then call us back if it’s still broken.”
The vendor of the phone system we used was similarly strict, though happily a little less restrictive: you were supported if and only if you were running the current version of the software or the immediately previous one. Anything older and you were on your own.
Now, if you've looked at the Microsoft SQL Server support lifecycle link from above, you'll have noticed that the product's support status depends on which service packs you've installed.
SQL Server 2014's RTM support end date – that is, with no service packs installed – has already passed, you'll see. But Service Pack 1 is supported until later this year, and Service Pack 2 until 2019 (or 2024 if you go for extended support). So at the very least you need to be keeping up with the service packs on your apps, or you'll find yourself unsupported and the patches – functional and security – will dry up before you know it. You need to be similarly careful with non-Microsoft apps, too: check out the minor version numbers on Oracle 12, for instance, and you'll see that 12.1 is about to hop into its deathbed (for basic support, anyway) next year, while 12.2 lives on until 2022.
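This sort of deadline is easy to script a nagging reminder for. Below is a minimal Python sketch with end-of-support dates hard-coded from the lifecycles discussed above; treat every date and product pairing in it as an illustrative assumption and confirm against the vendors' own lifecycle pages before relying on it:

```python
# Sketch: warn when an installed product/patch-level combination is nearing
# the end of vendor support. Dates are illustrative assumptions -- verify
# them against the vendor's own lifecycle documentation.
from datetime import date

# (product, patch level) -> assumed end of basic/mainstream support
SUPPORT_ENDS = {
    ("SQL Server 2014", "SP1"):   date(2017, 7, 11),   # approximate
    ("SQL Server 2014", "SP2"):   date(2019, 7, 9),
    ("Oracle Database", "12.1"):  date(2018, 7, 31),
    ("Oracle Database", "12.2"):  date(2022, 3, 31),
}

def check(product: str, level: str, warn_days: int = 180) -> None:
    """Print a nag if support has ended or is about to."""
    end = SUPPORT_ENDS.get((product, level))
    if end is None:
        print(f"{product} {level}: no lifecycle data -- assume unsupported")
        return
    remaining = (end - date.today()).days
    if remaining < 0:
        print(f"{product} {level}: support ended {abs(remaining)} days ago!")
    elif remaining < warn_days:
        print(f"{product} {level}: only {remaining} days of support left")
    else:
        print(f"{product} {level}: supported until {end}")

check("SQL Server 2014", "SP1")
```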
Designing in upgradeability
Back in my network admin days, I became used to being able to upgrade Cisco ASA firewalls with zero downtime. This was because: (a) we ran them in resilient pairs; and (b) the devices would continue to run as a cluster so long as the versions weren’t too dissimilar.
The same applies to many applications: the manufacturers know that you hate downtime, so they build their products to allow live or near-live upgrades; all you have to do is design your instances of those products to exploit all that funky upgradeability they included. Back to our SQL Server example: there's a hefty tome on how you go about upgrading, which discusses minimising downtime for different types of installation.
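To make the resilient-pair idea concrete, here's a sketch of the rolling-upgrade ordering in Python. Every helper function below is a stub standing in for whatever your platform's real management API provides; the point is the sequence, not the API:

```python
# Sketch: rolling upgrade across a resilient pair, one node at a time.
# The helpers are stubs -- substitute your platform's real management calls.

def fail_over(away_from: str) -> None:
    print(f"failing traffic over, away from {away_from}")  # stub

def upgrade_node(node: str, version: str) -> None:
    print(f"upgrading {node} to {version}")  # stub

def health_check(node: str) -> bool:
    print(f"health-checking {node}")  # stub: assume healthy
    return True

def rejoin_cluster(node: str) -> None:
    print(f"{node} rejoining cluster")  # stub

def rolling_upgrade(pair: list[str], new_version: str) -> None:
    """Upgrade each node in turn so the pair as a whole stays up."""
    for node in pair:
        fail_over(away_from=node)        # move live traffic to the peer
        upgrade_node(node, new_version)  # patch the now-idle node
        if not health_check(node):       # verify before trusting it again
            raise RuntimeError(f"{node} failed post-upgrade checks")
        rejoin_cluster(node)             # mixed versions tolerated briefly

# Hypothetical pair of devices:
rolling_upgrade(["fw-a", "fw-b"], "9.8.2")
```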
When I talked last time about ongoing OS patching, I pointed out that in these days of virtual systems the hypervisor writers have handed you a big stack of get-out-of-jail-free cards when it comes to patching and updating. The same logic applies to applications: you can test updates in a non-live setting by cloning your systems into a test network, and you can protect against duff patches by snapshotting the live virtual servers before you run the update. There are few things easier in this world than rolling back a server to the previous snapshot. So even when you can't entirely eliminate downtime, you can at least minimise it.
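As a sketch of that snapshot safety net, again in Python: the hypervisor-cli, patch-tool and health-check commands below are hypothetical placeholders, so substitute whatever tooling your hypervisor and application actually ship with:

```python
# Sketch: snapshot a VM, apply a patch, and roll back if it misbehaves.
# All three external commands are hypothetical placeholders.
import subprocess
import sys

VM = "app-server-01"    # hypothetical VM name
SNAPSHOT = "pre-patch"  # snapshot label

def run(cmd: list[str]) -> bool:
    """Run a command, returning True on success."""
    return subprocess.run(cmd).returncode == 0

# 1. Take the get-out-of-jail-free card before touching anything.
if not run(["hypervisor-cli", "snapshot", "create", VM, SNAPSHOT]):
    sys.exit("couldn't snapshot -- aborting before any change is made")

# 2. Apply the update, then check the app still answers.
patched = run(["patch-tool", "--target", VM]) and run(["health-check", VM])

# 3. Duff patch? Roll the whole server back to the snapshot.
if not patched:
    run(["hypervisor-cli", "snapshot", "revert", VM, SNAPSHOT])
    sys.exit("patch failed; reverted to pre-patch snapshot")

print("patch applied and verified; delete the snapshot once you're happy")
```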
Accepting the downtime
Patches for big, core applications aren't a light-touch affair (if you have legacy stuff, there's a good chance it wasn't designed and installed with live updates in mind). I've seen people do core application patching (particularly stuff like Microsoft Exchange servers) over a number of days, with multiple downtimes, so you have to plan and communicate properly.
What you shouldn’t do, though, is be beaten into postponing such controlled outages indefinitely just because the business users moan. Of course, you need to avoid unnecessary outages but the point is that some outages aren’t just advisable, they’re essential. We’ve all had “important” users (this generally means “self-important”, actually) who demand special treatment because (say) they don’t want their machines to reboot overnight. The correct response to such individuals is a two-word one, the second being “off”.
Yes, you need to keep outages to a sensible level, but you absolutely shouldn’t be put off having the right number of them. Not least because in many cases you can demonstrate how few people were using a particular app at the time of an update, and hence hopefully persuade the users that the impact really isn’t as bad as they think.
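If you want numbers to back that up, the application will often hand them to you. Here's a minimal sketch against SQL Server using Python and pyodbc (the connection details are placeholders); sys.dm_exec_sessions is a documented DMV, so counting live user sessions at the proposed window is a one-liner. Run it on a schedule for a week and you have the evidence for the "nobody is on at 3am" conversation:

```python
# Sketch: count live user sessions so you can show how few people would
# actually be hit by a maintenance window. Connection details are placeholders.
import pyodbc
from datetime import datetime

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=your-sql-host;DATABASE=master;"
    "UID=your-user;PWD=your-password"
)
cursor = conn.cursor()

# is_user_process filters out SQL Server's own background sessions.
cursor.execute(
    "SELECT COUNT(*) FROM sys.dm_exec_sessions WHERE is_user_process = 1"
)
active = cursor.fetchone()[0]
print(f"{datetime.now():%Y-%m-%d %H:%M}: {active} active user session(s)")
```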
So... the complacency of application patching
Merriam-Webster defines “complacency” as: “Self-satisfaction especially when accompanied by unawareness of actual dangers or deficiencies” or “an instance of usually unaware or uninformed self-satisfaction.”
In a security context that's a very scary thing; in an IT manager or sysadmin context it's equally scary. On one hand, you might have systems that you've left unpatched because you decided it was OK to do so. On the other, you may be completely up to date with patches and sitting there smugly, not realising that you've (say) made something insecure or unstable through a configuration error or some such.
So just as it was inexcusable for operating system maintenance, complacency is also a sin for the apps. Most of the updates you'll do are relatively straightforward, and the rollback is often equally simple, so the usual excuse points at the business people: either they've not given you the resources you need to do all the updates, or they've told you you can't have the downtime.
And the day your unpatched system gets ransomwared or DDoSed and you’re down for a week will be the day you wish you’d been a little bit more insistent on a few hours of reboots here and there. ®