Yeah, WannaCry hit Windows, but what about the WannaCry of apps?

Patching done proper

WannaCrypt crippled some 230,000 Windows PCs worldwide, hitting unpatched Windows 7 and Windows Server 2008 machines as well as computers still running Microsoft's seriously old Windows XP – though the latter wasn't responsible for its spread.

The initial reaction was a predictable rush to patch Microsoft’s legacy desktop operating systems.

Laudable, if late, but slow down: while the impact of WannaCrypt was huge, it was also relatively exceptional. Windows 7 ranks at number 14 and XP at number 17 in the top 20 of software stacks with the most “distinct” vulnerabilities, according to CVE Details, which collates data from the US government-funded CVE programme.

Putting aside the Linux kernel, which tops the CVE list this year (in previous years it struggled to make the top 10), what is instructive is the healthy presence of applications on that list versus operating systems like Windows: applications account for eight of the top 20. Indeed, over the years it's been Internet Explorer, Word and Adobe's Flash Player that have played an active part in throwing systems open and leaving IT pros rushing to patch.

If applications are nearly half the problem what are our safeguards? Operating systems tend to be supported for a surprisingly long time – ten years or more for some Linux distributions. Can the same be said for the applications? Let’s look at a couple of staples.

Microsoft’s SQL Server 2005, for instance – hugely popular and well known. It was released in … well, 2005. The end of mainstream support – so long as you’d kept up with your Service Pack installs – was in 2011, with extended support finishing in 2016.

An 11-year lifecycle for an application is really not that bad, and it means that if you’re so inclined you only have to think about the upheaval of a full version upgrade every decade or so. Assuming, that is, that you don’t feel that the effort of an upgrade is worth the performance benefits it might give you, as per the comment that SQL Server 2016 “just runs faster” than what came before.

And Microsoft isn’t alone in supporting its codebases for a fair while. The other obvious place to look is Oracle’s product lifecycle doc. Now, Oracle has jumped about a bit. Looking at its core database: version 8.1.7 had a lifetime for extended support of a tad over six years; 9.2 was eight years; 11.2 leapt to more than 11 years; and with 12.2 we’re back to eight (right now both 11.2 and 12.2 are within extended support).

But you need to patch

What we’re seeing, then, is that at least for a small sample of well-known apps, there’s not a huge worry about having to risk moving to a new version every few years. But it’s very easy to get complacent (yes, that word again) and use this as an excuse not to perform updates. Particularly with applications that sit inside the corporate network and aren’t accessible (directly, at least) from the internet, it’s understandable that a company would do a risk assessment and decide that the inconvenience of an outage outweighed the benefit of installing a particular security patch. And that may be true, but patches and updates wouldn’t exist if there was no need for them.

Let’s pick a couple from Microsoft’s SQL Server 2014 Service Pack 2 page: KB3172998 is labelled: “FIX: A severe error occurs when you use the sys.dm_db_uncontained_entities DMV in SQL Server 2014”, for example; or there’s KB3170043 which fixes “Poor performance when query contains anti-join on a complex predicate in SQL Server 2014 SP1”. Patches fix stuff and/or make stuff better, so why wouldn’t you want to partake?

Patching equals supported

If you don’t patch your systems, there’s also a decent chance that the vendor will stop supporting them. Moving away for a moment from applications, I vividly remember working for a company that had a big pile of kit from a particular manufacturer. If something broke, the support call went like this:

Us: “Hi, we have a fault on a server.”

Vendor: “OK, please send us a [diagnostics] log.”

Us: “Here you go.”

Vendor: “Ah, some of your firmware is out of date; please upgrade it all and then call us back if it’s still broken.”

The vendor of the phone system we used was just as strict, though happily its policy was simpler: you were supported if and only if you were running the current version of the software or the immediately previous one. Anything older and you were on your own.

Now, if you’ve looked at the Microsoft SQL Server support lifecycle link from above, you’ll have noticed that the product’s support status depends on which service packs you’ve installed.

SQL Server 2014’s support end date has already passed, you’ll see. But Service Pack 1 is supported until later this year, and Service Pack 2 until 2019 (or 2024 if you go for extended support). So at the very least you need to be keeping up with the service packs on your apps, or you’ll find yourself unsupported and the patches – functional and security – will dry up before you know it. You need to be similarly careful with non-Microsoft apps, too: check out the minor version numbers on Oracle 12, for instance, and you’ll see that 12.1 is about to hop into its death bed (for basic support, anyway) next year while 12.2 lives on until 2022.
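If you're juggling several of these lifecycles, it doesn't hurt to script the check. Here's a minimal Python sketch – the products and dates in its table are purely illustrative, so feed it the vendors' official lifecycle data rather than trusting what's below:

```python
from datetime import date

# Illustrative lifecycle table only -- always confirm dates against
# the vendor's official support lifecycle pages.
SUPPORT_ENDS = {
    ("SQL Server 2014", "SP1"): date(2017, 10, 10),
    ("SQL Server 2014", "SP2"): date(2019, 7, 9),
    ("Oracle Database", "12.1"): date(2018, 7, 31),
    ("Oracle Database", "12.2"): date(2022, 3, 31),
}

def support_status(product, level, today=None):
    """Return a human-readable support verdict for a product/patch level."""
    today = today or date.today()
    end = SUPPORT_ENDS.get((product, level))
    if end is None:
        return "unknown - not in lifecycle table"
    if today > end:
        return f"UNSUPPORTED since {end.isoformat()}"
    return f"supported until {end.isoformat()}"
```

Run something like this from a scheduled job against your asset inventory and the "oops, that went out of support last year" surprises get a lot rarer.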

Designing in upgradeability

Back in my network admin days, I became used to being able to upgrade Cisco ASA firewalls with zero downtime. This was because: (a) we ran them in resilient pairs; and (b) the devices would continue to run as a cluster so long as the versions weren’t too dissimilar.

The same applies to many applications: the manufacturers know that you hate downtime and so they make their products such that you can do live or near-live upgrades, and all you have to do is design your instances of those products to exploit all that funky upgrade-ability they included. Back at our SQL Server example, for instance, there’s a hefty tome on how you go about upgrading, which discusses minimising downtime for different types of installation.
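The pattern behind those near-live upgrades is the same almost everywhere: patch the passive member of a resilient pair, fail over, then patch the other. As a toy Python sketch of the sequence (node names and versions are hypothetical):

```python
# Toy model of a rolling upgrade across a resilient pair, as with the
# ASA firewalls above: the service stays up on one node throughout.

class Node:
    def __init__(self, name, version, active=False):
        self.name, self.version, self.active = name, version, active

def rolling_upgrade(pair, new_version):
    """Upgrade both nodes of a pair with no loss of service."""
    standby = next(n for n in pair if not n.active)
    active = next(n for n in pair if n.active)
    standby.version = new_version                  # 1. patch the standby
    active.active, standby.active = False, True    # 2. fail over to it
    active.version = new_version                   # 3. patch the old active
    return pair
```

The only real-world wrinkle, as with the ASA cluster, is that both versions must be able to coexist for the duration of step 2.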

When I talked last time about ongoing OS patching, I pointed out that in these days of virtual systems the hypervisor writers have handed you a big stack of get-out-of-jail-free cards when it comes to patching and updating. The same logic applies to applications: you can test updates in a non-live setting by cloning your systems into a test network, and you can protect against duff patches by snapshotting the live virtual servers before you run the update. There are few things easier in this world than rolling back a server to the previous snapshot. So even when you can’t entirely eliminate downtime, you can at least minimise it.
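The snapshot-then-patch routine fits in a few lines of Python. In this sketch the "server" is just a dict standing in for VM state, and the patch and health-check hooks are hypothetical placeholders for whatever your hypervisor tooling actually does:

```python
import copy

# Sketch of snapshot-then-patch with automatic rollback on a duff
# update -- mirroring what a hypervisor snapshot gives you for free.

def patch_with_snapshot(server, apply_patch, health_check):
    """Apply a patch; if it fails or the health check fails, roll back."""
    snapshot = copy.deepcopy(server)      # take the pre-patch snapshot
    try:
        apply_patch(server)
        if not health_check(server):
            raise RuntimeError("post-patch health check failed")
        return server, "patched"
    except Exception:
        server.clear()
        server.update(snapshot)           # roll back to the snapshot
        return server, "rolled back"
```

The point isn't the code, it's the discipline: never run the update without the snapshot, and never call it done without the health check.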

Accepting the downtime

Patches for big, core applications aren’t a light-touch affair (if you have legacy stuff, there’s a good chance it wasn’t designed and installed with live updates in mind). I’ve seen people do core application patching (particularly stuff like Microsoft Exchange servers) over a number of days, with multiple downtimes, so you have to plan and communicate properly.

What you shouldn’t do, though, is be beaten into postponing such controlled outages indefinitely just because the business users moan. Of course, you need to avoid unnecessary outages but the point is that some outages aren’t just advisable, they’re essential. We’ve all had “important” users (this generally means “self-important”, actually) who demand special treatment because (say) they don’t want their machines to reboot overnight. The correct response to such individuals is a two-word one, the second being “off”.

Yes, you need to keep outages to a sensible level, but you absolutely shouldn’t be put off having the right number of them. Not least because in many cases you can demonstrate how few people were using a particular app at the time of an update, and hence hopefully persuade the users that the impact really isn’t as bad as they think.
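Pulling that usage evidence together is trivial if you have access logs. A hedged Python sketch, with made-up log entries standing in for whatever your app actually records:

```python
from datetime import datetime, time

# Count distinct users active during a proposed maintenance window,
# from (hypothetical) access-log tuples of (user, timestamp).

def users_in_window(log, start, end):
    """Return the set of users seen between start and end (times of day)."""
    return {user for user, ts in log if start <= ts.time() < end}

log = [
    ("alice", datetime(2017, 6, 1, 14, 30)),
    ("bob",   datetime(2017, 6, 1, 2, 15)),
    ("carol", datetime(2017, 6, 1, 3, 5)),
]

# Proposed overnight window: 02:00-04:00
affected = users_in_window(log, time(2, 0), time(4, 0))
```

"Two users out of several hundred were on the system during last month's 2am windows" is a far more persuasive argument than "trust me, nobody will notice".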

So... the complacency of application patching

Merriam-Webster defines “complacency” as: “Self-satisfaction especially when accompanied by unawareness of actual dangers or deficiencies” or “an instance of usually unaware or uninformed self-satisfaction.”

In a security context that’s a very scary thing… in fact in an IT manager or sysadmin context it’s equally scary. On one hand it might be that you have systems that you have left unpatched because you decided it was OK to do so. On the other hand you may be completely patched up-to-date and sit there smugly not realising that you’ve (say) made something insecure or unstable through a configuration error or some such.

So just as it was inexcusable for operating system maintenance, complacency is also a sin for the apps. Most of the updates you’ll do are relatively straightforward, and the rollback is often equally simple, so the usual excuse is to point at the business people – either they’ve not given you the resources you need to do all the updates, or they’ve told you you can’t have the downtime.

And the day your unpatched system gets ransomwared or DDoSed and you’re down for a week will be the day you wish you’d been a little bit more insistent on a few hours of reboots here and there. ®
