Updated IBM’s cloud experienced an “unplanned event” that caused its McAfee-as-a-service offering to operate with sub-par performance for nearly a day.
“At approximately 0347 AM UTC on June 20, engineers with Compute Infrastructure identified a database issue that necessitated the restoration of a key update repository for McAfee Antivirus services from backup,” read an advisory sent to customers today.
The advisory explained that some 27 IBM data centers across Europe, Asia, the USA, and Mexico experienced disrupted anti-malware scanning.
A later advisory dated "Thursday 21-Jun-2018 01:26 UTC" noted "all Mcafee services have been returned to operational status, we apologize for the inconvenience."
IBM touts McAfee-as-a-service as part of a portfolio of offerings to help its customers secure and defend their rented servers in the Big Blue cloud.
But that help wasn't very helpful because the first advisory said “Customers in the listed regions may experience difficulty updating existing McAfee services, provisioning new McAfee services, or performing scheduled scans or maintenance on McAfee services.”
Yup, you read that right: if you relied on IBM to host McAfee antivirus that protects your cloudy stuff, it was probably not up to the job of scanning for viruses for a while and/or might not have been able to slurp new virus-squishing updates. Users would also have been unable to do the "hey, the cloud lets me run up new servers whenever I want to" thing, because new services weren't starting reliably.
Needless to say, this is not an optimal mode of operation for antivirus software, and it just isn't the sort of thing a cloud or SaaS operator is supposed to let happen.
It gets worse: IBM said the incident was spotted at approximately "0347AM UTC on 6-20-2018", yet the final all-clear came nearly 22 hours later.
Of course you’re not worried because as the kind of cunning person who reads The Register you do defence in depth and had other anti-malware arrangements in place, right? Right? ®