Should SANs be patched to fix the Spectre and Meltdown bugs? Er ... yes and no

General assumption is yes. But five suppliers say no

Analysis Is the performance-sapping spectre of the X86 Spectre/Meltdown bug fixes hanging over SAN storage arrays? The general assumption is "yes", but five suppliers say no.

You would expect SANs to need patching; they run their controller software on X86 servers after all.

UK storage architect Chris Evans writes: “Patching against Meltdown has resulted in performance degradation and increased resource usage, as reported for public cloud-based workloads.”

His understanding is that “the overhead for I/O is due to the context switching that occurs reading and writing data from an external device. I/O gets processed by the O/S kernel and the extra work involved in isolating kernel memory introduces an extra burden on each I/O. I expect both traditional (SAS/SATA) and NVMe drives would be affected because all of these protocols are managed by the kernel.”
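The per-I/O overhead Evans describes is the user/kernel transition that page-table isolation patches make more expensive. A rough, illustrative way to see that cost on your own box (this sketch is ours, not Evans') is to time a tight loop of small read syscalls against a cached file, so that most of what you measure is the kernel crossing rather than the device:

```python
# Illustrative microbenchmark (not from the article): estimate per-syscall
# latency for small reads. Each os.pread() is a user->kernel->user round
# trip, the path that KPTI-style Meltdown mitigations make more expensive.
import os
import tempfile
import time

def mean_pread_latency_us(path: str, iterations: int = 50_000) -> float:
    """Average latency of a 4 KiB pread, in microseconds."""
    fd = os.open(path, os.O_RDONLY)
    try:
        start = time.perf_counter()
        for _ in range(iterations):
            # Same offset every time: the data stays in the page cache,
            # so we mostly measure syscall/context-switch cost.
            os.pread(fd, 4096, 0)
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
    return elapsed / iterations * 1e6

if __name__ == "__main__":
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(b"\0" * 4096)
        name = f.name
    try:
        print(f"mean pread latency: {mean_pread_latency_us(name):.2f} us")
    finally:
        os.unlink(name)
```

Running the same loop before and after applying the kernel patches gives a crude sense of how much extra each I/O-bound kernel crossing now costs.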

He wonders if there’s a difference between SAS/SATA and NVMe, simply because NVMe is more efficient.

Specifically, for traditional storage arrays: “The additional work being performed with the KAISER patch appears to be introducing extra CPU load in the feedback reported so far. This means it also must affect latency. … The impact to traditional storage is two-fold. First, there’s extra system load; second, potentially higher latency for application I/O.”

Customers implementing this patch need to know if the increased array CPU levels will have an impact on their systems. A very busy array could have serious problems.

"The second issue of latency is more concerning. That’s because like most performance-related problems, quantifying the impact is really hard. Mixed workload profiles that exist on today’s shared arrays mean that predicting the impact of code change is hard. Hopefully, storage vendors are going to be up-front here and provide customers with some benchmark figures before they apply any patches.”
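While waiting for vendor figures, customers can gather their own before/after numbers. A minimal latency test with the standard fio tool might look like this (our illustrative sketch, not Evans' or any vendor's recommendation; `/dev/sdX` is a placeholder for a scratch device or test file):

```shell
# Illustrative fio run: measure 4 KiB random-read latency before and after
# the kernel patches are applied, then compare the completion-latency
# (clat) percentiles fio reports for the two runs.
# /dev/sdX is a placeholder -- point it at a scratch device or test file.
fio --name=patch-check --filename=/dev/sdX \
    --rw=randread --bs=4k --ioengine=libaio \
    --iodepth=1 --direct=1 \
    --runtime=60 --time_based --group_reporting
```

Queue depth 1 with direct I/O isolates per-request latency, which is where kernel-crossing overhead would show up most clearly.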

Nothing to see here - carry on...

But five suppliers say no, their SAN systems will not be affected.

In a blog post IBM says its storage appliances will emerge unscathed.

Here is a statement from NetApp: “Unlike a general-purpose operating system, Element OS is a closed system that does not provide mechanisms for running third-party code. Due to this behaviour, Element OS running on SolidFire or NetApp HCI Storage nodes is not affected by either the Spectre or Meltdown attacks as they depend on the ability to run malicious code directly on the target system.”

On this basis we would expect the same to be true for its ONTAP FAS arrays as well.

Tintri founder and CTO Dr Kieran Harty tells us: “We are not vulnerable because we only run our own software on our appliances,” adding, “We’re not planning on patching the software that runs on our appliances.”

In effect, Tintri says it doesn’t have to choose between performance and security, because its dedicated engineered appliances run nobody’s code except Tintri’s own and so are secure already.

Meanwhile, DataCore told us that “once a [DataCore SAN] target request has been received by the kernel, whether from a SAN, a Hyper-Converged environment, or MaxParallel, there are no additional transitions to user space involved.

"As a result, based on the information currently available about the proposed mitigation strategies, it seems unlikely that there will be any performance impact on the storage presented by DataCore. Tests are still in progress in the lab to verify that this is the case.”

It doesn’t believe its SAN software needs patching: “DataCore has a close connection to the operation of the Windows kernel, however, it is currently believed that no software changes will be required to protect against the vulnerabilities or as a result of the mitigations. [Again] tests are still in progress in the lab to verify that this is the case.”

What is the rationale for this stance? DataCore says that in the event that a DataCore installation has been compromised, the risk of data under management being exposed currently appears to be almost zero.

A DataCore document that The Register has seen makes these claims:

In order for Meltdown to gain unauthorised access, the memory needs to have a virtual address assigned to it which is not the case for the DataCore cache. A virtual address will be assigned temporarily to individual cache buffers when performing specific operations on a snapshot, replicating data, or allocating storage to a thinly provisioned volume, but this will be released as soon as the operation is complete.

Given that the reported data access rate using Meltdown is up to 503 KB/s, it is implausible that an attacker would be able to identify a temporary mapping and extract data in the time available.
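The 503 KB/s figure is the peak leak rate reported for Meltdown. A back-of-envelope check of DataCore's "time available" argument (our arithmetic, not from the document; the 64 GiB cache size is a hypothetical example):

```python
# Back-of-envelope check: how long would data extraction take at Meltdown's
# reported peak leak rate of 503 KB/s?
RATE_BYTES_PER_SEC = 503 * 1000  # reported peak Meltdown read rate

def extraction_time_seconds(n_bytes: int) -> float:
    """Seconds needed to leak n_bytes at the reported peak rate."""
    return n_bytes / RATE_BYTES_PER_SEC

if __name__ == "__main__":
    buffer_4k = extraction_time_seconds(4096)          # one cache buffer
    cache_64g = extraction_time_seconds(64 * 1024**3)  # hypothetical cache
    print(f"4 KiB buffer : {buffer_4k * 1000:.1f} ms")
    print(f"64 GiB cache : {cache_64g / 3600:.1f} hours")
```

Even a single 4 KiB buffer takes on the order of 8 ms to leak, and a large array cache would take tens of hours, long after any temporary mapping has been released.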

The DcsAddMem processes have access to user virtual addresses for the cache contents which would potentially open up an attack route using Spectre. However, Spectre requires that the application under attack be executing in order to be vulnerable and this is not the case for DcsAddMem.  The processes are blocked within the kernel until virtualisation is stopped at which point the memory is released.

Infinidat CTO Brian Carmody was asked if Infinidat arrays would be affected, and told us: "Not affected. The design of InfiniBox provides no facility for non-privileged users to run 3rd party code locally on the system."

He's repeating the message put out by the other suppliers.

Reg comment

The consequences of some malware-toting person gaining access to mission-critical data could be severe, so you really would not want your shared external storage arrays compromised. The Spectre and Meltdown bugs enlarge X86 servers' attack surface, and SANs and filers are controlled by X86 servers, ergo … except not ergo, according to IBM, NetApp, Tintri, Infinidat and DataCore.

In the end this is a judgement call. The suppliers are saying their customers do not have to choose between performance and security, because their systems are secure enough already. Are they? It’s not even your call: these suppliers are proposing not to patch their systems. ®

Biting the hand that feeds IT © 1998–2022