
Big Blue boffins scan 10 billion files in Flash in a flash

Enticing developments at the lab


IBM and Violin have announced a great big GPFS numbers record: the software scanned 10 billion files in a flash – well, 43 minutes – using four Violin flash memory arrays.

This was 37 times faster than a previous GPFS record of scanning one billion files in three hours, but that was with the file system metadata stored, like the file data, on disk drives.

Why does this matter? IBM says it is because GPFS needs to scan the files in its filesystem so that they can be moved between storage tiers, migrated, archived, etc. This is non-production work and has to be done in the background. When done with metadata on disk, the process becomes slower and slower as the number of files in a GPFS system rises and rises. So much so that, conceivably, eventually there aren't enough hours in the day to do it.
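For the curious: these scans are driven by GPFS's SQL-like ILM policy engine, which evaluates a rule against every file's metadata – hence the pain when billions of files live on spinning disk. A tiering rule of the kind involved might look something like this (an illustrative sketch, with made-up pool names and thresholds, not taken from IBM's announcement):

```
/* Move files untouched for 90+ days from the fast pool to the archive pool.
   Pool names 'system' and 'archive' are hypothetical examples. */
RULE 'age-out' MIGRATE
  FROM POOL 'system'
  TO POOL 'archive'
  WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 90
```

Evaluating a WHERE clause like that means reading the metadata of every candidate file, which is exactly the workload that moving metadata onto flash speeds up.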

So IBM Research tried putting the metadata on flash arrays and seeing how much faster the system went. The result is impressive, very impressive, but not that surprising. It's also a tad, well, background, since the system wasn't handling real data.

Back in February, IBM announced a wondrous SONAS SPECsfs 2008 benchmark result of 403,326 SPECsfs ops/sec from a single GPFS system using 1,975 hard disk drives.

EMC bounded past this with a flash-heavy VNX system doing 497,623 ops/sec, using 436 x 200GB SAS SSDs and eight file systems.

SONAS is based on GPFS. From where El Reg sits, a re-run of the IBM SONAS SPECsfs2008 benchmark looks feasible, but this time using a few Violin Memory Arrays to hold the SONAS data and so get to the 500,000 SPECsfs2008 ops/sec area and beyond. We have asked both IBM and Violin about this but didn't expect to get anything looking like a "Yes, we're doing this" reply.

Much to our surprise we received this from Bruce Hillsberg, director of storage systems, IBM Research – Almaden: "You are correct: if we were to re-run the SPECsfs benchmark on a SONAS system using the technology described in the press release, we would see a significant performance improvement."

Enticing, isn't it? ®
