
NetApp accused of short-stroking its new hardware

'Everyone short-strokes', admits industry rival

Analysis NetApp's bombshell NFS benchmark record has generated accusations that it is artificially boosting performance by short-stroking disks behind the scenes and scaling up rather than out.

Short-stroking is the technique of accelerating data-transfer rates by accessing only a small part of each disk's surface (typically the fast outer tracks), at the cost of wasting most of the capacity. Scale-up means boosting performance by adding more oomph to a single computing resource; scale-out means adding more computing resource nodes.
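To see where the speed-up comes from, here is a purely illustrative Python sketch; the track count and the 37 per cent figure are assumptions for the example, not measurements from the benchmark. Confining seeks to the outer fraction of a platter shrinks the average seek span, which is the latency gain short-stroking buys.

# Illustrative only: hypothetical figures, not from the NetApp SPEC run.
TRACKS = 100_000                 # notional track count on one platter surface

def avg_seek_span(fraction_used: float) -> float:
    """Average distance (in tracks) between two uniformly random seeks
    confined to the outermost `fraction_used` of the disk (E|X-Y| = L/3)."""
    usable = TRACKS * fraction_used
    return usable / 3

full = avg_seek_span(1.0)
short = avg_seek_span(0.37)      # roughly the exported fraction alleged below
print(f"full-disk avg seek: {full:,.0f} tracks; "
      f"short-stroked: {short:,.0f} tracks ({short/full:.0%} of full)")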

Scaling up and not out

For example, EMC-watching blogger Storagezilla said that, although there was a single namespace in the benchmarked NetApp system, there were actually 24 FlexVols (virtual volumes) set up as "one filesystem per node", adding: "Scale out my eye; it's scale up."

Alex McDonald from NetApp's Office of the CTO said: "[It's] one filesystem as far as the server-end is concerned. How we do our magic is up to us. SPECsfs2008 demands you carry all the I/O to all parts of the filesystem across all the nodes. The namespace is the filesystem as far as the server is concerned. All the parts of the filesystem and all the I/O done to them have to go across all nodes.

"This was NFS v3 by the way, and pNFS will be even better."

Storagezilla made some more detailed points, to which El Reg gave McDonald the opportunity to respond:

Storagezilla: "In ONTAP 8 the directory in which a file is stored determines its physical location in a volume attached to one filer."

McDonald: This is not true.

Storagezilla: "It doesn't matter if you put 11 other filers next to it, that filer sees no benefit as the I/O always goes to one volume on one filer."

McDonald: Not true.

Storagezilla: "Volume migrations are manual. [If] one filer becomes a hotspot, it's up to you to figure out what to move where."

McDonald: Not relevant to benchmark issue.

Storagezilla: "As such if you have eight filers and add another four you see zero benefit until you relay out all the existing volumes on the other eight yourself. All aggregate sizes are fixed. Once you build volumes on them you're not reclaiming any unused storage. [You] can't guess so you oversubscribe."

McDonald: Aggregate sizes are not fixed. 'Zilla doesn't understand aggregates.

Storagezilla: "Snapshots, deduplication, compression and so on all operate at a volume level. You can't snapshot, dedupe or compress data across multiple filers."

McDonald: You can snapshot and compress across multiple filers but not dedupe.

Short-stroking

EMC Isilon's chief technology officer for the Americas, Rob Pegler, said he thought short-stroking was involved: "The math says 1,728 disks x 450GB = 777,600 GB. Yet their SPEC finding only shows 288TB exported. That's roughly 37 per cent... it is not quite right. The key for all SPEC studies is export capacity and fileset size compared to raw disk. Everyone short-strokes."
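For readers checking the arithmetic, here is a quick back-of-the-envelope sketch in Python using only the figures Pegler quotes (1,728 spindles of 450GB against 288TB exported); it reproduces his ratio and is not an independent verification of the SPEC filing.

# Back-of-the-envelope check of Pegler's figures (numbers from his quote).
disks = 1728
per_disk_gb = 450
raw_gb = disks * per_disk_gb              # 777,600 GB of raw spindle capacity
exported_gb = 288 * 1000                  # 288TB exported, per the SPEC disclosure
print(f"raw: {raw_gb:,} GB, exported: {exported_gb:,} GB "
      f"({exported_gb / raw_gb:.0%} of raw)")    # roughly 37 per cent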

McDonald responded: "The total exported capacity is the combined capacity of all the volumes that were created. It does not have any bearing on the performance."

Pegler also made a protocol-related point about ONTAP 8.1: "If NetApp ran any SAN protocols (Fibre Channel, iSCSI) on 8.1 they are limited to four nodes; 24 nodes is the limit for NAS protocols."

Storagezilla made a more general point that seems particularly relevant: "Scale-out means different things to Isilon, [IBM] SONAS, [HP] IBRIX and NetApp." That difference in definition is perhaps getting in the way of each vendor understanding the other vendors' scale-out technologies. Indeed. ®
