Storage pros: Big or small, you still have to hit the sweet spot
How does size matter in a hard drive?
You the expert: We talked to four Reg-reading storage gurus on where and when to use 3.5-inch hard disk drives and when to use 2.5-inch ones. Each of them sees a role for the smaller form factor drives – with one view being that they could take over completely as their capacities increase – and not one of our quartet thinks that 2.5-inch drive use will decrease.
This is what they had to say:
Mikel Kirk
Special Projects Co-ordinator and storage consultant
For the most part, people I know have used the 2.5-inch HDDs (SFF) as the boot OS drives for servers for the last couple of years – the 6Gbit/s SAS ones now. Usually it's a redundant pair (RAID 1) for reliability, 10K or 15K rpm and a smallish capacity like 146GB. 2U servers come with up to 25 of these, but people mostly use just one pair and put their data on external storage – usually a SAN, except for SMBs, where they use an appliance. The servers are still available with the 3.5-inch (LFF) drives – up to 14 in a 2U box – and I see these in the SMB space, where customers only have one or two servers and want to put all the storage in the box. Customers like to standardise on the SFF drives because they're more flexible – you can still get the spindle count up in a 1U box if you need to. And of course the advantage of having a standard size is that it reduces the cost of the cold spare pool.
Most of the people I deal with now are doing, or have done, server consolidation, so the boot drives in the server are fairly irrelevant – they could boot the VMHost from SAN or from an SD card. The drives are there out of tradition, and because that's easier to set up than anything else. They hold the VMHost boot image and nothing else. For these people the physical drive size is irrelevant. The cheapest drive will do, because the VMHost typically loads into RAM once at boot and the drives sit idle the rest of the time, apart from writing VM logs. That may be selection bias, so I wouldn't read too much into it as a general industry trend yet – I work for a company that has been big in both VMware and Citrix consulting for many years, and our customers may be selecting us for that reason.
Big drives or small, you still have to match the performance specs against the price data to hit the sweet spot on any given day for a particular solution. You have to look at the bottlenecks of the whole system to make sure it will meet its design goals. It's usually obvious, but the exceptions make the checking worthwhile.
On desktop machines, the LFF SATA drive still rules the roost, one per system. I'm working on a project today with a thousand of those. The enterprise is selecting the smallest available drive because it doesn't want people storing stuff locally anyway, and because, at the other end of the lifecycle, large drives take a long time to wipe. Unfortunately for them, the minimum size on offer is apparently 320GB. They might benefit from moving to SFF or SSD just to get the wiping cost down. I'd like to see more SMBs adopt hybrid drives in the next year – they can boost productivity and the cost is low – but they're not yet an option from the big system vendors, and they're new and relatively untried. The hybrid drives so far aren't working out for RAID.
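To put rough numbers on the wiping point, here is a minimal sketch – the ~100MB/s sustained write rate, the pass counts and the 2TB comparison drive are illustrative assumptions, not figures from the project:

  # Rough wipe-time estimate: passes x capacity / sustained write rate.
  def wipe_hours(capacity_gb, passes=1, write_mb_per_s=100):
      return passes * capacity_gb * 1000 / write_mb_per_s / 3600

  print(f"320GB, single pass: {wipe_hours(320):.1f} h")      # ~0.9 h
  print(f"320GB, three passes: {wipe_hours(320, 3):.1f} h")  # ~2.7 h
  print(f"2TB, three passes: {wipe_hours(2000, 3):.1f} h")   # ~16.7 h
  # Multiply any of these by a thousand desktops and the gap between the
  # smallest available drive and a big one adds up to thousands of
  # machine-hours at end of life.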
Speaking of RAID, there's a trap with the really huge and slow 3TB SATA drives. The streaming write bandwidth on these drives is about 120MB/s, which works out to almost seven hours just to write the whole drive once. In practice that equates to more than a day, maybe several, to rebuild an array. That's too long. A lot can go wrong in a day, and a larger array nearing end of life can enter a semi-permanent rebuild state, where drives fail as often as the array rebuilds – or worse, the stress of the rebuild induces a multi-drive failure and data loss. This will only get worse as drive capacities continue to grow faster than streaming write bandwidth. These drives are a sweet density for backup targets, though. They're the new tape. But as always with backups: inspect what you expect. An untested backup isn't a backup at all – it's just wasted time and money.
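A quick back-of-the-envelope check on those figures, as a sketch only: it uses the ~120MB/s streaming number above and treats the rebuild as one long sequential write, and the bandwidth shares are assumptions about how much of the drive a busy array can actually devote to rebuilding:

  # Raw rebuild time for a 3TB drive at ~120MB/s streaming write.
  capacity_gb = 3000
  stream_mb_per_s = 120
  raw_hours = capacity_gb * 1000 / stream_mb_per_s / 3600
  print(f"raw sequential write: {raw_hours:.1f} hours")  # ~6.9 hours

  # Hypothetical: foreground I/O leaves only a share of the bandwidth
  # for the rebuild, so the wall-clock time stretches out quickly.
  for share in (0.5, 0.25, 0.1):
      print(f"{share:.0%} of bandwidth free: {raw_hours / share:.1f} hours")
  # At 25 per cent you're already past a day; at 10 per cent, nearly three.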
For drives, if you want the most storage per dollar in business class today, it's hard to beat the 2TB LFF SAS drives. The lower-end SAS drives cost about the same as server SATA drives from the big vendors. As an odd aside, I'm also seeing big SATA appliances like Drobo in the enterprise, where you normally wouldn't expect them, and more home-brew Linux file servers and iSCSI solutions than ever before. School districts, where dollars are scarce and tolerance for risk is high, seem to be taking to OpenFiler. People are wising up to the idea that highly redundant cheap kit can be more cost-effective than, and just as reliable as, "enterprise class" gear. Of course this stuff sits external to the server, in a dedicated filer or SAN (or a server purposed as such, which is the same thing really). Don't get me wrong: "more than ever before" is nowhere near "most of the time". It's changing, but it's not close to taking over yet.
Mikel Kirk is currently a PC deployment project manager for a 3,000+ unit desktop hardware refresh with a Windows 7 migration. When that's complete, it's back to speccing and servicing servers, storage and networking. He is also on a multi-year, 10,000+ seat VDI project for a local school district – quite large for a VDI deployment at this point in time. He says: "Disclaimer: I work for a company. They sell stuff, including some of the products mentioned here. Their opinion is not mine, and mine is not theirs."