Make sure your storage can grow with your business
Tips for the canny SME
We have referred in passing to the storage device, but what do you actually want from it? Well, the basic attributes are pretty straightforward:
- Multiple power supplies that are hot-swappable
- Multiple LAN interfaces and LACP/EtherChannel support
- All disk modules hot-swappable
- At least RAID5, but preferably with hot-spare capability too so you never lose resilience
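The RAID5-plus-hot-spare arithmetic is worth keeping in mind when you size the box: one disk's worth of space goes to parity, and each hot spare sits idle until a failure. A minimal sketch (the function name and layout are illustrative, not any vendor's tool):

```python
def raid5_usable_capacity(disks: int, disk_tb: float, hot_spares: int = 1) -> float:
    """Usable capacity of a RAID5 set in TB: one disk's worth of space
    is consumed by parity, and hot spares hold no live data."""
    data_disks = disks - hot_spares - 1  # subtract hot spare(s) and one parity disk
    if data_disks < 2:
        raise ValueError("RAID5 needs at least three active disks")
    return data_disks * disk_tb

# Eight 4TB disks with one hot spare leaves six data disks: 24TB usable
print(raid5_usable_capacity(8, 4.0))  # -> 24.0
```

The point being that the hot spare is not wasted money: it is what lets the array rebuild immediately and keep its resilience while you source a replacement disk.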
If you are completely paranoid about resilience then you need to look for a device that can be paired with at least one other device and auto-replicate.
Loads of them can, using a cluster-like approach where they interchange data with each other – often in an intelligent way so that each data item resides on at least two devices to protect against single point of failure problems.
And I would heartily recommend going for this type of offering if the budget will stretch that far, because it opens some doors.
Scalability: if the product you choose is able to partner with at least one other, then even if you initially go for a single-node installation you have the option to scale up both the storage and the resilience by adding more nodes later.
High availability: where each data item resides on at least two nodes, you are protected from node death, as long as the clustering software can present the file stores seamlessly in the event of a node failure.
Upgrades: if you have a multi-node cluster with each data item stored in at least two places you can generally upgrade the operating firmware a node at a time and hence manage the risk of an upgrade failure.
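The common thread in all three benefits is the placement rule: every data item lives on at least two distinct nodes, so any single node can die or be taken down for flashing without stranding data. A toy sketch of that invariant (round-robin placement; the names and structure are illustrative, not how any particular array does it):

```python
from itertools import cycle

def place_replicas(items, nodes, copies=2):
    """Assign each data item to `copies` distinct nodes, round-robin,
    so no single node ever holds the only copy."""
    if len(nodes) < copies:
        raise ValueError("need at least as many nodes as copies")
    ring = cycle(range(len(nodes)))
    placement = {}
    for item in items:
        start = next(ring)
        placement[item] = [nodes[(start + i) % len(nodes)] for i in range(copies)]
    return placement

placement = place_replicas(["db", "mail", "files"], ["node-a", "node-b", "node-c"])
# Take node-a offline for a firmware upgrade: every item still has a live copy
for item, where in placement.items():
    assert any(n != "node-a" for n in where)
```

This is exactly why the node-at-a-time firmware upgrade works: with two copies of everything, the cluster tolerates one node being down on purpose just as well as one being down by accident.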
The latter point is often forgotten, incidentally. The world is littered with installations where the storage array and its corresponding server adaptor are running prehistoric firmware versions because of the downtime involved in the upgrade and the concern that it might not come back up after being flashed with the new operating code.
If you have a single device with resilient, hot-swap everything, then that is probably well within the acceptable risk profile of the average SME. But keep one eye on expansion and don't cut off your options.
Opening the store
OK, we have talked about attaching the storage to your servers, so now it is time to return to the concept of file serving. You have a couple of choices here, and I know which I would go for.
Option one is to use a server to present the file stores. That is fine, but remember that you need to preserve the resilience aspect: don't have a crappy server that can keel over and kill off access to your nice resilient storage subsystem.
At the very least, then, you will want a pair of servers running in a clustered setup (Windows Server 2012 will do this happily for you, for example).
Option two is to go with a storage system that inherently supports Windows file sharing at the very least, and preferably NFS too for good measure.
My personal preference? Go for storage that talks CIFS and NFS natively. Adding two servers just for file serving simply adds two more things to break, and if the storage unit has sufficient LAN interfaces you can present iSCSI on the storage LAN and user-facing file shares on the main LAN.
So now we are advertising the storage so it can be consumed by our server estate, but do we want to present it as just a straightforward lump of storage?
Well, perhaps we do if the throughput of the storage system exceeds the total demand our various servers will be putting on it, but that tends not to be the case.
Generally speaking the I/O of the storage will max out from time to time, particularly during high-demand periods. If there's a mechanism we can use to prioritise critical systems over others – perhaps to ensure that the database server enjoys uninterrupted access at the expense of something less important – that is a bonus.
Such quality-of-service selection is commonplace in today's storage arrays. You can take the array's or cluster's total throughput (a known quantity for a particular device with the particular disks it contains) and, for each of the volumes it presents, either define a minimum and maximum throughput, or at least prioritise some volumes over others in the event of contention.
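To make the min/max scheme concrete, here is a minimal sketch of how throughput might be shared out under contention: guaranteed minimums are satisfied first, then the remainder goes out in priority order, capped at each volume's maximum. The function, volume names, and numbers are all hypothetical, not any vendor's QoS engine:

```python
def allocate_throughput(total_mbps, volumes):
    """Share array throughput under contention: grant each volume its
    guaranteed minimum first, then hand out the remainder by priority
    (lower number = higher priority), capped at each volume's maximum."""
    grants = {name: spec["min"] for name, spec in volumes.items()}
    spare = total_mbps - sum(grants.values())
    if spare < 0:
        raise ValueError("minimum guarantees exceed array throughput")
    for name, spec in sorted(volumes.items(), key=lambda kv: kv[1]["priority"]):
        extra = min(spec["max"] - grants[name], spare)
        grants[name] += extra
        spare -= extra
    return grants

volumes = {
    "database":  {"min": 200, "max": 600, "priority": 1},
    "fileshare": {"min": 100, "max": 400, "priority": 2},
}
print(allocate_throughput(500, volumes))  # -> {'database': 400, 'fileshare': 100}
```

Note the behaviour at the example's 500MB/s: the database takes everything left over after the file share's minimum is honoured, which is exactly the "uninterrupted access at the expense of something less important" trade described above.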