Make sure your storage can grow with your business
Tips for the canny SME
The average SME has a modest infrastructure which has grown organically: a file server of some sort, probably an email server, then a handful of application servers hosting things like finance systems or the database back-ends to business applications.
In most cases server A is pushing the limits of its storage capacity while server B has hundreds of gigabytes available.
As businesses grow, there comes a time when they need to think again about the company's storage strategy and systems.
Where do you start, and how do you ensure that what you install will scale as your company (hopefully) grows?
To start with, here are the basic requirements:
- Shared storage that hooks into your directory service (usually Active Directory) so you can control access to files and folders
- The ability to break the storage into separate logical volumes and present them separately
- Access to the storage both by users (via file-sharing protocols) and servers
- Some form of resilience or backup.
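As a rough illustration of the first two requirements, here is a minimal sketch of a file share on an Active Directory-joined Samba server; the realm, paths, share name and group are all made-up placeholders, not a production configuration.

```ini
# /etc/samba/smb.conf -- minimal sketch only. EXAMPLE.LOCAL, the paths
# and the group name are placeholders for illustration.
[global]
   workgroup = EXAMPLE
   realm = EXAMPLE.LOCAL
   security = ads            ; authenticate users against Active Directory

[finance]
   path = /srv/shares/finance
   read only = no
   valid users = @"EXAMPLE\finance-team"  ; restrict access via an AD group
```

Each `[share]` stanza presents a separate logical chunk of storage, with access controlled through directory-service groups rather than local accounts.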
A concept that emerged a few years back was the idea of taking a bunch of servers with on-board storage (either internal or an external array that was directly SCSI-connected) and using a software layer on top to present that bunch of distributed disks as a unified virtual storage entity.
I must admit I am not a great fan of that approach. The storage layer is only as fast and as reliable as its weakest link, and if you are virtualising server disks of assorted vintages then you can expect a failure sooner rather than later.
I am assuming that you have grown past the stage when this type of approach is sufficiently performant and stable, so we will move on.
You could decide that you want to retain the model of a traditional file server – a Windows or Linux box that presents a collection of disks using Windows, Mac or Unix file-sharing protocols. This doesn't answer the question of how you actually implement the storage though.
If you choose to use internal disk, or more likely a direct-attached storage array, then the disk subsystem will be accessible only to the file-server machine. You are not moving forward at all from the model you have had for years.
We will come back to the basic file-server concept in a bit, though, as it is not entirely dead.
We need to head, then, toward a storage system that your servers can connect to as if it were internal disk. At the top end this is Fibre Channel, which means expensive switches and pricy fibre adaptors in the servers. As we are firmly in SME-land, though, it basically means iSCSI.
Now, let's dispel a myth here. Yes, iSCSI is all about accessing your storage over an IP network (generally Ethernet these days). And yes, if you just plug it into your LAN then performance will be ropey at best.
But that is the point – you won't just plug it into your LAN: you will buy a couple of high-speed switches and have a dedicated storage network.
Start as you mean to go on. I just checked one of my suppliers at random and it will do you a 24-port 10GbE switch for about three grand; that will do nicely if you can stretch the budget a bit. If not, there's no shame in going for Gigabit Ethernet if that's what you can afford.
Don't let suppliers try to make you spend vast sums on big brand names. While I am a Cisco man all the way, for instance, there is really nothing wrong with an SME looking to the likes of (say) NetGear for a 10GbE installation. I have seen it done very successfully.
The point of iSCSI is that the storage is connected to a network and is hence accessible from any server whose operating system supports iSCSI (that will be all of them, then).
The server and the storage use shared credentials for authentication (typically CHAP, the Challenge-Handshake Authentication Protocol) so not just any old device on the network can mount a volume on the storage.
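To make that concrete, here is a sketch of how the shared-secret piece looks with the Linux Open-iSCSI initiator; the username and password shown are placeholders for illustration and must match whatever is configured on the array.

```ini
# /etc/iscsi/iscsid.conf -- CHAP settings for the Open-iSCSI initiator.
# The credentials below are illustrative placeholders, not real values;
# they have to match the CHAP account defined on the storage array.
node.session.auth.authmethod = CHAP
node.session.auth.username = smeserver01
node.session.auth.password = a-long-shared-secret
```

With that in place, running `iscsiadm -m discovery -t sendtargets -p <portal-address>` followed by `iscsiadm -m node --login` presents the volume to the operating system as if it were local disk.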
Regardless of what network speed you go for, the important bit is to ensure you can connect it to your world in a resilient manner, which means it can survive a link failure or a network switch blow-up.
The technology you need is supported in all but the tiniest switches these days, and depending on the brand you choose it will be called EtherChannel, 802.3ad or LACP.
They are all variations on a theme: you have multiple physical connections between the switch and the storage unit, and you configure both devices so that they see the interconnect as a single, aggregated link – and, equally importantly, so that the aggregated link's full physical bandwidth is usable.
So if you bundle a pair of 1Gbps connections with EtherChannel it will operate as a single 2Gbps virtual link (traffic is balanced across the member links per flow, so you see the full aggregate when several sessions are active). If a physical link fails the endpoints will keep on humming, but only at the total speed of the links that have survived.
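On the server or storage side, the Linux view of such a bundle is a bonded interface in 802.3ad mode. This netplan fragment is a minimal sketch: the interface names, bond name and address are hypothetical, and it assumes the switch ports at the other end are configured for LACP.

```yaml
# /etc/netplan/01-storage-bond.yaml -- sketch of an LACP bond of two
# NICs on a dedicated storage network. Interface names and addresses
# are placeholders; the switch end must be configured to match.
network:
  version: 2
  ethernets:
    enp3s0: {}
    enp4s0: {}
  bonds:
    bond0:
      interfaces: [enp3s0, enp4s0]
      parameters:
        mode: 802.3ad                  # LACP aggregation
        lacp-rate: fast
        transmit-hash-policy: layer3+4 # balance flows across members
      addresses: [10.10.10.5/24]
```

The `transmit-hash-policy` line is the per-flow balancing mentioned above: each TCP session sticks to one member link, so the 2Gbps aggregate is realised across multiple concurrent sessions.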
To run a bonded link between a storage unit and a pair of switches, those switches generally need to be stackable (hooked together via some kind of proprietary mechanism that lets the pair act and be managed as a single virtual switch). Most network vendors provide switches that can do this.
Oh, and of course you also need a storage array with multiple LAN interfaces that can do LACP/EtherChannel. They are such well-established concepts, though, that this won’t be a problem.