
Dude, relax – it's Just a Bunch Of Disks: Our man walks you through how JBODs work

Trevor Pott puts AIC's DAS subsystem through its paces

Sysadmin blog I'm one of those terrible people who "learn best by doing" and have always had a difficult time wrapping my head around exactly how high availability using "JBOD" external disk chassis systems was supposed to work. But my initial ignorance can work for both of us as we learn together.

As luck would have it, AIC was interested in having me do a review of its new XJ3000-4243S external disk chassis. The AIC device turned out to be a great unit and the extended tinkering time allowed me to learn quite a bit.

Plug it in, plug it in

To grok how JBOD systems work, let's cover some storage basics. Those of you who've taken apart a computer in the past decade will probably be familiar with SATA connectors. You have a power cable and a data cable; you attach one of each to a hard drive, and the data cable connects to the system in one of three ways.

The first method is plugging the hard drive directly into the motherboard. Here the SATA controller is generally built right into the southbridge, the motherboard's BIOS picks up the disk and the operating system (hopefully) can use it without any further intervention.
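If you're curious what the kernel actually picked up, Linux exposes every block device through sysfs. Here's a minimal sketch of walking it (assuming a Linux host; /sys/block and the attributes used here are standard sysfs, though not every device exposes a model string):

```python
# Minimal sketch: list the block devices the kernel has picked up,
# by walking Linux's sysfs. Assumes a Linux host.
import os

for dev in sorted(os.listdir("/sys/block")):
    try:
        # 'size' is always reported in 512-byte sectors, regardless
        # of the drive's logical block size
        with open(f"/sys/block/{dev}/size") as f:
            gb = int(f.read()) * 512 / 1e9
        model = ""
        model_path = f"/sys/block/{dev}/device/model"
        if os.path.exists(model_path):
            with open(model_path) as f:
                model = f.read().strip()
        print(f"{dev}: {gb:.1f} GB {model}")
    except OSError:
        continue  # loop/RAM devices may lack some attributes
```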

Next up you have the Host Bus Adapter (HBA) – an add-in card that provides you with some ports to attach your disks but typically does not do hardware RAID. The last method for attaching disks to systems is the RAID card: basically an HBA with hardware RAID processing capabilities, its own BIOS to configure and manage the RAID, RAM cache and even a backup battery unit.
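From the operating system's point of view, each of these turns up as a SCSI host adapter, and the kernel driver name is a decent tell for which kind you've got. Another quick Linux sketch (the driver names in the comment are examples of common hardware, not an exhaustive list):

```python
# Minimal sketch: list SCSI host adapters and the kernel driver each
# uses. Common examples: 'ahci' for onboard SATA, 'mpt2sas'/'mpt3sas'
# for LSI-style HBAs, 'megaraid_sas' for MegaRAID cards.
import os

base = "/sys/class/scsi_host"
for host in sorted(os.listdir(base)):
    with open(f"{base}/{host}/proc_name") as f:
        print(f"{host}: driver {f.read().strip()}")
```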

Hard disk attachment thus ranges from a fundamental, integrated feature of virtually any system motherboard to RAID cards that are entire computers in their own right, handling the disks themselves and merely presenting the post-RAID usable storage volumes to the system as "virtual" disks.

AIC JBOD rear image

Unlike the SATA ports on your motherboard, the SAS ports on those HBAs and RAID cards typically are not presented as individual ports. They are presented as mini-SAS connectors, which bundle four ports into a single physical connector. If you've used RAID cards with SATA drives you've probably seen this directly in the form of a break-out cable that turns an internal mini-SAS port into four SATA cables.

Given the serial nature of SAS, however, four ports does not mean that a mini-SAS connector can address a maximum of four drives. These mini-SAS connectors can be attached to SAS expanders to host dozens of drives off a single connector. If you've ever wondered what the relevance of 12Gb/sec per SAS port was when most hard drives can supply maybe a tenth of that, this is why.
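The arithmetic is worth spelling out. With 12Gb/sec per lane and four lanes per connector, and taking roughly 1.2Gb/sec as the sustained throughput of a spinning disk (my rough figure, not a spec), one connector has headroom for dozens of drives:

```python
# Back-of-the-envelope sums for mini-SAS oversubscription.
# Assumed figures: 12 Gb/s per SAS lane, four lanes per mini-SAS
# connector, ~1.2 Gb/s sustained from a spinning disk (an estimate).
LANE_GBPS = 12
LANES_PER_CONNECTOR = 4
DRIVE_GBPS = 1.2

connector_gbps = LANE_GBPS * LANES_PER_CONNECTOR    # 48 Gb/s
drives_at_full_tilt = connector_gbps / DRIVE_GBPS   # 40 drives

print(f"One mini-SAS connector: {connector_gbps} Gb/s")
print(f"Spinning disks it can feed flat out: {drives_at_full_tilt:.0f}")
```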

Two ports are better than one

In the world of SATA and SAS, almost any HBA or RAID card that is designed for SAS drives can handle SATA drives. It is perfectly possible to use each of these attachment methods without ever using SAS drives or even understanding the advantages SAS brings to the table.

I won't even try to cover the full range of differences here – for that I recommend Scott Lowe's piece on the topic – but the item that concerns the use of JBODs is that, unlike SATA disks, SAS drives have two data ports on each drive.
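On a host with both HBAs cabled up, the practical upshot is that each dual-ported SAS drive shows up twice: two device nodes sharing a single WWID. A minimal sketch for spotting the pairs on Linux (assumes the kernel exposes the standard wwid attribute for SCSI devices, which modern ones do):

```python
# Minimal sketch: group Linux block devices by WWID to spot SAS
# drives reachable over two paths. Assumes a Linux host with sysfs.
import os
from collections import defaultdict

paths = defaultdict(list)
for dev in os.listdir("/sys/block"):
    wwid_file = f"/sys/block/{dev}/device/wwid"
    if os.path.exists(wwid_file):
        with open(wwid_file) as f:
            paths[f.read().strip()].append(dev)

for wwid, devs in sorted(paths.items()):
    if len(devs) > 1:
        print(f"{wwid}: {sorted(devs)} <- one disk, two paths")
```

In production you'd hand those paths to dm-multipath rather than eyeball them, but the sketch shows what the dual ports buy you.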

Storage systems manufacturers and I have a history of strong disagreements about the concept of "redundancy". Too many of them call a system "fully redundant" when all it really has is redundant power supplies, RAID to survive the odd dead disk and two HBAs each connecting to a different port on those SAS disks, so that you can survive even the loss of an HBA.

SAS expansion is how a server like the Supermicro 6047R-E1R36L can run 36 drives off a pair of four-port mini-SAS connectors. Indeed, it's designed with SAS expanders that support dual HBAs, so if you wanted to roll your own storage server that was just like the big guys, this is exactly what you'd use.
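It's worth running the oversubscription sums on a chassis like that, too: 36 drives behind two four-port uplinks still leave each spindle more bandwidth than it can use. Using the same rough ~1.2Gb/sec-per-disk figure as before:

```python
# Rough sums for a 36-bay chassis behind two four-lane mini-SAS
# uplinks (assumed 12 Gb/s lanes; ~1.2 Gb/s per disk is an estimate).
lanes = 2 * 4
uplink_gbps = lanes * 12           # 96 Gb/s aggregate to the HBAs
per_drive_gbps = uplink_gbps / 36  # ~2.7 Gb/s each, all streaming

print(f"Aggregate uplink: {uplink_gbps} Gb/s")
print(f"Worst-case share per drive: {per_drive_gbps:.1f} Gb/s")
```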

That all sounds fine and good, but I've experienced motherboard failures, RAM failures and even dead CPUs often enough that I can't put just one of these systems into a client's business and cross my fingers. Even with "four-hour enterprise repair" I'm uncomfortable: without the storage system, nothing works. If those four hours happen at the wrong time of day, that can end some of my clients.
