Goodbye to physical Fibre Channel
FC is being killed by SAS and FCoE
Analysis Here's a thought: Fibre Channel has begun its death march, with physical fabrics under notice from FCoE, and FC-interface hard drives under notice from SAS. You might not agree, but here's the argument in favour:
Internal array Fibre Channel
Currently Fibre Channel (FC) has four main incarnations: hardware and software, inside and outside storage arrays. Storage area network (SAN) fabrics use Fibre Channel protocols travelling from servers with FC host bus adapters (HBAs), over FC cables, to FC switches and directors, and on to FC HBAs used as target devices interfacing to a storage device. Inside modular, twin-controller storage arrays there is generally - or was - a Fibre Channel Arbitrated Loop (FCAL) connecting FC-interface drives to the controller complex.
Even inside high-end arrays, the Symms, DS8000s and USPs, the ones not using FCAL, the performance drives are FC ones.
This internal modular array FC hardware infrastructure is positioned to be swept away by an oncoming sea of SAS backplanes. Already HDS' new AMS, EMC's AX, Data Domain deduping arrays and, we suspect, HP's coming ExDS9100, use SAS drive shelf to controller interconnects.
Many FCALs run at 2Gbit/s, with some, such as NetApp's, using 4Gbit/s FCAL. SAS 1 at 3Gbit/s beats 2Gbit/s FCAL, and 6Gbit/s SAS 2 is coming, Xyratex having already released a 6Gbit/s SAS array. By using up to 32 point-to-point SAS links, HDS' AMS has much more bandwidth than the previous FCAL scheme. SAS is simply going to be faster than FCAL, support more links and be more affordable.
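The arithmetic behind that claim is worth spelling out: an FCAL loop is shared by every drive on it, while SAS links are point-to-point, so per-link speeds multiply. A back-of-envelope sketch, using the figures above (link counts are from the article; encoding overhead and real-world contention are ignored as simplifying assumptions):

```python
# Back-of-envelope bandwidth comparison: shared FCAL loop vs point-to-point SAS.
# Figures come from the article; encoding overhead is deliberately ignored.

def aggregate_gbps(links: int, gbps_per_link: float) -> float:
    """Aggregate raw bandwidth of a set of point-to-point links."""
    return links * gbps_per_link

# FCAL is a shared loop: every drive contends for one 4Gbit/s path.
fcal_total = aggregate_gbps(1, 4.0)

# SAS is switched point-to-point: each of the AMS's 32 links runs at 3Gbit/s.
sas_total = aggregate_gbps(32, 3.0)

print(f"FCAL loop: {fcal_total:.0f} Gbit/s, shared by all drives")
print(f"SAS links: {sas_total:.0f} Gbit/s aggregate")
print(f"Ratio:     {sas_total / fcal_total:.0f}x")
```

Even against a 4Gbit/s loop, 32 slower SAS links win by more than an order of magnitude in aggregate - and SAS 2 doubles the per-link figure.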
Apparently, there will never be 8Gbit/s FC drives - instead, performance drives will swap over to 6Gbit/s SAS 2. The internal modular performance array future looks like these SAS drives connected by SAS links to a controller complex. Eventually, it's assumed, solid state drives (SSDs) will become affordable and, in a few years' time, both modular and monolithic drive arrays will hold all the data that was on fast HDDs on SSDs - but they'll still talk SAS and use a SAS linkage to the controller.
Also - joy - a SAS controller can look after bulk data-storing SATA drives. The conclusion to this logic? Exit FCAL and FC HDDs, and FC-interface SSDs.
In the network connecting block-level arrays to servers there is the current FC infrastructure of HBAs, cables, switches and directors. But FCoE layers the Fibre Channel protocol on top of Ethernet, and Ethernet-promoting Cisco and Cisco-chasing Brocade are both pushing it hard, each thinking it can use FCoE to grow against the other - together with HBA vendors chasing each other and survival, meaning Emulex and QLogic, mostly.
They are all pushing the story that Ethernet can be lossless and have predictable latency in its new Data Centre Ethernet (DCE) form, and that 10Gbit/s Ethernet has plenty of speed to run the traffic from multi-core, multi-socket servers crammed with virtual machines demanding instant access to desktop boot images, database records, etc.
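The FCoE trick itself is conceptually simple: a complete, unmodified Fibre Channel frame rides as the payload of an ordinary Ethernet frame with its own EtherType (0x8906), so FC traffic can cross a lossless Ethernet switch. The sketch below illustrates the layering; the SOF/EOF byte values and header sizes are illustrative simplifications - the real encapsulation is defined in the T11 FC-BB-5 standard.

```python
# Simplified sketch of FCoE encapsulation: an FC frame wrapped in an
# Ethernet frame with EtherType 0x8906. Field values are illustrative;
# see the T11 FC-BB-5 standard for the real layout.
import struct

FCOE_ETHERTYPE = 0x8906

def encapsulate_fcoe(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a Fibre Channel frame in an Ethernet frame with a minimal FCoE header."""
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)  # 14 bytes
    fcoe_header = bytes(13) + b"\x36"   # version/reserved bits + SOF byte (illustrative)
    fcoe_trailer = b"\x41" + bytes(3)   # EOF byte + reserved padding (illustrative)
    return eth_header + fcoe_header + fc_frame + fcoe_trailer

# A zeroed 24-byte FC frame header stands in for real FC traffic.
frame = encapsulate_fcoe(b"\x0e\xfc\x00\x00\x00\x01",   # assumed destination MAC
                         b"\x02\x00\x00\x00\x00\x02",   # assumed source MAC
                         bytes(24))
print(len(frame))  # 14 + 14 + 24 + 4 = 56 bytes
```

The point is that nothing FC-specific survives on the wire except the payload: to the switch it is just another Ethernet frame, which is why the DCE losslessness story matters so much - classic Ethernet is allowed to drop frames, and FC is not.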
So Cisco and QLogic reckon that, gradually, an FCoE ring will grow like an onion skin around the physical FC fabric core. Servers with converged network adapters (CNAs) will send their FC and FCoE messages out to FC fabrics or an Ethernet switch and on to FC-interfaced storage devices or on to native FCoE-interfaced devices.
NetApp has already released a native FCoE interface FAS and V series capability, using QLogic target CNAs. With that, why should an FCoE-using CNA bother with sending messages via the FC fabric when it can go Ethernet all the way to the storage array?
So we'll see end-to-end FCoE communications passing wholly along Ethernet, bypassing the FC fabric core. We'll see iSCSI not growing up into physical FC. Why bother? There will be FCoE ready and waiting to run on the same Ethernet base.
The existing physical FC fabric will become a static and then shrinking core, wrapped inside progressively more FCoE links that go around it and not through it. Eventually that core will diminish as SAN infrastructure functions transfer to the Ethernet FCoE infrastructure. We might see IBM's SVC getting a link to Ethernet switches to control FCoE-attached storage arrays.
We might see HDS' USP-V getting FCoE connectivity. It's this sort of thing that, the argument goes, will stop the physical FC fabric expanding and then start it contracting.
The conclusion to this argument? Physical FC fabrics will wither away - perhaps over the next two years, perhaps over five to ten. The number of physical FC ports will start to stabilise and then decrease. That will be the sign.
SAS and Ethernet
What about FCoE itself? Its raison d'être is to virtualise FC fabrics by running FC over Ethernet, to separate the FC software from the FC hardware. With no physical FC underpinning, will software FC survive? Should it?
If FC becomes just a network software layer used to connect servers to storage arrays with an internal SAS interface to SAS SSDs or HDDs, then could you extend SAS over Ethernet, remove the SAS distance limitation, and so remove the need for FCoE? That could simplify storage networking still further. Have a look at this patent and ponder on FCoE being a stopgap, a way station on the road to SASoE.
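To see why SASoE is at least conceivable, note that the FCoE encapsulation trick works for any frame-based protocol. This is a purely hypothetical sketch of the idea the article floats - no SASoE standard exists, so the EtherType below is the IEEE "local experimental" value (0x88B5), not a real assignment, and the frame handling is invented for illustration:

```python
# Purely hypothetical "SASoE" sketch: carry a SAS SSP frame inside an
# Ethernet frame, just as FCoE carries FC frames. No such standard exists;
# 0x88B5 is the IEEE local-experimental EtherType, used here as a stand-in.
import struct

EXPERIMENTAL_ETHERTYPE = 0x88B5  # stand-in for a (nonexistent) SASoE assignment

def encapsulate_sasoe(dst_mac: bytes, src_mac: bytes, ssp_frame: bytes) -> bytes:
    """Wrap a SAS SSP frame in an Ethernet frame (hypothetical encapsulation)."""
    return dst_mac + src_mac + struct.pack("!H", EXPERIMENTAL_ETHERTYPE) + ssp_frame

# A zeroed 24-byte SSP frame header stands in for real SAS traffic.
frame = encapsulate_sasoe(b"\x02" + bytes(5), b"\x02" + bytes(4) + b"\x01", bytes(24))
print(len(frame))  # 14-byte Ethernet header + 24-byte SSP frame header = 38
```

Once the SAS frame is riding on Ethernet, the usual short-cable SAS distance limit no longer applies - which is exactly the property that would make FCoE look like a stopgap.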
Is physical Fibre Channel really being mugged, being walked off the storage stage by two thugs called FCoE and SAS? Will FCoE then be next? Is this all fiction? I reckon the odds on physical FC death are one in four with FCoE death being much less likely, say one in 20. What do you think?
UPDATE. As a Reg reader pondered, the Mike Ko named in the SAS over Ethernet patent is the Mike Ko that works at IBM's Almaden Research Establishment. Well, well. ®