Blade Network kicks RackSwitches up to 40 Gigabits
Single ASIC pumps 1.28 terabits
Blade Network Technologies kicked off the inevitable move toward 40 Gigabit Ethernet data-center networking on Thursday, rolling out the first 40 Gigabit switches aimed specifically at enterprises and their relatively flat networks.
With the RackSwitch G8264, Blade Network throws down the gauntlet to its competitors, claiming a significant lead in delivering 40 Gigabit uplinks into the top-of-rack switch market. "Others will come over the next few quarters," says Dan Tuchler, the company's vice president of product management. "We're just happy to get there first."
Tuchler concedes that there are high-end routers and core switches out in the market that have 40 Gigabit uplinks, but says that they aren't aimed at the data-center customers Blade Network chases.
Those other routers and switches are huge, cost lots of dough, and have all kinds of expansion ports for firewalls, intrusion-protection systems, and other features that companies creating cloudy infrastructure to run their workloads don't need in a switch. And besides, on those glitzy routers and switches aimed at service providers and telcos, a single 40 Gigabit port costs in excess of $10,000. Not exactly a low price.
Extreme Networks has been showing off 40 Gigabit uplink modules for its BlackDiamond and Summit switches since April, but it's not clear if they are shipping yet. The plan was to get 40 Gigabit uplink modules into the field to trial customers during the third quarter of this year with volume shipments following after that. If Blade Network beats Extreme Networks to market, it probably won't be by much.
Blade Network is the privately held spinoff of the bankrupt Nortel Networks, and is in the process of being acquired by IBM for a rumored $400m.
The blade and rack switches sold by Blade Network are to become the foundation of a revived networking business at IBM — if someone doesn't swoop in and try to steal the company away. Big Blue exited the networking business 11 years ago, forfeiting its business to Cisco Systems in exchange for a lucrative reseller agreement. IBM now partners with Cisco, Juniper Networks, Voltaire, Brocade Communications, and Blade Network for switches, and Cisco and Oracle have entered the server space with their own integrated networking. IBM needs to have a counterpunch to the integrated systems that HP, Cisco, and Oracle are peddling — hence the deal to buy Blade Network.
The RackSwitch G8264 has a single ASIC to deliver 1.28 terabits of aggregate bandwidth, something that Tuchler says other vendors can only do with multiple ASICs. He would not divulge who Blade Network has partnered with for its Ethernet silicon, but word will get around soon enough.
The two other Blade Network switches — the 24-port G8124 10 Gigabit Ethernet switch and the 48-port G8052 (which has 48 Gigabit Ethernet downlinks and four 10 Gigabit uplinks) — are also based on single-ASIC designs.
Tuchler claims that Blade Network is the first vendor to break the terabit barrier with a single chip. More precisely, however, that unnamed chip maker has broken the barrier. I foolishly guessed that it was Foundry Networks, now part of Brocade, but the obvious choice, as one reader pointed out, is Fulcrum Microsystems.
Blade Network's top-of-rack RackSwitch G8264
That 48-port downlink and four-port uplink ratio is the sweet spot in the top-of-rack Ethernet switch racket, says Tuchler, all the way back to the days before the dot-com boom. The killer product in the dot-com era was a 1U switch with 48 100Mbit/sec (or Fast Ethernet) downlinks with four (then blazingly fast) Gigabit Ethernet uplinks. "These sold like crazy for years and years," says Tuchler, because you could plug all the servers in a rack plus a few other crazy items into the downlinks.
The same ratio works well today with blade and rack servers — but in an increasingly virtualized server world, a rack of physical servers with 10 Gigabit links to the rack switch can swamp the 10 Gigabit uplinks. Companies have been trying to aggregate 10 Gigabit links or daisy-chain rack switches together to get enough bandwidth on the uplink side, but what they really want and need are fatter uplinks.
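To see why those fatter uplinks matter, here is a back-of-the-envelope oversubscription calculation — a hypothetical sketch using the 48-downlink, four-uplink ratio described above, not a figure quoted by Blade Network:

```python
DOWNLINKS = 48          # 10 Gigabit server-facing ports on the rack switch
DOWNLINK_GBPS = 10

def oversubscription(uplinks: int, uplink_gbps: int) -> float:
    """Ratio of total downlink bandwidth to total uplink bandwidth."""
    return (DOWNLINKS * DOWNLINK_GBPS) / (uplinks * uplink_gbps)

# Four 10 Gigabit uplinks: 480 Gb/s of servers behind 40 Gb/s of uplink
print(oversubscription(4, 10))   # 12.0 -> 12:1 oversubscribed

# Four 40 Gigabit uplinks: 480 Gb/s of servers behind 160 Gb/s of uplink
print(oversubscription(4, 40))   # 3.0 -> a far more comfortable 3:1
```

With 10 Gigabit uplinks, a fully loaded rack can offer the core twelve times more traffic than the uplinks can carry; moving to 40 Gigabit uplinks cuts that to 3:1.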
One wonders whether 40 Gigabit will be enough, and how long it will be until companies clamor for 100 Gigabit uplinks in their switches.
The RackSwitch G8264 has 48 10 Gigabit downlink ports that support SFP+ connections based on either copper or fiber cables; these links can also run at the slower Gigabit speeds if that's what is on the other end of the wire.
The switch has four 40 Gigabit uplinks based on QSFP+ cables. The ASIC supports both Layer 2 and Layer 3 of the Ethernet stack. If you are creating a flat network and want to lash together a whole bunch of machines at the Layer 2 level (where virtual machines and HPC clusters like to play), you can buy some adapter cards that split each 40 Gigabit uplink into four 10 Gigabit ports, giving you a total of 64 ports to link the servers and adjacent switches together to create the cluster.
The switch comes in AC or DC power versions, and like all other RackSwitches can be set up to blow air front to back or back to front, which allows you to keep your cold aisles cold no matter which way you mount the switch in the rack. The switch is rated at 375 watts, which works out to an average of 5.86 watts per port if you have it set up with all 64 10 Gigabit ports. It's not clear how much juice is needed to drive a 40 Gigabit port, since the electronics are all on the same ASIC.
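That per-port figure falls straight out of the numbers in the paragraph above. A quick sketch of the arithmetic, assuming the switch draws its full 375-watt rating spread evenly across all 64 logical 10 Gigabit ports:

```python
RATED_WATTS = 375
# 48 fixed 10GbE downlinks, plus four 40GbE uplinks each split
# into four 10GbE ports via the breakout adapters
PORTS = 48 + 4 * 4       # 64 logical 10 Gigabit ports

watts_per_port = RATED_WATTS / PORTS
print(round(watts_per_port, 2))  # 5.86
```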
The new Blade Network switch supports the Transparent Interconnection of Lots of Links (TRILL) standard, which is an overlay on the Ethernet protocols that allows for multipathing between switches and servers, but which ensures that Ethernet packets can't get caught in loops and bring the network down. The TRILL setup is being championed as an alternative to spanning tree networks, which require a hierarchy of switches to allow a large number of machines to communicate. A flat Layer 2 network supporting TRILL is faster and cheaper, says Tuchler.
The RackSwitch G8264 switch also supports Blade Network's VMready feature, which allows for network connections for a virtual machine to be preserved automatically as the VMs are live-migrated around a pool of servers. The switch supports Fibre Channel over Ethernet (FCoE) protocols to converge storage and server network traffic onto the same device, as well as IBM's Virtual Fabric overlays for virtualizing its BladeCenter blade servers — and perhaps soon, all IBM servers.
The G8264 switch is available now. It costs $22,500, which works out to $350 per 10 Gigabit downlink port plus $1,400 a pop for each 40 Gigabit uplink port. ®