Facebook has taken another step towards kicking the traditional switch vendors out of its network, putting their chassis-based switches in its cross-hairs.
The Social Network™ has created a chassis and fabric for the Wedge switches it let loose on its data centres last year.
Announced at today's Facebook Networking@Scale event, the chassis design – called 6-pack – is going to be contributed to the Open Compute Project.
While breaking the proprietary stranglehold on its network was one reason for Wedge, Facebook also told The Register that avoiding a proprietary layer between the network and the silicon lets it find the deep-down faults that plague its huge networks.
With Wedge in production, the company turned its attention to the modular switches its own switch was passing traffic to.
For the chassis, the original Wedge switch has been stripped down slightly to form the switch modules, each with 16 x 40 Gbps ports facing the front and 16 facing the back, while retaining the ASIC and OCP microserver.
OCP partners will be getting ready to roll out indy variants of the Facebook chassis switch
In the Wedge chassis config, those sit side-by-side to become the line cards of each slot.
The fabric gets shifted off to its own cards – each resembling two line cards facing backwards – with twin ASICs per fabric card providing the non-blocking bandwidth and aggregating the out-of-band management. There are two fabric cards per 6-pack configuration.
The fabric configuration creates a full local mesh, which Facebook says “enables a very simple backplane design”.
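One way to see why a full mesh keeps the backplane simple: every card pairs directly with every other, so the link count is just n(n-1)/2 and no intermediate switching stage is needed. A quick sketch (the card count below is a hypothetical example, not a figure Facebook has published):

```python
def full_mesh_links(n_cards: int) -> int:
    """Number of point-to-point backplane links in a full mesh of n cards."""
    return n_cards * (n_cards - 1) // 2

# Hypothetical example: 8 line cards plus 2 fabric cards
print(full_mesh_links(10))  # → 45 direct links, no central crossbar required
```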
That delivers 640 Gbps to the front and the same to the back – or, in an aggregation configuration, all 1.28 Tbps can face backwards.
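The arithmetic behind those figures is straightforward – 16 ports at 40 Gbps per side:

```python
ports_per_side = 16
gbps_per_port = 40

front = ports_per_side * gbps_per_port   # 640 Gbps facing the front
back = ports_per_side * gbps_per_port    # 640 Gbps facing the back
total_tbps = (front + back) / 1000       # 1.28 Tbps in aggregate

print(front, back, total_tbps)  # → 640 640 1.28
```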
6-pack's fabric: on-board microservers let it be 'managed like a server'
In the video accompanying Facebook's blog post, Omar Baldonado reiterates Facebook's focus on “scalability, flexibility and feature development speed”.
Baldonado says Wedge was the natural starting point because the top of rack was “the simplest place to insert new software and hardware into the network”.
In Facebook's implementation, there's a fan tray for each card and, of course, a modular power system.
In terms of port count and throughput, one 6-pack is a long way from (say) a Cisco Nexus 9000, fully configured with 576 ports of 40 Gbps and a maximum throughput of 60 Tbps – which will make the in-service deployment of the switches interesting to watch.
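For a sense of the gap, the raw front-panel line rate of those 576 ports works out as below (the 60 Tbps figure is Cisco's quoted switching capacity, which presumably counts more than front-panel line rate):

```python
nexus_ports = 576
port_gbps = 40

line_rate_tbps = nexus_ports * port_gbps / 1000  # 23.04 Tbps of front-panel capacity
six_pack_tbps = 1.28                             # one 6-pack's aggregate

print(line_rate_tbps / six_pack_tbps)  # → 18.0, i.e. roughly 18 6-packs' worth of ports
```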
Facebook discusses the 6-pack in this blog post. ®