Need speed? Then PCIe it is – server power without the politics

No longer for nerds and HPC geeks

Thinking outside the box

Success in bringing PCIe outside the box will probably depend largely on the people involved, and on their ability to convince us to invest, collectively, in the changes to our hypervisors, operating systems and applications necessary to really take advantage of PCIe as an inter-node interconnect.

The biggest problem I see currently is that eliminating communication layers between nodes – Ethernet, the TCP/IP stack and so forth – is a deeply nerdy endeavour. The benefits are clear, but it's spectacularly complicated and the various minds behind the differing approaches are understandably proud of what they have managed to achieve.
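To make the stack in question concrete, here is a minimal sketch of the conventional path – one TCP send on Linux, with the peer address and port invented purely for illustration. It's a single call from user space, yet the payload still has to cross the socket layer, TCP, IP, the Ethernet driver and the NIC before a byte leaves the box, then climb the same stack in reverse at the far end.

    /* The layered path that PCIe-as-interconnect schemes want to
       eliminate. Peer address and port are placeholders. */
    #include <stdio.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);  /* user space -> kernel */
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in peer = {
            .sin_family = AF_INET,
            .sin_port   = htons(5000),             /* placeholder port */
        };
        inet_pton(AF_INET, "192.0.2.10", &peer.sin_addr); /* placeholder peer */

        if (connect(fd, (struct sockaddr *)&peer, sizeof(peer)) < 0) {
            perror("connect");
            return 1;
        }

        /* One send() = a copy into kernel buffers, TCP segmentation,
           IP routing, Ethernet framing and NIC DMA - then the whole
           stack again, in reverse, on the receiving node. */
        const char msg[] = "hello";
        send(fd, msg, sizeof(msg), 0);
        close(fd);
        return 0;
    }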

That can – and does – lead to blind spots regarding market realities. The CTO of one of the companies involved in extending PCIe outside the box recently said he didn't view two companies that use the same bus in pursuit of different goals as competitors. In one sense, he's right: if the technological approaches are sufficiently different, the result will be two very different products that simply won't compete.

The question, of course, is whether the market as a whole cares about the subtle engineering differences, or whether it cares only about the results. Fibre Channel over Ethernet (FCoE) and iSCSI are two very different technologies running over the same transport (Ethernet). Despite this, they serve the same ultimate purpose: connecting storage to nodes. iSCSI won, and FCoE is basically dead.

VHS versus Betamax – you knew I had to work that in here – is another example. So is virtualisation versus containerisation. None of it really matters to the people with the pocketbooks; what matters is the results. We want bigger computers that go faster.

We can do that by lashing multiple nodes together into a hybrid mainframe-supercomputer thing, à la Numascale, that runs great big workloads, or we can lash nodes together into a cluster that runs a bunch of really tiny workloads, à la virtualisation and cloud, then use load balancers and other tricks to slice requests up across VMs.

They'll always be distinct on some level, but where it really matters – where the big money is spent – the underlying tech doesn't matter. Only the results.

PCIe is as close to the CPU as you can get in a modern PC without trying to pull HyperTransport or QPI out, and history tells us that involves politics doomed to failure. But PCIe is everywhere. If the network layers can be made to go away and computers can interconnect directly through PCIe, then the lines between "a bunch of nodes working in unison" and "creepy hive-mind frankencomputer" start to blur.

A supercomputer made of nodes bonded by PCIe brings us back to a world where individual CPUs can easily talk directly to the RAM of another node. Not quite as easily as if HyperTransport or QPI were extended, but a lot more quickly and easily than going through layers of networking.
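To make that concrete, here is a minimal sketch of what load/store access to a peer node's RAM can look like on Linux, assuming a PCIe non-transparent bridge (or a similar device) has already been configured so that one of its BARs windows onto the remote node's memory. The PCI device address and the window size are hypothetical.

    /* Hedged sketch: map a (hypothetical) PCIe bridge BAR that has
       been set up to window onto a peer node's RAM, then access it
       with ordinary loads and stores. */
    #include <stdio.h>
    #include <stdint.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>

    #define WINDOW_SIZE (1UL << 20)   /* assume a 1 MiB window */

    int main(void)
    {
        /* resource0 is the BAR of the assumed bridge device; the PCI
           address 0000:03:00.0 is a placeholder. */
        int fd = open("/sys/bus/pci/devices/0000:03:00.0/resource0", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        volatile uint32_t *remote = mmap(NULL, WINDOW_SIZE,
                                         PROT_READ | PROT_WRITE,
                                         MAP_SHARED, fd, 0);
        if (remote == MAP_FAILED) { perror("mmap"); return 1; }

        /* No sockets, no TCP/IP: a plain store that the PCIe fabric
           carries straight into the other node's memory... */
        remote[0] = 0xcafef00d;

        /* ...and a plain load back out of it. */
        printf("read back: 0x%08x\n", (unsigned)remote[0]);

        munmap((void *)remote, WINDOW_SIZE);
        close(fd);
        return 0;
    }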

PCIe matters. Whether you're just plugging in a graphics card or a RAID controller or you are seeking to build the next-generation supercomputer, in the modern computer it's the one interconnect that binds everything together. ®
