Europe's ExaNeSt project is looking behind the couch for ten million ARM processors, to support its exascale supercomputing project.
As well as a bucket of processing, what it's calling its “straw-man prototype” will use liquid cooling, embed Flash memory in the processor fabric, and rely on “innovative, fast interconnects” to avoid congestion.
The prototype won't get anywhere near millions of processors: it will have 1,000 cores built from 64-bit ARM-based Xilinx Zynq UltraScale+ processors, with a 16 Gbps interconnect and 16 GB of low-power memory on each board.
The project had its kick-off meeting in December 2015, announcing European Commission funding of €8.5 million to support its three-year design project.
The project expects to eventually need enough cooling to handle as much as 240 kW of computing power per rack, but in this January presentation, Manolis Katevenis and Nikolaos Chrysos of the University of Crete nominated interconnect as the big roadblock for exascale machines.
That's because the energy cost of moving data around – expressed in picojoules per bit – is higher for interconnect than for processing, and is still rising.
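To get a feel for why picojoules per bit matter at this scale, here's a back-of-envelope sketch. The energy figures are illustrative assumptions for the sake of the arithmetic, not numbers from the ExaNeSt project:

```python
# Rough sketch of why data movement dominates at exascale.
# All energy figures below are assumed for illustration only.

EXAFLOP = 1e18                # target: 1e18 floating-point ops per second
FLOP_ENERGY_PJ = 10           # assumed picojoules per FLOP
LINK_ENERGY_PJ_PER_BIT = 25   # assumed picojoules per bit moved over the interconnect

# Pessimistic simplifying assumption: every FLOP drags one 64-bit
# operand across the interconnect.
compute_watts = EXAFLOP * FLOP_ENERGY_PJ * 1e-12
interconnect_watts = EXAFLOP * 64 * LINK_ENERGY_PJ_PER_BIT * 1e-12

print(f"compute:      {compute_watts / 1e6:.0f} MW")
print(f"interconnect: {interconnect_watts / 1e6:.0f} MW")
```

Even with these toy numbers, the interconnect term comes out orders of magnitude larger than the compute term, which is why the researchers flag data movement, not arithmetic, as the exascale roadblock.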
One of ExaNeSt's partner projects, ExaNoDe, will be designing the ARM-based “microserver HPC” implementations.
Other contributors to the project include EuroServer (inter-processor communications design), Ecoscale (programmable hardware accelerators), Xilinx (FPGAs and communications), Micron (low power memory and storage), and Kaleao (productisation).
ExaNeSt's full list of collaborators is here. ®