The NGMI and other concepts are discussed in a February 2016 video by Packard:
“The storage part will have four banks of memory (each with 1TB), each with its own NGMI FPGA,” we’re told. “A given SoC can access memory elsewhere without involving the SoC on the node where the memory resides – the NGMI bridge FPGAs will talk to their counterpart on the other node via the photonic interface. Those FPGAs will eventually be replaced by application-specific integrated circuits (ASICs) once the bugs are worked out.”
Each node’s persistent memory is aggregated into a central pool of fabric-attached memory (FAM), which is not cache-coherent.
“Eight of these nodes can be collected up into a 5U enclosure, which gives eight processors and 32TB of memory. Ten of those enclosures can then be placed into a rack (80 processors, 320TB) and multiple racks can all be connected on the same 'fabric' to allow addressing up to 32 zettabytes (ZB) from each processor in the system.” That’s the first Machine instantiation we can expect.
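The capacity figures quoted above follow from simple multiplication; a quick sketch of the arithmetic, using binary terabytes:

```python
# Back-of-the-envelope capacity maths for the first Machine build-out,
# from the figures quoted above: 4TB of memory per node, eight nodes
# per 5U enclosure, ten enclosures per rack.
TB = 2 ** 40

node_mem = 4 * TB
enclosure = 8 * node_mem      # eight nodes per enclosure
rack = 10 * enclosure         # ten enclosures per rack

print(enclosure // TB, "TB per enclosure")
print(rack // TB, "TB per rack")
```

Running this prints 32TB per enclosure and 320TB per rack, matching the quoted numbers.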
Packard says that, with this 320TB rack: “I can support problems today you can’t even really contemplate putting in a single address space.”
The computational units (SoCs) can, in theory, be heterogeneous. The persistent memory becomes a kind of communications fabric between heterogeneous processors – big-endian and little-endian; it doesn’t matter. They can all work on the same data at once. We are then to envisage multiple racks, hundreds of them, with thousands of machines taking over a data centre.
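To see why byte order is worth mentioning at all when heterogeneous processors share raw memory, here is a minimal illustration (a hypothetical example, not The Machine's actual data layout – the point is that software on the fabric must agree on a byte order):

```python
import struct

# The same four bytes sitting in shared memory decode to different
# integers depending on whether the reading processor is little-endian
# or big-endian.
raw = bytes([0x01, 0x00, 0x00, 0x00])

little = struct.unpack("<I", raw)[0]   # little-endian read
big = struct.unpack(">I", raw)[0]      # big-endian read

print(little, big)   # 1 versus 16777216
```

The shared data is the same; only the interpretation differs, which is the sort of detail a fabric-attached memory software layer has to paper over.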
The Machine – a Packard video node schematic diagram.
Since most processors can only address 256TB – IBM’s being an exception – the 320TB of memory-accessed storage in a rack represents an addressing problem, which HPE is solving with Fabric-Attached Memory Mapping. The company is constructing a 75-bit address space, hence the 32ZB figure.
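The arithmetic behind those two numbers is straightforward: the 256TB ceiling is the 48-bit virtual address width of conventional x86-64 processors, and 75 bits gives the 32ZB space (reading “ZB” in the binary sense, as the article does):

```python
# Address-width arithmetic behind the 256TB and 32ZB figures.
TB = 2 ** 40   # binary terabyte
ZB = 2 ** 70   # binary zettabyte, as "ZB" is used loosely above

conventional = 2 ** 48   # 48-bit virtual addresses: less than one rack
fabric_space = 2 ** 75   # HPE's 75-bit fabric-attached address space

print(conventional // TB, "TB")   # the conventional ceiling
print(fabric_space // ZB, "ZB")   # the fabric-wide space
```

So a single 320TB rack already overflows what a conventional processor can map, while the 75-bit space leaves room for many racks on one fabric.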
We haven’t even started talking about software yet. There is much more on the Packard video which is well worth the 45 minutes needed to watch it.
HPE is going to reveal a lot more detail on The Machine's state and progress at HPE Discover in London on November 28.
A series of HPE Labs blogs is available to provide more information about The Machine's backstory.
The Machine is real – well, it will be if HPE delivers the prototype and developers can see it, touch it, code with it, in HPE offices and customer alpha test sites.
Watching the Packard video you get a sense of the solidity of HPE’s Machine initiative and the momentum building behind it. Developers with projects that the Machine could be good for should get in touch with HPE. And HPE, if it has any sense, will love them to bits. Developer evangelism is going to be key.
Those developers have got to create applications that will completely blow away any existing standard server architecture system. The Machine has to be an order of magnitude better, a quantum leap beyond what currently exists, for it to succeed and repay the tens of millions of dollars HPE has invested in it.
It cannot be Superdome mark 3. This monstrous sucker has to rip the competition’s socks to shreds and blast their feet to ribbons; otherwise it could falter and fail, much as Intel’s 3D XPoint memory is doing at the moment.
El Reg thinks that the first Machine prototype will be announced in the next three to nine months. Good luck to all involved. ®