The System z11 mainframe announcement date has been set for July 22, as expected, and Rod Adkins, general manager of IBM's Systems and Technology Group, and Steve Mills, general manager of Software Group, will host a shindig in New York debuting Big Blue's so-called "system of systems."
As El Reg already reported, the System z11 machine will sport 96 cores and give about 80 of them over to running either z/OS or Linux in a single system image (rated at around 50,000 aggregate MIPS by our math) or allow them to be carved up into logical partitions with somewhere around 1,100 MIPS per core.
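A quick back-of-envelope check of those figures, as a sketch only: the per-core and aggregate ratings come from our math above, and the gap between them presumably reflects the usual SMP scaling overhead rather than any single official IBM number.

```python
# Back-of-envelope math on the z11 figures cited above (illustrative only;
# the inputs are this article's estimates, not IBM's official ratings).
cores_total = 96
cores_for_os = 80          # roughly 80 cores available to z/OS or Linux
mips_per_core = 1100       # rough per-core rating when carved into logical partitions
aggregate_mips = 50000     # rough aggregate rating for a single system image

# Naive sum of the per-core ratings:
naive_total = cores_for_os * mips_per_core
print(naive_total)

# Effective per-core throughput implied by the aggregate figure:
effective_per_core = aggregate_mips / cores_for_os
print(round(effective_per_core))
```

The naive 88,000 MIPS sum lands well above the ~50,000 aggregate figure, which is what you would expect: per-engine ratings never add up linearly once all the engines are lashed together in one big system image.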
The System z11 machines, variously known as the z11 or zNext as placeholder names, will be called the zEnterprise 196, according to sources familiar with IBM's plans. And yes, that's zEnterprise 196 — without the characteristic slash used in IBM mainframe names since a lot of us were babies and before some of you were born.
As El Reg told you back in May, the new mainframe will sport Power7 blades for running AIX and x64 blades for running Linux and maybe Windows, and hence the name "system of systems" that Adkins has been using. What's not yet clear is how IBM is lashing the Power and x64 blades to the mainframe's central electronics complex.
Some stories out there are talking about the mainframe, Power, and x64 machines sharing memory, but this is nonsense. IBM could, in theory, build a memory appliance that is an analog to a storage area network (SAN) for disk-based storage, allowing these three different servers to each grab a partition of DDR3 memory to run their workloads. A fast bus linking processors to memory could also double as a virtual LAN between the machines, something IBM has done for logical partitions on Power-based servers for years now.
Shared memory across different processor architectures may be something that happens in the future, but as far as I know, the z196 processor books (what we would call cell boards or system boards in other architectures) have one more socket than the existing z10 books, with memory for the mainframe engines implemented on the books themselves.
Here's what one anonymous source told us about the links between the z196 mainframe and its blades:
The new z11 will include a blade enclosure as part of the server. This enables x86 and Power blades to work in the same footprint as the mainframe — running Unix, Windows, Linux etc. The main mainframe unit and the blades will be managed via a shared Hardware Management Console (HMC) — workload management via a single interface. This new hybrid blade architecture has been code-named zGryphon.
Well, that zGryphon description gets us a little closer to understanding what IBM might be up to, but it doesn't really tell us how tightly the blades are coupled to the mainframes and what protocols they are using.
As I said last week, IBM was a big and early adopter of both Fibre Channel and InfiniBand and has used both, in that order, to provide remote I/O capabilities off its system motherboards to remote peripheral devices.
FICON on the mainframe is based on Fibre Channel; the High-Speed Loop interconnect on Power systems is a tweaked Fibre Channel link running at 2Gb/sec (HSL) or 4Gb/sec (HSL-2); and the 12X I/O links that Power5+, Power6, Power6+, and Power7 machines use are based on 20Gb/sec InfiniBand and link back into the GX+ system bus on the Power chips.
The simplest thing for IBM to do is to put a GX+ bus on the z11 engine and use InfiniBand coming right off the engine to talk to the Power and x64 blades. The Power blades already have their own GX+ ports, so IBM could create a private network between z11 and Power engines with existing electronics; it probably would be better to have a switch in the works, however, since x64 processors do not have GX+ ports.
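For scale, here is a rough sketch of the per-link rates of the interconnects mentioned above, converted to bytes per second using the line rates quoted in this article and ignoring encoding and protocol overhead:

```python
# Raw line rates quoted above, in Gb/sec (encoding and protocol overhead ignored).
links = {
    "HSL (tweaked Fibre Channel)":   2,
    "HSL-2 (tweaked Fibre Channel)": 4,
    "12X InfiniBand (GX+ attach)":   20,
}

for name, gbit in links.items():
    gbyte = gbit / 8  # 8 bits per byte; real-world throughput is lower
    print(f"{name}: {gbit} Gb/sec, roughly {gbyte:.2f} GB/sec raw")
```

On those raw numbers, the 12X InfiniBand ports give IBM about five times the headroom of HSL-2, which is presumably part of why the GX+/InfiniBand route looks like the path of least resistance for hooking the blades to the z11 engines.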
No matter how IBM does it, one thing is clear: there is no way, given the security paranoia of mainframe shops, that the network that interfaces the mainframe engines and their associated Power and x64 blades to the outside world will be used to allow Power and x64 blades to talk back to the mainframes.
Big Blue will no doubt create a private network that only z/OS and maybe Linux running natively on the z196 server can see to lash the blades to the mainframes. It seems likely that in addition to this private communication network, IBM will also implement a private management network hanging off the HMC console, which manages logical partitions on both the mainframe and Power iron; that stack is all IBM code. Presumably the HMC will also be able to run a management console for whatever x64 hypervisor IBM anoints. ®