Waiting to exascale: Now that IBM has Summit-ed, who's to node what comes next?

Big Blue's rig with Nvidia grunt looks to be first truly exascale system

Comment IBM's 200 petaFLOPS (200,000 trillion calculations per second) Summit supercomputer was unveiled at Oak Ridge National Laboratory last Friday and, scaled up, has proven itself capable of exascale computing in some applications.

That's 1,000 petaFLOPS or one quintillion floating point operations per second.

In comparison, the Cray/Intel Aurora supercomputer project was specced at 180 petaFLOPS across 50,000 x86 nodes, interconnected with 200Gbit/s Omni-Path 2.

These nodes were supposed to be augmented with Intel's Knights Hill version of its multicore Phi co-processor. However, the Knights Hill development was canned in November 2017. Aurora has given way to Aurora 2, due for delivery in 2021, which should be an exascale system with redesigned Phi processors.

The US Department of Energy is part-funding the development of exascale computers through its Coral-2 programme (Coral being "Collaboration of Oak Ridge, Argonne and Livermore", three national labs). The original Coral programme generated the Aurora and Summit systems, the latter of which was kicked off by IBM in 2014. Cray and Intel were awarded $200m in April 2015 to build Aurora. Though Aurora was due to be delivered this year, the failed co-processor design meant that wasn't possible.

Six bidders – AMD, Cray, HPE, IBM, Intel and Nvidia – were invited by the DoE to respond to a Coral-2 request for proposals and some or all did so by May 24. The individual bidders have not been revealed and the bids are being evaluated.

There are three server/HPC system builders – Cray, HPE and IBM – and three processor/co-processor vendors – AMD, Intel and Nvidia.

We may assume Cray/Intel are bidding for the Aurora follow-on (called A21). We have previously looked at aspects of a possible HPE exascale system, suggesting an HPE/AMD partnership might be feasible.

The Summit reveal provided hints about an IBM exascale system and that's what we're going to dig into.

Summit nodes

Summit has just 4,608 nodes, each more powerful than Aurora's planned x86 ones. As Nicole Hemsoth pointed out at our sister publication, The Next Platform, the system also has far fewer nodes than the 18,688 of its Oak Ridge neighbour and previous US supercomputing speed record-holder, Titan, but nevertheless "deliver[s]... 5X to 10X more performance while only increasing power consumption from nine to 13 megawatts."

Each node, basically an AC922 server, has two 22-core 3.1GHz Power9 CPUs and six Tesla V100 GPUs, connected by NVLink 2. There is 1.6TB of memory per node.


The nodes are interconnected with Mellanox dual-rail EDR 100Gbit/s InfiniBand links, 200Gbit/s per node.

Summit has more than 10PB of main memory in total and uses IBM's Spectrum Scale filesystem for storage, initially with about 3PB of capacity and 30GB/sec of bandwidth. These numbers will rise to 250PB, with 2.5TB/sec sequential and 2.2TB/sec random IO. Peak power usage is 13MW.

HPE has mentioned that exascale computers could have tens of thousands, if not hundreds of thousands, of nodes. That wouldn't be necessary if Summit could simply be scaled up to an exaFLOPS machine – a fivefold increase in performance.

That would mean 23,040 nodes using the current Power9/6xGPU node setup.

But Nvidia has moved on, having announced its 2 petaFLOPS HGX-2 GPU grunt box with 16 Tesla V100s, the latest GPU architecture, connected using six NVSwitches. And there may well be a follow-on to the Volta GPU, with the Ampere and Turing names floating around.

IBM is moving on too, developing a POWER10 CPU due to arrive in 2020, which could have 48 cores and support a faster NVLink 3 interconnect. The Coral-2 systems are meant to be deliverable from 2021.

Mellanox is developing NDR 400Gbit/s InfiniBand switching interconnect.

Could Spectrum Scale have its performance pushed higher? There's no reason to doubt that.

Join the dots

Let's suggest that a scaled-up Summit node, using POWER10 CPUs, souped-up Nvidia GPUs with faster NVLink, NDR InfiniBand internode links, and a faster, larger Spectrum Scale filesystem, could provide a pathway to exascale with fewer than 23,040 nodes.

If we scale up per-node performance 2.5x using these technologies, then 9,216 such nodes would get us to 1 exaFLOPS. There's a 40MW limit on power consumption, and scaling up Summit's 13MW power draw by 2.5x gets us to 32.5MW (a simplistic extrapolation, since power usage is broadly proportional to node count and performance when the node technology is the same). An enticing prospect.
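For the record, the back-of-envelope arithmetic above can be sketched in a few lines of Python. The inputs are the Summit figures quoted in this article; the 2.5x per-node uplift is our speculation, not a vendor roadmap number:

```python
# Back-of-envelope exascale arithmetic, using the figures quoted above.
# The 2.5x per-node uplift is speculative, not a roadmap number.

SUMMIT_NODES = 4608
SUMMIT_PFLOPS = 200.0        # peak performance, petaFLOPS
SUMMIT_POWER_MW = 13.0       # peak power draw, megawatts
TARGET_PFLOPS = 1000.0       # 1 exaFLOPS

scale = TARGET_PFLOPS / SUMMIT_PFLOPS          # 5x more performance needed

# Same node technology: five times the nodes.
nodes_same_tech = int(SUMMIT_NODES * scale)    # 23,040 nodes

# Next-generation nodes assumed to be 2.5x faster per node.
UPLIFT = 2.5
nodes_new_tech = int(SUMMIT_NODES * scale / UPLIFT)   # 9,216 nodes

# Simplistic power extrapolation: scale Summit's draw by the same 2.5x,
# which lands under the 40MW ceiling.
power_mw = SUMMIT_POWER_MW * UPLIFT            # 32.5 MW

print(nodes_same_tech, nodes_new_tech, power_mw)
```

Swap in your own uplift figure and the node count and power draw move accordingly; the point is how sensitive the exascale picture is to that one per-node assumption.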

There are other technologies that could help, such as high-bandwidth memory and storage-class memory. As long as these don't need application software rewrites, they could go into the mix too.

HPE's Machine-based exascale technology set is adventurous and exciting – for HPE. A scaled-up Summit is basically more of the same – less adventurous, perhaps, but possibly a safer bet.

Cray/Intel's Aurora A21 looks to be as risky as HPE's system. Intel's co-processor development has stalled (perhaps A21 will use its in-development GPUs), and Xeon's under-performance compared to POWER10 would dictate many tens of thousands of nodes built on unproven co-processor/GPU technology.

Big Blue could stalk the processor design halls in triumph: Yeah, Xeon. x86? More like ex-86. Feel the POWER, etc. etc. ®
