Michael Dell heralds supercomputing fourth wave

Just like the third wave - with more marketing


SC08 The annual Supercomputing 2008 trade show kicked off this morning in Austin, Texas, with a sales call keynote by local billionaire and sometime HPC player, Michael Dell. As chairman and once again chief executive officer of a company that's trying to make a more substantial run at the HPC arena, he can be forgiven (perhaps) for giving a keynote address that at times seemed to be more of a commercial for his company than a revelation about HPC (as supercomputing is now called).

The people who got up early to see the keynote here in Austin probably won't be so forgiving, though. Personally, I would rather the SC08 staff had booted up the 20-year-old keynote by Seymour Cray from the original Supercomputing 1988 event, since Cray, the father of supercomputing, gave very few interviews in his life.

That said, Dell did make a few good points in his keynote, and he did outline, albeit somewhat thinly, what this next wave of supercomputing might look like.

He started off with an interesting bit of data, showing just how far mankind has to go to build a power-efficient, easily-programmed, redundant supercomputer. The human brain, Dell said, has some 100 billion neurons, each with some 1,000 or so synapses, each running at around 200 cycles per second. When you do the math, that's around 20 petaflops of raw computing performance, which if it could be built today - and it can't; we have just broken through the petaflops barrier - would cost an estimated $3.6bn. Here's the catchy bit. "The human brain uses about 20 watts of energy, so we evidently still have a long way to go," quipped Dell.
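For the curious, the arithmetic behind that 20 petaflops figure is straightforward. Here is a minimal sketch in Python using the numbers Dell quoted (the brain does not actually do floating-point math, so treat it as a loose analogy rather than a benchmark):

    # Back-of-envelope estimate behind the 20 petaflops figure
    neurons = 100e9              # ~100 billion neurons
    synapses_per_neuron = 1_000  # ~1,000 synapses each
    firings_per_second = 200     # ~200 cycles per second

    ops_per_second = neurons * synapses_per_neuron * firings_per_second
    print(ops_per_second / 1e15, "petaflops")  # -> 20.0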

Later in the Q&A session, an attendee asked Dell what it would take to simulate the human brain - a bizarre question to ask the CEO of a PC and server maker and expect a sensible answer - and Dell deflected it, saying he was not suggesting that we should be simulating the brain (there are people working on this, of course, and deploying a lot of computing power to do it).

He said that what he meant to imply was that HPC clusters are not terribly efficient compared to mother nature. "For me, the dream and the excitement about computers was not to replace the human brain," Dell explained. But he does see the need for a better way to interact with the machines and the software that runs on them. "It is a fairly rudimentary process today. We type keys and something happens. I think there is an enormous opportunity to improve the man-machine interface."

[Photo: Michael Dell delivering the SC08 keynote. What? This isn't a sales call?]

Please, feel free to make up your own jokes about the Borg and dildonics.

What this fourth wave of computing seems to be about is an admission that we can make ever-larger clusters with lots more main memory, storage, and I/O, but after more than a decade of serious parallel computing, the ability to deliver performance has far outstripped the ability to write applications that can take advantage of the raw iron. And now that we are all thinking about energy efficiency, that is another spanner in the works.

The three prior waves of supercomputing, according to Dell, included specialized vector machines with proprietary operating systems in the 1970s, microprocessor-based systems (mostly RISC, but other architectures too) in the 1980s and 1990s, and standards-based (meaning mostly x86) parallel clusters from the late 1990s until now. The fourth wave will deliver higher-density machines, probably in blade or other customized form factors, pools of shared storage, and a focus on one of the pain points in clusters - running and administering them.

Dell cited figures from supercomputing market researcher Tabor Research showing that 70 per cent of HPC budgets are consumed by staffing and administration - those pesky humans, again. (Of course, out there in the data centers of the corporate world, 65 per cent of the IT budget is spent on administration and maintenance, according to IDC, so welcome to the club.)

The fourth wave of HPC will be keenly focused on performance per watt as well, and interestingly, Dell (the man, not the machine) is predicting that some of the systems management tools commonly used in enterprises are going to swim upstream to the HPC market to help supercomputing labs manage their resources better and more efficiently. Usually, HPC tech flows downstream to the general market over the course of about a decade.

The availability of cheap HPC setups is something that Dell is driving, much as the company did with the direct model for PCs and then servers two decades ago. With the price of computing dropping, it becomes more widely available to smaller companies and organizations, as well as to developing countries that could not have dreamed of having a supercomputer. (In some cases, they were not legally allowed to have an American-made supercomputer a decade ago because of export controls.)

Five years ago, according to Dell, a teraflops of computing cost about $1m, but today that same $1m buys you around 25 times as much oomph. Density has not gone up as much as the price of capacity has come down, but the improvement is still impressive. Three years ago, Dell said, a 2,500-core cluster with 1,250 servers using 3 GHz x64 processors delivered about 9.8 teraflops. Today, a 1,240-core machine using a mere 155 servers delivers 10.7 teraflops - nearly a 90 per cent reduction in server count.
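Those figures are easy to sanity-check. The sketch below (plain Python, our arithmetic rather than Dell's) confirms the server count reduction is close to 90 per cent, and shows a per-server density gain of roughly 9x - which is indeed smaller than the 25x improvement in flops per dollar:

    # Sanity-check the cluster comparison from the keynote
    old_servers, old_tflops = 1250, 9.8   # three years ago: 2,500 cores
    new_servers, new_tflops = 155, 10.7   # today: 1,240 cores

    server_reduction = 1 - new_servers / old_servers                        # ~0.88
    density_gain = (new_tflops / new_servers) / (old_tflops / old_servers)  # ~8.8x

    print(f"server count reduction: {server_reduction:.0%}")   # 88%
    print(f"per-server density gain: {density_gain:.1f}x")     # 8.8x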

In actual Dell product news, Dell said that the company started shipping machines based on the new "Shanghai" quad-core Opteron processors yesterday. And looking ahead, as a teaser to HPC shops, he said that Dell will be the first server maker with quad data rate InfiniBand ports native on its blade servers, and that future machines based on Intel's "Nehalem" next-generation Xeon processors will be able to support up to 1 TB of main memory per node. ®
