Start learning parallel programming and make these supercomputers sing, Prez Obama orders
If you want exascale toys, you gotta make 'em dance
President Obama has signed an executive order that will pump US government money into American supercomputers much as before, but in a more coordinated fashion.
The order, signed Wednesday, kickstarts a new "National Strategic Computing Initiative" [PDF] that will, through a single vision and investment strategy, try to keep America ahead in the global supercomputing race. (In front of China, cough, cough.)
The US government has spent hundreds and hundreds of millions of dollars on contracts given to the likes of Intel and IBM and Nvidia in its march toward the next big computing leap: producing an exascale machine that's capable of processing a billion billion calculations per second.
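For a sense of scale, here's a quick back-of-the-envelope sketch. A billion billion is 10^18, so an exascale machine performs an exaFLOPS: 10^18 floating-point operations per second. Comparing that to the 27-petaFLOPS Titan mentioned later in this piece is plain arithmetic:

```python
# Back-of-the-envelope: what "a billion billion calculations per second" means.
exaflops = 10**18  # one exaFLOPS: 1e18 floating-point operations per second

# Titan, already installed at Oak Ridge, peaks at roughly 27 petaFLOPS.
titan_flops = 27 * 10**15

speedup_needed = exaflops / titan_flops
print(f"An exascale machine is roughly {speedup_needed:.0f}x faster than Titan")
```

In other words, even the fastest US machines of the day need to get nearly 40 times quicker to cross the exascale line.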
Today's executive order is supposed to help bring about an exascale computer sooner rather than later.
Snapping up many of these number-crunching beasts is the US Department of Energy, which through its Fast Forward program has been trying to solve problems unique to exascale machines – such as their power use, and the sheer number of processors, RAM chips, and pieces of networking equipment needed to build them.
But that's not all. There's a software issue, too. The executive order notes:
Current HPC [high-performance computing] systems are very difficult to program, requiring careful measurement and tuning to get maximum performance on the targeted machine. Shifting a program to a new machine can require repeating much of this process, and it also requires making sure the new code gets the same results as the old code. The level of expertise and effort required to develop HPC applications poses a major barrier to their widespread use.
In other words, we need talented programmers to start work on the world's most powerful machines so they can shine as parallel-processing marvels. Writing software to harness these supercomputers is not easy. Not enough people know how to do it.
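To give a flavour of what parallel programming involves – a toy sketch only; real HPC codes typically use MPI or OpenMP across thousands of nodes – even summing some numbers in parallel means splitting the data into chunks, farming the chunks out to workers, and combining the partial results:

```python
# Toy data-parallel example: split work across OS processes and reduce.
# Nothing like production HPC code, but it shows the basic pattern.
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker independently reduces its own slice of the data.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4
    step = len(data) // n_workers
    chunks = [data[i * step:(i + 1) * step] for i in range(n_workers)]

    with Pool(n_workers) as pool:
        # Map each chunk to a worker process, then combine the partial sums.
        total = sum(pool.map(partial_sum, chunks))

    # Sanity check against the straightforward serial version.
    assert total == sum(x * x for x in data)
    print(total)
```

Scaling that pattern to hundreds of thousands of cores – while balancing load, minimizing communication, and getting bit-identical results across machines – is where the expertise barrier the order describes kicks in.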
To that end, US government agencies will start funding research into how to create tools, languages and libraries that will prepare applications and programmers for high-end systems.
In government-ese this becomes: "In working with vendors, agencies will emphasize the importance of programmer productivity as a design objective. Agencies will foster the transition of improved programming tools into actual practice, making the development of applications for HPC systems no more difficult than it is for other classes of large-scale systems."
Then the idea is to train people up on supercomputer app development, so they can make more use of these huge lumps of metal dotted around the country, such as: the forthcoming 300-petaFLOPS Summit computer planned for Oak Ridge in Tennessee; the 100-petaFLOPS Sierra to be installed at Livermore in California; the 27-petaFLOPS Titan already at Oak Ridge; and the 20-petaFLOPS Sequoia already at Livermore.
"Right now, there are many companies and many research projects that could benefit from HPC technology," the order notes, "but they lack expertise and access. Many scientists and engineers also lack training in the concepts and tools for modeling and simulation and data analytics.
"Agencies will work with both computer manufacturers and cloud providers to make HPC resources more readily available so that scientific researchers in both the public and private sectors have ready access. Agencies will sponsor the development of educational materials for next generation HPC systems, covering fundamental concepts in modeling, simulation, and data analytics, as well as the ability to formulate and solve problems using advanced computing."
All of which could be taken to be saying: OK, you've had your lovely, hugely expensive toys for a few years now, it's time we did something really useful with them.
The order also foresees spending money on research and development for the next generation of semiconductors.
Why are we doing all this?
The potential uses of the next level of computing are enormous and could start pulling us into the sci-fi future.
Most notorious, of course, is the continued failure to accurately predict the weather, especially unusual weather events such as hurricanes and tornadoes. What high-performance computing systems might start enabling scientists to do (assuming, of course, the programs can be written by more than a handful of people) is combine weather simulations with real-time data from sensors and satellites.
That would enable a much faster and more accurate understanding of what is going to happen, as well as save vast sums of money by preventing disasters and false alerts.
But there are many other uses, such as modeling seismic activity, aiding medical diagnostics, and countless other modeling, simulation, and analytics projects. ®