JURECA! Germany flips big red switch on 2.2 petaflop supercomputer
Machine open to all fields of data-crunching
Germany has officially powered up its new 2.2 petaflop JURECA supercomputer.
Based at the Forschungszentrum Jülich research facility, the JURECA system (pronounced "Eureka" and short for Jülich Research on Exascale Cluster Architectures) runs on Intel's 12-core Xeon E5-2680 v3 Haswell CPUs across 1,900 nodes, each containing two CPUs. The nodes are equipped with Nvidia K80 GPU accelerators, connected by Mellanox 100Gbps EDR InfiniBand interconnects across 34 water-cooled cabinets, and run a CentOS Linux distribution.
The JURECA cluster replaces the JUROPA supercomputer, which was shut down in June of this year.
According to the latest edition of the Top500 supercomputer rankings, JURECA comes in at a relatively modest number 49, the fifth most-powerful cluster in Germany and the second most-powerful at the Forschungszentrum Jülich, behind the JUQUEEN cluster.
While the research facility touts JURECA as a 2.2-petaflop system, Top500's numbers are more modest: a measured maximum (Rmax) of 1.42 petaflops and a theoretical peak (Rpeak) of 1.69 petaflops. By comparison, China's chart-topping MilkyWay-2 (Tianhe-2) has a theoretical peak of 54.9 petaflops.
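For readers curious where such headline figures come from, a theoretical peak is just node count times cores times clock speed times FLOPs per cycle. A back-of-envelope sketch using the node and core counts from this article follows; the 2.5GHz base clock and 16 double-precision FLOPs per cycle (two 256-bit AVX2 fused multiply-add units) are assumed values typical of a 12-core Haswell Xeon, not figures reported here:

```python
# Rough CPU-only peak estimate for a JURECA-like cluster.
# Node and core counts are from the article; clock speed and
# FLOPs/cycle are assumptions typical of a Haswell-generation Xeon.
nodes = 1900
cpus_per_node = 2
cores_per_cpu = 12
clock_hz = 2.5e9          # assumed 2.5 GHz base clock
flops_per_cycle = 16      # assumed: 2 x AVX2 FMA units, double precision

peak_flops = nodes * cpus_per_node * cores_per_cpu * clock_hz * flops_per_cycle
print(f"CPU-only theoretical peak: {peak_flops / 1e15:.2f} petaflops")
# -> CPU-only theoretical peak: 1.82 petaflops
```

Under those assumptions the CPU partition alone lands in the same ballpark as the quoted figures, with the K80 accelerators presumably making up the difference to the 2.2-petaflop headline number.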
The research house said that the real value of JURECA would lie less in its raw compute power and more in its flexibility across a range of applications. Forschungszentrum Jülich touts JURECA as an "all-rounder" system that will be offered to researchers in fields including medicine, IT research, Earth sciences modeling, and materials research.
"Breaking records was not as important to us as the opportunity for users to quickly and productively run their programme codes and by further optimizations make use of larger parts of the machine," said Dr. Dorian Krause of the Jülich Supercomputing Centre.
The Jülich research house said that access to the cluster will be open to "all qualified scientists" internationally, with an independent panel of experts tasked with deciding who gets to crunch their numbers on the new machine. ®