HPC Blog Both the UK and South America were ably represented at the recent 2014 International Supercomputing Conference Student Cluster Competition, held in Leipzig, Germany. For some reason, I want to Google up trendy cuisine to see if Scottish-Brazilian fusion restaurants actually exist. But I'm afraid of the possible results – so instead, let's take an up-close and personal look at these two teams....
University of Edinburgh: This is the second ISC outing for Team Edinburgh. They turned in a good performance at ISC’13, taking second place on the LINPACK portion of the competition. This year, they return mostly the same team, but have upped the hardware ante considerably.
Team Edinburgh, working with sponsor Boston Group, brought one of the most advanced clusters we’ve seen to date in the competition. On the surface, the system stats don’t sound all that impressive: four nodes, 80 CPU cores, 64GB memory per node (256GB total) and eight NVIDIA K40x GPUs.
So what’s the big deal? It’s a liquid-cooled rig, with water blocks on the CPUs and GPUs, plus a very large radiator on top. According to the team, when configured for air cooling, each of their high performance nodes contained 10 or 12 fans.
Using the liquid-cooled variant means that the team could remove a whole bunch of them, leaving only three low-power fans per node to cool the un-water-blocked memory and motherboards.
They were also able to remove a considerable number of the cooling fans pushing air through the radiator. Their four-node configuration, even with the eight GPUs running at full bore, just didn’t generate enough heat to require the full cooling capacity of their rig.
In the video, we take a look at their system, talk to the boys, and get a feel for how they did with the HPC applications. I’ve included a soul stirring rendition of "Scotland the Brave" as the soundtrack for the interview. Gotta love those bagpipes.
University of São Paulo: Team Brazil has gone through some changes since their first cluster competition at the ASC’14 spring classic in Guangzhou. While the players are the same, the horse is quite a bit different.
At ASC’14, the team was using an Inspur cluster with six nodes, 144 cores, and six Xeon Phi co-processors. At ISC, the team is driving a fancy new eight-node SGI beast, complete with 192 CPU cores, a terabyte of memory, and five Xeon Phi co-processors.
Moving to another system isn’t trivial, particularly when you’re moving to a system from a vendor like SGI – which likes to put some secret sauce into the mix. Since there wasn’t a lot of time between the ASC and ISC competitions this year, the team was still learning the ins and outs of the new box when they arrived in Leipzig.
In the video, the team talks about the differences between the ASC and ISC competitions, the challenges of being the first HPC team from Brazil, and the miracle that is InfiniBand, all accompanied by a sizzling Latin beat. ®