Cambridge’s HPC-as-a-service for boffins, big and small
However, a ‘step change in data storage’ needed
Cambridge University has been involved in high-performance computing (HPC) for 18 years, starting with a “traditional” supercomputer before a major restructuring eight years later led to what would now be recognised as modern HPC.
We read much about how IT needs to become a profit centre, not a cost centre. Well, having integrated its HPC operation into the university’s central IT services, Cambridge is now selling HPC as a service – to the 1,500 or so firms in the surrounding science and tech parks, with the help of systems suppliers Dell and Intel.
The university recently spent £20m on a brand new data centre, bricks-and-mortar and all, to accommodate HPC-as-a-service, and expects to start filling the place with new equipment over the next year.
Such is the anticipated demand for its services that, despite the cement only recently drying, university HPC services director Dr Paul Calleja estimates the new centre’s power boost will last for just five years.
“The new data centre can house a hundred racks of HPC equipment instead of 30 racks [in the past] and the power capability has gone up from 300 or 400kW to 2MW, so it’s a large increase in power and physical space,” he said.
“We have not yet deployed any new equipment within the data centre, we just moved all our old equipment in, but we have got an aggressive deployment plan over the next 12 months to deploy a large amount of new data and network infrastructure.”
Inside Cambridge University’s HPC digs (photo: Cambridge University)
“The power footprint in the data centre will stretch to 2MW and we’ll probably be at that limit within a five year schedule. We will need to be deploying more power infrastructure as we move forward,” he said.
Big science these days means big data, with huge studies in biology, astronomy, clinical medicine, genetics and physics, among others, relying on Cambridge’s HPC services.
Two big projects already run – at least in part – on the Cambridge platform: the Large Hadron Collider (LHC) and the BRIDGE Genome Project. Cambridge handles some of the calculations for both.
Ron Horgan, professor of theoretical and mathematical physics, who works on HPC particle physics, said the LHC has more than enough energy to probe the Standard Model of physics – and that beyond it, the atom-smashing boffins believe, there’s still more to come. This is where HPC comes in.
“You test new theories by solving them accurately with HPC and then any discrepancy is a portent for the existence of new physics. You calculate the margin of error and work on making it smaller and smaller to prove the new physics,” Horgan explained at a press day earlier this year.
The BRIDGE Genome Project is a study of the genes of 10,000 people who suffer from rare genetic diseases. The project aims to sequence each subject’s whole genome, not just a snapshot exon or two, to build up a library that will help doctors spot rare disease sufferers in GP surgeries and hospitals.
More than 300 million people around the world have a rare disease, according to project member and professor of experimental haematology Willem Ouwehand, with China thought to be vastly under-reporting cases. Many sufferers wait more than five years before their rare condition is properly diagnosed, and it is this problem that the project hopes to help solve.