Try it, then rent it or buy it. That's the new mantra from supercomputer maker Silicon Graphics this morning as it launches its own supercomputing-on-demand offering, dubbed Cyclone.
If cloud computing means virtualized server instances, then technically speaking - as if SGI could speak any other way - the Cyclone service is not a cloud. But if cloud means buying server and storage capacity on demand to run preloaded applications or homegrown ones and only paying for what you use - what some of us still call utility computing - then Cyclone is a cloud.
The semantics matter less than SGI having another product to sell, and a cloud HPC offering also allows the kind of try-and-buy sales pitch that the former Sun Microsystems made both with its physical servers and with its ill-fated Sun Grid utility compute and storage farm. The difference this time around is that SGI has lined up 16 applications popularly used in five HPC domains that lend themselves to the utility computing approach and has preloaded them onto Cyclone.
The domains include computational biology, computational chemistry and materials, computational fluid dynamics, finite element analysis, and ontologies (semantic Web and data mining). The applications include: OpenFOAM, NUMECA, Acusolve, LS-Dyna, Gaussian, Gamess, NAMD, Gromacs, LAMMPS, DL-POLY, BLAST, FASTA, HMMER, ClustalW, OntoStudio, and SemanticMiner.
The Cyclone cloud is not virtualized, and when you rent resources from SGI, the infrastructure you rent is single-tenant. Doing it this way might be more of a hassle from a system administration perspective, concedes Geoffrey Noer, senior director of product marketing at SGI, but the companies and institutions that run HPC workloads want dedicated, secure servers and storage. And they don't want to share resources, even if sharing might mean a lower cost for processing or storage capacity. What HPC customers want is to run their applications and get their results back in a predictable timeframe.
Cyclone is not restricted to running the 16 applications mentioned above (with others coming shortly) in a software-as-a-service (SaaS) manner; it can also be used as raw compute capacity to run homegrown code or any other Linux applications that HPC shops have a license to run (known as infrastructure as a service, or IaaS, in the cloud lingo).
SGI has set up three types of infrastructure as part of the Cyclone service. The first is a cluster of Altix ICE blade servers (which come from the original SGI side of the house) based on the current Nehalem-EP Xeon 5500 quad-core processors; this cluster is based on InfiniBand interconnects. The second cluster is based on the Altix 4700 Itanium-based supers, which implement global shared memory using NUMAlink 4 interconnects for the blade-style server nodes.
A hybrid cluster based on the Altix XE Xeon 5500 servers (which come from the Rackable Systems side of the house) is equipped with co-processor accelerators to boost its number-crunching power. Specifically, SGI is putting Nvidia's Tesla GPUs into this hybrid cluster, as well as the FireStream GPUs from Advanced Micro Devices. These GPUs boost floating point performance and, provided applications are tuned for them, deliver better flops per watt than standalone Xeon cores can on their own.
Interestingly, if customers want to accelerate the integer performance of their jobs on the Cyclone service - such as for DNA sequencing, Web search, and other text manipulation workloads - the hybrid cloud based on the Altix XE servers can also be equipped with PCI-Express cards containing Tilera's 64-core mesh processors. The co-processors are not being offered on the other two sub-clouds in the Cyclone service, but if you wanted to pay for it, there is little doubt SGI could and would do it.
No matter what part of the Cyclone service the HPC applications run on, the software stack is essentially the same: either Red Hat Enterprise Linux or Novell SUSE Linux Enterprise Server, plus the SGI ProPack extensions that tune Linux for SGI's iron and have goosed math libraries. SGI is using its own ISLE cluster manager and Altair's PBS Pro batch scheduler to manage the clusters. Microsoft's Windows HPC Server 2008 is currently not supported on the Cyclone service, but Microsoft is a launch partner and SGI is interested in supporting that Windows variant on the service.
The Cyclone cloud is currently hosted in two SGI-owned data centers - one in Fremont, California, where SGI is headquartered, and another in Chippewa Falls, Wisconsin, where SGI had historically done development and manufacturing before its acquisition by Rackable Systems. SGI did not divulge the processing capacity it has dedicated to the HPC cloud, beyond saying that it currently has thousands of processor cores and can be scaled as customer demands dictate.
Customers with relatively small data sets can upload information to the cloud over a VPN link, or they can ship the data to SGI on disk drives. SGI is using its own InfiniteStorage arrays as well as arrays from LSI and DataDirect as the back-end storage for the Cyclone HPC cloud.
Pricing for the Cyclone cloud is fairly straightforward. It costs 95 cents per core-hour for a Xeon core (such as the X5560 running at 2.8 GHz) or an Itanium core (such as the Itanium 9100 running at 1.66 GHz). The Xeon cores come with 2 GB to 3 GB of main memory each, while the Itanium machines come with 2 GB to 4 GB of memory per core.
Customers have to rent a management node for a nominal fee, which gives them access to the cluster and the ability to start and stop jobs. Storage costs 20 cents per GB per month, and obviously, customers have to work out utility-style licensing agreements with the vendors that have parked their apps on the Cyclone cloud before they can use them.
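Those per-core-hour and per-GB-month rates lend themselves to a quick back-of-the-envelope bill. The sketch below uses the published rates; the job size and storage figures are hypothetical, and the "nominal" management-node fee is unspecified, so it defaults to zero:

```python
# Back-of-the-envelope Cyclone cost estimate using the rates SGI published.
# Job parameters are hypothetical, chosen only for illustration.

CORE_HOUR_RATE = 0.95   # dollars per Xeon or Itanium core-hour
STORAGE_RATE = 0.20     # dollars per GB per month

def monthly_cost(cores, hours, storage_gb, mgmt_node_fee=0.0):
    """Estimate one month's Cyclone bill: compute plus storage plus the
    management node. The management-node fee is 'nominal' but unspecified,
    so it defaults to zero here."""
    compute = cores * hours * CORE_HOUR_RATE
    storage = storage_gb * STORAGE_RATE
    return compute + storage + mgmt_node_fee

# Example: 64 cores for 100 hours, plus 500 GB parked for the month.
# 64 * 100 * $0.95 + 500 * $0.20 = roughly $6,180 before the node fee.
print(monthly_cost(64, 100, 500))
```

Note that any SaaS application licensing negotiated with the ISVs would come on top of these infrastructure charges.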
The Cyclone HPC cloud is available starting today. And later this year, when SGI is trying to bulk up the sales pipeline for its Altix UV massively parallel, shared memory supers, these machines will appear first on the Cyclone cloud.
The Altix UV supers are based on Intel's imminent "Beckton" eight-core Xeon Nehalem-EX processors for four- and eight-socket servers. SGI has created a two-socket Nehalem-EX blade design and a boosted NUMAlink 5 interconnect that allows it to scale to the petaflops performance level - much further than it can with the current Altix 4700s. ®