HPC

Cray dips toe in supercomputing-as-a-service

Gene research a test market for cloudy graph engine


With AWS, Google, and IBM's Watson already camped in the high-performance cloud business, it's hardly surprising that Cray would tread carefully as a late entrant into the supercomputer-as-a-service business.

The premium-level HPC vendor has decided to start small both in terms of target market and in geography: it's inked a deal with US data centre operator Markley to run genomic and biotech workloads for customers in just one bit barn located in Cambridge, Massachusetts.

The service is based on Cray's Urika-GX appliance, the latest in a line of big data monsters dating back to 2012.

Urika's architecture is specific to graph applications: massively multithreaded, with a heritage that reaches back to spookdom before Cray allowed it to reach the public.

The Urika-GX landed last year, with Intel Xeon E5-2600 v4 processors (up to 48 nodes and 1,728 cores), 35TB of PCIe SSD storage, the Aries high-speed interconnect, and 22TB of on-board memory. It comes pre-installed with OpenStack and Apache Mesos.

The hardware spec is nice, but it's the Cray Graph Engine that the company hopes will convince super-shy gene researchers the service is better for them than spinning up a super service on one of the existing clouds.

Cray's head of life science and healthcare told The Register's HPC sister publication The Next Platform that a Cambridge sequencing centre that tested the offering hit a “five times speedup on parts of their overall workflow” compared to a standard cluster.

If you consider this as a beta to a limited market, it's hardly surprising that Cray hasn't announced pricing yet – nor that for now, it looks more like a “time-share” model than a fully-cloudy offering, with access to the supercomputer-as-a-service booked through Markley.

One reason for picking Markley's data centre as the first home for the service is its local connectivity to major networks: the company claims more than 90 carriers and network providers have a presence in the facility.

That's important for potential customers, because they will (naturally enough) have to upload their data to the service before the resident supers can get busy crunching it.

There's also a virtualised Urika-GX, so users can pre-test their data and scripts before pressing “go” on an expensive supercomputer run.

For storage, Markley will offer a suitable array in the colo (if customers have their own storage in the facility, they can use that instead). ®

Biting the hand that feeds IT © 1998–2022