HPC

Another AI supercomputer from HPE: Champollion lands in France

That's the second in a week, following a similar system in Munich also aimed at researchers


HPE is lifting the lid on a new AI supercomputer – the second this week – aimed at building and training larger machine learning models to underpin research.

Based at HPE's Center of Excellence in Grenoble, France, the new supercomputer is to be named Champollion after the French scholar who made advances in deciphering Egyptian hieroglyphs in the 19th century. It was built in partnership with Nvidia using AMD-based Apollo computer nodes fitted with Nvidia's A100 GPUs.

Champollion brings together HPC and purpose-built AI technologies to train machine learning models at scale and unlock results faster, HPE said. HPE already provides HPC and AI resources from its Grenoble facilities for customers and the broader research community, and said it plans to open access to Champollion to scientists and engineers around the world to accelerate testing of their AI models and research.

Those purpose-built AI technologies refer to the HPE Machine Learning Development Environment, a software platform which forms part of the HPE Machine Learning Development System that HPE launched last month.

Coincidentally, the HPE Machine Learning Development System is also built on AMD-based Apollo compute nodes fitted with Nvidia GPUs, which makes it likely that Champollion is essentially an incarnation of the HPE Machine Learning Development System.

The actual Champollion hardware specified by HPE comprises 20 HPE Apollo 6500 Gen10 Plus server nodes with a total of 160 Nvidia A100 GPUs (eight per node), plus Nvidia Quantum InfiniBand networking. The HPE Machine Learning Development System starts at four nodes, but customers have the option to scale up.

If it is based on that platform, each Apollo node will have 4TB of memory and 30TB of NVMe local storage, with HPE Parallel File System Storage optional. The HPE Machine Learning Development Environment runs atop this and provides an integrated platform for building and training models, compatible with popular machine learning frameworks such as TensorFlow and PyTorch.
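
For illustration, here is a minimal sketch of what data-parallel training on one of those eight-GPU Apollo nodes might look like with stock PyTorch. The model, data, and launch settings are placeholders rather than HPE's actual workflow; the Machine Learning Development Environment layers its own experiment management on top of frameworks like this.

    # Minimal data-parallel training sketch for a single multi-GPU node,
    # launched with, for example: torchrun --nproc_per_node=8 train.py
    # The model and data are toy placeholders standing in for a real workload.
    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun, one process per GPU
        device = torch.device("cuda", local_rank)
        torch.cuda.set_device(device)
        dist.init_process_group(backend="nccl")

        model = torch.nn.Sequential(                # toy stand-in for a real model
            torch.nn.Linear(1024, 4096),
            torch.nn.ReLU(),
            torch.nn.Linear(4096, 10),
        ).to(device)
        model = DDP(model, device_ids=[local_rank])

        optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
        loss_fn = torch.nn.CrossEntropyLoss()

        for _ in range(100):
            x = torch.randn(32, 1024, device=device)       # synthetic batch
            y = torch.randint(0, 10, (32,), device=device)
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()   # gradients are all-reduced across the GPUs
            optimizer.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()

On a multi-node run the same pattern extends across servers, with the gradient traffic carried over the cluster's InfiniBand fabric.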

HPE said Champollion is currently available to select users, and will be opened up to the broader community in the near future so users can begin developing and training their models.

Earlier this week, HPE also unveiled another AI supercomputer at the Leibniz Supercomputing Center in Munich, pairing an HPE Superdome Flex server with a Cerebras CS-2 specialized AI system. That system is also designed to accelerate applications for the scientific and engineering community. ®

Other stories you might like

  • Is computer vision the cure for school shootings? Likely not
    Gun-detecting AI outfits want to help while root causes need tackling

    Comment More than 250 mass shootings have occurred in the US so far this year, and AI advocates think they have the solution. Not gun control, but better tech, unsurprisingly.

    Machine-learning biz Kogniz announced on Tuesday it was adding a ready-to-deploy gun detection model to its computer-vision platform. The system, we're told, can detect guns seen by security cameras, send notifications to those at risk, notify police, lock down buildings, and perform other security tasks.

    In addition to spotting firearms, Kogniz uses its other computer-vision modules to notice unusual behavior, such as children sprinting down hallways or someone climbing in through a window, which could indicate an active shooter.

  • Cerebras sets record for 'largest AI model' on a single chip
    Plus: Yandex releases 100-billion-parameter language model for free, and more

    In brief US hardware startup Cerebras claims to have trained the largest AI model on a single device, a system powered by its Wafer Scale Engine 2, the world's largest chip, which is roughly the size of a dinner plate.

    "Using the Cerebras Software Platform (CSoft), our customers can easily train state-of-the-art GPT language models (such as GPT-3 and GPT-J) with up to 20 billion parameters on a single CS-2 system," the company claimed this week. "Running on a single CS-2, these models take minutes to set up and users can quickly move between models with just a few keystrokes."

    The CS-2 packs a whopping 850,000 cores, and has 40GB of on-chip memory capable of reaching 20PB/sec of memory bandwidth. The specs of other AI accelerators and GPUs pale in comparison, which is why machine-learning engineers normally have to split training of huge, multi-billion-parameter models across many servers (a rough memory estimate follows these briefs).

  • Microsoft promises to tighten access to AI it now deems too risky for some devs
    Deep-fake voices, face recognition, emotion, age and gender prediction ... A toolbox of theoretical tech tyranny

    Microsoft has pledged to clamp down on access to AI tools designed to predict emotions, gender, and age from images, and will restrict the usage of its facial recognition and generative audio models in Azure.

    The Windows giant made the promise on Tuesday while also sharing its so-called Responsible AI Standard, a document [PDF] in which the US corporation vowed to minimize any harm inflicted by its machine-learning software. This pledge included assurances that the biz will assess the impact of its technologies, document models' data and capabilities, and enforce stricter use guidelines.

    This is needed because – and let's just check the notes here – there are apparently not enough laws yet regulating machine-learning technology use. Thus, in the absence of this legislation, Microsoft will just have to force itself to do the right thing.

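As context for those Cerebras numbers, here is a rough back-of-envelope estimate of training memory per model size. It is a sketch assuming a common mixed-precision setup (fp16 weights and gradients plus fp32 Adam optimizer state) and it ignores activations, so treat it as a lower bound:

    # Rough lower-bound estimate of training memory, assuming fp16 weights
    # (2 bytes) and gradients (2 bytes) plus fp32 Adam state (4-byte master
    # weights and two 4-byte moments); activation memory is ignored.
    BYTES_PER_PARAM = 2 + 2 + 4 + 4 + 4   # = 16 bytes per parameter

    def training_memory_gb(params_billion: float) -> float:
        return params_billion * 1e9 * BYTES_PER_PARAM / 1e9

    for size in (1, 6, 13, 20):
        print(f"{size:>3}B params: ~{training_memory_gb(size):.0f} GB before activations")

    # At ~16 bytes per parameter, a 20-billion-parameter model needs on the
    # order of 320GB for weights, gradients, and optimizer state alone, far
    # more than a single 40GB or 80GB GPU holds, which is why such models are
    # normally split across many accelerators and servers.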
