Cavium has two more tilts at Arm servers as Nvidia offers Arm-bots

HPC types offered density, carriers get roll-your-own customer-premises kit

Cavium’s made two new attempts to find an audience for Arm-powered servers.

The first comes from Gigabyte in the form of the new H261 “Density Optimized Server platforms”. The product offers a 2U chassis packing up to four nodes, each housing a pair of dual-socket ThunderX2 CPUs. As that silicon can reach 32 cores per socket, there’s the chance for 84 servers and 5,376 physical cores in a standard 42U rack.
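The density figures work out as follows (a quick back-of-the-envelope sketch; it assumes every U in the rack is filled with H261 chassis, as the article's per-rack numbers imply):

```python
# Assumed figures from the article: 42U rack, 2U chassis,
# 4 nodes per chassis, 2 sockets per node, 32 cores per ThunderX2 socket.
RACK_U = 42
CHASSIS_U = 2
NODES_PER_CHASSIS = 4
SOCKETS_PER_NODE = 2
CORES_PER_SOCKET = 32

chassis_per_rack = RACK_U // CHASSIS_U                    # 21 chassis
servers_per_rack = chassis_per_rack * NODES_PER_CHASSIS   # 84 servers
cores_per_rack = servers_per_rack * SOCKETS_PER_NODE * CORES_PER_SOCKET

print(servers_per_rack, cores_per_rack)  # 84 5376
```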

Gigabyte’s designed the H261 with modest storage – just half a dozen 2.5” disks or three 3.5” disks per node – and done so intentionally to target workloads that don’t need bulk storage. High-performance computing is squarely in Gigabyte and Cavium’s sights: the pair have seen HPE’s Arm-powered Apollo 70 product and figure that some buyers would rather deal with original design manufacturers than sign up for the complexities of a relationship with a tier one vendor.

The ThunderX2 was designed with HPC workloads in mind. No wonder Cavium execs told The Register they feel the company has general purpose servers covered, and that the new Gigabyte kit means they can now cover workloads that require density.

And do so without compromise: there are PCIe and other expansion options to allow addition of GPUs.

Cavium’s other new toy is all its own: the “OCTEON TX” is an Armv8 system on a chip designed for customer premises equipment.

Built on first-generation Thunder cores, the two-to-24-core devices are offered as engines to run virtual network functions. Cavium hopes vendors that built appliances on its silicon will see the potential to re-package those appliances as virtual network functions, then work with carriers and service providers to distribute them to customer-premises equipment. The SoCs can run Linux, Docker or Mesosphere, making it easy for service providers to package network functions.

Cavium and Gigabyte made their announcements at Taiwan’s Computex show, where Nvidia also revealed Arm-powered kit in the form of “Isaac … a new platform to power the next generation of autonomous machines, bringing artificial intelligence capabilities to robots”.

Isaac is a platform and inside you’ll find half a dozen different processors, including “Volta Tensor Core GPU, an eight-core Arm64 CPU, dual NVDLA deep learning accelerators, an image processor, a vision processor and a video processor.”

A Jetson Xavier developer kit will go on sale at US$1,299 in August 2018. Gigabyte’s H261 will debut in Q3. OCTEON TX is in production now. ®
