
Nvidia welcomes Intel into AI era: Fancy a benchmark deathmatch?

We love your deep learning benchmark 'mistakes'


HPC blog Nvidia just fired the first salvo in what promises to be a classic, long-lived benchmark deathmatch with Intel. On a webpage titled "Correcting Intel's Deep Learning Benchmark Mistakes," Nvidia claimed that Intel was using outdated GPU benchmark results and comparisons against non-current hardware to show off its new Knights Landing Xeon Phi processors.

Nvidia called out three Intel claims in particular:

"Xeon Phi is 2.3 times faster in training than GPUs." This claim was made in a press presentation delivered at ISC'16 and on an Intel-produced "fact sheet" (PDFs available here and here). It specifically refers to a stat at the left side of slide 12 (and the second page of the fact sheet) where Intel claims Phi is 2.3 times faster on the AlexNet image training on a DNN (deep neural network).

Nvidia alleges that Intel used 18-month-old AlexNet numbers for the GPU side of the comparison (based on a Maxwell system), while using farm-fresh numbers for its own Phi.

According to Nvidia, its Pascal processors in the same four-accelerator configuration outperform Intel's Phi by 1.9 times. It also claims its new eight-GPU DGX-1 dedicated DNN training machine can complete AlexNet training in two hours, outshining the four-Phi system by 5.3 times. Ouch.
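For the arithmetic-minded, these figures are just ratios of training times, so Nvidia's claims imply a rough time for the Phi box too. A minimal sketch: the two-hour DGX-1 figure and the 5.3x ratio are Nvidia's claims; the Phi time is merely what those two numbers imply, not a published result.

```python
# Speedup between two systems is the ratio of their training times:
# speedup = t_slow / t_fast.

dgx1_hours = 2.0       # Nvidia's claimed DGX-1 time to train AlexNet
claimed_speedup = 5.3  # Nvidia's claimed advantage over the four-Phi box

# The claimed ratio implies the four-Phi system would need roughly:
implied_phi_hours = dgx1_hours * claimed_speedup
print(f"Implied four-Phi AlexNet training time: ~{implied_phi_hours:.1f} hours")
# -> ~10.6 hours
```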

"Xeon Phi offers 38 per cent better scaling than GPUs across nodes." This claim also occurs in both of the Intel documents referenced above. In this case, Intel is saying that their Phi systems scale better than GPU-equipped boxes, namely when it comes to 32-way/accelerator configurations.

According to Nvidia, Intel is using four-year-old numbers from Oak Ridge's Titan machine – which ran Cray's older Gemini interconnect and old K20 GPUs – as a comparison against Intel's brand-new Omni-Path Architecture-connected Phi processors running deep learning workloads.

It points to Baidu-published results from its speech-training workload showing near-linear GPU scaling not just to 32 nodes, but all the way to 128. Ouch again.

"Xeon Phi delivers 50 times scaling on 128 nodes." I didn't see this exact claim in the Intel documents, but there were a lot of claims flying around, so I could have missed it. Whether it's there or not, Nvidia responded to it by again pointing to the near-linear Baidu 128-GPU node result. By the by, getting a 50 times speed up by adding 128 times more resources isn't the kind of scalability you write home about, you know?

What's funny to me is that at the end of Nvidia's "correction" webpage, it welcomes Intel to the era of AI, with an additional admonition that "they should get their facts straight." Hmmm.

But I'd like to see something more along the lines of the old, unpublished Data General ad welcoming IBM to the minicomputer market. The two-line ad read: "They say that IBM's Entry Into Minicomputers Will Legitimize the Market ... The Bastards Say, Welcome."

As the budding Intel-Nvidia war develops, we're sure to see shots flying back and forth – maybe we'll even see an Informix-Oracle-style billboard fight like the one we saw in the 1990s? The Highway 101 billboard owners should start writing their proposals now... ®
