Intel plans immersion lab to chill its power-hungry chips

AI chips are sucking down 600W+ and the solution could be to drown them.

Intel this week unveiled a $700 million sustainability initiative to bring innovative liquid and immersion cooling technologies to the datacenter.

The project will see Intel construct a 200,000-square-foot "mega lab" approximately 20 miles west of Portland at its Hillsboro campus, where the chipmaker will qualify, test, and demo its expansive — and power hungry — datacenter portfolio using a variety of cooling tech.

Alongside the lab, the x86 giant unveiled an open reference design for immersion cooling systems for its chips, developed by Intel Taiwan. The chip giant hopes to bring other Taiwanese manufacturers into the fold before rolling the design out globally.

As the name suggests, immersion cooling involves dunking components in a bath of non-conductive fluid — mineral oil and certain specialized refrigerants being two of the most common — as opposed to using heat sinks or cold plates to move heat away from the chips. Intel claims its novel ideas on this established technology could reduce datacenter carbon emissions by 45 percent.

This is a big step forward for datacenter sustainability, Dell'Oro Group analyst Lucas Beran told The Register.

While the individual components and servers consume a substantial amount of power, keeping them cool accounts for upwards of 40 percent of a datacenter's energy consumption, he explained. "Simply reducing the energy consumption is a really big part of what liquid cooling and, more specifically, immersion cooling brings to the table."
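That 40 percent figure translates directly into how much of a facility's power envelope is left for actual compute. As a purely illustrative back-of-envelope sketch (the 1 MW facility below is an assumption, not a figure from the article):

```python
# Back-of-envelope: if cooling accounts for 40% of a datacenter's
# energy use, how much power is left for the IT load itself?
def compute_share(total_kw: float, cooling_fraction: float) -> float:
    """Return the kW available to IT load once cooling takes its cut."""
    return total_kw * (1 - cooling_fraction)

facility_kw = 1_000  # hypothetical 1 MW facility
it_kw = compute_share(facility_kw, 0.40)
print(f"{it_kw:.0f} kW for servers, storage, and networking")
```

At the 40 percent cooling overhead Beran cites, only 600 of every 1,000 kW a facility draws actually reaches the racks.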

Beyond curbing energy consumption, these technologies offer several ancillary benefits. One is a substantial reduction in water usage; another is that liquid is simply better than air at moving heat away, and the captured heat can even be reused for things like district heating, Beran said.

Bytesnet, for example, recently announced plans to recycle heat generated by its datacenters to warm thousands of homes in the Groningen district of the Netherlands.

Balancing power, performance, and heat

One of the driving forces behind Intel's latest datacenter sustainability efforts is a trend toward ever higher power consumption by upcoming CPUs, GPUs, and AI accelerators.

Over the past few years, the thermal design power (TDP) commanded by many of these chips has more than doubled. Today, modern CPU architectures are pushing 300W, while GPUs and AI chips from Intel, AMD, and Nvidia are now sucking down 600W or more.

As these systems proliferate and find their way into mainstream datacenters, liquid or immersion cooling will eventually become unavoidable, not just to keep these systems from overheating, but to offset their soaring power consumption, Beran explained.

He highlighted one datacenter operator that adopted immersion cooling not because it was thermally bound, but because the switch let it reallocate much of the power previously used to cool the systems to additional compute density. And this is how Beran expects most datacenter operators will approach immersion cooling in the near future.
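The reallocation math is straightforward. A rough sketch, using hypothetical figures (the 1 MW envelope and the 5 percent immersion overhead are assumptions for illustration, not numbers from Intel or Dell'Oro):

```python
# Hypothetical numbers: a 1 MW facility where air cooling eats 40% of
# the power budget, versus immersion cooling at an assumed 5% overhead.
# How many extra 600W accelerators does the switch free up?
FACILITY_KW = 1_000  # total facility power envelope
CHIP_KW = 0.6        # one 600W GPU/accelerator

def accelerators_supported(cooling_fraction: float) -> int:
    """Whole 600W chips that fit in the IT budget left after cooling."""
    it_budget_kw = FACILITY_KW * (1 - cooling_fraction)
    return int(it_budget_kw / CHIP_KW + 1e-9)  # epsilon guards float truncation

air_cooled = accelerators_supported(0.40)  # 1,000 chips
immersed = accelerators_supported(0.05)    # 1,583 chips
print(f"Extra accelerators: {immersed - air_cooled}")
```

Under those assumptions, the same grid connection supports roughly 58 percent more accelerators, which is the trade Beran describes operators actually making.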

"If you transition from a traditional rack air cooling system to an immersion cooling system, you could just use less power," he said. But "I don't see too many instances where folks say…'We have enough compute, we're just trying to cool it more efficiently.'"

More often than not, operators are running into trouble getting enough power into the rack, Beran added.

Can Intel drive adoption?

While immersion cooling isn't new, Beran argues Intel's involvement in developing an open reference design is still notable.



"They play a really important role in developing technologies that are compatible with immersion cooling," he said. "It has the potential to have a massive impact because they can influence the server OEMs, like Dell or HPE, in terms of how they sell their products and the type of cooling infrastructure needed to support those products."

"Now they are designing products that will be born in liquids and born in immersion cooling," Beran added.

This is important because liquid and immersion cooling require completely different form factors than what's used in air-cooled datacenters today. This unfamiliarity remains one of the technology's biggest inhibitors.

The unknown is scary, and many datacenter operators don't yet have a playbook for handling the various issues that can crop up in liquid- and immersion-cooled hardware. One of the most common concerns associated with the tech, Beran said, is weight distribution, though it's rarely a problem in practice.

"A big facility like this, a big playground if you will, where you can go and actually see this infrastructure firsthand, dip your toes in the fluid so to speak, understand how these systems operate in a datacenter like environment, has tremendous value to the industry," he said of the Intel lab. ®
