Intel details how Lunar Lake PC chips deliver 120 TOPS

A bigger NPU, faster GPU cores, and on-package memory will do that to a chip

Computex In the emerging world of AI PCs, everything eventually boils down to TOPS: how many trillions of operations per second your neural processing unit (NPU), GPU, and/or CPU can churn out.
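A rough way to see why vendors chase that number: a chip's TOPS rating caps how fast it can push tokens through a model. The sketch below is a back-of-the-envelope calculation, not from the article – the 7-billion-parameter model size and the 2-ops-per-parameter rule of thumb are illustrative assumptions:

```python
# Back-of-the-envelope sketch (illustrative assumptions, not from the
# article): how a chip's TOPS rating bounds LLM token throughput.

def peak_tokens_per_second(tops: float, params_billions: float) -> float:
    """Theoretical upper bound on tokens/sec for a dense transformer.

    A forward pass costs roughly 2 operations per parameter per token
    (one multiply, one add), so throughput <= peak ops / ops per token.
    """
    ops_per_token = 2 * params_billions * 1e9
    peak_ops_per_second = tops * 1e12
    return peak_ops_per_second / ops_per_token

# Microsoft's 40 TOPS Copilot+ floor against a hypothetical 7B model:
print(round(peak_tokens_per_second(40, 7)))  # ~2857 tokens/sec, in theory
```

Real-world throughput lands well below that ceiling, of course – memory bandwidth usually bites first.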

In an effort to define the market, Microsoft has set the bar at 40 TOPS to earn its designation as a "Copilot+ PC" that can take advantage of its AI bells and whistles – like its controversial Recall feature – being baked into Windows. So far, Qualcomm and AMD have announced processors that meet that metric.

At Computex on Tuesday, Intel revealed a bevy of information about its upcoming Lunar Lake mobile processors – including how it's juicing its NPU to join the Copilot+ club.

Intel began integrating NPUs into its mobile processors with the launch of Meteor Lake processors back in December 2023. Intel refers to the NPU in Meteor Lake silicon as "NPU 3" and it was capable of churning out roughly 11.5 TOPS – well short of Microsoft's target.

With Lunar Lake, Intel claims its NPU (called "NPU 4") will be capable of churning out 48 TOPS at INT8, and will also support FP16. The x86 giant was able to achieve this by implementing a number of changes – the most notable being increasing the die area dedicated to the NPU. Where Meteor Lake had two NPU compute tiles, Lunar Lake will have six.
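Those INT8 and FP16 figures refer to the numeric precision the NPU crunches at – models are typically quantized down to 8-bit integers to hit the higher rate. Here's a minimal, generic sketch of symmetric INT8 quantization (not Intel's scheme; the sample weights are made up):

```python
# Generic symmetric per-tensor INT8 quantization – an illustration of
# why INT8 TOPS ratings matter, not a description of Intel's pipeline.

def quantize_int8(weights: list[float]) -> tuple[list[int], float]:
    """Map floats to int8 range [-127, 127] with a single scale factor."""
    scale = max(abs(w) for w in weights) / 127
    return [round(w / scale) for w in weights], scale

def dequantize(quantized: list[int], scale: float) -> list[float]:
    """Recover approximate float values from the int8 representation."""
    return [v * scale for v in quantized]

# Hypothetical weights, quantized and restored:
q, s = quantize_int8([0.5, -1.27, 0.02])
print(q)  # [50, -127, 2]
```

The payoff: 8-bit math moves a quarter the data of FP32 and lets the hardware pack more multiply-accumulates per cycle, at a small cost in accuracy.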

Along with a bigger NPU block, Intel says it's upgraded the digital signal processor, boosting the vector compute by 4x – which, apparently, translates to a 12x improvement in overall vector performance.

Intel also worked to use AI to optimize the frequency and voltage curve to drive down energy consumption and doubled the direct memory access (DMA) to alleviate bottlenecks when running heavier workloads like large language models.

"The pipeline has been optimized for higher frequency and NPU 4 is also among the first tape outs in the industry to use ML/AI techniques to achieve up to 20 percent power reduction beyond process," Intel claims.

However, the NPU is only part of the equation. Intel also revealed that several architectural improvements made to the CPU and GPU will push the chip's total AI performance to 120 TOPS, at least on the highest-specced model.
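That 120 TOPS headline figure is simply the per-block peaks added up, using the numbers Intel quoted at Computex for the top-spec part:

```python
# Lunar Lake's claimed peak AI performance, per compute block
# (figures as stated by Intel; the total is just their sum):
contributions_tops = {"NPU 4": 48, "GPU": 67, "CPU": 5}
print(sum(contributions_tops.values()))  # 120
```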

Here's a high level overview of what you can expect from Intel's Lunar Lake mobile chips when they arrive later this year.


Lunar Lake will feature two dies – two fewer than Meteor Lake – stitched together using Intel's Foveros packaging tech.

It's not clear which or how many of these dies will be manufactured in-house. The chip was expected to use Intel's 20A process tech but, as we've previously reported, rumor has it that Intel will employ TSMC's 3nm process tech for at least some of the compute tiles while it works to ramp capacity for its internal nodes.

According to Intel, the chip's new Lion Cove p-cores will deliver 14 percent higher instructions per clock over last-gen, while its new Skymont e-cores promise 2x higher throughput for AI workloads. The higher performance is probably a good thing, considering that these chips will have a much lower core count this time around – topping out at four p-cores and four e-cores.

In total, the CPU is capable of delivering 5 TOPS of AI performance if it has to.

The GPU, meanwhile, remains the largest contributor to TOPS performance. The eight graphics cores in Lunar Lake silicon are about 50 percent faster than Meteor Lake's units and deliver 67 TOPS. And if you'd rather run games on your GPU than AI, the chip is – at least according to Intel's internal benchmarks – up to 80 percent faster.

The second die, called the platform controller tile, will handle I/O functionality – like Wi-Fi, Bluetooth, PCIe, Thunderbolt – as well as some security capabilities.

Finally, Lunar Lake will make use of on-package LPDDR5X memory, similar to Apple's practice across several generations of its M-series parts. Apparently, this does limit the overall capacity to just 32GB of RAM – which, depending on what you're doing, may or may not be a big deal.

This isn't the only similarity to Apple Silicon, which is well regarded for energy efficiency. Intel suggests this new architecture will be about sipping rather than guzzling power. A new integrated power controller – along with software optimizations and an improved e-core cluster – should boost battery life by up to 60 percent, the chipmaker claims.

Of course, Intel isn't the only competitor in the AI PC race. Qualcomm recently unveiled its X-series chips with NPUs capable of hitting 45 TOPS. Those parts have given the chipmaker a head start on its x86 rivals, with Microsoft blazing ahead with Qualcomm's chips at the heart of its AI push.

Not to be left out, AMD used its Computex keynote this week to show off its Ryzen AI 300-series mobile processors, which will be capable of delivering 50 NPU TOPS at FP16, pack either 10 or 12 cores, and feature its Radeon 800M graphics.

And then, of course, there's Apple's new M4, which boasts 38 TOPS, presumably at INT8. While that's not enough to meet the bar for Microsoft's Copilot features, it's not like Apple was ever planning to use them in the first place.

So, it seems, the race to the TOPS is only getting started. ®
