Intel slaps forehead, says I got it: AI PCs. Sell them AI PCs

People try to put us down, talkin' 'bout ML generation

Intel CEO Pat Gelsinger used his keynote at the chip giant's Innovation conference in San Jose on Tuesday to repeatedly hammer home the idea of running large language models and other machine-learning workloads, like Llama 2 or Stable Diffusion, locally, privately, and securely on users' own PCs.

"AI is fundamentally restructuring science and so many domains, unleashing new applications and new experiences in productivity and creativity," he said. "But we also believe that ushers in the next era of the PC: the AI PC generation."

Anything to boost those PC sales numbers, huh? The exec credited the work OpenAI's CEO Sam Altman, Microsoft's CTO Kevin Scott, and Nvidia's boss Jensen Huang are doing with AI at the high end, but he argued that the bigger opportunity lies in bringing these models to the masses by letting users run them locally and privately on their own PCs.

Gelsinger admitted the so-called AI PC faces challenges, and said its success would depend heavily on the creation of killer apps. So it's no surprise that a solid chunk of his keynote was spent showing off various AI-enhanced apps running atop Intel's OpenVINO inferencing platform using the processor giant's hardware.

These demos included plugins for image and music generation for popular open source software, such as Audacity and the GNU Image Manipulation Program (GIMP), which we're sure Adobe was thrilled to see after launching its own generative art platform, called Firefly, last week.

Apart from one demo running Stable Diffusion in GIMP on Intel's next-gen Lunar Lake processors - expected in late 2024 - most of the demos highlighted the manufacturer's upcoming 7nm Core Ultra processors, previously code-named Meteor Lake. The chips are among Intel's first client processors to feature a multi-die architecture pairing an Intel CPU die with a TSMC-fabbed GPU.

The microprocessors also feature an integrated neural processing unit (NPU) derived from Intel's Movidius vision processing unit (VPU).

Integrating NPU functionality into mobile processors is by no means new. NPUs have been a staple of mobile chips, including Apple's A- and M-series, for years. Meanwhile, AMD implemented similar tech in its Ryzen 7040 mobile processors earlier this year.

Unfortunately for anyone hoping to see new silicon announced at the Innovation shindig, Intel's Meteor Lake and fifth-gen Emerald Rapids Xeons won't arrive until December 14 at the earliest.

And speaking of what's coming next: Intel's Sierra Forest Xeon SP is due to arrive in 2024 with up to 288 energy-efficient CPU cores, or E-cores, code-named Sierra Glen.

Taking OpenVINO to the Edge

While most of the demos shown off during Gelsinger's keynote focused on PCs and notebooks performing AI inferencing – decision making and content generation by trained models – the tech giant also sees an opportunity to undermine its competitors at the edge of the network.

Intel disclosed Project Strata, which seeks to address challenges around zero-touch provisioning, management, security, and patching of appliances so that latency- and bandwidth-sensitive workloads, AI or otherwise, can be deployed by developers at the edge more easily.

"For the past 15 years, the dominant developer model has been cloud native," Gelsinger said. "We believe that the next decade or two of development isn't cloud native, it's edge native."

Curiously, this strategy also involves extending support for OpenVINO to Arm processors. Sachin Katti, senior veep of Intel's Edge group, told The Register that the partnership is meant to address the need to run AI inference on low-power edge systems, such as Internet of Things sensors, which typically are powered by Arm-compatible controllers and chipsets.

"We do not want to leave our customers stranded because they will have a portfolio that has to span across all of this; they will have devices like sensors with that level of compute," he said, adding Intel is positioning OpenVINO as the means to run AI inference at the edge regardless of the underlying infrastructure.

Intel hasn't given up on training

While much of the emphasis at Intel Innovation revolved around AI inference either on PCs or at the edge, the corporation still intends to compete with Nvidia and others on the AI training front.

On stage, Gelsinger teased what he claimed would be the "world's largest AI supercomputer in Europe," and one of the "top 15 AI supercomputers in the world." Built in collaboration with Stability AI, the company behind the popular Stable Diffusion image generation model, the all-Intel system will reportedly leverage a combination of Xeon processors and 4,000 Gaudi2 AI accelerators.

For those who don't recall, Intel dumped its Nervana AI accelerator development in early 2020 in favor of Habana Labs, which it had acquired in a deal valued at roughly $2 billion. The fruit of that acquisition was Gaudi2, which launched last spring and boasted performance roughly twice that of Nvidia's two-year-old A100, at least in the ResNet-50 image classification and BERT natural language processing models.

Intel is slated to launch its third-generation Gaudi processors next year, and Gelsinger said they were already out of the fab and in the process of being packaged for launch. Meanwhile, Intel's next-gen AI accelerators, code-named Falcon Shores, have been pushed back to 2025, but will now incorporate IP from both its GPU Max and Habana portfolios. ®