Add AI servers to the list of iDevices Apple Silicon could soon power

Where have you been, Cupertino?

Updated We can add Apple to the list of tech titans developing their own custom AI accelerators – at least that's what unnamed sources are telling the Wall Street Journal.

This week the Murdoch machine reported Cupertino was quietly developing a server chip code-named ACDC, or Apple Chips in Data Center, which is designed to run AI models – a process referred to as inferencing.

Apple has a considerable amount of experience in chip design, having had a hand in tooling its own processors going back to the Apple A4 in 2010. Since then the iDevice maker has transitioned entirely to its own Arm-compatible silicon – officially ditching its last ties to Intel with the launch of the Mac Pro in 2023.

The fruit cart last exited the server business back in 2011 when it killed off its Xserve line for good – it also dallied with server hardware in the 1990s with less success. But while Apple hasn't sold server hardware for more than a decade, that's not to say an Apple chip tuned to server requirements couldn't be used to serve internal workloads.

(And who could forget Nuvia, a startup founded by former Apple processor engineers who wanted to build server chips but couldn't do so at Cupertino. That outfit is now part of Qualcomm.)

In addition to their general-purpose CPU cores, Apple's current Arm-compatible chips boast a fairly potent integrated GPU as well as a purpose-built neural processing unit (NPU) designed to accelerate machine learning operations.

The latest M-series silicon has already proven quite competent in this regard – capable of running substantial large language models (LLMs) with reasonably high performance.

If you've got an Apple Silicon Mac, you can test out running LLMs locally by following our guide here.
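
Or, for a quick taste without the full guide, here's a minimal sketch using the llama-cpp-python bindings – the model path and prompt below are placeholders of our choosing, standing in for whatever GGUF-format model you've downloaded:

```python
# Minimal sketch: running a local GGUF-format LLM on Apple Silicon via
# llama-cpp-python, which uses the Metal backend on macOS by default.
# The model path is a placeholder - download any GGUF model first.
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-3-8b-instruct.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=-1,  # offload every layer to the GPU via Metal
    n_ctx=4096,       # context window size
)

out = llm("Explain memory bandwidth in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```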

Much of this ability is down to how Apple's homegrown chips are designed. Up to eight LPDDR5x memory modules are co-packaged alongside the compute dies on the iGiant's flagship Ultra parts, fueling the CPU and GPU with up to 800GB/sec of memory bandwidth.

Memory bandwidth is a key factor when it comes to inference performance, and is one of the reasons we've seen GPU makers like AMD and Nvidia moving to faster high-bandwidth memory in order to avoid bottlenecks.
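
To see why, consider that generating each token requires streaming essentially every model weight through the chip once, so bandwidth caps token throughput. A back-of-envelope calculation – with illustrative numbers of our own choosing, not measured results – makes the point:

```python
# Back-of-envelope: memory bandwidth puts a ceiling on token throughput,
# since producing one token means reading every weight from memory once.
# All figures below are illustrative assumptions, not benchmarks.
bandwidth_gb_s = 800          # M-series Ultra memory bandwidth
params_billion = 70           # e.g. a 70B-parameter model
bytes_per_param = 0.5         # ~4-bit quantization

model_gb = params_billion * bytes_per_param   # ~35 GB of weights
max_tokens_per_s = bandwidth_gb_s / model_gb  # ~23 tokens/sec ceiling

print(f"Weights: ~{model_gb:.0f} GB, upper bound: ~{max_tokens_per_s:.0f} tok/s")
```

Real-world throughput lands below that ceiling once compute, caches, and batching enter the picture – but the ceiling itself is set by bandwidth.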

While most of the LLMs we've seen running on these chips utilize the GPU via the Metal API, Apple this week unveiled the M4 alongside refreshed iPads – a chip that boosts NPU performance to 38 TOPS, giving it a sizable lead over current-gen parts from Intel and AMD.
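
For developers who'd rather target that NPU – the Neural Engine, in Apple parlance – the usual route is Core ML. Here's a minimal sketch using coremltools, with a toy network of our own invention rather than anything Apple ships:

```python
# Minimal sketch: converting a toy PyTorch model to Core ML and asking
# macOS to schedule it on the Neural Engine (Apple's NPU) where possible.
# TinyNet and the output file name are illustrative, not Apple's.
import torch
import coremltools as ct

class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(64, 10)

    def forward(self, x):
        return torch.relu(self.fc(x))

example = torch.rand(1, 64)
traced = torch.jit.trace(TinyNet().eval(), example)

mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(shape=example.shape)],
    convert_to="mlprogram",
    # CPU_AND_NE asks Core ML to prefer the Neural Engine over the GPU
    compute_units=ct.ComputeUnit.CPU_AND_NE,
)
mlmodel.save("tinynet.mlpackage")
```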

Whether or not Apple's rumored server chips will bear any resemblance to the A- and M-series parts we already know remains to be seen. But it wouldn't be at all surprising to see Cupertino go this route.

As of 2024, nearly every major tech player has deployed – or is in the process of developing – custom silicon for AI inferencing or training. Meta became the latest hyperscaler to roll out custom silicon last month with the widespread deployment of its Meta Training and Inference Accelerator v2 (MTIA v2).

While a lot of AI attention these days revolves around the LLMs behind popular chatbots and services like OpenAI's ChatGPT, Google's Gemini, or Microsoft's OpenAI-based Copilot, MTIA is actually designed to run Meta's internal recommender models – for things like ad selection and placement.

It's a similar story for Google's Tensor Processing Unit and Amazon's Trainium and Inferentia kit, which were initially developed to run their respective internal workloads. But, as with most cloud infrastructure, surplus capacity was eventually made available to the public. That leaves Microsoft, which only revealed its custom CPUs and AI chips last fall.

Given the scale at which Apple operates, it's not unreasonable to believe it sees an opportunity to cut costs by shifting established machine-learning workloads – particularly those too large to practically run on-device – to highly optimized silicon built and controlled in-house.

However, not everyone is convinced. Noted Apple watcher Mark Gurman voiced his doubt on Xitter Monday, posting that while Apple started a project around 2018 to develop server chips, the program was abandoned. He added that a lack of differentiation, high cost, and a focus on on-device AI make the prospect even more unlikely.

Even so, Tim Cook has faced intense investor pressure amid the AI boom to discuss Apple's strategy more openly. Apple's perch atop the list of most valuable companies was taken by Microsoft a few months ago, at least in part on the strength of Redmond's perceived leadership in AI. We discussed what Apple's AI strategy might look like in more detail here.

Up until this spring, Apple had largely avoided using "AI" in its marketing, preferring the term "machine learning" – and even that only sparingly. All of that changed with the launch of the M3 MacBook Air in March, which Cupertino touted as the "world's best consumer laptop for AI."

So, even if an Apple server chip is off the table, as Gurman suggests, it's clear that Cook is keen to change the perception that Apple has fallen behind in the AI race. ®

Updated to add on May 8

It's now rumored that Apple may press the M2 Ultra into service as its server chip.
