You OK, Apple? Seriously, your silicon lineup is … a mess
M4? The M3 is barely six months old, and what about all those Macs still stuck on the M2? When will they get some love?
Comment Apple seems to have skipped a few steps in its silicon roadmap.
The launch of Cupertino's M4 iPad Pro this week wasn't just met with backlash over another tone-deaf ad campaign; it also left us with more questions than answers regarding the health of Apple's silicon portfolio.
For one, the "new" M3-series parts powering the iGiant's MacBooks and iMacs, announced at the Scary Fast event last Halloween, haven't even celebrated their first birthday. What's more, the Mac Mini, Studio, and Pro models are still rocking older M2-series processors, with the M2 Ultra still technically the top-specced part in the lineup.
We weren't expecting the M3 Ultra to make an appearance until WWDC in June, and I think we can all agree the Mac Mini is well overdue for a refresh – its design dates back to 2010.
With the launch of the Armv9-compatible M4 this week, we're left wondering whether an M3 Ultra will ever materialize, or if Cook and Co will jump straight to M4-series silicon. There's some precedent to suggest this could happen: the iMac never received an M2 refresh, jumping straight from the M1 to the M3.
What we know about the M4 so far
The M4, launching alongside the refreshed iPad Pro, offers a number of material improvements over the M3. Most notably, it's built on a second-gen 3nm manufacturing process from TSMC. We understand the M4 is compatible with the Armv9 architecture, including Arm's SME2 extension.
Versus the M3, the M4 adds two additional efficiency cores, bringing the total up to 10 (four performance, six efficiency) on the full-spec 1TB or 2TB iPad Pro, and boosts the neural processing unit – or as Apple prefers, Neural Engine – to 38 TOPS of machine learning grunt. For the 256GB and 512GB iPad Pro, the M4 has nine cores: three performance and six efficiency.
If Apple's performance claims are anything to go by – and do take them with a grain of salt – the new chip's CPU is 50 percent faster than the M2 and its GPU is as much as 4x faster.
It's just too bad all that performance is trapped inside an iPad, where you'll be hard-pressed to find any applications that'll really stretch the M4's legs. As the owner of an M2 iPad Pro, this vulture can tell you with complete confidence that the last thing Apple's tablets needed was higher performance.
Perhaps the most notable improvement – and the one we highlighted in our day-one coverage – is the M4's upgraded NPU. It's a big jump over the 18 TOPS of the M3. What we still don't know is at what precision that spec is achieved. If we had to guess, it's probably 8-bit integer (INT8).
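Why does precision matter? Because vendors' TOPS figures scale with operand width. Here's a rough back-of-envelope – the doubling-per-halved-width conversion is our assumption, not an Apple spec – showing how a figure quoted at INT8 would shrink if measured at FP16:

```python
# Back-of-envelope only: rescale a quoted TOPS figure between precisions,
# assuming throughput doubles each time the operand width is halved.
# That assumption is ours, not Apple's -- the company hasn't said at what
# precision the 38 TOPS figure is measured.
def scale_tops(quoted_tops: float, quoted_bits: int, target_bits: int) -> float:
    """Rescale a TOPS figure from one operand width to another."""
    return quoted_tops * quoted_bits / target_bits

m4_claim = 38.0  # Apple's headline Neural Engine figure
print(scale_tops(m4_claim, quoted_bits=8, target_bits=16))  # ~19 "TOPS" if quoted at INT8 but run at FP16
```

In other words, if the 38 TOPS is an INT8 number, the equivalent FP16 figure would be roughly half that – which is why the unstated precision matters when comparing against rivals' claims.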
While Apple has been shipping NPUs in its chips going back to the A11 in 2017, the tech has become a must-have for rivals Intel and AMD, as they scramble to enable the AI apps that make a computer an AI PC.
That 38 TOPS figure is also significant, as it's substantially higher than the 16 TOPS claimed by AMD's Ryzen 8040-series parts or the 10 TOPS Intel's Meteor Lake chips are capable of. Intel has said the NPU in its Lunar Lake parts will touch 45 TOPS, but those won't hit the market until later this year. Qualcomm's X Elite parts, due out in the same timeframe, will also boast a 45 TOPS NPU.
For the record, you don't need an NPU to run an AI chatbot at home. We have a whole guide for running models like Llama 2 or Mistral-7B on your CPU or GPU, which you can find here. Instead, the interest around NPUs centers on enabling AI functionality without compromising on thermals, performance, or battery life.
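To illustrate just how little exotic hardware is needed, here's a minimal sketch using the llama-cpp-python bindings – the model path and parameters are placeholders, and this isn't necessarily the tooling the linked guide uses:

```python
# Minimal local-inference sketch using the llama-cpp-python bindings.
# The model path is a placeholder -- you'd first download a GGUF build of
# Mistral-7B (or similar) yourself. No NPU involved: this runs on the CPU,
# or on the GPU if layers are offloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="models/mistral-7b-instruct.Q4_K_M.gguf",  # placeholder path
    n_ctx=2048,       # context window size
    n_gpu_layers=-1,  # offload all layers to a GPU if present; set to 0 for CPU-only
)

out = llm("Q: Does a chatbot need an NPU to run locally? A:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"].strip())
```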
Another important improvement Apple claims is a boost to the chip's memory bandwidth. As we've discussed in the past, memory bandwidth is a major bottleneck when it comes to running models. In many cases, AI model performance is limited more by the speed of the memory than by FLOPS or OPS. This is why we've seen GPU makers pushing for faster and higher-density HBM configurations in the datacenter – but the same applies to running AI models on consumer electronics.
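A quick bit of napkin math shows why. During token generation, roughly every weight has to be streamed from memory for each token produced, so bandwidth sets a hard ceiling on throughput. The numbers below are illustrative assumptions – a 7B-parameter model quantized to 4 bits, and two hypothetical bandwidth points – not benchmarks of any particular chip:

```python
# Napkin math: during autoregressive decode, (roughly) every weight is read
# from memory once per generated token, so memory bandwidth caps throughput.
# The bandwidth and model-size figures are illustrative assumptions, not measurements.
def max_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper bound on decode speed when weight streaming dominates."""
    return bandwidth_gb_s / model_size_gb

weights_gb = 7e9 * 0.5 / 1e9  # a 7B-parameter model at 4 bits/parameter is about 3.5 GB
print(max_tokens_per_sec(120, weights_gb))  # ~34 tokens/sec at a laptop-class 120 GB/s
print(max_tokens_per_sec(800, weights_gb))  # ~229 tokens/sec at an HBM-class 800 GB/s
```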
And while not mentioned in Apple's marketing materials, netizens digging through Apple's Xcode 15.4 release candidate found references to Arm's Scalable Matrix Extension (SME), which should also help to speed up machine learning workloads.
So what happened?
There's a lot that we still don't know and probably won't know until the M4 makes its way to the Mac. But that doesn't mean we don't have our theories about why Apple pushed the chip out so quickly after the M3.
Some have speculated that the move to TSMC's second-gen 3nm process tech may have necessitated a redesign on Apple's part. This may well be true. But as one of TSMC's hero customers, Apple would have been made aware of this very early on. Remember, designing a chip does not happen overnight and is an incredibly expensive endeavor – especially on bleeding-edge process nodes like we're talking about here.
As such, we don't find this to be an entirely satisfying explanation. Knowing this was coming, why would Apple launch the M3 and M4 in such quick succession? I guess we'll have to wait until WWDC to see whether the M3 Ultra makes an appearance, or if Apple opts to jump to the M4 Pro/Max/Ultra across the board over the course of the year.
Another possibility has to do with a particularly nasty vulnerability recently uncovered in Apple's current crop of SoCs. The bug, dubbed GoFetch, exploits the chips' data memory-dependent prefetchers (DMPs) to leak cryptographic keys. We've got a full write-up on GoFetch, which you can find here. The DMP can be disabled on the M3, but as researchers pointed out, doing so will likely degrade performance significantly.
It's not clear whether the M4 offers any hardware-level mitigations for the GoFetch vulnerability – the development timelines for these kinds of chips would suggest not – but we can't rule it out.
The more likely reason is that Intel, AMD, and Qualcomm's next-gen chips may have Apple worried. To be clear, a look through the Geekbench browser shows that the M3 – along with its Pro and Max siblings – is still a very competitive little chip. However, CEO Tim Cook now has investors breathing down his neck wondering how he missed the boat on this whole AI thing.
In the past, the metrics folks cared about were single- and multi-core CPU and GPU performance. In the AI age, all anyone wants to hear about – well, the investors anyway; we suspect the average Joe couldn't care less – is FLOPS and OPS.
As we mentioned before, NPU performance has become the metric to beat in the AI PC arena, where Microsoft has set a target of 40 TOPS. Obviously, Apple isn't beholden to this mark. It can decide for itself what a good target is.
But Apple's failure to compete on this metric could lead to AI app developers prioritizing Windows devices over Macs, which could end up making the aluminum-clad systems less attractive to consumers. That not only means smaller profits from device sales, but less app revenue too – which we're sure will really get investors' blood boiling.
As for those weighing an Apple Silicon Mac, the best advice we can give is to wait. Worst-case scenario, the M4 ends up being a marginal upgrade over the M3 with much of the advantage weighted toward AI apps – which you may or may not care about at this point. If that happens, history tells us you'll be able to pick up a last-gen Mac at a pretty steep discount. ®
Editor's note: This article was updated to include the Arm architecture version of the M4 and its core configuration.