Apple ropes off at least 4 GB of iPhone storage to house AI
Better or worse than a surprise U2 album?
Apple's on-device AI suite, dubbed Apple Intelligence, will require 4 GB of device storage space, and more at a later date. That's about the size of an HD movie, for now.
The iBiz declared its storage requirements in its "Introduction to Apple Intelligence on iPhone" note published in advance of the public release of iOS 18.1 next month. That's when Apple Intelligence is officially ushered in on iOS (18.1) and macOS (Sequoia 15.1).
There's also a hardware requirement: iPhone 16, iPhone 16 Plus, iPhone 16 Pro, iPhone 16 Pro Max, iPhone 15 Pro, or iPhone 15 Pro Max. Those with iPad and Mac models powered by M1 or later Apple Silicon will also have access to Apple's AI service.
Both the iPhone 15 Pro and the iPhone 16 line come with a minimum of 128 GB of storage. Don't forget that the OS will eat up some of that space, too, which can typically be 10 or more gigabytes.
Those testing beta versions of Apple's software have reported fluctuating model sizes, some more than 5 GB and others around half that.
Apple expects the storage space required will rise over time. "Storage requirements for on-device Apple Intelligence models will increase as more features roll out," the company cautions in a footnote.
But Apple Intelligence will not be generally available with the release of iOS 18.1. Rather, it will be offered as a beta service, and users will have to join a waitlist and wait several hours for activation. Once the service has been activated, the various neural network models required will be downloaded. Apple has not said whether downloaded models can be removed once installed, though presumably users will have the option to deactivate the service.
The Apple Foundation Model (AFM) is a ~3-billion-parameter generative model that's designed to run efficiently on mobile devices, complemented by a large server-based language model within Apple's Private Cloud Compute infrastructure. There's also a diffusion model for adding graphics in Messages and a coding model in Xcode.
Apple is quantizing its on-device AFM to 4 bits or less per weight to keep its size down – some layers are less important and so get only about 2 bits. "On average, AFM-on-device can be compressed to only about 3.5 bits per weight (bpw) without significant quality loss," Apple's eggheads explain in a paper detailing their work. "We choose to use 3.7 bpw in production as it already meets the memory requirements."
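To see how a fractional figure like 3.5 bpw can fall out of whole-number bit widths, here's a quick sketch of the weighted average across layers. The layer sizes and bit allocations below are illustrative guesses, not Apple's actual configuration:

```python
# Hypothetical mix of layer precisions for a ~3B-parameter model.
# These splits are invented for illustration; Apple hasn't published
# its exact per-layer allocation.
layers = [
    # (parameter count, bits per weight)
    (2_400_000_000, 4),  # most layers quantized to 4 bits
    (600_000_000, 2),    # less important layers squeezed to 2 bits
]

total_params = sum(n for n, _ in layers)
total_bits = sum(n * b for n, b in layers)
avg_bpw = total_bits / total_params
print(f"average bits per weight: {avg_bpw:.1f}")  # 3.6 with these numbers
```

Shifting more parameters into the 2-bit bucket pulls the average down toward Apple's quoted 3.5 bpw; extra bits spent elsewhere (for quality recovery) push it up toward the 3.7 bpw used in production.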
Generally speaking, a 3-billion-parameter model takes up about 6 GB of storage at 16 bits per weight, about 3 GB at 8 bits, and about 1.5 GB at 4 bits.
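The arithmetic behind those figures is straightforward – parameter count times bits per weight, divided by eight to get bytes. A quick back-of-the-envelope check (ignoring embeddings, metadata, and container overhead):

```python
# Rough storage footprint of a 3-billion-parameter model at
# various bit widths. Overheads are ignored, so these are floors.
params = 3_000_000_000

def size_gb(bits_per_weight: float) -> float:
    return params * bits_per_weight / 8 / 1e9  # bits -> bytes -> GB

for bpw in (16, 8, 4, 3.7):
    print(f"{bpw:>4} bpw: {size_gb(bpw):.2f} GB")
```

At Apple's production 3.7 bpw, the weights alone come to roughly 1.4 GB – which suggests much of the 4 GB requirement goes to the diffusion model, adapters, and other supporting assets.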
To make its AFM more flexible, Apple uses "adapters," which the corp describes as "small collections of model weights that are overlaid onto the common base foundation model." Its AI system loads and unloads these on the fly to provide specific capabilities, such as summarization, proofreading, Mail replies, and so on. Adapters weigh in at tens of megabytes each, according to Apple's researchers.
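The adapter idea can be sketched in a few lines: one shared base model on disk, plus small per-task weight deltas that are applied when a feature is invoked and discarded afterward. Everything below – the layer names, the tiny weight vectors, the task names – is invented for illustration; it is not Apple's actual design or API:

```python
# Hypothetical sketch of task-specific adapters overlaid on a shared
# base model. Real adapters modify whole tensors; tiny lists stand in
# for weights here to keep the idea visible.
base_weights = {"attn.0": [1.0, 2.0], "mlp.0": [0.5, -0.5]}

# Each adapter touches only a small slice of the model, which is why
# adapters stay at tens of megabytes while the base model is gigabytes.
adapters = {
    "summarize": {"attn.0": [0.1, -0.1]},
    "proofread": {"mlp.0": [0.0, 0.2]},
}

def apply_adapter(weights, adapter):
    """Return a copy of the base weights with the adapter's deltas added."""
    out = {name: list(vals) for name, vals in weights.items()}
    for name, delta in adapter.items():
        out[name] = [w + d for w, d in zip(out[name], delta)]
    return out

# Swap capabilities by overlaying a different adapter; the base model
# on disk never changes.
active = apply_adapter(base_weights, adapters["summarize"])
```

Loading a different adapter for each feature means Apple ships one base model and many cheap overlays, rather than a separate multi-gigabyte model per capability.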
Initially, the capability of Apple Intelligence will be limited to:
- Writing Tools
- Clean Up in Photos
- Create a Memory movie in Photos
- Natural language search in Photos
- Notification summaries
- Reduce Interruptions Focus
- Intelligent Breakthrough & Silencing in Focus
- Priority messages in Mail
- Smart Reply in Mail and Messages
- Summaries in Mail and Messages
- Siri enhancements, including product knowledge, more resilient request handling, new look and feel, more natural voice, the ability to type to Siri, and more
Reactions to beta versions of the software include terms like "disappointment" and "meh," but perhaps broader availability and expanded capabilities will lead to a few appealing use cases. ®