Zuck dreams of personalized AI assistants for all – just like email
A model finetuned on your social media profile? What could possibly go wrong?
SIGGRAPH Big public AI models like ChatGPT, Gemini, and Copilot have become near-ubiquitous over the past few years – but Meta CEO Mark Zuckerberg is banking that before long, everyone will have at least one, if not more, personalized AI assistants to call their own.
Onstage at SIGGRAPH in Denver, Colorado, on Monday, Zuckerberg and Nvidia CEO Jensen Huang discussed the Meta boss's vision: A world in which custom AI agents – trained to mimic your personality or brand – might help someone prepare for a difficult conversation, or interact with customers on their behalf long after they've signed off for the day.
"A lot of our vision is that we want to empower all the people who use our products to basically create agents for themselves," he gushed.
"Whether that's all the many, many millions of creators that are on the platform, or the hundreds of millions of small businesses, we eventually want to just be able to pull in all your content, very quickly stand up a business agent, and be able to interact with your customers and do sales and customer support."
This vision is at the heart of Meta's AI Studio offering. It aims to make it easier for users to take its pre-trained Llama models and, as Zuckerberg puts it, "make it so every creator can build sort of an AI version of themselves." This agent or assistant could then be put to work interacting with your followers in a tone and style that mimics your own.
For online communities with such high moral standards as Facebook or Instagram, we can't imagine how that could possibly go wrong.
Nonetheless, Zuckerberg expects these agents to come in many different shapes and sizes, and to be used for everything from automating monotonous tasks to generating entertainment and memes.
Ever a believer in augmented and virtual reality (AR/VR), Zuckerberg also suggested that customers might talk to their AI self using AR glasses – like the ones Meta developed in collaboration with Ray-Ban. Just imagine getting a pep talk from your smart glasses.
However, for customers and enterprises that would rather not give Meta or its various properties access to any more of their data than they already have, Zuckerberg also reaffirmed his commitment to open AI development.
At the heart of all of these efforts is Meta's Llama family of open large language models – the latest and largest version of which launched last week, boasting anywhere from 8 billion to 405 billion parameters, a 128,000-token context window, and support for eight languages.
"I thought Llama 2 was probably the biggest event in AI last year … because when that came out, it activated every company, every enterprise, and every industry" to embrace AI, Huang remarked of the decision to open source the models.
Unfortunately for investors hoping Meta and others' AI infrastructure investments will pay off sooner rather than later, Zuckerberg reiterated that this work won't happen overnight.
"Even if the progress on the foundation models stopped now, which I don't think it will, I think we'd have five years of product innovation for the industry to figure out how to most effectively use all the stuff that's gotten built so far," he explained.
Whether a clearer picture of Meta's AI strategy will assuage investors' anxiety should become apparent after the biz reports its Q2 earnings on Wednesday. But, as you may recall, setting realistic expectations didn't exactly go over that well with investors the last time.
And it's not like shareholders don't have reason to be worried. As Huang was keen to point out toward the end of their chat, Zuckerberg's business has been one of Nvidia's best customers. Meta is on track to deploy some 600,000 of Nvidia's GPUs, which you may recall can cost anywhere from $30,000 to $40,000 apiece. Investors will want to know exactly what that is buying them. ®