Yes, Mark Zuckerberg is still pushing the metaverse. Next step: language translation
He probably knows a thing or two about nobody understanding him
Meta has had a bad start to the year.
When it revealed its audience growth was at a standstill and it had already sunk more than $10bn into metaverse technologies, its share price plummeted 27 per cent. More than $230bn of Meta's market cap evaporated. In social media, if you aren't growing, you're dying.
For Mark Zuckerberg, the metaverse can't come soon enough. The CEO is all in on Facebook's transformation from a dull website and app into a bright 3D world, where friends can hang out in virtual environments they create and bend at will. To turn that vision into reality, Meta is investing heavily in the AI needed to power it.
"The kinds of experiences that you'll have in the metaverse are beyond what is possible today," Zuckerberg said on Monday during a very meta event showcasing a few of the AI systems that will drive the new Facebook 2.0.
"It's an immersive version of the internet. Instead of just looking at something on a screen, you're going to actually feel like you're inside or right there present with another person. And that's going to require advances across a whole range of areas, from new hardware devices to software for building and exploring worlds. And the key to unlocking a lot of these is advances in AI."
The core challenge of building the metaverse is making the transition from the physical to the virtual world as seamless as possible. In the future, Meta denizens will don AR glasses and VR goggles to navigate these made-up environments, and use a range of artificial intelligence technologies to interact with one another.
One important entity in all of this, for example, is an all-seeing and all-knowing AI assistant. Meta announced Project CAIRaoke, a model for building smart chatbots that operate in the metaverse. Zuckerberg demonstrated directing a robot called Builder Bot to create new features in the metaverse, instructing the CAIRaoke-powered system using voice commands. "Let's add some clouds," he said. The sky in the metaverse filled with clouds. "Let's add an island over there." An island with moss-covered rocks popped up in the distance.
"Cool. How about we add some trees out here by the sand. Let's get a picnic blanket down here. Let's put up a table. Let's put a stereo. Let's get some drinks as well. Let's get the sound of some waves and seagulls," Zuckerberg added. An imaginary picnic scene, complete with a bench and virtual cans, suddenly materialized. You get the picture.
Welcome to the new world
The metaverse doesn't just need AI assistants. It relies on other areas of machine learning, including a host of generative AI models that can create all sorts of objects in a digital environment. That requires building what Joelle Pineau, director of Meta's AI research team in Montréal, Canada, called world models: simulations of the world that computer systems can use to generate better predictions and responses to user requests.
"A world model is a construct that AI researchers have talked about for years," she said.
AI agents in the metaverse will have to learn from multiple sources of data in the real and virtual worlds. Meta has been compiling all sorts of datasets, from digital maps of home interiors with Habitat 2.0 to first-person recordings of everyday scenes with Ego4D. The wide range of data types, spanning images, audio, video, and text, means AI models will have to be multimodal.
It'll be impossible to annotate all that data to train models. Instead of manually spoon-feeding systems using supervised learning, they will be taught to learn in a self-supervised manner from unlabelled data. Piotr Dollar, a research director at Meta focused on computer vision, described a technique for teaching models the visual representation of objects by showing them many images, covering up pixels, and challenging the model to make its best guess and fill in the obscured parts.
If a model is shown lots of images of car tires, for example, and then a wheel that is partially masked, and it manages to complete the circular shape of the tire, it will have learned the general structure of the object pretty much by itself. That would clear the way for faster training and deployment of neural networks – there would be much less human intervention in the loop.
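The idea can be boiled down to a toy sketch: hide a fraction of an image's pixels, have a model predict them, and score it only on the pixels it never saw. This is an illustration of the masking objective in general, not Meta's implementation; the tiny "tire" image and the trivial mean-value "model" are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_image(image, mask_ratio=0.75):
    """Randomly hide a fraction of pixels, as in masked-image pretraining."""
    mask = rng.random(image.shape) < mask_ratio
    masked = image.copy()
    masked[mask] = 0.0  # zero out the hidden pixels
    return masked, mask

def reconstruction_loss(predicted, original, mask):
    """Mean squared error computed only over the hidden pixels."""
    return float(np.mean((predicted[mask] - original[mask]) ** 2))

# Toy 8x8 "image": a bright ring on a dark background (a cartoon tire).
yy, xx = np.mgrid[0:8, 0:8]
radius = np.hypot(yy - 3.5, xx - 3.5)
image = ((radius > 1.5) & (radius < 3.5)).astype(float)

masked, mask = mask_image(image)

# A deliberately naive "model": predict the mean visible value everywhere.
# A real network would learn to complete the ring instead.
prediction = np.full_like(image, masked[~mask].mean())
print("loss:", reconstruction_loss(prediction, image, mask))
```

No labels are needed anywhere in this loop: the hidden pixels themselves are the training signal, which is what makes the approach self-supervised.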
Meta's VP and chief AI scientist Yann LeCun has for years backed the idea of moving away from relatively slow supervised learning methods to these faster self-supervised approaches.
"We can clearly see that humans and animals can learn new skills, or acquire new knowledge much, much faster than any of the artificial systems that we have built so far," he said.
"They can learn with fewer trials, if it's a kind of a new skill that can, you know, [they] can learn with fewer examples. So what kind of learning do humans and animals use that we are not currently able to reproduce in machines? That's the big question … And we don't know yet how to do this with machines but we have a few ideas like self-supervised learning and things of that type."
The end goal of the metaverse isn't just for users to interact with fancy AI assistants and models, however. Although the internet titan formerly known as Facebook has changed its name to Meta, its mission is still largely the same: to make the world more connected. Human-to-human communication is still key.
Speaking your language
Zuckerberg announced two new projects. One, dubbed No Language Left Behind, is an ambitious machine translation system that will supposedly learn every language, even rare ones for which source material is scarce.
The second project is a universal speech translator, which will allow users to communicate in the metaverse in different languages with the help of instantaneous and simultaneous speech translation.
"We're going to keep building technology that enables more people to access the internet in their language. We hope to extend that to content and experiences in the metaverse too," he said.
"This is going to be especially important when people begin teleporting across virtual worlds and experiencing things with people from different backgrounds. Now we have the chance to improve the internet and set a new standard where we can all communicate with one another no matter what language we speak, or where we come from. And if we get this right, this is just one example of how AI can help bring people together on a global scale." ®