Google shows off immersive maps, AR-flavored search, Pixel 7, and more

Your essential de-hyped guide to what the Chocolate Factory teased at developer shindig

Google I/O, the ad biz's annual developer conference, returned to the Shoreline Amphitheater in California's Mountain View on Wednesday for the first time in three years. The gathering remained largely a remote event due to the persistence of COVID-19, though there were enough Googlers, partners, and assorted software developers in attendance to fill venue seats and punctuate important points with applause.

Sundar Pichai, CEO of Google parent Alphabet, opened the keynote by sounding familiar themes. He leaned into the implied sentiment, "We're here to help," an increasingly iffy proposition in light of the many controversies facing the company.

He said he wanted to explain how Google is advancing its mission in two ways, "by deepening our understanding of information so that we can turn it into knowledge and advancing the state of computing so that knowledge is easier to access no matter who or where you are."

The opening video delivered a more succinct version of the message: "Technology has the power to make everyone's lives better. It just has to be built."

And Google has been building things, for better or worse. Pichai announced the addition of 24 new languages to Google Translate, which he attributed to advances in machine learning that can address the long tail of underrepresented tongues.

"With advances in machine learning, we have developed a monolingual approach where the model learns to translate a new language without ever seeing a direct translation of it," he said. "By collaborating with native speakers and institutions, we found these translations were of sufficient quality to be useful."

Pichai segued to Google Maps and described how the company is using computer vision to generate building models from satellite imagery.

"Using advances in 3D mapping machine learning, we're fusing billions of aerial and street level images to create a new high fidelity representation of a place," he explained. "These breakthrough technologies are coming together to power a new experience in maps, called immersive view. It allows you to explore a place like never before."

The video demo during the keynote showed a fly-through view of a restaurant interior. What's remarkable about the scene is that it was not filmed using a drone but was generated using neural-network rendering software analyzing still images. Immersive view works even on mobile devices and will show up in Los Angeles, London, New York, San Francisco, and Tokyo later this year, with additional cities at a later date.

Google is making its Live View scene labeling technology available at no cost to ARCore developers via its Geospatial API. And it's also expanding its eco-friendly routing for Maps.
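
For developers, tapping the Geospatial API largely comes down to enabling a session mode and anchoring content to real-world coordinates. Here's a minimal Kotlin sketch using the standard ARCore SDK; the helper function, its name, and the placeholder coordinates are illustrative rather than Google's own sample code:

```kotlin
import com.google.ar.core.Anchor
import com.google.ar.core.Config
import com.google.ar.core.Session
import com.google.ar.core.TrackingState

// Illustrative helper: opt an ARCore session into Geospatial mode and
// drop an anchor at a given latitude/longitude. Not Google's sample code.
fun placeGeospatialAnchor(session: Session, lat: Double, lng: Double, altitude: Double): Anchor? {
    // Geospatial (VPS-backed) localization must be supported and enabled first.
    if (!session.isGeospatialModeSupported(Config.GeospatialMode.ENABLED)) return null
    session.configure(session.config.apply {
        geospatialMode = Config.GeospatialMode.ENABLED
    })

    // Earth only reports usable poses while its tracking state is TRACKING.
    val earth = session.earth ?: return null
    if (earth.trackingState != TrackingState.TRACKING) return null

    // Anchor with an identity rotation (quaternion w = 1) at the given spot.
    return earth.createAnchor(lat, lng, altitude, 0f, 0f, 0f, 1f)
}
```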

"Eco Friendly routes have already rolled out in the US and Canada and people have used them to travel 86 billion miles, helping save an estimated half million metric tons of carbon emissions, the equivalent of taking 100,000 cars off the road," said Pichai. "I'm happy to share that we are expanding this feature to more places, including Europe later this year."

AI aids the video star

Over at YouTube, auto-generated chapters for videos are expected to expand from eight million today to 80 million over the next year. And speech recognition is being applied to videos to create video transcripts that are now available to Android and iOS users.

So too are auto-translated captions. Pichai said auto-translated captions will be applied to Ukrainian content on YouTube next month as part of a larger effort to increase access to accurate information about the Russia-Ukraine war.

Google's AI recently landed in Google Docs via automatic summarization. "This marks a big leap forward for natural language processing," said Pichai. "It requires understanding of long passages, information compression, and language generation, which used to be outside the capabilities of even the best machine learning models, and Docs is only the beginning."

This tl;dr capability is now available in Spaces.

The Chocolate Factory's fascination with AI is also evident in Workspace enhancements like "portrait light," which will allow users of applications like Google Meet to simulate the presence of in-room lighting, and "portrait restore," which automatically improves video image quality.

To improve the presentation of diverse skin tones in images, Google has open sourced the Monk Skin Tone (MST) Scale, a framework for more accurate color rendering developed in conjunction with Harvard professor and sociologist Dr Ellis Monk.
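
For a sense of how a developer might fold the scale into their own tooling, here is a brief, hypothetical Kotlin sketch: the enum, data class, and audit helper are illustrative inventions, and only the ten-point range comes from the MST scale itself.

```kotlin
// Hypothetical representation of the 10-point Monk Skin Tone scale,
// from lightest (MST 1) to darkest (MST 10). The names and the audit
// helper below are illustrative, not part of Google's release.
enum class MonkSkinTone { MST1, MST2, MST3, MST4, MST5, MST6, MST7, MST8, MST9, MST10 }

// One labelled sample in an image dataset being audited for skin tone coverage.
data class LabelledImage(val id: String, val tone: MonkSkinTone)

// Count how many samples fall into each MST bucket, so gaps in
// representation are easy to spot before training or evaluation.
fun toneDistribution(dataset: List<LabelledImage>): Map<MonkSkinTone, Int> =
    MonkSkinTone.values().associateWith { tone -> dataset.count { it.tone == tone } }
```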

Prabhakar Raghavan, SVP at Google, took a turn on stage talking up various search improvements. The recently introduced multisearch capability – where the user snaps an image and adds text to find out specific information about the thing depicted – is being tweaked to handle the "near me" parameter, to return locally relevant results. This capability should show up in English later this year.

Another near-future innovation is "scene exploration," by which searchers will be able to view a scene through a mobile device camera and get back specific information about each item in it, like the percentage of cocoa in each chocolate bar in view.

Google Assistant has been taught to respond without its "Hey, Google" wake-phrase. Starting today, the US-based Nest Hub Max can respond when looked at and addressed, for those who have opted in and pass both face and voice matching checks. The device will also respond to a limited number of quick phrases, like "Set a timer for five minutes," without "Hey, Google."

Android 13 made an appearance, now in its second beta. It features a new photo picker, more granular media permissions, a runtime notification permission, and, later this year, a unified Security & Privacy settings page. It also brings tablet and personalization improvements.
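
Two of those changes are already concrete for app developers: the system photo picker, which needs no storage permission, and the new POST_NOTIFICATIONS runtime permission. Below is a minimal Kotlin sketch assuming the recent androidx.activity result contracts; the Activity and callback bodies are illustrative:

```kotlin
import android.Manifest
import android.os.Build
import androidx.activity.result.PickVisualMediaRequest
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity

// Illustrative Activity showing two Android 13 behaviours: the system
// photo picker and the notification runtime permission.
class DemoActivity : AppCompatActivity() {

    // System photo picker: returns a single image Uri, or null if cancelled.
    private val pickImage =
        registerForActivityResult(ActivityResultContracts.PickVisualMedia()) { uri ->
            uri?.let { /* load the selected image */ }
        }

    // Android 13 makes posting notifications an opt-in runtime permission.
    private val askNotifications =
        registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
            /* granted == true means the app may post notifications */
        }

    fun onPickPhotoClicked() {
        pickImage.launch(
            PickVisualMediaRequest(ActivityResultContracts.PickVisualMedia.ImageOnly)
        )
    }

    fun onEnableNotificationsClicked() {
        if (Build.VERSION.SDK_INT >= 33) {
            askNotifications.launch(Manifest.permission.POST_NOTIFICATIONS)
        }
    }
}
```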

Sameer Samat, VP of product management at Google, highlighted Android 13's support for RCS (Rich Communication Services), an upgrade to SMS messaging that includes end-to-end encryption. "We hope every mobile operating system gets the message and upgrades to RCS," said Samat. "So your messages are private, no matter what device you're using."

Handy, for security at least, plus eary and wristy

On the hardware front, Google's Pixel 6a will be available for pre-order, starting at $449, on July 21, with availability scheduled for July 28. It comes in Chalk, Charcoal, and Sage, and shares the Google Tensor chip used in the Pixel 6 and 6 Pro. Customers get five years of security updates but no 3.5mm port.

Brian Rakowski, VP of product management, presented a preview of the forthcoming Pixel 7 and 7 Pro, due later this year.

"You can see that we've extended the aluminum finish to the entire camera bar for the next evolution of the pixel design language," he said. "The housing and camera bar are made from a single piece of 100 per cent recycled aluminum and the gorgeous pixel seven Pro and its triple camera system sets a completely new standard for photography, performance and design."

The Pixel 7, he said, will use the next generation Google Tensor SoC and will ship with Android 13.

Speaking of tensors, there was mention of eight pods of Google's TPU v4 AI accelerators in an Oklahoma datacenter, delivering about nine exaflops of aggregate compute capacity for Google Cloud customers, so researchers and businesses can get the same kind of compute Google uses for its internal AI work.

There was also talk of Pixel Buds Pro, Google Wallet enhancements, and a preview of the Google Pixel Watch.

Security and privacy got some attention, with improvements such as surfacing Account Safety Status messages for Google Accounts, extending phishing and malware detection from Gmail to Google Docs, Sheets, and Slides, auto-enrollment in 2-Step Verification (2SV), and virtual payment cards coming to Android and Chrome this summer.

Pichai concluded the keynote with a nod to augmented reality applications, like Google Lens, multisearch, scene exploration and immersive view, as a way to enhance the real world – which is tempting to read as a dismissal of Meta CEO Mark Zuckerberg's multi-billion dollar bet on begoggled, grope-gapped virtual reality.

"That potential is what gets us most excited about AR: The ability to spend time focusing on what matters in the real, in our real lives," said Pichai. "You know, the real world is pretty amazing." ®
