Google I/O 2021: Being helpful in moments that matter

Responsible next-generation AI

We’ve made remarkable advances over the past 22 years, thanks to our progress in some of the most challenging areas of AI, including translation, images and voice. These advances have powered improvements across Google products, making it possible to talk to someone in another language using Assistant’s interpreter mode, view cherished memories on Photos or use Google Lens to solve a tricky math problem. 

We’ve also used AI to improve the core Search experience for billions of people by taking a huge leap forward in a computer’s ability to process natural language. Yet, there are still moments when computers just don’t understand us. That’s because language is endlessly complex: We use it to tell stories, crack jokes and share ideas — weaving in concepts we’ve learned over the course of our lives. The richness and flexibility of language make it one of humanity’s greatest tools and one of computer science’s greatest challenges. 

Today I am excited to share our latest research in natural language understanding: LaMDA. LaMDA is a language model for dialogue applications. It’s open domain, which means it is designed to converse on any topic. For example, LaMDA understands quite a bit about the planet Pluto. So if a student wanted to discover more about space, they could ask about Pluto and the model would give sensible responses, making learning even more fun and engaging. If that student then wanted to switch over to a different topic — say, how to make a good paper airplane — LaMDA could continue the conversation without any retraining.

This is one of the ways we believe LaMDA can make information and computing radically more accessible and easier to use (and you can learn more about that here). 

We have been researching and developing language models for many years. We’re focused on ensuring LaMDA meets our incredibly high standards on fairness, accuracy, safety and privacy, and that it is developed consistently with our AI Principles. And we look forward to incorporating conversation features into products like Google Assistant, Search and Workspace, as well as exploring how to give these capabilities to developers and enterprise customers.

LaMDA is a huge step forward in natural conversation, but it’s still only trained on text. When people communicate with each other they do it across images, text, audio and video. So we need to build multimodal models like MUM, our Multitask Unified Model, to allow people to naturally ask questions across different types of information. With MUM you could one day plan a road trip by asking Google to “find a route with beautiful mountain views.” This is one example of how we’re making progress towards more natural and intuitive ways of interacting with Search.

We now do more computing where there’s cleaner energy

Shifting compute tasks across locations is a logical progression of our first step in carbon-aware computing, which was to shift compute across time. By enabling our data centers to shift flexible tasks to different times of the day, we were able to use more electricity when carbon-free energy sources like solar and wind are plentiful. Now, with our newest update, we’re also able to shift more electricity use to where carbon-free energy is available.

The amount of computing going on at any given data center rises and falls throughout the day, and it varies from data center to data center around the world. Our carbon-intelligent platform uses day-ahead predictions of how heavily a given grid will be relying on carbon-intensive energy to shift computing across the globe, favoring regions where there’s more carbon-free electricity. The new platform does all this while still getting everything that needs to get done, done — meaning you can keep on streaming YouTube videos, uploading Photos, finding directions or whatever else.

We’re applying this first to our media processing efforts, which encode, analyze and process millions of multimedia files like videos uploaded to YouTube, Photos and Drive. Like many computing jobs at Google, these can technically run in many places (of course, limitations like privacy laws apply). Now, Google’s global carbon-intelligent computing platform will increasingly reserve and use hourly compute capacity on the cleanest grids available worldwide for these compute jobs — meaning it moves as much energy consumption as possible to times and places where energy is cleaner, minimizing carbon-intensive energy consumption.
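To make the idea concrete, here’s a minimal Kotlin sketch of carbon-aware placement. It is not Google’s actual platform — the Forecast type, the region names and the gCO2/kWh figures are illustrative assumptions — but it shows the core decision: given day-ahead carbon-intensity forecasts, run a flexible job in the allowed region and hour where the grid is cleanest.

```kotlin
// Toy sketch of carbon-aware placement; all names and numbers are illustrative,
// not Google's actual platform. Assumes day-ahead forecasts of grid carbon
// intensity (grams of CO2 per kWh) for each candidate region and hour.
data class Forecast(val region: String, val hour: Int, val gramsCo2PerKwh: Double)

// Pick the (region, hour) slot with the cleanest forecast among the locations
// where a flexible job is allowed to run (e.g. because of privacy laws or latency).
fun cleanestSlot(forecasts: List<Forecast>, allowedRegions: Set<String>): Forecast? =
    forecasts
        .filter { it.region in allowedRegions }
        .minByOrNull { it.gramsCo2PerKwh }

fun main() {
    val forecasts = listOf(
        Forecast("europe-north", hour = 13, gramsCo2PerKwh = 90.0),
        Forecast("us-central", hour = 13, gramsCo2PerKwh = 320.0),
        Forecast("us-central", hour = 20, gramsCo2PerKwh = 150.0),
    )
    val slot = cleanestSlot(forecasts, setOf("europe-north", "us-central"))
    println("Run the flexible batch job in ${slot?.region} at hour ${slot?.hour}")
}
```

A real scheduler also has to respect capacity, deadlines and cost, but the preference is the same: cleaner grid first, while everything still gets done.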

Google Cloud’s developers and customers can also prioritize cleaner grids, and maximize the proportion of carbon-free energy that powers their apps by choosing regions with better carbon-free energy (CFE) scores.

To learn more, tune in to the livestream of our carbon-aware computing workshop on June 17 at 8:00 a.m. PT. And for more information on our journey towards 24/7 carbon-free energy by 2030, read CEO Sundar Pichai’s latest blog post.

Project Starline: Feel like you're there, together

People love being together — to share, collaborate and connect.  And this past year, with limited travel and increased remote work, being together has never felt more important.

Through the years, we’ve built products to help people feel more connected. We’ve simplified email with Gmail, and made it easier to share what matters with Google Photos and be more productive with Google Meet. But while there have been advances in these and other communications tools over the years, they’re all a far cry from actually sitting down and talking face to face.

We looked at this as an important and unsolved problem. We asked ourselves: could we use technology to create the feeling of being together with someone, just like they’re actually there?

To solve this challenge, we’ve been working for a few years on Project Starline — a technology project that combines advances in hardware and software to enable friends, families and coworkers to feel together, even when they’re cities (or countries) apart.

Imagine looking through a sort of magic window, and through that window, you see another person, life-size and in three dimensions. You can talk naturally, gesture and make eye contact.

Tackling tuberculosis screening with AI

Applying these findings in the real world

The AI system produces a number between 0 and 1 that indicates the risk of TB. For the system to be useful in a real-world setting, there needs to be agreement about what risk level indicates that patients should be recommended for additional testing. Calibrating this threshold can be time-consuming and expensive, because administrators can only arrive at this number after running the system on hundreds of patients, testing those patients and analyzing the results. 

Based on the performance of our model, our research suggests that any clinic could start from the default threshold identified in our research and be confident that the model will perform similarly to radiologists, making it easier to deploy this technology. From there, clinics can adjust the threshold based on local needs and resources. For example, regions with fewer resources may use a higher cut-off point to reduce the number of follow-up tests needed. 
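As a rough illustration of how a deployment might apply that threshold — not the actual screening system — here’s a short Kotlin sketch. The Screening type, the example scores and the 0.5 default are hypothetical; the point is simply that raising the threshold, as a resource-constrained clinic might, reduces the number of patients referred for follow-up testing.

```kotlin
// Hypothetical illustration of applying an operating threshold to the model's
// 0-1 TB risk scores. The type, scores and 0.5 default are made up for the example.
data class Screening(val patientId: String, val riskScore: Double)

// Patients at or above the threshold are referred for confirmatory testing;
// a clinic with fewer resources might raise the threshold to refer fewer patients.
fun patientsToRefer(screenings: List<Screening>, threshold: Double = 0.5): List<Screening> =
    screenings.filter { it.riskScore >= threshold }

fun main() {
    val screenings = listOf(
        Screening("A", riskScore = 0.82),
        Screening("B", riskScore = 0.31),
        Screening("C", riskScore = 0.57),
    )
    println(patientsToRefer(screenings, threshold = 0.5).map { it.patientId }) // [A, C]
    println(patientsToRefer(screenings, threshold = 0.7).map { it.patientId }) // [A]
}
```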

Using AI to help find answers to common skin conditions

Developing an AI model that assesses issues for all skin types 

Our tool is the culmination of over three years of machine learning research and product development. To date, we’ve published several peer-reviewed papers that validate our AI model and more are in the works. 

Our landmark study, featured in Nature Medicine, debuted our deep learning approach to assessing skin diseases and showed that our AI system can achieve accuracy that is on par with U.S. board-certified dermatologists. Our most recent paper in JAMA Network Open demonstrated how non-specialist doctors can use AI-based tools to improve their ability to interpret skin conditions.

To make sure we’re building for everyone, our model accounts for factors like age, sex, race and skin types — from pale skin that does not tan to brown skin that rarely burns. We developed and fine-tuned our model with de-identified data encompassing around 65,000 images and case data of diagnosed skin conditions, millions of curated skin concern images and thousands of examples of healthy skin — all across different demographics. 

Recently, the AI model that powers our tool successfully passed clinical validation, and the tool has been CE marked as a Class I medical device in the EU.¹ In the coming months, we plan to build on this work so more people can use this tool to answer questions about common skin issues. If you’re interested in this tool, sign up here to be notified (subject to availability in your region).

What’s new for Wear

Samsung and Google have a long history of collaboration. Now, we’re bringing the best of Wear and Tizen into a single, unified platform. By working together we have been able to take the strengths of each and combine them into an experience that has faster performance, longer battery life and more of the apps you love available for the watch.

For performance, our teams collaborated to make apps start up to 30% faster on the latest chipsets, with smooth user interface animations and motion. To achieve longer battery life, we’ve optimized the lower layers of the operating system — taking advantage of low-power hardware cores. That includes handy optimizations like the ability to run the heart rate sensor continuously during the day, track your sleep overnight and still have battery for the next day. Finally, our unified platform will make it easier for developers to build great apps for the watch. 

This isn’t just for Google and Samsung. All device makers will be able to add a customized user experience on top of the platform, and developers will be able to use the Android tools they already know and love to build for one platform and ecosystem. And because of these benefits, you will have more options than ever before, whether it’s choosing which device to buy or picking which apps and watch faces to display.

Helping all your devices work better together

Unlock your car with your phone

Android Auto is designed to make it safer to use apps from your phone while you’re on the road. Today, Android Auto is available in more than 100 million cars, and the vast majority of new vehicles from popular brands like GM, Ford and Honda will support Android Auto wireless. No more cords.

To make your phone even more helpful, we’re working with car manufacturers to develop a new digital car key in Android 12. With this feature, you’ll be able to lock, unlock and even start your car from your phone.

By using Ultra Wideband (UWB) technology, you won’t even have to take your phone out to use it as a car key. And for NFC-enabled car models, it’s as easy as tapping your phone on the car door to unlock it. Since it’s all digital, you can also securely and remotely share your car key with friends and family if they need to borrow your car.

Android 12 Beta: Designed for you

We’re also giving you more control over how much information you share with apps. With new approximate location permissions, apps can be limited to seeing just your approximate location instead of a precise one. For example, weather apps don’t need your precise location to offer an accurate forecast. 
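For developers, this builds on Android’s existing location permissions: an app that requests both the fine and coarse permissions on Android 12 lets the user grant only the approximate one. The hypothetical weather activity below is a sketch under that assumption, not code from the release — it still shows a forecast whichever option the user picks.

```kotlin
import android.Manifest
import android.os.Bundle
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity

// Hypothetical weather screen: requesting both permissions lets the user on
// Android 12 choose to share only an approximate location.
class WeatherActivity : AppCompatActivity() {

    private val locationPermissionRequest = registerForActivityResult(
        ActivityResultContracts.RequestMultiplePermissions()
    ) { grants ->
        when {
            grants[Manifest.permission.ACCESS_FINE_LOCATION] == true ->
                loadForecast(precise = true)   // precise location granted
            grants[Manifest.permission.ACCESS_COARSE_LOCATION] == true ->
                loadForecast(precise = false)  // approximate location is enough for a forecast
            else ->
                loadForecast(precise = false)  // fall back to a default or manually chosen city
        }
    }

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        locationPermissionRequest.launch(
            arrayOf(
                Manifest.permission.ACCESS_FINE_LOCATION,
                Manifest.permission.ACCESS_COARSE_LOCATION
            )
        )
    }

    private fun loadForecast(precise: Boolean) {
        // Fetch and display the weather for the precise or coarse location (omitted).
    }
}
```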

Beyond these new privacy features in Android 12, we’re also building privacy protections directly into the OS. There are more opportunities than ever to use AI to create helpful new features, but these features need to be paired with powerful privacy. That’s why in this release we’re introducing Android Private Compute Core. It lets us introduce new technologies that are private by design, keeping your personal information safe, private and local to your phone. 

Private Compute Core enables features like Live Caption, Now Playing and Smart Reply. All the audio and language processing happens on-device, isolated from the network to preserve your privacy. Like the rest of Android, the protections in Private Compute Core are open source and fully inspectable and verifiable by the security community. 

There are more features coming later this year, and we’ll continue to push the boundaries and find ways to maintain the highest standards of privacy, security and safety.

Your photos, your memories, your way

Control what Memories you want to see

Not all memories are worth revisiting. Whether it’s a breakup, a loss or some other tough time, we don’t want to relive everything. We specifically heard from the transgender community that resurfacing certain photos is painful, so we’ve been working with our partners at GLAAD and listening to feedback to make reminiscing more inclusive. Google Photos already includes controls to hide photos of certain people or time periods, and we’re continuing to add new ones to improve the experience as a result of this continued partnership. Later this summer we’re making these controls easier to find, so you can choose what you look back on in just a few taps. 

We’re also adding more granular controls for Memories in your grid — starting today, you’ll be able to rename a Trip highlight, or remove it completely. And coming soon, you’ll be able to remove a single photo from a Memory, remove Best of Month Memories and rename or remove Memories based on the moments you celebrate.

Search, explore and shop the world’s information, powered by AI

More ways to shop with Google 

People are shopping across Google more than a billion times per day, and our AI-enhanced Shopping Graph — our deep understanding of products, sellers, brands, reviews, product information and inventory data — powers many features that help you find exactly what you’re looking for.

Because shopping isn’t always a linear experience, we’re introducing new ways to explore and keep track of products. Now, when you take a screenshot, Google Photos will prompt you to search the photo with Lens, so you can immediately shop for that item if you want. And on Chrome, we’ll help you keep track of shopping carts you’ve begun to fill, so you can easily resume your virtual shopping trip. We’re also working with retailers to surface loyalty benefits for customers earlier, to help inform their decisions.

Last year we made it free for merchants to sell their products on Google. Now, we’re introducing a new, simplified process that helps Shopify’s 1.7 million merchants make their products discoverable across Google in just a few clicks.  

Whether we’re understanding the world’s information, or helping you understand it too, we’re dedicated to making our products more useful every day. And with the power of AI, no matter how complex your task, we’ll be able to bring you the highest quality, most relevant results.