Google I/O 2024: Everything Revealed in 12 Minutes

CNET
14 May 2024 · 11:26

Summary

TL;DR: Google I/O introduced groundbreaking AI advancements and tools. With over 1.5 million developers utilizing Gemini models, Google has integrated this technology across products like Search, Photos, Workspace, and Android. Highlights include Project Astra's faster data processing and the generative video model 'Veo' for high-quality video creation. Additionally, Google announced the sixth-generation TPU, Trillium, and new CPUs and GPUs for enhanced cloud services. The revamped Google Search will now use AI to generate more organized and contextual results. These innovations underscore Google's commitment to integrating AI into daily tech experiences, enhancing user interaction and data privacy.

Takeaways

  • 📈 **Gemini Model Usage**: Over 1.5 million developers are utilizing Gemini models for debugging code, gaining insights, and building AI applications.
  • 🚀 **Project Astra**: An advancement in AI assistance that processes information faster by encoding video frames and combining them with speech input into a timeline for efficient recall.
  • 📚 **Enhanced Search Experience**: Google search has been transformed with AI, allowing users to search in new ways, including with photos, leading to increased usage and satisfaction.
  • 🎥 **Veo Video Model**: A new generative video model that creates high-quality 1080p videos from text, image, and video prompts, offering detailed and stylistic outputs.
  • 🧠 **TPU Generation**: The sixth generation of TPU, named Trillium, offers a 4.7x improvement in compute performance per chip over its predecessor.
  • 🔍 **Google Search Innovation**: A revamped search experience using AI to provide overviews and organize results into clusters, starting with dining and recipes and expanding to other categories.
  • 🗣️ **Live Speech Interaction**: An upcoming feature allowing users to have in-depth conversations with Gemini using Google's latest speech models, with real-time responses and adaptation to speech patterns.
  • 📱 **Android AI Integration**: Android is being reimagined with AI at its core, starting with AI-powered search, a new AI assistant, and on-device AI for fast, private experiences.
  • 📚 **Educational Assistance**: Android's AI capabilities are being used to assist students, such as providing step-by-step instructions for homework problems directly on their devices.
  • 📈 **Customization with Gems**: Users can now customize Gemini to create personal experts on any topic by setting up 'gems', which are simple to create and can be reused.
  • 📱 **Multimodality in Android**: The integration of Gemini Nano with multimodality allows Android devices to understand the world through text, sights, sounds, and spoken language.

Q & A

  • What is Project Astra and how does it improve AI assistance?

    -Project Astra is an initiative that builds on Google's Gemini model to develop agents capable of processing information faster. It achieves this by continuously encoding video frames, combining video and speech inputs into a timeline of events, and caching this information for efficient recall.

  • What are the capabilities of the new generative video model 'Veo' announced at Google I/O?

    -The Veo model can create high-quality 1080p videos from text, image, and video prompts, capturing details in various visual and cinematic styles. It allows users to generate videos with specific shots, such as aerial views or time lapses, and edit them further with additional prompts.

  • What is the significance of the sixth generation of TPUs, called Trillium?

    -Trillium, the sixth generation of Google's Tensor Processing Units (TPUs), offers a 4.7x improvement in compute performance per chip compared to its predecessor. This makes it the most efficient and performant TPU to date, enhancing the processing capabilities for AI tasks.

  • How has Gemini transformed Google Search?

    -Gemini has significantly transformed Google Search by enabling it to handle more complex and longer queries, including searches using photos. This has led to increased user satisfaction and a more dynamic and organized search experience, particularly in areas like dining and recipes.

  • What is Gemini Nano and how does it enhance mobile experiences?

    -Gemini Nano is an AI model that incorporates multimodality, allowing phones to understand the world through text, sights, sounds, and spoken language. By bringing this model directly onto devices, it provides a faster and privacy-focused user experience.

  • How does the new AI-powered search on Android work?

    -The AI-powered search on Android integrates directly with the operating system to provide instant answers and assistance. It allows users to access information quickly and interactively, enhancing the overall efficiency of using their devices.

  • What are 'Gems' and how do they customize the Gemini experience?

    -Gems are a feature in the Gemini app that allows users to create personal experts on any topic. Users can set up a Gem by writing instructions once, and then return to it whenever they need specific information or assistance related to that topic.

  • How does the live experience with Gemini enhance real-time interactions?

    -The live experience with Gemini utilizes Google's latest speech models to better understand spoken language and respond naturally. Users can interact with Gemini in real-time, making it more responsive and adaptable to their conversational patterns.

  • What advancements are being made with on-device AI in Android?

    -Android is integrating AI directly into its operating system, which allows for new experiences that operate quickly while keeping sensitive data private. This integration helps in creating more personalized and efficient user interactions.

  • What new capabilities does the VideoFX tool offer?

    -The VideoFX tool is an experimental feature that explores storyboarding and generating longer scenes. It utilizes the Veo model to allow unprecedented creative control over video creation, making it a powerful tool for detailed video editing and production.

Outlines

00:00

🚀 Project Astra and AI Advancements

The first paragraph introduces Google I/O and highlights the extensive use of Gemini models by over 1.5 million developers for debugging, gaining insights, and developing AI applications. It discusses the integration of Gemini's capabilities across various Google products, including Search, Photos, Workspace, Android, and more. The session also presents Project Astra, which builds on Gemini to develop agents that process information faster by encoding video frames and combining video and speech input. The paragraph concludes with the announcement of Google's newest generative video model, 'Veo,' and the sixth generation of TPU, 'Trillium,' which offers significant improvements in compute performance.

05:04

🔍 AI-Enhanced Search and Personalization

The second paragraph focuses on the transformative impact of Gemini on Google search, where it has facilitated billions of queries through a generative search experience. Users are now able to search in new ways, including using photos to find information. The paragraph details the upcoming launch of an AI-driven search experience that will provide dynamic and organized results, tailored to the user's context. It also introduces a new feature for personal customization of Gemini, called 'gems,' which allows users to create personal experts on any topic. Furthermore, the paragraph discusses the integration of AI into Android, with a focus on improving the user experience through AI-powered search, a new AI assistant, and on-device AI capabilities.

10:05

📱 AI in Android and Multimodality

The third paragraph emphasizes the integration of Google AI directly into the Android operating system, which is the first mobile OS to include a built-in on-device foundation model. This integration aims to enhance the smartphone experience by bringing Gemini's capabilities to users' pockets while maintaining privacy. The paragraph also mentions the upcoming expansion of capabilities with Gemini Nano, which will include multimodality, allowing the phone to understand the world through text, sights, sounds, and spoken language. The speaker humorously acknowledges the frequent mention of AI during the presentation and provides a count of AI references.

Keywords

💡Gemini models

Gemini models refer to a set of advanced AI tools used by developers for debugging code, gaining insights, and building AI applications. In the video, Google highlights the widespread use of these models across various Google products, indicating their importance in the development of next-generation AI applications.

💡Project Astra

Project Astra is a new initiative in AI assistance that builds upon the capabilities of Gemini models. It involves developing agents that can process information more quickly by encoding video frames continuously and combining video and speech inputs into a timeline for efficient recall. This project is showcased as a significant step towards faster and more integrated AI systems.
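
The pipeline described above (encode incoming frames, merge them with speech into a single timeline, cache it for recall) can be made concrete with a small sketch. Google has not published Astra's internals, so this is only a guess at the shape of such a system; every name in it (Event, TimelineCache, cosine) is hypothetical.

```python
# Hypothetical sketch of an Astra-style timeline cache (not Google's code).
from collections import deque
from dataclasses import dataclass

@dataclass
class Event:
    timestamp: float        # seconds since the session started
    modality: str           # "video_frame" or "speech"
    embedding: list[float]  # compact encoding of the frame or utterance

class TimelineCache:
    """Rolling timeline of encoded events, cached for efficient recall."""
    def __init__(self, max_events: int = 1000):
        self.events: deque[Event] = deque(maxlen=max_events)

    def add(self, event: Event) -> None:
        self.events.append(event)

    def recall(self, query_embedding: list[float]) -> Event | None:
        # Answer questions like "where did you see my glasses?" by
        # returning the cached event most similar to the query.
        if not self.events:
            return None
        return max(self.events,
                   key=lambda e: cosine(query_embedding, e.embedding))

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0
```

The real system presumably encodes frames with a vision model and runs recall inside Gemini itself; the deque-plus-similarity-search above only illustrates why caching a merged timeline makes recall cheap.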

💡TPUs (Tensor Processing Units)

TPUs are specialized hardware accelerators designed to speed up machine learning tasks. The sixth generation of TPUs, named 'Trillium,' is mentioned as delivering a 4.7x improvement in compute performance per chip. This advancement is crucial for supporting the complex AI models and applications discussed in the video.

💡AI Overviews

AI Overviews is a feature that provides users with a comprehensive summary of information based on their queries. It is part of Google's Search Generative Experience, which has been used to answer billions of queries in new and complex ways. The feature is set to be launched for everyone in the US, enhancing the search experience with AI-driven insights.

💡Live using Google's latest speech models

This refers to a new interactive experience where users can have in-depth conversations with Gemini using voice commands. Gemini's ability to understand and respond naturally to voice inputs, even allowing interruptions, represents a significant improvement in real-time AI communication.
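
The notable mechanic here is barge-in: the assistant stops talking the moment the user starts. None of what follows is Google's implementation, and the function names are invented; it is a minimal asyncio sketch of racing playback against the microphone.

```python
# Conceptual barge-in sketch (invented names; not Google's speech stack).
import asyncio

async def speak(text: str) -> None:
    """Stand-in for streaming audio playback of the model's reply."""
    for word in text.split():
        print(word, end=" ", flush=True)
        await asyncio.sleep(0.2)

async def wait_for_user_speech(mic_events: asyncio.Queue) -> None:
    """Resolves as soon as the microphone detects the user talking."""
    await mic_events.get()

async def respond(reply: str, mic_events: asyncio.Queue) -> None:
    playback = asyncio.create_task(speak(reply))
    interrupt = asyncio.create_task(wait_for_user_speech(mic_events))
    done, pending = await asyncio.wait(
        {playback, interrupt}, return_when=asyncio.FIRST_COMPLETED
    )
    for task in pending:
        task.cancel()  # user interrupted: stop talking, go back to listening
```

If the user stays quiet, playback finishes and the microphone watcher is cancelled; if they speak first, playback is cancelled mid-sentence, which is the behavior the video demonstrates.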

💡Gems

Gems are customizable features within the Gemini app that allow users to create personal experts on any topic. They are simple to set up and can be written once for repeated use. Gems exemplify the personalization aspect of AI, enabling users to tailor the technology to their specific needs.
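
A Gem is essentially a saved set of instructions. The consumer app hides the mechanics, but a rough analogue with the public google-generativeai Python SDK is a model configured once with a system instruction and then reused; the model name, API key, and prompts below are placeholders.

```python
# Rough analogue of a Gem using the public Gemini API (not the Gemini app).
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# Write the instructions once...
pickleball_gem = genai.GenerativeModel(
    "gemini-1.5-flash",
    system_instruction="You are a pickleball rules expert. "
                       "Answer concisely and name the relevant rule.",
)

# ...and come back whenever you need it.
print(pickleball_gem.generate_content("Are spin serves allowed?").text)
print(pickleball_gem.generate_content("What is the two-bounce rule?").text)
```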

💡Android with AI at the core

This phrase describes the integration of AI into the Android operating system to enhance user experience. The video mentions three breakthroughs: AI-powered search, Gemini as a new AI assistant, and on-device AI for fast, private experiences. This integration aims to make Android devices more intuitive and responsive.

💡Gemini Nano

Gemini Nano is an upcoming model of AI technology that will be included in Android devices, starting with Pixel phones. It is designed to understand the world through multiple modalities, including text, sights, sounds, and spoken language, thereby providing a more integrated and natural interaction with the device.
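
Gemini Nano runs on-device through Android system APIs rather than a Python SDK, so it cannot be shown directly here. As a cloud-side analogue of the same multimodal idea, the Gemini API accepts prompts that mix an image with text; the key and image file below are placeholders.

```python
# Cloud-API analogue of a multimodal prompt (Gemini Nano's on-device
# Android API is different and not shown here).
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")   # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")

photo = Image.open("desk.jpg")            # hypothetical local photo
response = model.generate_content(
    [photo, "What object in this photo could make sound?"]
)
print(response.text)
```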

💡VideoFX

VideoFX is an experimental tool mentioned in the video that allows users to create high-quality 1080p videos from text, image, and video prompts using the new generative video model called 'Veo'. This tool signifies Google's exploration into creative applications of AI, offering users greater control over video creation.

💡Contextual AI

Contextual AI refers to AI systems that can understand and adapt to the context in which they operate. In the video, it is mentioned in relation to Gemini becoming more context-aware, providing suggestions and assistance based on the current situation or activity. This enhances the utility of AI by making it more attuned to user needs.

💡AI-organized search results

This concept involves organizing search results using AI to cluster and present information in a more useful and intuitive manner. The video script describes how this feature uncovers interesting angles and organizes results into helpful categories, enhancing the user's ability to find relevant information.
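
Google hasn't said how the clustering is done, but the generic technique of embedding each result and grouping similar embeddings is easy to sketch. TF-IDF stands in for a real embedding model, and the sample results echo the Dallas restaurant demo; the cluster count and strings are illustrative only.

```python
# Generic result-clustering sketch (not Google's pipeline).
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

results = [
    "Rooftop patio restaurant with skyline views",
    "Steakhouse with live jazz every Friday night",
    "Historic diner serving Tex-Mex since 1912",
    "Wine bar with live acoustic music sets",
    "Century-old saloon turned chophouse",
    "Open-air terrace dining near downtown",
]

# Embed each result, then group similar embeddings into clusters
# ("live music", "historic charm", "rooftop patios", ...).
vectors = TfidfVectorizer().fit_transform(results)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

clusters: dict[int, list[str]] = {}
for label, result in zip(labels, results):
    clusters.setdefault(int(label), []).append(result)

for label, members in sorted(clusters.items()):
    print(f"Cluster {label}:")
    for member in members:
        print(f"  - {member}")
```

A production system would use a learned embedding model and have the LLM name each cluster, but the grouping step itself looks like this.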

Highlights

Gemini models are used by more than 1.5 million developers for debugging code, gaining insights, and building AI applications.

Project Astra is an AI assistance initiative that processes information faster by encoding video frames and combining them with speech input into a timeline for efficient recall.

Google's new generative video model, 'Veo', creates high-quality 1080p videos from text, image, and video prompts in various visual and cinematic styles.

The sixth generation of TPUs, called Trillium, offers a 4.7x improvement in compute performance per chip over the previous generation.

Google is offering CPUs and GPUs to support any workload, including Axion, its first custom ARM-based CPU with industry-leading performance and energy efficiency.

Google Search has been transformed with Gemini, allowing users to search in new ways, including with photos, and receive more complex query responses.

A fully revamped AI Overviews experience is being launched for Google Search in the US, with plans for global expansion.

Google is introducing a new feature that allows users to customize Gemini for their needs and create personal experts on any topic.

Android is being reimagined with AI at its core, starting with AI-powered search, Gemini as a new AI assistant, and on-device AI for fast, private experiences.

The Circle to Search feature helps students by providing step-by-step instructions for solving problems directly on their devices.

Gemini is becoming context-aware to anticipate user needs and provide more helpful suggestions in real-time.

Google is integrating AI directly into the OS, starting with Android, to elevate the smartphone experience with a built-in on-device foundation model.

Android will be the first mobile operating system to include a built-in on-device Foundation model, starting with Pixel later this year.

Gemini Nano, the latest model, will feature multimodality, allowing phones to understand the world through text, sights, sounds, and spoken language.

Google has been testing the new search experience outside of labs, observing an increase in search usage and user satisfaction.

Live using Google's latest speech models allows for more natural conversations with Gemini, including the ability to interrupt and adapt to speech patterns.

Project Astra will bring speed gains and video understanding capabilities to the Gemini app, enabling real-time responses to user surroundings.

Google counted the number of times 'AI' was mentioned during the presentation as a playful nod to the focus on artificial intelligence.

Transcripts

00:00

welcome to Google I/O it's great to have

00:02

all of you with us more than 1.5 million

00:05

developers use Gemini models across our

00:08

tools you're using it to debug code get

00:11

new insights and to build the

00:14

next generation of AI

00:16

applications we've also been bringing

00:18

Gemini's breakthrough capabilities

00:20

across our products in powerful ways

00:23

we'll show examples today across search

00:26

photos workspace Android and more today

00:30

we have some exciting new progress to

00:31

share about the future of AI assistance

00:34

that we're calling project Astra

00:36

building on our Gemini model we

00:38

developed agents that can process

00:40

information Faster by continuously

00:42

encoding video frames combining the

00:44

video and speech input into a timeline

00:46

of events and caching this for efficient

00:49

recall tell me when you see something

00:51

that makes

00:53

sound I see a speaker which makes sound

00:56

do you remember where you saw my glasses

01:00

yes I do your glasses were on the desk

01:03

near a red

01:08

apple what can I add here to make this

01:11

system

01:14

faster adding a cache between the server

01:17

and database could improve

01:20

speed what does this remind you

01:25

of Schrödinger's cat today I'm excited to

01:28

announce our newest most capable

01:30

generative video model called

01:37

Veo Veo creates high quality 1080p videos

01:41

from text image and video prompts it can

01:44

capture the details of your instructions

01:46

in different Visual and cinematic Styles

01:49

you can prompt for things like aerial

01:50

shots of a landscape or a time lapse and

01:53

further edit your videos using

01:54

additional prompts you can use Veo in our

01:57

new experimental tool called VideoFX

02:00

we're exploring features like

02:02

storyboarding and generating longer

02:04

scenes Veo gives you unprecedented

02:07

creative control core technology is

02:10

Google DeepMind's generative video

02:12

model that has been trained to convert

02:15

input text into output

02:18

video it looks good we are able to bring

02:21

ideas to life that were otherwise not

02:24

possible we can visualize things on a

02:26

time scale that's 10 or 100 times faster

02:28

than before today we are excited to

02:31

announce the sixth generation of TPUs

02:33

called

02:39

Trillium Trillium delivers a 4.7x

02:42

Improvement in compute performance per

02:44

chip over the previous generation it's

02:47

our most efficient and performant TPU

02:50

today we'll make Trillium available to

02:53

our Cloud customers in late

02:55

2024 alongside our TPUs we are proud to

02:59

offer CPUs and GPUs to support any

03:01

workload that includes the new Axion

03:04

processors we announced last month our

03:06

first custom ARM-based CPU with

03:08

industry-leading performance and Energy

03:11

Efficiency we are also proud to be one

03:13

of the first Cloud providers to offer

03:16

NVIDIA's cutting-edge Blackwell GPUs

03:19

available in early 2025 one of the most

03:22

exciting Transformations with Gemini has

03:24

been in Google search in the past year

03:27

we answered billions of queries as part

03:30

of our Search Generative Experience

03:32

people are using it to search in

03:34

entirely new ways and asking new types

03:37

of questions longer and more complex

03:40

queries even searching with photos and

03:44

getting back the best the web has to

03:46

offer we've been testing this experience

03:48

outside of labs and we are encouraged to

03:52

see not only an increase in search usage

03:54

but also an increase in user

03:56

satisfaction I'm excited to announce

03:58

that we'll begin

04:00

launching this fully revamped experience

04:03

AI overviews to everyone in the US this

04:05

week and we'll bring it to more

04:07

countries soon say you're heading to

04:10

Dallas to celebrate your anniversary and

04:12

you're looking for the perfect

04:14

restaurant what you get here breaks AI

04:17

out of the box and it brings it to the

04:19

whole

04:20

page our Gemini model uncovers the most

04:22

interesting angles for you to explore

04:25

and organizes these results into these

04:27

helpful

04:28

clusters like you might never have

04:30

considered restaurants with live

04:32

music or ones with historic

04:35

charm our model even uses contextual

04:38

factors like the time of the year so

04:40

since it's warm in Dallas you can get

04:42

rooftop patios as an

04:44

idea and it pulls everything together

04:46

into a dynamic whole page

04:49

experience you'll start to see this new

04:52

AI organized search results page when

04:54

you look for inspiration starting with

04:57

dining and recipes and coming to movies

04:59

music books hotels shopping and more I'm

05:03

going to take a video and ask

05:05

Google why will this not stay in

05:10

place and in a near instant Google gives

05:14

me an AI overview it gives some reasons

05:17

this might be happening and steps I can

05:19

take to troubleshoot so looks like first

05:22

this is called a tonearm very helpful and

05:25

it looks like it may be unbalanced and

05:27

there's some really helpful steps here

05:29

and I love that because I'm new to all

05:31

this I can check out this helpful link

05:33

from Audio Technica to learn even more

05:36

and this summer you can have an in-depth

05:38

conversation with Gemini using your

05:40

voice we're calling this new experience

05:44

live using Google's latest speech models

05:48

Gemini can better understand you and

05:50

answer naturally you can even interrupt

05:53

while Gemini is responding and it will

05:55

adapt to your speech

05:57

patterns and this is just the beginning

06:00

we're excited to bring the speed gains

06:02

and video understanding capabilities

06:05

from Project Astra to the Gemini app

06:08

when you go live you'll be able to open

06:10

your camera so Gemini can see what you

06:13

see and respond to your surroundings in

06:16

real

06:17

time now the way I use Gemini isn't the

06:21

way you use Gemini so we're rolling out

06:23

a new feature that lets you customize it

06:25

for your own needs and create personal

06:28

experts on any topic you want we're

06:31

calling these gems they're really simple

06:34

to set up just tap to create a gem write

06:37

your instructions once and come back

06:39

whenever you need it we've embarked on a

06:41

multi-year journey to reimagine Android

06:44

with AI at the core and it starts with

06:48

three breakthroughs you'll see this

06:51

year first we're putting AI powered

06:54

search right at your fingertips creating

06:57

entirely new ways to get the answers you

07:00

need second Gemini is becoming your new

07:04

AI assistant on Android there to help

07:06

you any time and third we're harnessing

07:10

on device AI to unlock new experiences

07:13

that work as fast as you do while

07:16

keeping your sensitive data private one

07:19

thing we've heard from students is that

07:21

they're doing more of their schoolwork

07:23

directly on their phones and tablets so

07:26

we thought could Circle to Search be

07:29

your perfect study

07:31

buddy let's say my son needs help with a

07:33

tricky physics word problem like this

07:36

one my first thought is oh boy it's been

07:40

a while since I've thought about

07:42

kinematics if he's stumped on this

07:44

question instead of putting me on the

07:46

spot he can Circle the exact part he's

07:48

stuck on and get step-by-step

07:51

instructions right where he's already

07:53

doing the work now we're making Gemini

07:56

context aware so it can anticipate what

07:59

you're trying to do and provide more

08:01

helpful suggestions in the Moment In

08:04

other words to be a more helpful

08:06

assistant so let me show you how this

08:08

works and I have my shiny new Pixel 8a

08:11

here to help

08:16

me so my friend Pete is asking if I want

08:19

to play pickleball this weekend and I

08:21

know how to play tennis sort of I had to

08:23

say that for the demo uh but I'm new to

08:25

this pickleball thing so I'm going to

08:27

reply and try to be funny and I'll say

08:29

uh is that like tennis but with uh

08:33

pickles um this would be actually a lot

08:35

funnier with a meme so let me bring up

08:37

Gemini to help with that and I'll say uh

08:40

create image of tennis with Pickles now

08:45

one thing you'll notice is that the

08:46

Gemini window now hovers in place above

08:49

the app so that I stay in the

08:51

flow okay so that generates some pretty

08:53

good images uh what's nice is I can then

08:55

drag and drop any of these directly into

08:57

the messages app below so like so and

09:00

now I can ask specific questions about

09:03

the video so for example uh what is

09:08

the two bounce rule because

09:11

that's something that I've heard about

09:12

but don't quite understand in the game

09:14

by the way this uses signals like

09:16

YouTube's captions which means you can

09:18

use it on billions of videos so give it

09:20

a moment and there and get a nice

09:24

distinct answer the ball must bounce once on

09:26

each side of the Court uh after a serve

09:28

so instead of trawling through this

09:30

entire document I can pull up Gemini to

09:33

help and again Gemini anticipates what I

09:36

need and offers me an ask this PDF

09:39

option so if I tap on that Gemini now

09:42

ingests all of the rules to become a

09:44

pickleball expert and that means I can

09:47

ask very esoteric questions like for

09:49

example are

09:51

spin uh

09:54

serves allowed and there you have it it

09:57

turns out nope spin serves are not

10:00

allowed so Gemini not only gives me a

10:03

clear answer to my question it also

10:05

shows me exactly where in the PDF to

10:07

learn more building Google AI directly

10:09

into the OS elevates the entire

10:11

smartphone experience and Android is the

10:14

first mobile operating system to include

10:16

a built-in on device Foundation model

10:20

this lets us bring Gemini goodness from

10:21

the data center right into your pocket

10:24

so the experience is faster while also

10:27

protecting your privacy starting with

10:29

pixel later this year we'll be expanding

10:32

what's possible with our latest model

10:34

Gemini Nano with

10:36

multimodality this means your phone can

10:39

understand the world the way you

10:40

understand it so not just through text

10:42

input but also through sights sounds and

10:46

spoken language before we wrap I have a

10:49

feeling that someone out there might be

10:51

counting how many times we have

10:53

mentioned AI today

10:56

[Applause]

10:59

and since the big theme today has been

11:01

letting Google do the work for you we

11:03

went ahead and counted so that you don't

11:07

have

11:08

[Applause]

11:18

to that might be a record in how many

11:21

times someone has said AI


Related tags
AI Innovation, Google I/O, Gemini Model, Project Astra, Generative AI, Veo Video Model, AI Assistance, TPU Generation, Cloud Computing, Speech Recognition, Android AI, Custom CPU, NVIDIA GPU, Search Enhancement, AI Personalization, Multimodal Interaction