Hey ChatGPT, Summarize Google I/O

Waveform Podcast
17 May 2024 · 113:19

Summary

TLDR: The transcript captures a lively discussion from the Waveform podcast, where hosts Marques, Andrew, and David delve into the latest tech news. They critique Apple's new iPad Pro, focusing on its thinner design and non-uniform camera bump, and discuss the potential of the new tandem OLED display. The hosts also explore OpenAI's GPT-4o and Google's AI advancements, including Google I/O's updates on Google Photos, Search, and Workspace features. They express mixed opinions on the practicality of AI's current capabilities and its future implications for content creation and user experience.

Takeaways

  • 📱 The new iPad Pro is significantly thinner and lighter, making it the thinnest Apple device ever made, measuring 5.1 mm in thickness.
  • 🔍 The iPad Pro features a new tandem OLED display, which offers both the benefits of OLED and super high brightness, a first for tablets.
  • 🖥 The new Apple Pencil Pro is only compatible with the latest iPad Pro and iPad Air, potentially due to the redesigned charging area that accommodates the relocated front-facing webcam.
  • 📸 The iPad Pro's camera bump is non-uniform, with lenses of different sizes, which is a departure from Apple's usual design language.
  • 🎙️ The podcast hosts discuss the potential impact of AI on content creation, including concerns about AI-generated content replacing human-written material.
  • 🤖 OpenAI's event introduced GPT-4o, a multimodal model that can respond faster and understand context from both text and images, although it still struggles with interruptions and conversational nuances.
  • 🔊 Google I/O focused on integrating AI into various Google services, such as Google Photos and Gmail, aiming to provide more contextual and generative search results.
  • 📈 Google introduced new AI models like Gemini 1.5 Pro, which doubles the context window for more accurate responses, and Imagen 3 for better image generation.
  • 📱 Google is working on bringing AI capabilities to Android with features like scam detection and a floating window for real-time interaction with apps.
  • 🛍️ The potential for AI to transform e-commerce and content creation is highlighted, with Google showcasing how it can help users find products and information more efficiently.
  • 🌐 Concerns are raised about the future of web content and SEO as AI-generated summaries could potentially reduce the need for users to visit websites directly.

Q & A

  • What is the new iPad Pro's biggest change in design according to the podcast hosts?

    -The new iPad Pro is substantially thinner; at just 5.1 mm thick, it is the thinnest Apple device ever made.

  • What is the new feature of the iPad Pro that is also affecting the new Apple Pencil Pro compatibility?

    -The front-facing webcam has been moved from the narrow side to the long side of the iPad Pro, which is also where the Apple Pencil Pro is supposed to charge, leading to a redesign of both the charging mechanism and the pencil itself.

  • What is the name of the new AI model introduced by OpenAI, and what does the 'O' in the name stand for?

    -The new AI model introduced by OpenAI is called GPT-4o, where the 'o' stands for 'omni,' signifying its multimodal capabilities.

  • How does the new iPad Pro's display technology differ from its predecessors?

    -The new iPad Pro features a tandem OLED display, which is a technology that provides both the benefits of OLED and super high brightness, offering deeper blacks and better outdoor visibility.

  • What is the purpose of the tandem OLED display in the new iPad Pro?

    -The purpose of the tandem OLED display is to provide a combination of brightness and contrast ratios that have not been available in tablets before, offering better performance outdoors and in bright conditions.

  • What is the new feature in Google Photos that allows users to ask contextual questions about their photos?

    -Google Photos now allows users to ask contextual questions, such as 'What's my license plate number?' or 'Show me Lucia's swimming progress,' and the AI will understand the context and display relevant photos or information.

  • What is the new generative AI feature in Google Search that is being rolled out to all users?

    -The new generative AI feature in Google Search creates a personalized experience by generating extra information, tiles, and pages based on the user's search queries, aiming to provide a more interactive and contextual search experience.

  • What is the update to Gmail that was introduced during Google I/O, and how does it utilize Gemini?

    -The update to Gmail introduces the ability to ask questions about emails and receive generated responses, as well as suggested replies powered by Gemini, allowing for better email organization and tracking.

  • What is the new AI model introduced by Google called, and what is its main purpose?

    -Google introduced a new model called Gemini 1.5 Pro, which has doubled the context window to 2 million tokens, allowing for more in-depth and contextual understanding and responses.

  • What is the new multi-step reasoning feature that combines Search, Maps data, and reviews to provide more specific suggestions?

    -Google Search can now handle multi-step questions, such as 'Find the best yoga or pilates studio in Boston and show the details on their intro offers and walking distance from Beacon Hill,' providing a more tailored and convenient search experience.

Outlines

00:00

๐ŸŽ™๏ธ Introduction and First Impressions of the iPad

The hosts introduce themselves and react to the new 13-inch iPad. They discuss its bright screen, thin design, and express excitement about starting the podcast. They joke about using AI to create podcast content and mention recording two episodes. The iPad's new features, such as its lightweight design and lack of an ultra-wide camera, are highlighted.

05:01

📱 Detailed Discussion on iPad Features

The hosts delve deeper into the iPad's features, noting its thinner design and the absence of the ultra-wide camera. They express mixed feelings about the new design choices, particularly the uneven camera bump, which they find uncharacteristic of Apple's usual aesthetics. They compare the new iPad to previous models and discuss its improved display and overall performance.

10:02

🌟 iPad's Enhanced Display and Practical Use

The conversation continues with a focus on the iPad's display improvements. The new OLED display is praised for its brightness and contrast, making it ideal for various uses like emails, web browsing, and streaming. The hosts share personal experiences using the iPad and discuss the significance of the display's advancements in the tech industry.

15:03

🎵 Music Industry Insights and Logic Pro Update

The hosts address the Logic Pro update for iPad, specifically the new stem splitter feature. They discuss its potential impact on music production and share feedback from a music industry professional who finds it beneficial. The conversation includes thoughts on the AI's role in enhancing creative tools and the practical applications of such technology.

20:04

💬 OpenAI Event and GPT-4o Overview

The hosts review the recent OpenAI event, focusing on the announcement of GPT-4o. They discuss its multimodal capabilities, speed improvements, and conversational nature. The event's demos, including real-time interactions and the AI's ability to understand facial expressions, are analyzed. The potential and limitations of AI in mimicking human conversation are debated.

25:05

🤖 AI's Conversational Abilities and Future Prospects

Further discussion on GPT-4o's conversational abilities, including its interruptions and inflection recognition. The hosts highlight the AI's strengths and weaknesses in mimicking human interactions. They explore the broader implications of AI in daily life and its potential to revolutionize how we interact with technology.

30:06

๐Ÿ” Real-World AI Applications and Concerns

The hosts explore practical AI applications, such as assisting with math problems and providing real-time information. They discuss the challenges of making AI feel natural and the importance of fact-checking AI outputs. Concerns about AI's ability to understand and verify information are raised, emphasizing the need for continuous improvement in AI models.

35:06

📖 Personal AI Use Cases and Reflections

The conversation shifts to personal use cases for AI, such as using it as a virtual book club partner. The hosts reflect on the benefits and limitations of AI in providing companionship and assistance. They consider the broader impact of AI on social interactions and the importance of human connections in the digital age.

40:08

🛠️ AI's Practicality in Everyday Tasks

The hosts discuss AI's practical applications in everyday tasks, such as planning trips and managing daily activities. They highlight the convenience of AI in providing quick and accurate information, while also considering the potential downsides of over-reliance on technology. The balance between human effort and AI assistance is debated.

45:10

🔬 Google I/O Event Analysis and Reflections

The hosts review the Google I/O event, noting its lack of excitement compared to previous years. They recall past events that featured groundbreaking announcements and discuss the shift in focus towards AI and enterprise solutions. The hosts express nostalgia for Google's earlier, more innovative approach to tech conferences.

50:10

📸 Google Photos and AI Integration Enhancements

The hosts discuss updates to Google Photos, including the ability to ask contextual questions and receive relevant photos. They appreciate the improved search functionality and the potential for AI to enhance photo management. The conversation covers practical examples, such as finding specific photos quickly and efficiently.

55:11

🔎 Google Search Generative Experience

The hosts analyze the new Google Search Generative Experience, which provides detailed answers and suggestions. They express concerns about the potential impact on traditional websites and the accuracy of AI-generated content. The shift from keyword-based search to more natural language interactions is explored.

1:00:11

📜 The Future of Search and Content Creation

The conversation continues with a discussion on the future of search engines and content creation. The hosts debate the implications of AI-generated search results and the potential decline of traditional websites. They consider the importance of fact-checking and the role of AI in shaping how we access information.

1:05:12

📅 Gemini 1.5 Pro and Enhanced AI Models

The hosts discuss the introduction of Gemini 1.5 Pro, highlighting its expanded context window and faster processing. They explore its integration into Google Workspace and the potential for AI to improve productivity tools. The conversation includes thoughts on the future of AI-powered assistants and their role in various industries.

1:10:13

📹 Project Astra and AI-Powered Glasses

The hosts review Project Astra, which demonstrates real-time image and video analysis using AI. They discuss the potential applications of AI-powered glasses and the implications for privacy and security. The conversation covers the benefits of live video feed analysis and the challenges of integrating AI into everyday devices.

1:15:14

📧 AI in Google Workspace and Email Management

The hosts explore new AI features in Google Workspace, including email summarization and receipt organization. They discuss the potential for AI to streamline administrative tasks and improve efficiency. The conversation highlights the benefits and limitations of AI-powered productivity tools.

1:20:15

📊 Personalized AI Assistants and Customization

The hosts discuss the introduction of personalized AI assistants, or 'Gems,' for Gemini. They debate the merits of specialized AI models versus a single omniscient AI. The potential for customized AI to enhance user experiences is explored, along with the challenges of managing multiple AI agents.

1:25:16

🚀 Multi-Step Reasoning and Trip Planning with AI

The conversation shifts to multi-step reasoning in Google Search and AI-powered trip planning. The hosts appreciate the ability to receive detailed, contextual recommendations but express concerns about losing control over the planning process. The balance between convenience and personalization in AI-driven services is discussed.

1:30:19

📧 Enhancements and Frustrations with Gmail's AI Features

The hosts discuss recent updates to Gmail, including AI-powered features like email summarization and receipt tracking. They express frustration with Gmail's search functionality and the limitations of AI-generated responses. The need for improved contextual search and user control in email management is emphasized.

1:35:21

🔍 AI Integration in Google Chrome and Android

The hosts review the integration of Gemini Nano into Google Chrome and Android, highlighting features like scam detection and contextual assistance. They discuss the potential benefits and privacy concerns associated with AI monitoring and real-time interaction. The balance between convenience and user privacy is considered.

1:40:21

🔍 Google's Strategic Position in the AI Race

The hosts reflect on Google's strategic approach to AI, emphasizing their extensive integration of AI into existing products. They discuss the company's cautious rollout of AI features and the challenges of balancing innovation with user trust. The conversation includes thoughts on the future of AI and Google's role in shaping it.

1:45:22

🎥 Reflections on AI and Technological Progress

The hosts conclude with reflections on the broader implications of AI in technology. They discuss the potential for AI to revolutionize various industries and the importance of responsible development. The conversation emphasizes the need for continuous improvement and thoughtful integration of AI into everyday life.

Keywords

💡AI

AI, or Artificial Intelligence, refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. In the context of the video, AI is the central theme, with discussions about AI advancements, models, and applications in various Google products.

💡Google I/O

Google I/O is an annual developer conference held by Google, focusing on the latest developments in technology and software products. The event is a platform where Google announces new initiatives and updates related to its AI technologies, as well as other Google services.

💡Gemini

Gemini, within the context of the video, refers to Google's family of AI models and the assistant built on them. It is discussed as being integrated into various Google services to enhance functionality and user experience, with features like summarizing emails, generating responses, and providing information.
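
To make the term more concrete, here is a minimal sketch of calling a Gemini model from Python. It assumes the public google-generativeai SDK, a Google AI API key in the environment, and the "gemini-1.5-pro" model id; the prompt is an illustrative placeholder, not something demonstrated at I/O.

# Minimal sketch: asking a Gemini model for a summary, assuming the public
# google-generativeai SDK and a GOOGLE_API_KEY environment variable.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# Long-context model discussed on the podcast; "gemini-1.5-flash" is a lighter alternative.
model = genai.GenerativeModel("gemini-1.5-pro")

response = model.generate_content(
    "Summarize the key AI announcements from Google I/O 2024 in three bullet points."
)
print(response.text)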

💡Multimodal

Multimodal in AI refers to systems that can process and analyze information from multiple modes of input, such as text, voice, images, and video. In the video, multimodal AI is mentioned in relation to Google's advancements, suggesting that their AI can understand and respond to a combination of these inputs.
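
For a concrete picture of what a multimodal request looks like, here is a minimal sketch that sends text plus an image to GPT-4o using the OpenAI Python SDK (v1+). The image URL and prompt are placeholders, and the snippet is illustrative rather than a re-creation of any demo from the events.

# Minimal multimodal (text + image) request, assuming the OpenAI Python SDK v1+
# and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # the "o" stands for "omni", i.e. natively multimodal
    messages=[
        {
            "role": "user",
            "content": [
                # A single message can mix several input modes:
                {"type": "text", "text": "What facial expression do you see here?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/selfie.jpg"}},  # placeholder
            ],
        }
    ],
)

print(response.choices[0].message.content)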

💡Tokenization

Tokenization in the context of AI and machine learning is the process of converting words, phrases, or other elements into tokens, which are discrete units representing those elements. It is used to prepare text for machine learning models. In the video, tokenization comes up in relation to Gemini 1.5 Pro's expanded 2-million-token context window.
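
As a concrete illustration of what a token is, here is a minimal sketch using OpenAI's open-source tiktoken library (chosen only because it is freely available; Gemini uses its own tokenizer, but the idea is the same). For scale, the 2-million-token context window discussed for Gemini 1.5 Pro works out to roughly 1.5 million words of input.

# Minimal tokenization sketch using OpenAI's open-source tiktoken library.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by recent GPT models

text = "Hey ChatGPT, summarize Google I/O for me."
token_ids = enc.encode(text)

print(token_ids)              # a list of integer token IDs
print(len(token_ids))         # how many tokens this sentence "costs"
print(enc.decode(token_ids))  # decoding round-trips back to the original text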

💡AI-generated content

AI-generated content refers to any content, such as text, images, or music, that is created by AI algorithms without human intervention. In the video, this concept is discussed with regards to Google's new features that allow for the creation of content based on user prompts.

💡Scam Detection

Scam Detection is a security feature that identifies and warns users about potential fraudulent activities, such as scam phone calls. In the context of the video, Google's AI technology is being utilized to provide scam detection, enhancing user safety and security.

💡Google Photos

Google Photos is a photo sharing and storage service developed by Google. In the video, it is discussed how Google Photos is being updated to use AI for more contextual searching and understanding of photos, allowing users to find specific images more easily.

💡Google Workspace

Google Workspace, formerly known as G Suite, is a collection of cloud computing, productivity, and collaboration tools developed by Google. The video mentions the integration of Gemini into Google Workspace apps, enhancing productivity and collaboration through AI-powered features.

💡Live Shopping

Live Shopping refers to the trend of online shopping through live streaming, where products are showcased and sold in real-time. The video discusses how Google is incorporating AI into live shopping experiences, suggesting a new direction for e-commerce and affiliate marketing.

💡AI Overlords

The term 'AI Overlords' is a humorous or metaphorical way to refer to the increasing influence and capabilities of AI in various aspects of life. In the video, it is used in a light-hearted context to discuss the potential future roles of AI in decision-making and personal assistance.

Highlights

Introduction of a new iPad Pro with a thinner and lighter design, being the thinnest Apple device ever made at 5.1 mm.

The new iPad Pro features a brighter display, utilizing tandem OLED technology for better outdoor visibility and contrast.

The iPad Pro's new Apple Pencil Pro is only compatible with the latest iPad Pro and iPad Air, possibly as a strategic move to encourage upgrades.

Discussion of the non-uniform camera bump design on the iPad Pro, which deviates from Apple's typical aesthetic.

The iPad Air receives updates with features previously exclusive to iPad Pros, such as the M2 chip and relocated webcam.

OpenAI's event introduced GPT-4o, a multimodal model capable of processing various types of data and interactions.

GPT-4o is designed to have faster response times and can understand context from both text and images.

Google I/O focused on AI advancements, with an emphasis on machine learning and its integration into various Google services.

Google introduced updates to Google Photos that allow for more contextual searching and understanding of photo content.

The integration of Gemini, Google's AI model, into Google Workspace apps aims to enhance productivity and user experience.

Google's new music generation tool, Music AI Sandbox, and video generation model, Veo, along with the Imagen 3 image model, demonstrate advancements in creative AI capabilities.

Google's AI developments include the ability for multi-step reasoning in Google Search, providing more direct and useful answers.

Google's efforts in generative AI are positioned as a shift towards a future where users receive direct answers rather than navigating through search results.

The potential impact of generative AI on content creation and the sustainability of ad revenue for websites is discussed.

Google's AI updates include features for Android, such as Gemini Nano with multimodality, and scam detection in phone calls.

The debate over the use of AI in trip planning and the preference for user control versus AI-generated itineraries.

Google's introduction of specialized AI agents, Gems, which have ultra-deep knowledge of specific topics, versus a single omniscient agent.

Transcripts

00:00

the screen is pretty dope it's really

00:02

bright it is no that's the actual

00:04

biggest this is the 13 in right Yep this

00:07

is pretty sick we should go live we

00:09

should start the Pod damn it's good

00:11

first reaction for David damn are we

00:13

recording this okay well there it is

00:16

okay that's the that's the intro the

00:18

exactly can you guys wow this is really

00:21

freaking

00:23

[Music]

00:27

thin yo what is up people of the

00:29

internet welcome back to the waveform

00:31

podcast we're your hosts I'm Marques I'm

00:33

Andrew and I'm David and we're all

00:36

actually three

00:38

ai ai ai and we're going to be doing

00:41

this podcast uh by training it on all

00:43

previous waveform episodes

00:46

so whatever whatever we say will just be

00:49

a remix of things we've already said and

00:52

uh it'll be great thanks for tuning in

00:54

just kidding uh wish we have two

00:56

episodes to record today and that would

00:58

take a lot of no we're we got a I AI to

01:01

talk about today there's a lot of

01:03

acronyms that I'm that I'm mixing up but

01:05

IO happened open AI had an event a lot

01:08

of AI things were talked about and uh we

01:10

should we should summarize it yeah cuz I

01:13

didn't watch any we can just use Gemini

01:14

to summarize it oh yeah true yeah that

01:16

would actually be perfect but first wait

01:18

I have the iPad here it was iPad week

01:20

last week oh right uh I was away so I

01:22

didn't get to fully uh do my review yet

01:25

but I have been using it and well this

01:27

is the 13 I just hit the mic really hard

01:30

this is the it would have been harder

01:31

with the old heavy one though true yeah

01:33

this is thinner and lighter than ever it

01:35

is uh you this is the first time you

01:36

guys are like seeing it in person cuz

01:38

you like left immediately after it came

01:40

in basically yeah I like took it out the

01:42

box and then like packed up and flew to

01:43

Columbia so like you guys didn't get to

01:45

see it guys look at this cool iPad I

01:47

have okay bye see you later but um you

01:50

know getting it back I have been I one

01:52

of those people who has been using an M1

01:54

iPad Pro for how how years however long

01:57

it's been since it came out and set this

02:00

one up mirror it to the old one so it

02:02

literally has that old wallpaper I don't

02:04

know if you guys remember this wallpaper

02:05

from M1 it's like years old and I've

02:08

been using it the exact same way and

02:11

doing all the exact same things which is

02:13

which is emails web browsing YouTube wow

02:18

uh some Netflix offline Netflix on the

02:21

plane oh wait what were you watching um

02:24

drive to survive I'm very behind don't

02:26

judge me uh and yeah Twitter buying

02:30

things on Amazon like I got some widgets

02:32

going it's just the same thing as before

02:35

powerful and it's just thinner and

02:38

that's that's mainly you crushing a

02:40

bunch of artistic tools with it oh of

02:42

course yeah everything just shoved into

02:44

this iPad no it's that's here I'll let

02:46

you guys hold it if you guys don't want

02:47

to actually experience I know David

02:49

wanted to see it I still have a huge

02:51

issue this is the most oh you weren't

02:54

here last week when we talked about it

02:55

right it is the most like non-uniform

02:59

camera bump I feel like I've ever seen

03:00

out of Apple and it feels so un-Apple

03:02

like not a single circle on there is the

03:04

same diameter as a different one yeah I

03:05

never thought I didn't think about it

03:07

that much so they got rid of the ultra

03:08

wide camera didn't really talk about it

03:10

g they didn't even mention that at all

03:11

whenever they get rid of something oh

03:12

they they never want want to say it out

03:14

loud and then so now we just have the

03:15

regular the lidar and the flash and

03:19

then whatever that tiny cutout is maybe

03:21

that's a there is a microphone mic oh

03:23

the mic other one this is lidar I think

03:25

yeah big one is lidar is this big Flash

03:28

and then I don't even know what that is

03:30

yeah none of them are the same size

03:32

though correct it feels like the models

03:35

that we get yeah there's usually at

03:37

least one that's the same size it could

03:39

have at least gone with the LG velvet

03:40

thing and made it like big medium small

03:43

you know that Co but instead it's like

03:45

medium large small it's damn it just

03:48

doesn't feel like an apple camera bump

03:51

yeah it's unesthetic the screen does

03:53

look dope though yeah okay so the actual

03:55

biggest differences so if you've seen

03:56

The Impressions video already you know

03:57

the new differences but there are some

03:59

interesting yeah there's some

04:01

interesting choices with this iPad one

04:03

of them being that it is substantially

04:05

thinner and it's the thinnest Apple

04:09

device ever made with a screen that's my

04:13

oh yeah cuz no the Apple card or yeah

04:16

they've made thinner objects before yeah

04:19

but this is the thinnest device yeah the

04:21

thinnest device they've ever made and it

04:23

is 5.1 mm thin do you know how much

04:25

thinner it is than the last one probably

04:28

about 2 mm thinner 2 mm I don't know

04:31

what I'm I'm American so that means

04:32

nothing it's not a lot but it is it is

04:35

visually noticeable what is that in

04:37

Subway sandwiches I I'm American so I I

04:40

can't do it's one piece of lettuce one

04:43

piece of lettuce I have the actual

04:44

answer it's hilarious it's a 20th of an

04:47

inch thinner a a 20th of an

04:50

inch I have a do you know they keep

04:53

comparing it to the iPod Nano clip I

04:57

think it was and they're saying it is

04:58

thinner not the clip one they're

05:00

comparing it to the long one the long

05:02

Nano the long Nano I thought I saw a

05:05

different one where they were comparing

05:06

it to the clips both they're comparing

05:09

it to the seventh gen the clip I believe

05:10

was the Sixth Gen yeah the seventh gen I

05:13

saw something the other day but are they

05:14

including the camera bump

05:17

in the uh on the iPad or are they only

05:20

counting the sides I don't think they're

05:22

including the camera bump yeah wait here

05:25

iPod Shuffle third gen oh it does have a

05:28

clip this has a clip but they counting

05:30

the clip no I don't think you count well

05:32

there's no screen on it but that's not

05:33

what they

05:34

said I don't think they counted the clip

05:36

in all the videos it's just the aluminum

05:38

housing up against it that's the body of

05:40

it mhm I just think it's an interesting

05:42

choice because they decided to actually

05:44

make it thinner again like we got years

05:45

and years of iPads that were totally

05:47

thin enough right and then this year

05:49

coming out and making a big deal about

05:50

it being thinner was really fascinating

05:52

to me cuz I thought we were done with

05:54

that yeah Johnny IV's gone Johnny IV's

05:55

gone but here we are and to me there's a

05:59

lot of choices about that this iPad that

06:01

feel Tim Cook and I'm going to try to

06:04

explain this in the review but here's

06:06

maybe the most Tim Cook choice of them

06:08

all there's a new Apple pencil Pro right

06:11

mhm the new Apple pencil Pro is only

06:14

compatible with the newest iPad Pro that

06:16

you're holding or iPad Air yeah now why

06:19

is that why is that true because they

06:22

moved the webcam the front facing webcam

06:25

from the narrow side to the long side

06:27

which is where it belongs but now that's

06:29

right exactly where that pencil is

06:32

supposed to charge so now there's both

06:34

the webcam and the apple pencil what are

06:37

you trying to do intive the brightness

06:39

oh it's on the right you have to pull

06:41

from the from the right pull pull from

06:43

the battery oh yeah okay so so okay so

06:49

the apple pencil now charges in the same

06:51

exact spot but there's also now webcam

06:53

there so they had to arrange the magnets

06:56

and the way it charges those components

06:57

differently on that side right so it

06:59

doesn't work the same way as the old

07:02

ones why didn't they just move the apple

07:04

pencil to charge on the other side of

07:07

the iPad like the bottom yeah because of

07:10

the the folio the pogo pins for the why

07:14

why just move it why might put like one

07:17

to the left and one to the right on the

07:19

top magnets actually it'd be pretty yeah

07:22

that's a good idea I think it's a sneaky

07:24

great smart way of making sure if you

07:27

want to use any features of this new

07:29

apple pencil you need to buy the newest

07:32

iPad I think that's true so yeah that

07:36

happened it's like one step more than at

07:38

least the like new pixel coming out and

07:40

being like only available on Pixel and

07:42

then 6 months later being like okay it's

07:44

actually available on all at least this

07:46

has somewhat of a hardware difference

07:49

which is why it strikes me as very Tim

07:50

Cook cuz supply chain guy would find a

07:51

way to make sure nothing changes and the

07:53

other ones and now it just works with

07:55

the new ones it's sort of like uh how

07:57

the M2 MacBook Pros sold really really

08:01

badly because the M1 was so good it's

08:04

similar in that the iPad is so iPad and

08:08

what more iPad could you add to the iPad

08:10

that they have to like have a better

08:12

accessory that you have to have the new

08:14

iPad to be able to use yeah cuz you know

08:18

it's thinner cool but no one's going to

08:19

buy it cuz it's thinner exactly and the

08:21

good display on the last generation was

08:24

almost as good as this display yeah so I

08:26

can talk about the display in a second

08:27

the other Tim Cook thing is the new iPad

08:29

Air

08:30

uh is just a bunch of newer parts that

08:34

used to be an iPad Pros it's like it has

08:35

the M2 now it has like the webcam on the

08:37

side again and all these things that are

08:39

like oh we we we now have a better iPad

08:41

Air full of things that we already saw

08:44

in iPads so that's very efficient very

08:48

efficient but yeah the new display it's

08:50

just brighter that's the main thing

08:52

you'll notice it is obviously a really

08:53

interesting Tech to make it happen it's

08:55

this this tandem OLED display which is

08:58

not something I've ever seen in a a

08:59

shipping Gadget before which is cool

09:01

it's in cars I in cars yeah it's in car

09:04

displays yeah we we went down this

09:06

Rabbit Hole a little bit ago because

09:08

Apple really wanted everyone to believe

09:10

like this is we we just invented the

09:12

tandem OLED display but if you go on

09:16

like display makers websites and display

09:18

industry journals like the first concept

09:22

displays by LG were coming out in like

09:25

2017 2018 they entered mass production

09:28

in with LG

09:29

in

09:30

2019 of a specific of a specific stacked

09:34

OLED yeah referred to in as tandem OLED

09:38

and it's the same thing and then Samsung

09:40

just began as far as I could tell

09:42

Samsung just began their mass production

09:44

for iPad this year um and I couldn't

09:48

find any like leaks or like confirmation

09:51

like this product uses an LG tandem OLED

09:55

display but in LG's marketing materials

10:00

on the industrial side of things they're

10:02

very clear like we are shipping these

10:04

things like hot cakes in our Automotive

10:06

departments we also had a viewer sorry

10:09

Zach reach out and say that honor has a

10:11

product from a few months ago with a

10:13

durable double layer OLED screen yeah

10:15

it's it's so in the industry that LG

10:19

actually makes a 17in tandem OLED

10:22

folding touchcreen that I wouldn't be

10:26

surprised if is in products like the the

10:30

okay I could not find any sources for

10:32

this but there's only one thing out

10:33

there with a 17inch folding touch OLED

10:36

screen yeah the Asus Zenbook fold

10:38

whatever and I'm like is that is that a

10:42

tandem OLED is was Asus that is interesting

10:44

anyway this is all like Ellis's tinfoil

10:46

hat conspiracy display industry nonsense

10:49

so take it all the great can that be a

10:50

new segment of the podcast yeah tinfoil

10:52

hat segment I will say the purpose of

10:54

the tandem OLED display is to give you

10:57

both the benefits of OLED and super High

10:59

brightness so in tablets we straight up

11:01

have not had this combination of

11:03

brightness and contrast ratios yet so

11:06

I'll give them credit for that like the

11:08

we've had oleds in that huge also super

11:11

thin what is it called Galaxy tab

11:14

pro4 or whatever just gigantic tablet we

11:17

reviewed and it's awesome inside it's

11:19

this super huge bright OLED and it feels

11:21

like you're holding this magical piece

11:23

of glass but it has like 500 600 nits

11:27

Max brightness cuz it's not as bright

11:30

get bright so here we are we have this

11:32

new awesome thing that is better at Max

11:36

brightness better Outdoors better on a

11:38

plane with the window open right that

11:40

type of thing great yeah it's the exact

11:42

same brightness as the mini LED display

11:44

on the last iPad Pro 12.9 in model and

11:49

now deeper blacks yes .1 in bigger because

11:54

they're off love that because they're

11:56

pixels that are off so infinite contrast

11:59

ratio yeah so I've been using it I don't

12:01

know I'm going to put my review together

12:02

I don't think it's going to be a

12:03

standard review I think it's more just

12:04

me diving into my maybe tinfoil hat

12:07

theories about some of the weird choices

12:09

why is it thinner this year why is the

12:11

apple pencil only compatible with this

12:13

one this year why do they do all these

12:15

things why they get rid of the ultra

12:16

wide camera there's a bunch of

12:18

interesting choices that they made with

12:19

I think that is a better angle to go

12:21

with for the review because what else

12:23

are you going to say exactly yeah it's

12:25

the same like I saw someone did a review

12:27

that wasn't at all about the device

12:29

really it was like what does iPad OS

12:31

need for me to actually want to buy the

12:33

M4 iPad Pro I would also be into that

12:36

which is a better video yeah like and

12:38

it's also such an evergreen that's been

12:40

like the question on our minds for like

12:41

four years now it's like this is a

12:43

really powerful thing that's still an

12:46

iPad so what do we wanted to do MH okay

12:50

sick that's the iPad I just wanted to

12:51

throw one more thing in from last

12:54

week I did because last week I very

12:58

dismissively was like stem splitter is

13:01

boring and I uh sick got drinks with a

13:05

friend of mine who is you know working

13:08

in the upper echelons of the music

13:09

industry which I am not um and he

13:13

graciously informed me that I was uh

13:15

pretty wrong and that stem splitter uh

13:18

works way better than previous attempts

13:21

and it's integrated really well into

13:23

logic and as like a utility um it's

13:27

actually like a really cool thing to

13:29

have built into logic maybe not as a

13:32

creative tool for making things

13:34

necessarily but if you're a professional

13:37

he was like this is very very cool so

13:39

thank you unnamed friend yeah but yeah

13:43

so sorry I wrote it off if you're

13:45

working in music maybe you should check

13:47

this out and not listen to me yeah also

13:48

for those that maybe didn't watch the

13:50

podcast last week or or forgot about it

13:53

it's basically that during the iPad

13:55

event they announced logic pro two for

13:58

iPad yeah and also an update to logic I

14:00

think it's called logic 11 now something

14:03

like that but they're uh they're

14:05

introducing a stem splitter feature

14:06

where you can add any music file and it

14:09

will separate the tracks I got a live

14:11

demo of it and this is one of those

14:13

things where the demo was so good that I

14:16

was like this is definitely tailored

14:19

exactly for this feature and I don't

14:20

know how well it will work but in that

14:22

demo it blew my mind at how well it

14:24

worked and it was like here we gave it

14:26

this song and we added we did the stem

14:29

splitter thing and it separated it out

14:30

and here's the drums and here's the

14:33

vocals and here's the bass and here's

14:34

the different instruments all split out

14:36

through the power of AI and apple

14:38

silicon and you play back just the

14:40

vocals and it's like it sounds like I'm

14:41

just listening to a recording of just

14:44

the vocals and yeah super cool tool to

14:47

use and to me it always struck me like I

14:49

don't use logic but it struck me as just

14:50

like getting rid of some plugins that

14:52

we're doing that in the past it's just

14:53

like yeah we just built it in now it's

14:55

like when Final Cut does a better

14:57

tracking tool you're like oh well that's

14:58

better than I was using so great so I

15:00

just I I would love to see it on other

15:02

tracks also a question could you just

15:04

remove one of the stems and then content

15:06

ID wouldn't like go off I think well

15:09

Content ID is kind of like

15:11

sophisticatedly Trend uh trained on more

15:14

than just like the exact set of sounds I

15:16

think it would still go off I think it

15:18

would just because of some of the

15:19

examples I've seen in the past of like a

15:21

car driving by and like the radio is

15:24

playing and they've gotten hit with

15:25

stuff like that I've heard people have

15:27

hummed songs and it gotten copy

15:29

really yeah that's I think it's a great

15:32

question but I have a feeling it goes on

15:35

the it's a little more cautious when it

15:38

comes to it and would probably still hit

15:39

it okay that could be a short how many

15:42

parts of a song can we delete before we

15:44

get maybe we don't want to like destroy

15:46

the channel testing make a new channel

15:48

to test that yeah cool the jury is still

15:51

out on uh ChromaGlow uh I haven't I

15:55

haven't used like I don't have any

15:57

projects here where I'm like doing stuff

15:58

like that so I haven't used it everyone

16:00

I've talked to who is using it for work

16:02

says the CPU usage is higher than they

16:04

expected for like a apple native thing

16:08

uh which might mean there actually is

16:09

some AI stuff happening yeah but uh that

16:12

would be my guess cuz it's probably

16:13

really uh AI focused and M4 is

16:16

specifically tailored to that because

16:17

they have all the neural cores there's I

16:20

there's an email in uh in Apple's inbox

16:23

that's me being like please explain this

16:24

to me okay yeah and ChromaGlow was the

16:28

AI feature where they basically sampled

16:29

a bunch of old really famous instruments

16:32

and tried to extract the vibes from it

16:35

so that you can The Vibes to allegedly

16:38

straight VI extractor Vibe extractor

16:40

that's what they should have called it

16:41

dude way much more tell yeah telling

16:44

yeah well speaking of an event that was

16:46

lacking a lot of

16:48

Vibes open AI had their event on Tuesday

16:52

so I didn't get Monday Monday it's

16:54

Wednesday yeah thank you so I didn't get

16:56

to watch it no how did it go

17:00

well it was interesting it was

17:03

interesting yeah I would say the the

17:05

high level stuff uh they announced a new

17:07

version of GPT 4 there was this big

17:11

thing where everyone was thinking they

17:12

were going to unveil their own search

17:13

engine the day before Google I/O that

17:15

ended up not being true wild um Sam also

17:18

tweeted that's not true like the day

17:21

before which is funny but they did

17:23

unveil something called GPT

17:25

4o which I think is a terrible name okay

17:28

uh also looks weird does the o stand for

17:30

something yes it stands for Omni I had

17:32

to do a lot of digging to find this on

17:34

the day of okay and it's because it's

17:36

now a multimodal model so Omni is

17:39

supposed to mean everything it does

17:41

everything Omni yeah cool but it it's

17:44

kind of weird cuz it looks like 40 it's

17:46

a lowercase o it's a lowercase o that feels

17:49

weird it can't like smell it's not on me

17:52

yet that's true they just didn't go over

17:54

that part yet yeah so it does native

17:57

multimodal which is good because the only

17:59

other model I believe that was natively

18:01

multimodal was Gemini mhm um it is much

18:05

faster than gbt 4 they say that it can

18:09

respond in as fast as 232 milliseconds

18:12

which is probably like the most ideal

18:15

possible case during the event they were

18:18

sort of talking to it naturally and

18:20

talking back and forth and you know had

18:21

that Scarlet Johansson voice when and

18:24

when asked when they asked Mira Murati

18:27

like is this ScarJo voice she's like

18:29

what are you talking about we what no

18:32

did you see the tweet that was like we

18:34

know this isn't Scarlett Johansson's

18:36

voice because none of her movies are on

18:37

YouTube for it to rip from bur yeah um

18:42

and so they did like this demo where

18:43

they were talking back and forth to it

18:45

pretty naturally and it works pretty

18:47

well it definitely feels a lot more like

18:50

the way that a human would speak to you

18:52

oh a bedtime story about robots and love

18:56

I got you covered gather around Barrett

19:00

they did have to like interrupt it

19:01

multiple times because it has the same

19:03

problem that all assistants have where

19:04

it doesn't just tell you your answer and

19:06

move and stop it just tells you your

19:08

answer and then keeps talking a bunch I

19:10

would say the like the interrupting

19:12

thing was also something that was like a

19:13

feature that they were describing which

19:15

I I think at the core is pretty cool

19:17

like in a general conversation it's a

19:19

good bandaid yeah people get interrupted

19:22

or like sometimes you chime in on things

19:24

people say we interrupt each other all

19:25

the time so you know that's just part of

19:27

natural say real quick disagree disagree

19:31

real quick real and it did a pretty good

19:33

job at it except for sometimes I felt

19:35

like when they were talking to it was a

19:37

little too cautious of thinking it was

19:39

getting interrupted where there were

19:41

legit Parts where it sounded like the

19:42

audio was cutting out because they would

19:44

ask it a question it would start

19:46

answering and then I don't know if they

19:47

like moved or like somebody in the

19:49

audience said something but it would

19:50

like cut out and then stop and they

19:52

would have to ask it again or it would

19:53

cut out then nothing would happen and it

19:55

would start talking again it felt like

19:57

it was trying to like

19:59

be courteous of the person asking it and

20:01

be like too conversational to the

20:03

detriment of itself which was like cool

20:07

but not great yeah this is this all

20:09

comes back to the super highlevel

20:13

original theory that I've said many

20:16

times before about these AI chat models

20:19

is they don't know what they're saying

20:23

they are saying things because they are

20:24

trained on things and can say things

20:26

that are similar that look like they

20:27

should be correct but once they have the

20:29

sentence that they spit out they have no

20:31

way of knowing they don't have a way of

20:34

like understanding if they just said a

20:36

true fact or or not they don't have any

20:38

way of verifying they don't know what

20:40

they're saying so if you were

20:41

conversational then you could actually

20:43

look at what you're about to say and

20:45

then trim it because you realize oh this

20:47

is like extra and we're in a

20:48

conversation but it doesn't have any of

20:50

that ability so I feel like the best the

20:52

biggest advantage to these like okay 4o

20:54

is cool and it's like faster and it's

20:56

multimodal and all that but the ADV the

20:58

the the Advan that I'm looking forward

21:00

to the most is an llm that can actually

21:04

fact check itself and understand what

21:07

it's saying and do something about that

21:09

so the funny thing is in Gemini there's

21:11

a button that you can press like a

21:12

Google button that it gives you an

21:14

output and then you can press the Google

21:16

thing fact no it fact checks itself does

21:18

it yeah like cuz it's it goes straight

21:21

from what the model thinks from the

21:23

internet and there's like a double check

21:25

button where it references more sources

21:28

and then gives you a bunch of links and

21:30

then can verify if it was true or not

21:32

well it's supposed to I think it's going

21:35

to still be on the person I think it is

21:36

too because like when you have Gemini on

21:38

the phone now there's a Google it button

21:39

so if I if I go like what is the Marques

21:42

Brownlee's podcast called and it says it's

21:45

called The Verge cast the it could just

21:48

tell you that and then give you a Google

21:50

button and then you hit Google and then

21:52

the results show up and it's like oh he

21:54

has been on the vergecast but the it's

21:55

called Waveform like you don't it's the

21:57

human that still has to the fact check

21:59

yeah so if it could if it could fact

22:01

check itself and be like oh my bad I

22:03

remember I just told you that it was

22:04

Waveform or one thing it's actually the

22:06

other thing that that's what I want but

22:09

I don't think any of them do that yeah

22:11

and I really want I don't know if

22:13

they're ever going to be able I mean

22:14

maybe eventually it feels like it's just

22:16

a layer on top of the llm the llm is

22:18

already doing a lot of work and it's

22:19

super cool what it can do but I just

22:20

want that like okay now take your output

22:22

feed it back into another layer where I

22:25

can go okay I told you the truth yeah

22:27

like how high can you get that

22:28

confidence interval right yeah some

22:30

other very weird things that it can do

22:32

um there's like a vision version where

22:34

you can talk to it while it uses your

22:36

front-facing camera and it's supposed to

22:38

be able to be able to understand your

22:40

facial expressions as well as the

22:42

intonation in your voice to better

22:45

understand what you're asking in the

22:47

context of what you're asking and they

22:49

did a bunch of demos that were very like

22:52

what is my facial expression which is

22:54

like you know it's kind of like pushing

22:57

what do I look like the borders of

22:59

I feel like a lot of those examples were

23:00

like it did a pretty good job of doing

23:02

what they wanted but they always had to

23:04

ask it specifically like if we were

23:06

having a

23:07

conversation mid conversation I wouldn't

23:09

be like David look at my facial

23:11

expression and get the hint here like

23:14

inste they're like it can see your

23:16

facial expression while we're talking

23:17

and the guy goes look at me and look at

23:19

my facial expression what is it and it's

23:21

like that's not conversation I think

23:23

that so we saw this issue with like the

23:25

rabbit R1 as well right like all of the

23:27

reviews are like what is this what is

23:29

this and what is that like with the D

23:31

perhaps what is this perhaps what is

23:33

this uh and nobody's gonna do that in

23:35

the real world but the way that everyone

23:38

thinks that you need to test it is like

23:41

if this doesn't work then the whole

23:43

thing falls apart so you have to test

23:45

like the least common denominator at

23:47

first so I think that's why they were

23:48

showing it is they proving it can

23:51

understand your facial expressions but I

23:53

still want to see more of a real world

23:56

use case where my facial expression

23:58

changing changes the output of the model

24:00

yeah is that like it uses it as context

24:02

or something that's what it says but

24:04

they never showed any demos where it

24:06

should and it's whole point one of the

24:07

big things they talked about which I

24:08

actually think it did a pretty good job

24:10

with was like inflection in the voice

24:12

and so like through quicker responses

24:15

interrupting an inflection that's

24:17

supposed to make it feel more

24:18

conversationally based and and one thing

24:20

I was thinking about was like sometimes

24:23

it did a really good job but the problem

24:25

with trying to replicate a human it just

24:27

reminds me of like walking in video

24:30

games like we have been doing that for

24:32

so long and Graphics have gotten so good

24:34

and it's still so obvious when like a

24:36

video game character is walking and it

24:38

doesn't look anything like a human right

24:40

so like this thing will hit a couple

24:43

little marks where it's like oh that

24:44

kind of did sound like less of a robot

24:46

but more and then it'll just do

24:48

something that sounds exactly like an AI

24:51

and it will I don't I think we're so far

24:53

away from it being a true like to really

24:55

confuse someone I'm picturing it like

24:57

you ask it for the weather and it sees

24:59

that you're happy and it goes it's

25:00

raining outside bring an umbrella and it

25:02

says sees someone who's like kind of sad

25:05

and says what's the weather and it goes

25:07

it's a little worse than normal out like

25:10

it's totally fine like it tries to

25:12

comfort you yeah I'll say one thing I

25:14

thought that was impressive was like the

25:17

way it fixed its mistakes sometimes or

25:21

or like the way it had I'll use an

25:23

example so like normally when you ask an

25:26

llm or whatever something and it's wrong

25:29

it'll just go like my mistake or just

25:30

like the most Cann respon is of like my

25:33

bad the same question and it gets it

25:35

wrong again yeah yeah so there's a point

25:37

in this where he asks it to look at his

25:39

facial expression and its response was

25:42

uh seems like I'm looking at a picture

25:43

of a wooden surface oh you know what

25:45

that was the thing I sent you before

25:47

don't worry I'm not actually a table um

25:49

okay so so take a take another look uh

25:52

that makes more

25:55

sense look again and tell me my facial

25:57

expression where normally it would just

25:59

be like my mistake and it said but

26:01

instead it responded with like oh that

26:02

makes more sense that makes more sense

26:04

like I get it now and then answered so

26:06

like it is semi- using the context of

26:08

what's going on and making up these

26:11

little things that it doesn't need to

26:12

but that is a little tiny step that does

26:15

make it feel more conversation I feel

26:16

like they didn't harp on that because

26:18

every time it did that was when it was

26:20

screwing something up which they

26:21

probably didn't want to linger on but

26:23

like that actually to me was the most

26:25

impressive that said like I feel like

26:26

people are so used to them screwing

26:28

things up that if you can screw things

26:30

up in a more natural human way then

26:32

that's kind of impressive it does feel

26:34

more like the conversation and not just

26:36

like ask you question you respond kind

26:38

of crap I also think that they took like

26:40

I don't think they took this from rabbit

26:42

because they probably don't give a crap

26:44

about rabbit at all um I don't even

26:46

think about you well yeah something the

26:49

R1 did that the Humane AI pen didn't do

26:52

was it would go like hm I guess let me

26:54

look that up for you and it saying words

26:57

the filler time

26:58

it so what in the back does seem it Tak

27:01

that and it did that a lot every time

27:03

they would ask it a question it would be

27:05

like so they'd be like how tall is the

27:08

Empire State Building whatever and like

27:10

oh you're asking about the Empire State

27:11

Building sure it's blah blah blah blah

27:14

and it's like if you just said nothing

27:16

and just had a waiting time that would

27:18

feel that tension there would be like H

27:20

but because it's repeating it back to

27:22

you you don't feel any delay but then it

27:25

so you feel less delay but it feels more

27:27

AI

27:28

like that's that's that to me is like a

27:30

giveaway if I'm talking to a human and

27:31

I'm like bro how tall is the Empire

27:33

State Building and he goes you're asking

27:34

about the Empire State Building the

27:36

Empire State Building is and I'm like

27:37

why are you saying all this just say

27:39

them over you're stalling I see I think

27:41

if you sorry well yeah that more human

27:44

there's a part in there that is doing

27:46

what you're saying but I think they did

27:47

an even better job which was anytime

27:50

they asked a question that had multiple

27:52

questions or like points of reference in

27:55

the question you could almost tell it

27:57

was think of how to respond and then

28:00

could

28:00

respond respond while it was thinking of

28:03

the next response it was going to tack

28:04

onto that so this wasn't in the uh actual

28:07

event but they did a bunch of test

28:09

videos on their YouTube channel and one

28:11

of them was a guy saying he says like

28:14

also they all say hey chat GPT which I

28:16

hate I would like a name or something

28:18

like that's a long wake word um but he

28:21

said hey chat GPT I'm about to become a

28:23

father and I need some dad jokes for

28:26

like the future can I tell you some and

28:28

you can tell me if it's funny and you're

28:31

asking the AI jokes are funny bad

28:34

example but you could tell the two

28:36

responses it had cooked up ready to go

28:39

which made it feel quicker so after he

28:41

said that it responded with oh that's so

28:44

exciting congratulations on becoming a

28:46

new parent and you could tell that was

28:48

one response and then the next response

28:50

was like sure I'm happy to listen to a

28:52

joke and tell you if it's funny so you

28:54

could tell that while it was waiting to

28:55

figure out response to yeah had response

28:58

one loaded already and I think that that

29:01

is how that they're able to claim like

29:03

such faster models is that they're like

29:06

is is that they just use filler content

29:09

like they take the the important

29:11

information that they actually have to

29:12

parse through the the model and they

29:16

crunch that while they take the very

29:18

simple things and can fire that off to

29:20

you rapidly with like a conversational

29:22

that's way better filler though than

29:24

just like hm you want to know what the

29:26

Brooklyn Bridge is like I can tell you

29:27

what that is like it is blah blah blah

29:29

conations on becoming a father even

29:31

though I don't really want to hear that

29:33

from an AI yeah it's a little weird I'm

29:35

proud of

29:36

you dead

29:39

up God yeah yeah so yeah I mean I

29:44

actually I thought it was uh the event

29:46

in general was pretty good I'll give

29:49

them a which nice huge points for that

29:53

was 26 minutes I think it was way

29:55

shorter they didn't try to just like

29:57

keep it going

29:58

um they did a bunch of math with it

30:00

which was kind of cool because that's a

30:02

big thing that you know they've been

30:04

trying to do like if you're just

30:05

predicting then you're not actually

30:07

doing reasoning right and I'm sure it's

30:09

still just doing that but with higher

30:11

accuracy it does seem like every example

30:13

is like the most childlike task ever

30:18

like name this one thing or just like

30:20

read all of my code and tell me exactly

30:23

what it does there doesn't seem to be

30:24

the in between I think that mostly comes

30:26

down to poor examples that

30:29

all of them you

30:31

said all these demos are like hey you show

30:34

this by looking like a total idiot and

30:36

asking it the most basic question ever I

30:38

was thinking about this during the the

30:39

Google IO conference yesterday and I I

30:41

think this is every AI demo that you do

30:44

it's like you make your engineers like

30:45

look like total idiots because they have

30:48

to ask the most basic questions just

30:50

because they have to show that the AI

30:52

can do it they're like oh so like for

30:54

example that they had it work it they

30:57

had GPT-4o work through a math

31:00

problem that was like

31:04

3x + 1 = 4 and it was like

31:07

how do I solve this can you work it

31:09

through me step by step and it was like

31:11

well first what do we do when we want to

31:14

figure out an exponent move all the

31:16

exponents to one side and so it

31:19

basically like just took it through the

31:20

basic math and it's funny because Google

31:22

at I/O did a lot of the same stuff where it

31:25

was like this could be useful for kids

31:27

this could be useful for kids which

31:28

reminded me a lot of the the rabbit

31:31

thing that we talked about yeah um it's

31:34

not just that it's like cuz this has

31:35

been on my mind like it seems like as we

31:37

get deeper and deeper into this AI

31:39

assistant nightmare the the use case

31:42

examples are getting worse like like

31:45

they're like not only they're making

31:46

their own employees look kind of dumb

31:48

sometimes but then they'll try to like

31:50

balance it with the sort of like human

31:52

example like the hey can you teach me

31:53

some dad jokes like I know I say this

31:56

before on the pod but it's like what a

31:58

sad reality when you're like assuming

32:00

you you I'm assuming you have a spouse

32:02

you know you're about to welcome your

32:03

first child that's who I guess like you

32:05

want to impress with it's like but

32:08

it's like what's the like oh no my

32:09

spouse like isn't funny at all like like

32:11

they they can't help me like Workshop

32:13

these jokes I need stupid computer or

32:15

like I actually kind of hate them and

32:16

would rather talk to them it's like what

32:19

are you talking about one of the Gemini

32:20

examples that they showed in the

32:22

original Gemini demo that they showed

32:24

again yesterday in like a Sizzle reel

32:26

was write a cute social media post for

32:28

Barker, which is their dog, and it

32:31

was like Barker's having a great Saturday

32:35

#k and I'm like are you that dense okay

32:38

yeah I see are you really Outsourcing

32:41

that to an AI know what is wrong with

32:43

you that I think these these individual

32:46

examples have to be bad because there's

32:50

no one example you can give that will

32:52

apply to everyone watching so they're

32:54

trying to give like a Halo example that

32:56

applies to

32:58

no one individually it applies to no one

33:01

but the concepts of it are maybe

33:03

applicable because that's when you see

33:04

all the LinkedIn posts about like oh

33:06

this changed my like when you see write

33:08

a cute social media post about my dog on

33:11

one half of LinkedIn people are going it

33:13

can do copywriting oh cool and on the

33:15

other half it's people going it can

33:17

write my school essay for me oh cool and

33:20

on the other half there's people going

33:22

it can write a letter for me like when I

33:24

need a condolences letter there there

33:26

are a bunch of people who feel, what, you would

33:29

outsource a condolences letter like some

33:32

people would bro I just I just got back

33:35

from a tournament with I have a teammate

33:37

who is a ghostwriter for executives at

33:40

large corporations so I'm not going to

33:42

say their name or the corporations but

33:44

many of these presidents of schools and

33:46

CEOs have to write letters and things

33:49

all the time to congratulate someone on

33:53

something or to write back to an alumni

33:55

to convince them to donate or all these

33:57

other things

33:58

and they just don't have time to have

33:59

all that bandwidth and so they hire a

34:01

person to do it or there's someone going

34:04

oh the AI can do that now yes that's

34:05

super useful I like when I wrote to

34:07

Obama in 2012 and I got a letter back

34:09

that was signed by Barack Obama I was

34:11

like what yeah that's somebody's job

34:13

right now guess how many more letters

34:15

that person's going to that's one thing

34:17

but being like yo I'm sorry this person

34:19

you like died like having the computer

34:22

was like no I have a teammate I thought

34:23

he was going to be like who just

34:24

recently lost something and I really

34:26

didn't feel like writing this

34:28

no but no that's true I mean it's I'll

34:31

the examples are are very specific that

34:33

I'm giving but it's like they're trying

34:34

to give you like one Ultra generic

34:37

example that applies to no one so that

34:39

everyone else's more specific examples

34:41

that they wouldn't use on stage because

34:42

it's too small a group can kind of be

34:44

tied into it in a way so yes you're

34:47

going to get some really brutally

34:48

generic like write a caption for my dog

34:53

and they're like okay I just learned

34:55

that it can write I just learned that it

34:57

knows the subjects and it can just think

34:59

that I think that's what they're I guess

35:00

you're right

35:03

but I mean I think it can be that and

35:06

there still should be some someone

35:08

making more creative examples because

35:11

both of these events and we'll get into

35:12

the Google one later just felt like

35:15

there was no wow factor it felt so

35:17

simple and it made all of the AI stuff

35:19

feel really boring and tedious okay I

35:22

got an example for you because we were

35:24

talking earlier and I was telling you

35:25

that I feel like after the last two

35:27

events I'm actually more optimistic

35:29

about the state of AI than I was before

35:31

these events because everything was so

35:33

mundane so because there was because

35:36

everything you're more optimistic about

35:38

Humanity living with AI and yes yes

35:41

exactly I'm more optimistic that it's

35:42

not going to take us over and like destroy us oh

35:44

yeah wor because I feel like the trend

35:47

that they're all moving towards is

35:48

having the like broad example and then

35:50

everyone can kind of have their own

35:52

little slice of AI for their own

35:54

personal use case so for example this

35:56

got me thinking

35:58

I read books like I'm sure many people

36:00

do I know I'm flexing um but I like

36:04

doing book clubs but I don't know anyone

36:07

that's reading my same books at the same

36:08

time can I go online and find people

36:10

that are doing this sure but then David

36:13

and I have tried to have book clubs like

36:14

three times it's like when we were

36:16

reading the AI book The Google book like

36:18

two years ago yeah so it's like some

36:21

sometimes exactly that is that is my

36:23

problem sorry also no one else in my

36:26

life is really like super into fantasy

36:28

books like that can I find one person

36:29

and make sure they're reading the same

36:31

chapter that I am every week sure that's

36:33

annoying I just want to talk to someone

36:35

and be like that would be cool right and

36:37

having an AI know exactly where I am in

36:39

the book and being able to have a

36:41

conversation about characters and things

36:43

I was doing it the other day with Gemini

36:45

just to test it out because this thought

36:46

came to me and I'm reading a book right

36:48

now so I was like oh without spoiling

36:50

anything tell me about blah blah blah

36:51

and this character and what you think

36:53

and they came back with like legitimate

36:56

answers which was like pretty

36:58

interesting are you not worried it's

37:00

going to mess up the spoil part and just

37:02

be like oh great uh character a was so

37:06

good and then like but it's really sad

37:08

when they die 25 pages later I have to

37:10

say that now but at some point and I

37:12

think in the near future it can easily

37:14

know where I am in the book and know not

37:16

to you really want to have a book club

37:17

with a computer I don't want to have a

37:19

book club period I just sometimes want

37:21

to like talk about characters and stuff

37:23

with the computer with anyone I don't

37:26

know man I hate to break it to you

37:27

there's this thing called Reddit and any

37:29

discussion you want to have about a book

37:31

is already on there they're all spoilers

37:33

or spoiler-free but like it's not synced

37:36

up exactly the page that I'm up to

37:38

there's also and I have a I have a

37:40

tangent example that's like when you're

37:41

in an extremely specific case like when

37:43

you have sometimes it's tech support or

37:45

just like a product that you use and

37:47

you're like digging through forums like

37:48

I need to find someone with my exact

37:51

issue with these exact symptoms and

37:54

these exact bugs and like you can go to

37:56

a forum or Reddit and like type it out

37:58

and like wait for people or you can go

38:00

hey computer like look at all of the

38:02

people who have ever had any issues

38:04

related to this and then let me have a

38:05

conversation with you about those issues

38:07

and maybe I can figure this one out

38:09

because then you're sort of like

38:10

bringing in hopefully the most relevant

38:12

information and instead of having to

38:13

click click click through every single

38:14

comment you can sort of like talk to the

38:16

person who knows all the comments and

38:17

then when new issues get populated

38:19

throughout the Universe and they don't

38:21

get populated onto the internet the AIS

38:23

will no longer be able to solve your

38:24

problems I I agree like I think Marquez

38:27

is really right like that is like kind

38:29

of part of the dream of the AI, the AI

38:32

future what's so concerning is

38:34

increasingly and I I do also agree with

38:37

you Marquez like they have to give these

38:38

sort of weird broad examples to satisfy

38:40

all these different camps but it does

38:42

feel like increasingly there's this

38:44

message being subtly projected at us at

38:47

the events that's like you know what's

38:49

so

38:50

exhausting, sympathy, love, uh, being an

38:55

emotional being, let's f***ing have

38:58

the computer do it I think that's just

38:59

again poor creative examples like they

39:03

could think of so many better examples

39:04

where this would actually be like useful

39:06

and you know how the iPad Mini

39:09

specifically targets Pilots yes and like

39:12

I'm listening you don't really know

39:14

about that except I'm sure that the

39:16

pilot Community is always like super we

39:19

love iPad minis yeah but if the whole

39:21

event was about Pilots you tune out I

39:23

don't know like I I feel like I'm

39:25

interested in how can A specific group

39:28

of people use this in a specific way you

39:32

know because like I can at least

39:33

sympathize I can at least empathize well

39:35

I guess empathize is not the right word

39:37

but I can understand like oh yeah that

39:40

is being helpful to part of the human

39:42

race if you're a company like apple you

39:44

need everyone to picture themselves as

39:47

that part I was just going to use apple

39:48

as an example for not doing Apple watch

39:51

Ultra their examples were like scuba

39:53

diver extreme like hiker or Trail Runner

39:57

and like yeah and that's still sold to

40:00

hundreds of thousands of people who will

40:01

never do any of that because it's

40:03

aspirational marketing yeah that's

40:04

that's the pickup truck effect that's

40:06

like this thing is

40:07

built everything yeah whereas I think

40:11

yeah the pickup truck effect 100% people

40:12

like but what if I need it at some point

40:14

what if I want to help my friends

40:16

move like driving over rocks like built

40:20

tough nobody you live in Brooklyn hey man I

40:23

have a gravel driveway

40:25

I gra there are leaves on my street

40:29

sometimes it snowed once I need the

40:31

clearance yeah that's that is very much

40:33

the challenge the prepper mentality of

40:35

America we should we have to take a

40:36

break I think we do okay I just need to

40:38

finish the event real quick okay uh

40:40

irony super ironic um there's a Mac app

40:43

coming for chat GPT only a Mac app which

40:46

is hilarious cuz Microsoft basically

40:48

kind of owns open AI not really but sort

40:50

of and they also sorry I'm going to butt

40:52

in just because of that sentence they

40:54

open the whole thing with like we are

40:56

trying our goal is to bring chat GPT

40:59

anywhere you want except unless you have

41:02

a Windows computer I guess which I think

41:04

is because Microsoft like their whole

41:07

next big move is co-pilot everywhere

41:09

like there's literally a co-pilot key on

41:11

all the new Windows computers like is

41:13

there move already I think that

41:15

basically like whatever open AI does

41:17

with chat GPT is just going to be

41:20

co-pilot with a it's it's a skin that's

41:23

it's called co-pilot but it's basically

41:25

just chat GPT so but it is awkward that

41:28

they have all this marketing that's like

41:30

ChatGPT is everywhere except for our

41:32

biggest

41:33

funder um they said it's coming to

41:37

Windows later this year which is going

41:39

to be super awkward it's basically going

41:41

to be like the Google assistant in

41:42

Gemini thing because there's going to be

41:43

co-pilot and then there's going to be an

41:44

open like chat GPT app on Windows as

41:47

well right which they're the same

41:49

product basically

41:51

it's a branding issue um that was

41:54

basically it that's all I wanted to say

41:55

okay I thought it was funny well we got

41:57

we got another event to talk about but

41:59

we'll we'll get to that after the break

42:00

basically the same event with a

42:01

different

42:02

name and eight times longer yeah yeah

42:06

but uh since we are taking a break we

42:08

should also take some time for

42:12

[Music]

42:14

trivia the lights work again are they

42:16

different colors they go by our voice

42:18

depend on how

42:20

loud right the the lights have a mind of

42:23

their own honestly they AI run they are

42:25

AI run uh

42:27

and in this case uh AI stands for

42:32

Marquez can you please

42:34

stop right now people in their cars are

42:36

really mad sorry about that and the

42:38

people washing the dishes shh anyway so

42:41

after the break we're going to get into

42:42

all of our good old Google IO discussion

42:45

but I was reading the Google Blog as I

42:48

do from time to time called the keyword

42:50

very silly um and I learned something

42:53

even sillier than the name the keyword

42:55

and that's we all like know what Google

42:58

I/O like the I/O stands for like input

43:00

output yeah Google on the keyword

43:02

actually lists two alternate

43:04

explanations for why they chose IO and I

43:07

will give you a point for each of those

43:09

explanations that you can give me is

43:12

each explanation just what the I and

43:14

what the O is or is there like more to

43:16

it it's there's more they're not

43:18

acronyms there's more to it but not like

43:21

that much they're they're backronyms

43:23

right uh wait like I/O is the backronym like

43:27

does it stand for something uh one of

43:30

one of them yes I think I know one of

43:32

them and then the other one is more gray

43:36

than that you guys shouldn't have asked

43:38

you shouldn't have asked it that's your

43:40

you're the trivia Master you got to

43:42

decide what questions yours stands for

43:44

I'm

43:45

out which is exactly how I felt when it

43:48

started

43:49

yesterday well that's a good segue we'll

43:51

take a quick ad break we'll be right

43:53

back I'm out

43:55

[Music]

44:05

support for Waveform comes from Coda are

44:07

all the tools and tabs bouncing around

44:09

on your desktop stressing you out Coda

44:11

can help you get organized with an

44:12

all-in-one workspace so it Blends the

44:14

flexibility of docs the structure of

44:16

spreadsheets the power of applications

44:19

and the intelligence of AI to make work

44:21

a little less work Coda is designed to

44:23

help your remote colleagues get on the

44:25

same page no matter what time zone

44:26

you're in and if you feel like you're

44:27

always playing catch-up with Coda's

44:29

extensive planning capabilities you can

44:31

stay aligned by managing your planning

44:33

Cycles in one location while setting and

44:35

measuring objectives and key results

44:37

with full visibility across teams plus

44:40

you can customize the look and feel of

44:41

Coda with hundreds of templates to get

44:42

inspired by in Coda's gallery so if you

44:45

want a platform that can Empower your

44:46

team to collaborate effectively and

44:48

focus on shared goals you can get

44:50

started with Coda today for free so head

44:52

over to coda.io/wave that's c-o-d-a dot i-o slash

44:59

wave to get started for free coda.io/

45:02

wave welcome back everybody as you may

45:05

have noticed we just kind of alluded to

45:07

the fact that Google I/O was

45:09

Tuesday this was uh arguably in my most

45:13

humble opinion because I'm the humblest

45:15

person on this podcast in this current

45:18

moment nice correct no I'm anyway uh one

45:22

of the most soulless Google I/Os I have

45:26

ever watched W um I was thinking about

45:29

this this morning I was like remember

45:31

when Google I/O was Sergey Brin jumping

45:34

out of a helicopter with Google Glass on

45:36

and Landing yeah in San Francisco live

45:40

and remember that live and then we got

45:42

Project Ara with the which was the

45:45

modular smartphone with all the bits and

45:47

pieces I mean we got IO announcements we

45:50

got like Project Loon which was like

45:52

bringing internet to like random

45:54

countries are all these things that you

45:57

so far dead yeah yeah it's still it's

46:00

still fun at iio though yeah like I

46:03

remember being like, Starline, Starline,

46:07

Starline we at least got that yeah I

46:09

just remember being like in in high

46:11

school and being like wow Google is such

46:14

a cool company I'm so I want to work

46:16

there so I wanted to work there so bad I

46:18

was like everything they do is just a

46:20

moonshot idea it might not work out but

46:22

it doesn't matter because it's all

46:24

funded by search which is cool it's like

46:26

they have an infinite money machine that

46:28

keeps turning and because of that they

46:29

can just do whatever the heck they want

46:31

and this year it kind of felt like they

46:33

were pivoting to be a B2B company I'm not

46:36

going to lie they talked about AI models

46:38

constantly they talked about the price

46:40

that they're charging developers to

46:41

access their AI models which is

46:43

something Microsoft would do which is

46:45

something open AI would do but that is

46:47

not something that Google used to do at

46:48

Google IO yeah IO io's changed I was

46:52

definitely this year felt like this it

46:55

felt like you know the like part in

46:58

every IO where they talk about all their

46:59

servers and tpus and all that and

47:02

there's like that exact same graphic

47:04

every single year first of all that

47:06

graphic got used probably 45 times

47:09

whoever made that is not getting paid

47:10

enough for how much they use the

47:11

likeness of it but it felt like that

47:13

like last 25 minutes of every IO where

47:16

where you're like all right all the cool

47:17

stuff happened like when are we getting

47:19

out of there that felt like the entire

47:21

event it was like the most low

47:24

energy IO I've ever seen I mean I've

47:26

only been here covering it for like

47:28

seven years but just all the things

47:31

nothing they announced had this like

47:32

real wow factor there was very few times

47:35

where my Twitter timeline was all like

47:37

this is so cool they just announced this

47:39

like nobody really had that one thing

47:41

where it was really and we've had like

47:43

last year visualized routes on Maps was

47:45

this really cool visual example we've

47:47

had like the chain link fence that I

47:49

reference all the time like yes there

47:51

are things that did not come out I that

47:53

was IO but it was cool and it had that

47:56

like wow moment the crowd seemed out of

47:58

it this year almost all of the

48:00

announcers I felt just also felt low

48:03

energy except for Sameer he did a great

48:05

job and felt as high energy as normal

48:07

but like yeah I don't know and the whole

48:10

event felt like it was dragging from the

48:11

first terrible Taylor Swift joke they

48:13

made in like the first one minute and

48:15

then they proceeded to make a bunch of

48:17

other bad Taylor Swift jokes that really

48:19

felt like Gemini wrote it but yeah this

48:22

might be a silly question cuz I didn't

48:23

watch it when you said Samir do you mean

48:25

like from YouTube no umid I'm forgetting

48:28

is like yeah the guy from Android and

48:31

what was a bummer was he was basically

48:32

like new things coming to Android and

48:34

then just had to say the exact same

48:36

Gemini things just in like basically a

48:38

mobile version Sameer Samat he's the

48:40

president of Android ecosystem and he

48:43

had his like same energy which just made

48:44

all of the presenters around him feel

48:46

low energy everyone felt really low I

48:48

don't know what the what was going on

48:50

but it felt low energy I think a perfect

48:52

way to wrap up kind of that we weren't

48:54

the only people feeling this, Ben at 9to5

48:57

Google posted an article this

48:59

morning that said so Google made a

49:01

10-minute recap mhm and his article says

49:04

Google's 10-minute I/O recap is somehow

49:06

just as tedious as the full event and IO

49:08

usually isn't tedious until the last

49:10

like 20 to 30 minutes like it's usually

49:12

like cool cool cool wow I didn't even

49:15

think of the fact that you could do that

49:16

with machine learning wow you can get

49:18

rid of a chain-link fence you can't but

49:20

still like all of this stuff that

49:23

genuinely blew my mind that I feel like

49:24

we also used to see when every pixel

49:26

would drop yeah there would always be

49:28

one or two cool AI features that were

49:30

like wow this is I'm so excited for this

49:33

year and there was almost there was like

49:35

a couple things that were like I'm glad

49:37

Gemini's doing that now I can name like

49:39

three which we'll get into which we'll

49:41

get into but everything else felt really

49:43

corporate it felt B2B which was really

49:46

weird it was surprising to me because

49:48

they also made the distinct choice to

49:51

separate the pixel 8A announcement out

49:53

of IO right so we had a separate pixel

49:55

8A happen before like a week two weeks

49:57

before IO yeah and then to me that was

50:00

like oh io's stacked this year we don't

50:02

even have room for the Pixel 8a and

50:05

so that's why it's surprising and I I a

50:07

lot of people say on Twitter that like

50:09

oh it's just because there's not a lot

50:10

of Hardware stuff that's tomorrow but

50:12

like they have done cool visually

50:15

interesting things software wise before

50:17

and with AI before so that's not the reason

50:20

it for so it was just not a good IO this

50:22

year but I want to do two cool things

50:25

about it while we then continue to um be

50:28

mean about it for the rest of this

50:29

episode um I'm sad we didn't go because

50:32

Google I/O swag is always fantastic and

50:35

the swag they had this year looked great

50:37

um I posted some photos of it so like uh

50:40

I think they did a great design this

50:42

year the tote looks awesome the kwck

50:44

looks awesome the water bottle sick I'm

50:45

really sad we missed that um sad if

50:47

that's the most exciting part well no

50:49

the most exciting part was Marc Rebillet

50:52

opening as the DJ um he did a fantastic

50:54

job if anything his Vibes are too

50:57

Immaculate that everything after him

50:59

feels boring he tried to do what he

51:01

always does in in at the beginning of

51:02

shows and he was just trying to bring

51:03

his same energy and I texted him

51:05

afterwards I'm like bro I'm so sorry you

51:07

had to deal with that crowd and he was

51:09

like I did what I could my favorite part

51:11

is that again as an American units of

51:13

measurement really confuse me and so

51:15

when they when they measured when they

51:17

measured Gemini's contextual ability in

51:21

tokens no in number of Cheesecake

51:23

Factory menus worth of words right I was

51:26

like Gemini can hold 95 Cheesecake

51:30

Factory menus worth of context at one

51:33

time I was like that's it that seems

51:35

like have you been to the Cheesecake

51:37

Factory is the menu really big

51:39

it's a book it's a book Adam could have

51:41

a book club that no one would join him

51:42

with I need to go to a restaurant that

51:44

has like three options I don't want I

51:46

want to be going through a menu for three weeks

51:49

now I'm on the 70th page yeah yeah okay

51:53

yeah okay so besides Mark's Immaculate

51:56

entrance in which he had a for viewers

51:59

that and listeners that don't know who

52:01

Mark is he's like he makes music on the

52:03

spot sort of person and they had him use

52:05

the music LM thing to try to like

52:07

generate music and then play with it

52:10

he's like an improv genius yeah he's an

52:12

improv it was very tough uh cuz the

52:15

crowd was tough the music LM didn't work

52:18

that well and the beats that it was

52:19

giving him were not that funky a lot of

52:22

problems he wears a lot of robes they

52:23

had custom Google I/O Marc Rebillet robes

52:25

and he shot them out of a cannon um and

52:29

then and then Sundar came on stage and

52:31

the energy whip boom into the

52:34

ground uh okay so there were actually

52:37

some interesting things and they

52:38

actually did start off with a couple

52:40

pretty interesting things they started

52:42

off with which with what I think was one

52:44

of the most interesting things which was

52:46

an update to Google photos where now you

52:48

can just contextually ask Google photos

52:51

to show you certain pictures and also

52:53

ask questions about your photos uh so

52:56

you can say and I actually used Google

52:58

photos to find my license plate a

53:00

million times last year all the time all

53:03

the I've never now in Google photos you

53:05

now I memorize my license plate but in

53:07

Google photos you can now say what's my

53:10

license plate number again and it'll

53:12

bring up pictures of your license plate

53:13

yeah which is cool you can say show me

53:16

Lucia's how Lucia's swimming has

53:18

progressed and it'll bring up a bunch of

53:21

photos and videos you took of your

53:22

daughter swimming so it can understand

53:25

the context of okay this is what you're

53:27

asking and then map that to what it

53:30

tagged the photos as of being and I

53:33

think that's actually really sick I do

53:35

think the license plate example was

53:37

perfect because they were like normally

53:38

you would just search license plate and

53:40

now every license plate in your every

53:42

car you've ever taken a photo of shows

53:44

of and then you say my license plate

53:46

number it's like oh this is a car that I

53:47

see pretty often so it must be your

53:50

license plate let me find one picture

53:53

that has it and only show you that one

53:54

picture so you don't have to scroll

53:55

through a bunch mhm that's cool yeah I

53:59

like this and I think because the

54:02

results are specifically pulling from

54:04

your photos it can avoid

54:07

hallucinating because it's going to give

54:08

you the exact Source it also because

54:10

yeah it just has to show you an image

54:12

it's not generating something cuz before

54:14

like so if I wanted to find my license

54:15

plate like I had this actually for

54:16

example I was in the back of a car and I

54:18

was going to go to the airport and

54:19

they're like I just need your passport

54:20

number and your signature and I was like

54:21

okay here's my signature what is I don't

54:22

have my it's in the trunk but I just

54:24

pulled up Google photos and I just

54:25

searched passport and I got the latest

54:27

photo that I took of my passport and I

54:29

just got it from there yeah instead

54:31

theoretically I I would just ask Google

54:33

photos what's my passport number and it

54:36

would give me my passport number and as

54:38

long as I also can see that it's

54:40

referencing an image of my passport and

54:42

not some other random photo I have of a

54:44

passport I think I'm good it doesn't it

54:46

doesn't give you a text output it just

54:48

shows you the photo the one picture so I

54:50

think that actually solves the issue
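As a rough sketch of the retrieval idea being described here: asking your own photo library a question can be modeled as ranking photos (which already carry captions or tags from some on-device model) against a natural-language query and returning the originals. The word-overlap scoring below is a crude stand-in for the learned image/text embeddings a real system would use; none of this reflects how Google Photos is actually built. The key point is that the answer is a photo the user already took, shown as-is, so there is no generated text to hallucinate.

```python
from dataclasses import dataclass

@dataclass
class Photo:
    path: str
    caption: str  # what an on-device tagging/captioning model says is in the shot

def overlap(query: str, caption: str) -> float:
    """Crude relevance score: fraction of query words found in the caption.
    A real system would compare learned embeddings instead, but the shape
    of the idea (rank, then return the source photo) is the same."""
    q, c = set(query.lower().split()), set(caption.lower().split())
    return len(q & c) / max(len(q), 1)

def search_photos(query: str, library: list[Photo], top_k: int = 1) -> list[Photo]:
    # Rank the user's own photos against the question and return the originals.
    return sorted(library, key=lambda p: overlap(query, p.caption), reverse=True)[:top_k]

library = [
    Photo("IMG_0142.jpg", "rear of my car showing the license plate"),
    Photo("IMG_0867.jpg", "trail sign for Cracker Lake Trail in Glacier National Park"),
    Photo("IMG_0901.jpg", "passport information page on a table"),
]
print(search_photos("what's my license plate number again", library)[0].path)
# -> IMG_0142.jpg: the actual photo, not a generated answer
```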

54:52

I think you could probably ask Gemini at

54:54

some point what's my passport number it

54:56

would pull it up and then it would

54:57

probably potentially reference the photo

54:59

but right now this Google photos update

55:01

it's just an easier way to like ask

55:04

Google photos to show you specific

55:06

pictures that's neat it kind of feels like

55:08

which I like more than generative Ai No

55:10

I agree it feels like the magic of what

55:13

Google search has been where everyone

55:14

makes the joke of like what's that song

55:16

that goes ba ba ba ba ba it's like you

55:18

can ask it some pretty crazy questions

55:20

and now in Google photos you can just be

55:22

way more specific and it can find things

55:25

easier for you Google photo search has

55:27

always been one of my favorite features

55:30

ever just like being able to search

55:33

through different things and it can tell

55:35

kind of what they are and create albums

55:37

or look at places that you've been to

55:39

and now being able to go a little more

55:41

into that where maybe I'm like what was

55:43

that trail I I hiked in Glacier National

55:46

Park in 2019 and like if I took a

55:48

picture of the trail sign it'll probably

55:50

come up as like Cracker Lake Trail and

55:52

then that's awesome and I don't have to

55:54

just search through every single thing

55:56

that was in Montana yeah I often will

55:58

think oh yeah I went on that trip I

56:00

think it was in 2019 so I'll search like

56:02

mountains 2019 and I have to sort

56:04

through all the mountains that I took a

56:06

picture of in 2019 until I find it maybe

56:08

it's not that year so now I can ask it

56:10

like show me the mountains from when I

56:13

was in Italy the last time and yeah it

56:16

can still do sort of those things right

56:18

now but it's just a lot more contextual

56:20

which is beneficial it's it's helping

56:22

because a lot of people didn't even know

56:23

you could search Google photos for

56:25

things and it would find them very

56:26

specifically so it's just making this

56:27

prompt engineering less of a skill you

56:30

just type what you're thinking and it

56:31

can find it which is why I think they

56:33

specifically use the prompt what's my

56:35

license plate number again because it

56:37

sounds more human like the way that I

56:39

would probably prompt that is my license

56:41

plate and it would bring it up you know

56:45

whereas a normal person says what's my

56:47

license plate again because I think the

56:48

ultimate goal is like be able to have

56:50

natural speech Computing with these

56:52

computers with every prompt yeah I think

56:55

our generation grew up learning how to

56:57

speak Google which is keyword it's a

56:59

keyword language you have to know how to

57:02

Google and you're basically just picking

57:04

out keywords we've talked about this

57:05

like prompt engineering is the skill

57:07

that we all learned yeah and now it's it

57:11

wants the old people to be able to do it

57:14

yeah where young people just go like

57:15

write me some code and it just does it

57:17

you just don't have to know how to do it right
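A toy contrast between "speaking Google" and just asking naturally: the keyword query that seasoned searchers learned to type is essentially the natural question with the filler stripped out. The stopword list below is purely illustrative and is not how any real search engine works; it just shows the translation step that conversational systems are trying to make unnecessary.

```python
# Words a practiced Googler instinctively drops when "speaking Google".
STOPWORDS = {"what's", "whats", "my", "the", "a", "is", "again", "how", "do", "i"}

def to_keyword_query(natural_question: str) -> str:
    """Turn a natural-language question into the terse keyword query we all
    learned to type, the skill the newer conversational systems make optional."""
    words = natural_question.lower().replace("?", "").split()
    return " ".join(w for w in words if w not in STOPWORDS)

print(to_keyword_query("What's my license plate number again?"))
# -> "license plate number"
```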

57:18

there's a lot of other stuff so

57:20

we're just going to go through it a

57:22

little bit randomly I kind of went

57:23

linearly through the keynote so if it

57:27

jumps around a little bit that's

57:28

Google's fault no one remembers exactly

57:30

how the keynote went so everyone

57:32

hallucinate that the AI hallucinate that

57:35

uh okay Google search generative

57:37

experience which was a Google Labs

57:40

feature that you could opt into for the

57:41

last year since Google I/O 2023 which I

57:44

have been opted into for a while and has

57:46

annoyed the heck out of me I've been

57:48

using it as well I had an example

57:50

recently where the AI thing gave me one

57:52

answer and then the top suggested Google

57:54

thing told me a different answer yeah

57:56

that happened to me yeah so so that was an

57:59

opt-in feature that I forgot was opt-in

58:00

because I would have turned it off a

58:01

long time ago if I remembered that uh it

58:04

is now rolling out for everybody and it

58:07

also now generates a bunch of extra

58:11

information and tiles and Pages for you

58:14

and in my opinion it was a very bad look

58:17

because basically the entire visible

58:20

screen was all generative Ai and I

58:24

didn't see any links

58:26

anywhere yeah which is bad yeah it was

58:29

basically like creating a bunch of

58:31

almost like Windows 8 tiles of like all

58:34

potentially different things you might

58:35

want whether it's the sale link for

58:37

something with prices or the reviews of

58:40

a restaurant just like all in these

58:41

different tiles and you didn't see a

58:43

single link on the page I also just want

58:45

to like paint Marquez the picture of

58:48

when they announced it the way they

58:49

announced it as you know the huge screen

58:51

on the io stage yeah it was all white

58:54

and had the Google search bar but it

58:56

didn't say the word Google above it and

58:59

all of us kept looking at each other

59:00

like are they going to rename search are

59:03

they going to change or is this going to

59:04

say like I don't think any of us thought

59:05

it would actually but we we were all

59:07

pretty sure it was going to say Google

59:09

powered by Gemini and I think part of

59:12

that hype is why none of this stuff felt

59:14

that cool cuz we're all like they're

59:15

going to do something big they're going

59:16

to do something big they're going to

59:17

rename it they're going to change how

59:19

this looks and then they just didn't

59:21

yeah um I think they're trying to

59:23

completely reframe in your mind what it

59:25

means to to search something now you

59:28

they don't want you to have to search

59:29

something and then find the information

59:31

yourself and actually what they kept

59:32

saying over and over again was let

59:35

Google do the Googling for you which was

59:38

super

59:40

weird it's like so like never leave

59:43

Google it's yeah it was basic because I

59:45

think that what they were afraid of is a

59:47

lot of people use chat GPT as a search

59:49

engine even though it's not a search

59:51

engine I've heard crazier things yeah

59:54

you know how many people use Tik Tok as

59:55

a search engine a lot of people more

59:57

than you I mean it's like using I I use

60:00

YouTube as a search engine cuz it's

60:02

better you is the second largest search

60:03

engine in the world yeah I like watching

60:04

videos of people answering my questions

60:06

and doing things specifically I like

60:08

people dancing to my

60:11

questions this is how you change a tire

60:14

forget Kora dances yeah yeah that that

60:18

feels like a big part of it is is Google

60:20

has historically been kicking people

60:21

into links and then they leave Google

60:23

and they go find their answer somewhere

60:24

else but if Google can just like siphon

60:26

all the content out and service the

60:27

thing that you were looking for in the

60:28

first place then Google helped you not

60:30

the site yeah and then that site never

60:32

gets any credit even though the site is

60:34

the one that had the information which I

60:35

just want to point to the fact that like

60:37

not that long ago like two to three

60:39

years ago Google got in serious trouble

60:41

for like scraping some information and

60:44

putting it on the sidebar of Google just

60:46

saying like these sites say this this

60:48

this and everybody freaked out and then

60:50

Google got in trouble for it and now

60:52

they're just completely siphoning away

60:55

sites altogether to the point where there

60:57

is now a button called Web so you know

61:00

how when you Google something it'll be

61:02

like images videos news there's now

61:05

going to be a button called Web where

61:08

you press the web button and it actually

61:11

shows you the links oh wait you it

61:14

wasn't a scroll away

61:15

before uh I you probably can keep

61:18

scrolling but there's so many things

61:19

there but like do you know how there

61:21

yeah there's shopping Maps photos now

61:23

there's one that's web so it's only web

61:25

links and hate to say I'm excited for

61:29

this because Google search has become

61:30

such a pain in the ass that I like just

61:33

want to look at different links it

61:34

basically gets rid of everything that

61:36

Google has added in the last five years

61:38

just it's like old Google it's the opt

61:41

out of the Reddit redesign for Google

61:43

Enhancement Suite

61:45

yeah yeah uh that's hilarious I find

61:49

that hilarious somebody tweeted uh that

61:52

that's the funniest April Fool's joke

61:54

that Google has done

61:56

um yeah it yeah it's just a weird thing

62:00

um Nilay from The Verge refers to this

62:03

whole redesign as Google Day Zero which

62:06

is basically like what happens when a

62:09

search is just a generative like

62:12

everything is generated and there is

62:14

just zero incentive to go to websites

62:17

like we've talked about this forever

62:20

because we always were like this is

62:21

eventually going to become the thing and

62:23

currently right now because they haven't

62:25

like fully rolled out this thing it'll

62:27

give the little generative AI preview

62:30

but you still have links but all most of

62:32

the examples that they showed at IO were

62:34

like the full page was generated and

62:37

once you hit a point where the full page

62:38

is generated that's when you start to

62:40

hit the existential threat of like how

62:42

are websites going to make money anymore

62:45

yeah this is again why uh I think this

62:48

this LLM stuff needs fact-checking built

62:51

in uh I just Googled it real quick

62:53

Google's mission statement is to

62:55

organize the world's information

62:56

and make it universally accessible and

62:59

useful and if you think about it if

63:02

you're Google yeah for years and years

63:04

and years you've been collecting and

63:06

organizing all of these links and then

63:09

people go to

63:11

you and they say I want to know where

63:14

this thing is on the internet and then

63:15

you show them the list and then they

63:17

leave and you've been doing that for so

63:20

so long that you have all this

63:21

information if you're Google probably

63:24

what seems like a natural graduation is

63:26

all right the Internet it's here it is

63:29

what it is it's this giant thing now what we

63:31

can do is learn all of it summarize it

63:34

all for ourselves and now when you ask

63:36

me a question I can just reference it

63:38

and hand you exactly the information you

63:39

need and you never even have to go

63:41

through that mess of links ever again

63:43

people will just this is a crazy

63:45

statement kids will not really learn the

63:49

skill of navigating through search

63:51

results yeah not at all that's another

63:53

thing that they won't really that's also

63:55

scary cuz that makes me think that kids

63:57

are just going to believe the first

63:58

thing they see made by a generative Ai

64:02

and that's why you need it to be fact

64:04

checkable it needs to be ver verifying

64:06

itself because yeah the skill still of

64:08

like seeing the Google It button and

64:09

going yeah I'm going to verify this one

64:10

and like looking through the results

64:12

that's still a skill that we have but if

64:14

you never have that skill and you just

64:15

go to Google the thing and it just

64:17

surfaces the thing you just believe it

64:19

that could be pretty dangerous here's a

64:20

question for everyone just on topic of

64:22

this when's the last time you went to

64:24

the second page of a Google search

64:26

all the time I do it all the time it's

64:28

pretty you know you're in for one yeah

64:31

it's like 50% of the time 50% of the

64:33

time I wouldn't say 50 for me but when

64:35

I'm doing research I go through like

64:37

five pages I got a long tail I get a lot

64:39

of stuff on the first page but then

64:41

every once in a while I get something on

64:42

the like 36th page and it's wild that

64:45

there's probably people who don't

64:46

realize that there's like the 10 and

64:49

then like it keeps going after that like

64:51

yeah keeps going wait quick question

64:53

Marquez with what you were saying isn't

64:55

that the best case well I guess user

64:59

experience customer UI in theory it is

65:02

if it's correct yeah and who's going to

65:04

write content this is if you have a

65:06

magic box that tells you all the answers

65:08

nobody can make the content that it's

65:09

scraping from this is the we've been

65:11

talking about this for months I know

65:13

that's the problem that's the fun part

65:14

but nobody thought about this the

65:15

incentive to create content could

65:17

disappear completely when people

65:19

stop going to websites and they don't

65:21

make any ad revenue then yeah then

65:24

things go under is the fastest way also

65:27

the best way for consumers like yeah

65:29

you'll get the information faster

65:31

because you don't have to click through

65:32

websites but also sometimes I think it's

65:34

I think it's also a what about the

65:36

journey I think there's also a tail to

65:38

this where like At first the fastest way

65:41

is best but then when it gets to a more

65:43

in-depth like think of someone who's

65:45

about to buy something yeah exactly like

65:48

if you're I'm about to make a big

65:49

purchase I'm to buy a laptop real quick

65:51

if I just Google like what are some good

65:53

laptops for $1,000 then I just set my

65:56

frame for the first page but then on the

65:58

second or third weekend where I'm like

66:00

I'm about to spend the money now I'm

66:01

going to really dive in no there it's

66:03

like something I Google almost every

66:05

week chicken drumsticks oven temp time

66:09

don't need don't need more than 15

66:11

seconds for that one you know what I

66:12

mean but yeah like if I if I wanted if I

66:14

wanted to go on a little little internet

66:16

journey I think there's times though

66:17

when also you don't have time for the

66:19

journey if that makes sense like chicken

66:21

drumsticks oven temp time like I'm

66:23

hungry I think about the time I was out

66:26

and CLA calls me it's like a pipe burst

66:28

in the downstairs and we need something

66:31

what do we do with it and I just ran to

66:33

Lowe's and I was like what's the best

66:35

thing and instead I'm sitting there

66:37

where like I wish it could just be like

66:40

is it a small leak is it this and I can

66:41

it'll give me that temporary fix right

66:43

there and I don't have to go through 20

66:45

Pages inside of Lowe's while my

66:48

basement's flooding yeah I think there's

66:50

some fast things where that is the ideal

66:52

scenario but yes the journey sometimes

66:54

is fun because you learn about things on

66:56

the journey that turns into actual

66:58

usable information in your own brain

67:01

yeah that we get to use right imagine

67:03

that imagine that okay imagine

67:06

experiences we got to move on cuz

67:07

there's so much there's so much stuff um

67:09

Gemini 1.5 Pro which you and I have been

67:12

beta testing for what feels like months

67:15

at this point they doubled the context

67:17

window to 2 million tokens uh and now

67:20

they're just spouting millions of

67:21

Cheesecake Factory menus yeah they're

67:23

just flexing on every single other

67:25

company that they have the most tokens

67:27

which y wow I still don't understand

67:29

tokens tokens at all they're vbucks a

67:31

word is like a token it's like

67:32

tokenization of a word so you can map it

67:34

to other words and they just cost money

67:36

Transformers Transformers Transformers

67:37

cuz people make fun of me for saying

67:38

that a lot um Power which costs money

67:41

they cost power money they're called

67:42

tokens cuz it's like it's the smallest

67:44

you and this is like the dumbest

67:46

possible way of explaining it but it's

67:47

like it's the smallest you can break

67:49

down a piece of information in a data

67:52

set to then have it be evaluated across

67:55

the

67:56

to every other token exactly so like in

67:58

a sentence like you would break down

68:00

each word into a token and then analyze

68:03

those words as independent variables

68:05

tokenization cuz like in an image

68:07

like a pixel could be a token or a

68:09

cluster of pixels could be a token okay

68:11

so then quick question when they say

68:12

when they say 2 million tokens do they

68:14

mean that I can like do a 2 million word

68:17

prompt yes okay got it oh so it's per

68:20

individual query it can take up to 2

68:22

million tokens yes okay that's the

68:24

context so the window is basically like

68:27

how much information can I throw at this

68:29

because theoretically in these models

68:31

the more information you give it the

68:33

more the more accurate they can be okay
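A toy illustration of what "tokens" and a "context window" mean in practice. The one-word-equals-one-token rule below is a simplification for the sketch; real models use subword tokenizers (BPE or SentencePiece), so actual counts run higher, but the shape of the limit is the same: the window caps how much you can attach to a single request.

```python
def count_tokens(text: str) -> int:
    """Toy token count: one word = one token. Real LLMs split text into
    subword pieces, so a word is often more than one token."""
    return len(text.split())

def fits_in_context(prompt: str, context_window: int) -> bool:
    """The context window is the cap on how many tokens one request,
    the question plus every document, menu, or codebase you attach, may contain."""
    return count_tokens(prompt) <= context_window

doc = "word " * 12_000                   # a ~12,000-word document
print(fits_in_context(doc, 8_000))       # False: blows past a small 8k window
print(fits_in_context(doc, 128_000))     # True: fits easily in a 128k window
print(2_000_000 // count_tokens(doc))    # ~166 documents this size fit in a 2M-token window
```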

68:35

okay remember the dolly prompts that

68:36

were like give me a man in an astronaut

68:39

suit with a red handkerchief around his

68:40

neck be more blah blah you can just keep

68:43

going okay right yeah cool okay um now

68:46

they are also embedding Gemini into a

68:48

lot of Google workspace stuff so you can

68:50

have Gemini as like an additional person

68:53

in your meeting that's like taking notes

68:55

for you you and you can interact with

68:56

during Google meet meetings they should

68:58

call it Go

69:00

pilot why cuz it's Google like it's like

69:04

co-pilot but Google oh oh come on was it

69:07

that

69:08

bad was that bad I'm picturing on a

69:11

killed by Google website in like three

69:13

months yeah what did sorry that just

69:15

reminded me what did Mark call

69:17

music gos oh yeah Google Loops gloops

69:22

gloops yeah yeah he called it gloops at

69:24

one point which they should was the best

69:26

part of a yeah uh they introduced a new

69:28

model called Gemini 1.5 flash which is a

69:32

lighter weight faster cheaper model that

69:34

handles lighter weight queries

69:36

hooray Microsoft is so scared um we got

69:41

project okay so project Astra is what I

69:44

think is basically their answer to like

69:47

the Humane and rabbit thing except it's

69:49

better because we always knew it would

69:51

be the demo they

69:53

showed the demo they showed was basically

69:56

on a pixel it has a live feed of your

69:58

surroundings so on Humane or rabbit you

70:00

have to take a photo and then it

70:02

analyzes the photo and talks about it on

70:04

this one it was basically a real time

70:07

image intake where it was taking in a

70:09

video feed with this person walking

70:11

around with their pixel and they could

70:12

just be like they were just kind of like

70:14

what does this code do and it'd be like

70:16

oh it does this does this okay cool uh

70:19

well what what what where could I buy

70:20

this teddy bear oh the teddy bear can be

70:22

bought on Amazon for $14.99 cool cool

70:24

cool uh over and then they did this

70:27

casual thing where they like switched to

70:29

these smart glasses that had uh cameras

70:32

on them which was also strange cuz they

70:34

were like where did I leave my glasses

70:36

and it remembered where but they never

70:37

showed it in that initial feed so are

70:40

they remembering it previously or so

70:41

here was the weird thing yeah they they

70:42

said like where did I leave my glasses I

70:44

was like it's on the side table it only

70:46

knew that because the person was walking

70:48

around with their pixel camera open like

70:51

for 5 minutes and it happened to see it

70:53

in the corner while they were walking

70:55

around

70:56

obviously in the real world I think this

70:58

was basically the same thing where it's

70:59

like in the far off future if you had a

71:02

Humane AI pin that was constantly taking

71:05

in all of your video feed information

71:07

all the time it would always know where

71:09

you left all your stuff because it was

71:10

constantly watching which nobody wants
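A rough sketch of how that "where did I leave my glasses" behavior could work: continuously ingest frames, label what is in them, and keep a rolling memory of sightings so a later question is just a lookup. The detect_objects function is a hypothetical stand-in for a vision model, and nothing here reflects how Project Astra is actually built; the point is that the recall comes from having watched and remembered, not from anything special at question time.

```python
import time
from collections import deque

def detect_objects(frame) -> list[str]:
    """Hypothetical stand-in for a vision model that labels what's in a frame.
    In this sketch a "frame" is already just a list of labels."""
    return frame

class RollingMemory:
    def __init__(self, max_events: int = 10_000):
        # Keep only the most recent sightings; older ones fall off the end.
        self.events = deque(maxlen=max_events)

    def ingest(self, frame, location: str) -> None:
        for obj in detect_objects(frame):
            self.events.append((time.time(), location, obj))

    def last_seen(self, obj: str):
        # Walk backwards through memory to find the most recent sighting.
        for ts, location, seen in reversed(self.events):
            if seen == obj:
                return location
        return None

memory = RollingMemory()
memory.ingest(["laptop", "glasses"], "side table")  # earlier in the walk-around
memory.ingest(["teddy bear", "speaker"], "desk")    # later frames
print(memory.last_seen("glasses"))  # -> "side table"
```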

71:13

um so that's the convenience though

71:15

think of the convenience the storage

71:18

yeah remembering everything that Google

71:21

just put on their servers yeah threw it on

71:23

YouTube so I think this is just a demo

71:25

to show that yeah they they can do what

71:28

human and rabbit are doing but way

71:30

faster and way better and it's a video

71:32

feed instead of a photo and because it's

71:34

the live video feed they also did this

71:36

thing where it's like you could draw in

71:38

the video feed and be like what is this

71:40

thing and like an arrow to it so if like

71:42

for some reason you can't just describe

71:43

the thing in the viewfinder as well you

71:45

can it's basically Circle to search

71:47

through live multimodal like which is

71:50

something that open AI was basically

71:52

demoing too on Monday um cuz you could

71:55

you could Point your phone at things and

71:56

it would help you through math problems

71:58

as you were solving it yeah so it was

72:01

cool uh they didn't talk about you know

72:03

if it's ever going to come to anything

72:05

ever they just demoed it they said it

72:07

was a project kind of like those like

72:09

translation classes that just never

72:10

became a thing and I think they were

72:12

trying to make like a nod to those by

72:14

saying like yeah we've got we're working

72:16

on stuff but they're probably never

72:18

going to release anything it kind of

72:19

also made me feel like because this was

72:22

just like another blip on the radar

72:24

during all of

72:26

made me kind of feel like the Humane pin

72:28

and the rabbit pin needed to make the

72:29

hardware versions because everyone just

72:31

like kind of moved right past this even

72:33

though it's doing the exact same things

72:35

better than both of those but since

72:37

they're not a shiny product everyone's

72:39

like cool it's just something in iio

72:40

they're basically like yeah you can do

72:42

this on a phone moving on better yeah I

72:45

don't think about you Child's Play yeah

72:48

yeah all right so that was already a lot

72:50

um there's a lot more Google IO stuff uh

72:52

either you're welcome or I'm sorry or

72:54

you've read that or not listened to

72:57

all that but we got to do trivia because

72:59

that'll help your brain break up this

73:01

episode

73:04

so hey what is this is this on purpose

73:09

it is yeah I tried to use Google's music

73:12

effects to recreate our normal trivia

73:15

music um fun it got none of the things I

73:19

put in my prompt in here like not a

73:20

single one oh I guess I asked for

73:23

drums terrible sounds like is that drums

73:26

or snapping I asked for 6 seconds of

73:29

interstitial music intended to

73:30

transition from the main segment of a

73:32

podcast to the trig the trivia segment

73:34

that happens before an ad the track

73:36

should feature a hammonded organ and

73:38

drums and be bright Punchy and fun uh

73:41

that's not wrong I would say bright

73:43

Punchy and fun yeah but where's the

73:44

organ where's the six second let's chill

73:47

out a little bit give it a chance this

73:49

is Google small startup company I was

73:52

wondering he told me earlier like let me

73:53

handle the trivia music and I got

73:55

something cooked up and I was like what

73:56

what is he going to do and now we know

73:58

all right second trivia question so a

74:02

bunch of people were talking about how

74:03

the open AI voice sounded like Scarlett

74:06

Johansson in the movie Her Like David

74:08

mentioned what was the name of the AI

74:11

that she voiced in that movie I don't

74:13

watch movies

74:15

okay well you guys are going to get some

74:17

points uh yeah it's been a while since

74:21

I've seen this film so much for that

74:23

yeah cool all right I've never even

74:26

heard of that movie what perfect me NE

74:28

serious serious Andrew go home we can't

74:33

we can't tell Andrew anything about this

74:35

movie we have to get through the the the

74:37

trivia answers and not spoil a single

74:38

thing and then Andrew you need to go

74:40

home and you need to watch

74:42

her no

74:45

[Music]

74:55

um okay notebook LM which used to be

74:58

called project Tailwind which Adam was

75:00

particularly excited about it's now more

75:03

focused towards education but

75:05

effectively it can take in information

75:07

from your Google Drive like photos and

75:09

documents and videos and basically

75:11

create like a model of these AIS that

75:14

can help you with specific things

75:15

related to your specific information and

75:17

the way that they showed it was

75:18

basically a live podcast um where they

75:22

had these two virtual tutors they were

75:25

kind of like both talking separately

75:27

being like okay Jimmy so we're going to

75:29

be talking about gravity today so you

75:31

know how gravity is 9.8 meters per second

75:34

squared and then the other

75:36

AI would be like not only is it that but

75:39

if you dropped an apple it would drop at

75:40

the same speed as a rock um and then you

75:43

can like call in almost and then ask

75:45

then you become the new member of this

75:47

conversation ask a questions and they'll

75:49

respond to you interesting it was some

75:51

virtual tutors and they were very

75:53

realistic very similarly to OpenAI's 4o

75:56

model that felt very realistic I felt

75:58

like this week is the week where all of

76:00

the companies were like we are no longer

76:02

robotic we can have humanoid like

76:05

voices that talk in humanoid like ways

76:08

great so that was interesting okay uh I

76:10

would actually love to play with that to

76:11

see if it's any good it was kind of cool

76:14

that there was like two virtual AIS that

76:16

were sort of like talking to you at the

76:17

same time but also interacting with each

76:19

other didn't it also just like pause and

76:21

not finish the

76:23

question um um I think that he

76:26

interrupted it oh he interrupted CU he

76:27

was basically saying I'm helping Jimmy

76:29

with his math homework hey Gemini 1 like

76:33

what do you think about this and be like

76:34

wow great question and then it just like

76:36

paused and didn't finish answering the

76:37

question yeah okay probably just a bad

76:39

uh demo on on stage yeah um okay they

76:43

also introduced I thought it was called

76:45

Imagen but it's Imagen 3 which

76:52

I thought it was image-gen cuz image

76:53

generation but it's imagine which is

76:55

more like imagine it's like a triple entendre

76:57

probably like it still is that but just

76:59

a funnier better way of saying it

77:01

yeah uh basically their third generation

77:04

DALL-E-esque model with better photo

77:07

creation yay cool um Music AI Sandbox

77:11

which they had Marc Rebillet at the

77:14

beginning use the Music AI Sandbox to

77:17

create these beats and they also had

77:18

like a Childish Gambino ad where he was

77:21

talking about it I think uh he was

77:24

talking about video stuff later wasn't

77:27

he talking about Veo yeah you're

77:29

talking about Veo cuz he was put up

77:30

there as a

77:32

uh why can't I think of the word film

77:36

like he was doing movies not music which

77:39

I thought was funny but got it got it

77:40

okay yeah so the music generation was

77:42

you know better music yay they basically

77:45

are just like updating all these small

77:47

AI things that they were like I can do

77:49

this too and better than last time yes

77:51

which is where we go with Veo or yeah I

77:55

think it's Veo um ve ve I don't know I

77:59

don't know it probably is Veo I'm just

78:00

saying that because it's already a word

78:01

in Spanish so oh what does it mean in

78:03

Spanish like I see basically I see like from

78:07

ver like to see like

78:10

I see oh okay well that would make sense

78:15

yeah yeah it can create 1080p video from

78:17

text image and video prompts uh you can

78:20

further edit those videos with

78:22

additional prompts they had testing

78:24

extending the scenes which was cool that

78:26

was the coolest part of it I think which

78:28

I think was like a direct shot at uh run

78:31

Runway because that can only do

78:33

like 30 second video clips right now I

78:35

think possibly 1 minute well I thought

78:37

it was cool because it wasn't just that

78:39

it extends that is I think it was you

78:41

could put a clip in and say make this

78:44

longer right and then it would make the

78:46

clip longer by just like it it basically

78:49

is like content aware fill where you

78:51

need a photo that's bigger but it does

78:53

it on a video I think that's awesome

78:55

there are so many times where you're

78:56

like found footage this doesn't really

78:58

there's not enough of the b-roll that I

79:00

need here like if this could be five

79:02

more seconds I do remember I remember

79:04

right after Dolly came out people were

79:07

basically making these videos of like

79:09

how to make your your a roll setup

79:11

cooler with AI and it was basically like

79:13

they just had a very contained version

79:15

and then they generated a filled out

79:18

image of their office to make it look

79:20

like they were in a bigger space that's

79:22

sick which is kind of cool but now Veo

79:24

can just do do it for you uh the nice

79:26

thing too is it maintains consistency

79:28

over time so if you have like a

79:30

character the character will look the

79:31

same and it doesn't do that random

79:33

stable diffusion thing that uh AI

79:35

generated video used to do where it

79:37

would like flash in and out and like

79:38

change shape slightly and keep moving

79:40

around it feels like we're only a year

79:43

or two away from like being able to

79:44

remove a fence from the foreground of a

79:46

photo I don't know dude only God can do

79:49

that I don't think that's that's nobody

79:52

knows how to do that uh and then wait

79:54

sorry real quick Adam and I were just

79:56

like factchecking some stuff and we

79:58

found something that's like too funny

79:59

not to share with you guys like

80:01

immediately this is um when Google

80:03

announced last year Lyria uh which is the

80:06

actual model that does a lot of their

80:08

generative audio look at this image that

80:11

they use I just sent it to you guys in

80:15

slack wait that's just the waveform logo

80:18

that's just like our it's literally our

80:19

waveform Clips background the colors are

80:22

the same and like gradients the same and

80:26

like it's kind of the same that's

80:27

exactly the same yo Google send us some

80:30

IO swag from this year to make up for

80:32

this cuz it's the same you stole this from us you

80:33

stole this from us it's like right on

80:35

the top of the page wow I'm tagging Tim

80:38

in this and seeing it's literally the

80:39

purple into scarlet color we'll get back

80:42

to that and see okay um all right uh

80:47

with the Veo thing they said it's going

80:49

to be available to select creators over

80:52

the coming weeks which is weird that you

80:54

have to to be a Creator to apparently

80:56

use it and I think what they're trying

80:58

to do is control the positive narrative

81:01

by giving limited access to to artists

81:04

who will carefully that will say

81:07

positive things about it which is fun

81:11

uh super fun okay the pain in your voice

81:15

as you said

81:16

that this actually is kind of cool we

81:19

found it there's some nuggets in this

81:21

needle stack you know what I'm saying so

81:24

nugget we wait you mean there's some

81:26

needles in this hay stack needles in

81:28

this nugget stack I don't mean that

81:30

there's some nuggets in this in this

81:32

needle stack if you like hey that's why

81:34

I said nuggets like chicken nuggets so

81:37

they have multi-step reasoning in Google

81:39

Search now which is cool this is

81:41

actually something that Andrew was

81:42

specifically calling out a couple weeks

81:44

ago this is so close to my Google Maps

81:47

idea I feel like but it's not quite yeah

81:49

okay do you want to describe it yeah

81:51

it's pretty much it's kind of like using

81:55

Google search and maps and reviews and

81:57

putting it all together so you can have

81:59

a more specific question or suggestion

82:02

that you want so theirs was like I want

82:04

a yoga studio that's within walking

82:07

distance of Beacon Hill is what they

82:09

said within a half an hour walk of

82:10

Beacon Hill and it's going to bring up

82:13

different yoga studios that are within

82:15

those parameters that you set and then

82:18

based on the ratings the higher ratings

82:20

in there so it's an easier thing for you

82:21

to choose rather than just being a I

82:24

guess it's not that much different than

82:26

just searching on maps where but in maps

82:28

you would have to see like oh that's 20

82:30

blocks away from me this is actually

82:33

like Gathering that into what it deems

82:35

as walkable in half an hour so yeah the

82:38

the specific example they used was find

82:40

the best yoga or pilates studio in

82:42

Boston and show the details on their

82:43

intro offers and walking distance from

82:46

Beacon Hill and so it generate again

82:48

this is like the generated page so it's

82:50

like kind of scary in a lot of ways but

82:52

it pulls up like four that are within

82:55

the 30 minute walking distance from

82:56

Beacon Hill and it shows the distance

82:59

and it shows their intro offers and it

83:01

shows their ratings and that's that's

83:03

really cool I can't deny that
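
One way to picture the multi-step reasoning demo is the model splitting that single question into ordinary lookups (places near a landmark, walk time, ratings, intro offers) and merging the results. The sketch below fakes that with a hard-coded list; the sample studios and the `answer_query` helper are invented for illustration and are not Google's actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class Studio:
    name: str
    rating: float
    walk_minutes_from_beacon_hill: int
    intro_offer: str

# Made-up sample data standing in for Maps listings and reviews.
STUDIOS = [
    Studio("Lotus Yoga", 4.8, 12, "First week free"),
    Studio("Back Bay Pilates", 4.6, 25, "3 classes for $30"),
    Studio("Harborside Yoga", 4.2, 40, "Free mat rental"),
    Studio("Beacon Flow", 4.9, 8, "Intro month $49"),
]

def answer_query(max_walk_minutes: int = 30, min_rating: float = 4.0):
    """Step 1: filter by walk time. Step 2: filter by rating.
    Step 3: sort best-first, keeping only the fields the user asked about."""
    candidates = [s for s in STUDIOS
                  if s.walk_minutes_from_beacon_hill <= max_walk_minutes
                  and s.rating >= min_rating]
    candidates.sort(key=lambda s: s.rating, reverse=True)
    return [(s.name, s.rating, f"{s.walk_minutes_from_beacon_hill} min walk", s.intro_offer)
            for s in candidates]

for row in answer_query():
    print(row)
```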

83:05

I will say like a real world example of

83:07

this is last Saturday Saturday cuz

83:11

Friday night there was an insane Eclipse

83:13

all over North America not an eclipse there

83:15

was an aurora borealis cuz apparently like

83:18

geomagnetic storms you didn't everyone in New York and

83:21

New Jersey miss it man you would have

83:22

had to get anywhere else in the United

83:23

States yeah we tried I was depressed I

83:26

got I like got into bed after getting

83:28

dinner with Ellis on Friday night and it

83:29

was 12:30 in the morning and I opened

83:31

Instagram and I was like what the heck

83:33

is happening and what did we do the next

83:34

day we drove to Monto but the whole

83:37

morning I was like Googling like uh

83:40

what's the weather here okay and then I

83:42

had to go on Google Maps and I'd be like

83:44

how far is that okay and I had to look

83:46

at the radar of it again and so I was

83:48

jumping back and forth between these

83:49

tabs like crazy I was going back Google

83:51

Maps and like these weather radar sites

83:54

and like how clear is the sky during the

83:57

that part of the year so if I was able

84:00

to just if you could trust these

84:02

generative uh answers ask the question

84:05

like what is the closest dark sky Park

84:08

that won't rain today that I'm most

84:11

likely to see the aurora borealis from

84:14

and it would just tell me the number one

84:16

answer and I could trust it which is the

84:18

biggest thing that would have been a lot

84:20

easier than me jumping back and forth between

84:21

all these apps cuz this legit took me

84:23

like 3 hours to figure out where I

84:24

wanted to go this poses an interesting

84:26

question on if it would be able to use

84:28

real time information and real-time

84:30

inform information that Google provides

84:33

but they didn't show examples like that

84:34

because what you're saying would need to

84:36

know weather at a certain time that it's

84:38

knowing and updating where everything

84:40

they showed is stuff that could have

84:42

been posted months ago like yoga studio

84:45

walk intro offers I guess is more closer

84:48

to real time but like if you wanted to

84:49

do that or if I wanted to say and this

84:52

is something that's available in Google

84:53

what's the the closest four plus Star

84:57

restaurant in a 20-minute walk that is

84:59

not busy right now MH that like I

85:02

wouldn't have to wait for yeah could it

85:04

pull the busy time thing that they have

85:06

in Google in reviews and would it pull

85:08

that I would hope but I don't know they

85:11

should have showed it if it can pull

85:12

that live in there but I don't know if

85:15

they would be able to do this but yeah

85:16

this is so this is so close to me being

85:18

able I just want this to be able to use

85:21

in Android auto on maps with my voice

85:24

voice and say stuff back to me through

85:27

the car speakers and be able to do a

85:30

little more within a route um I feel

85:32

like this feels like a step close to

85:34

that I'm excited for it this was very

85:36

close to the thing that you mentioned

85:37

the other yeah yeah yeah we're almost

85:39

there I'm almost able to find the

85:41

closest Taco Bell serving breakfast on

85:43

my route next next to a third wave

85:44

coffee shop that doesn't add 30 minutes

85:46

yeah that doesn't which is pretty much

85:48

exactly what you asked for I know it is

85:49

I know so close yeah we'll see if that

85:51

if that works um Gmail now you might

85:55

think that the updates to Gmail would be

85:58

things that I would care about

86:02

mhm no no so all I want in Gmail is

86:07

better contextual search because right

86:10

now you can Search keywords and that's

86:12

basically it and you have to sort

86:13

through all of the different things and

86:14

you have to sometimes it just doesn't

86:16

show the email even though it has the

86:18

exact words that you're searching for is

86:20

it safe to say gmail search function is

86:23

Google's worst search function out of

86:25

anything that they do I would say so I

86:28

think it is impossible do you know how

86:29

many times I try and find my Mileage

86:31

Plus number in my Gmail but I just get

86:33

400 United promotional emails that have

86:37

never been opened it's like if I'm

86:38

searching for something I probably want

86:41

it to be something that's been opened

86:42

and read before this needs the exact

86:44

same update as Google photos just got

86:46

yes yeah which is well tell me my

86:48

Mileage Plus number and then it goes

86:50

here's what it is and here's the email

86:52

that showed it okay so that's a perfect

86:53

analogy because the Google The Google

86:56

photos thing you ask it a question and

86:59

then it brings you the photo that you

87:00

want to see right the update to Gmail

87:02

that they just added is you ask you ask

87:05

it a question about your emails and it

87:08

generates you a response it doesn't

87:11

bring you to the email that you need to

87:12

see it just tells you about your emails

87:15

which I don't trust because I just want

87:17

to see the email email yeah yeah so it

87:20

can summarize an email chain which is

87:23

like

87:25

I guess sure maybe how long your email

87:27

chains I know Corporate email chains are

87:28

probably really long and possibly

87:30

annoying still don't trust a generated

87:33

answer of things you can ask Gemini

87:36

questions about your email um okay uh it

87:40

has suggested replies that are generated

87:43

by Gemini which is not that different

87:44

from the suggested replies it already

87:46

has right now except that now it's

87:47

suggesting like full replies instead of

87:49

just like hey Martha as like the

87:51

beginning of the reply or whatever um

87:54

one of the examples that they got on

87:56

something you can do in Gmail with

87:58

Gemini is they asked it to organize and

88:01

track their receipts so it extracted

88:04

receipts from emails that they got from

88:06

a specific person put all the receipts

88:08

in a folder and drive and then created a

88:11

Google Sheets document that in a bunch

88:13

of cells like organized the receipts by

88:16

category damn that was awesome
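
Stripped of the Gemini branding, that demo is an extract-and-tabulate job: find the receipt emails, pull the amounts, bucket them, write a table. Here is a small local stand-in with made-up sample emails and a CSV in place of the Google Sheet; it is not the actual Workspace integration.

```python
import csv
import re

# Made-up sample messages standing in for receipt emails from one sender.
EMAILS = [
    {"subject": "Receipt: Dog boarding", "body": "Total charged: $120.00 for boarding."},
    {"subject": "Receipt: Chew toys order", "body": "Total charged: $18.50 for toys."},
    {"subject": "Receipt: Vet visit", "body": "Total charged: $85.00 for checkup."},
]

CATEGORIES = {"boarding": "Boarding", "toys": "Supplies", "vet": "Veterinary"}

def categorize(text: str) -> str:
    # Crude keyword bucketing; a real assistant would classify with a model.
    lowered = text.lower()
    for keyword, label in CATEGORIES.items():
        if keyword in lowered:
            return label
    return "Other"

rows = []
for mail in EMAILS:
    amount = re.search(r"\$([\d.]+)", mail["body"])
    rows.append({
        "subject": mail["subject"],
        "amount": float(amount.group(1)) if amount else 0.0,
        "category": categorize(mail["subject"] + " " + mail["body"]),
    })

# Write a CSV as a stand-in for the generated Google Sheet.
with open("receipts.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["subject", "amount", "category"])
    writer.writeheader()
    writer.writerows(rows)
```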

88:19

that was cool it was cool but it was like so

88:20

specific and neat that it can do all

88:22

that and it still probably can't find my

88:24

Mileage Plus number yes I bet the rabbit

88:26

could do that if I took a picture of

88:28

every receipt I've ever had I think this

88:29

is cooler though because it can

88:31

show sorry I missed the sarcasm

88:34

completely there yeah I love that that

88:36

was the number one thing that people

88:37

were Amazed by with the rabbit they were

88:39

like it can make tables spreadsheets of

88:41

simple spreadsheets hey the Humane pin

88:44

can do that soon sometime in sometime in

88:47

the future I think the biggest problem with

88:50

with Gmail's contextual search right now

88:51

is the signal to noise ratio is terrible

88:53

like you were saying like there's one

88:55

email that says your Mileage Plus number

88:57

all of the other ones are promotional

88:59

yeah signal is one noise is

89:02

999,000 yeah maybe they just did a bad

89:04

job at explaining it I'm hoping Gemini

89:06

for Gmail search could be really awesome

89:09

cuz I need it so bad I'm so sick of

89:11

digging through my email and just not

89:13

being able to find things that are

89:15

literally things I or a friend typed me
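
In the meantime, Gmail's existing search operators get you part of the way there. The operators used below (from:, is:read, subject:, -category:promotions, older_than:) are real, documented Gmail search syntax; the specific queries are just one way to cut the promotional noise out of a MileagePlus hunt.

```python
# Example Gmail queries that narrow a "find my MileagePlus number" hunt.
QUERIES = [
    # Only mail from United that is not in the Promotions category.
    'from:united.com -category:promotions "MileagePlus"',
    # Only messages you have actually opened.
    'from:united.com is:read "MileagePlus"',
    # Older confirmation-style mail with the word in the subject.
    'subject:(MileagePlus) -category:promotions older_than:1y',
]

for q in QUERIES:
    print(q)  # paste any of these into the Gmail search box
```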

89:17

I've legitimately just thought about

89:19

just nuking my whole email and just

89:21

starting a new one start over because

89:22

I'm like I can't find that's not a bad

89:24

idea I think about it a lot but it's and

89:26

I wish you could do that with phone

89:27

numbers but you're just getting someone

89:29

else's like phone number that they're

89:31

reusing so kind of pointless you're

89:32

going to get spam anyway yeah yeah I

89:34

remember like sorry this is random

89:36

moving moving a couple years ago and

89:38

getting solicitation mail being like how

89:41

is this possible that I'm getting mail

89:42

this is a new address it doesn't make

89:44

any sense anyway yeah yeah okay I would

89:47

like that to happen so they're they're

89:48

creating this new workspace side panel

89:52

that's going to be going into a lot of

89:53

the Google workspace apps like Gmail and

89:56

like Google meet and like Google Chat

89:59

was a which is part of meet but no one I

90:01

don't think ever has used I take a

90:03

minute and be like what's Google Chat I

90:05

forgot that was I forgot it existed yeah

90:08

um because it's a chat client by Google

90:10

and you just don't use them because they

90:12

oh Hangouts oh right yeah oh no sorry wait

90:15

was it yeah Allo wait was it Google

90:18

Messenger that's what it was oh mess

90:20

messages by Google oh messages or is it

90:23

Android messages I thought it was

90:26

[Music]

90:28

inbox all right uh yeah so so that side

90:32

panel is how you're going to interact

90:34

with Gemini in order to interact with

90:36

all of your Google stuff what I found

90:38

kind of frustrating about this is that

90:41

it's only in Google workspace stuff oh

90:44

and to me this is their version of Apple

90:47

lockin of like we're giving you all

90:49

these Gemini features but you can only

90:52

really use it in Google

90:54

products so but also like if you have

90:57

like a regular account that's not a

90:58

workspace account is that does that

90:59

still work no you can still use it like

91:01

that's part of Google workspace I

91:03

believe as me as a person with Google

91:07

Calendar and Gmail and three or four

91:10

other Google services I can you can use

91:12

Gemini through that okay yeah yeah um

91:14

they introduced this thing called a

91:16

Gemini powered teammate uh if you use

91:19

Google Chat forgot about this chip yeah

91:22

chip so why do they name it stop stop

91:26

naming I actually named it Chad before

91:29

they named it chip and then they were

91:30

like chip and I was like Chad was to be

91:32

fair that it wasn't that's not what its

91:34

name is you can name it and they named

91:37

it chip as just a joke in the thing got

91:39

it yeah so the way this worked is

91:41

imagine a slack Channel where you're all

91:44

talking about stuff you're all like

91:45

sending files and documents and stuff

91:48

you have an additional teammate that you

91:50

can name whatever you want we should

91:51

name ours Chad if we ever do it uhhuh

91:54

but you can say like hey Chad what was

91:58

the PDF that I sent to Tim last month I

92:00

can't really find it and Chad goes I'll

92:02

go find that for you and then it comes

92:03

back and it gives it to you and it can

92:05

ask information about it so it's

92:07

basically an employee a personal

92:09

assistant I'm on board yeah yeah
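
The "hey Chad, what was the PDF I sent to Tim last month" request is, at its core, a filter over an index of shared files by type, recipient, and date. A toy version with invented data is below; Google Chat's real implementation is presumably retrieval plus a language model, not this.

```python
from datetime import date, timedelta

# Invented index of files shared in a chat space.
SHARED_FILES = [
    {"name": "launch-plan.pdf", "sent_to": "Tim",  "sent_on": date.today() - timedelta(days=20)},
    {"name": "budget.xlsx",     "sent_to": "Tim",  "sent_on": date.today() - timedelta(days=90)},
    {"name": "notes.pdf",       "sent_to": "Adam", "sent_on": date.today() - timedelta(days=10)},
]

def find_file(extension: str, recipient: str, within_days: int):
    """Return shared files matching type, recipient, and a recency window."""
    cutoff = date.today() - timedelta(days=within_days)
    return [f for f in SHARED_FILES
            if f["name"].endswith(extension)
            and f["sent_to"] == recipient
            and f["sent_on"] >= cutoff]

# "What was the PDF I sent to Tim last month?"
print(find_file(".pdf", "Tim", within_days=31))
```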

92:12

so but that's only in Google Chat which

92:14

literally no one has ever used in

92:16

history this is going to be dead in a

92:17

week I did not know that this was a

92:19

product yeah like did most of them yeah

92:23

on Gmail you can see the little chat

92:26

like you can rotate between Gmail and

92:28

Google Chat like whenever you want which

92:31

is a thing that you can like get rid of

92:32

or po out of that I always I I've never

92:35

I know oh look invite everybody in our

92:39

work yeah I'm pretty sure only Google

92:41

uses this so I'm not sure how helpful

92:44

this is going to be okay but it's an

92:46

interesting idea Marquez the last time we

92:47

talked and it was

92:48

2020 I didn't know I use this yeah this

92:52

isn't Hangouts what it probably was

92:54

Hangouts when we were talking chats yeah

92:57

oh H did we say anything fun well this

93:01

one says I haven't used this since 2013

93:02

so oh my that means it was just Gchat

93:06

right just I don't know anymore I don't

93:09

know who knows I can't believe we will

93:11

live in this hellscape um it's basically

93:13

like a a live like slackbot doing things

93:18

for you yeah yeah it's a slackbot that

93:20

is like contextual and can grab

93:23

different files that you've used which

93:25

pretty cool yeah it's really sad that

93:27

it's only in Google chat but yeah you

93:29

know but also makes sense I guess

93:31

they're not going to build it for slack

93:32

correct first to show off at IO yeah uh

93:35

okay um they have a new thing called

93:38

gems which is basically GPTs

93:44

you know how chat GPT has a custom GPT

93:47

you can build that's like this is the

93:49

math tutor GPT and it only talks about

93:51

math mhm which like there's an existential

93:54

question here of like would you rather

93:55

do have like one AI agent that knows

93:57

literally everything and is omniscient

93:59

or would you have individualized agents

94:01

I'd rather just have an omniscient agent

94:03

that's just me but you can have gems which

94:07

uh are specialized Google Gemini Geminis

94:12

with ultra deep knowledge of specific

94:13

topics and yes well sort of no it's the

94:16

same knowledge as regular Gemini it's

94:18

not deeper than regular Gemini knowledge
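
One reading of "same knowledge, not deeper" is that a gem is just the base model with a fixed instruction pinned in front of every request, the same trick as custom GPTs. The sketch below models a gem as nothing more than that instruction plus message assembly; the actual model call is left out because Gemini's internals are not public, and the `Gem` class is an interpretation, not Google's design.

```python
from dataclasses import dataclass

@dataclass
class Gem:
    """A 'gem' modeled as a pinned system instruction over an unchanged model."""
    name: str
    instruction: str

    def build_messages(self, user_text: str) -> list:
        # The underlying model's knowledge is unchanged; only the framing differs.
        return [
            {"role": "system", "content": self.instruction},
            {"role": "user", "content": user_text},
        ]

math_tutor = Gem(
    name="Math tutor",
    instruction="You are a patient math tutor. Only discuss math. "
                "Walk through problems step by step.",
)

print(math_tutor.build_messages("Why does gravity accelerate everything equally?"))
```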

94:21

I don't understand the I still don't

94:22

understand this that much yeah maybe

94:25

it's less prone to hallucinating if it

94:27

doesn't have a bunch of extra input

94:28

maybe maybe there's more well it has the

94:30

same input though cuz it's the same

94:32

model but it doesn't but it doesn't have

94:34

all the other knowledge oh of other

94:36

topics to hallucinate about I don't know

94:39

it's it's it's the same as the custom

94:41

gpts where you use chat GPT to create a

94:44

custom GPT so it's like it that input at

94:48

the beginning has to be correct and for

94:51

I don't it's I don't know the funny

94:52

thing about this was

94:54

when the lady on stage announced

94:56

it it was really awkward because she was

94:59

like now you don't use Gemini in the

95:02

same way that I use Gemini and we

95:05

understand that and that's why we're

95:07

introducing gems and there was like a 4

95:10

second silence and she like laughed

95:13

there had to have clearly been on the

95:14

teleprompter like hold for applause it was

95:17

just too deep into like you said this

95:20

soulless event it was not her fault it

95:23

was just like it was not a great naming

95:25

scheme everyone was out of it at this

95:27

point there were multiple camera cuts of

95:29

people yawning in the audience and I

95:31

just think she was waiting for an

95:33

Applause and it did not come she had

95:35

like an awkward chuckle I felt really

95:37

bad about that yeah yeah uh

95:41

okay some more stuff trip planning which

95:44

is the stupidest I I just h ah they're

95:49

building out the trip planning stuff for

95:51

some reason um and now you can like talk

95:54

to Gemini and tell it about the trip

95:56

that you want it'll make you a trip and

95:58

then you say oh but I'm allergic to

96:01

peanuts so don't bring me to a

96:02

restaurant that likes peanuts it goes oh

96:04

okay and it swaps it out for you for a

96:06

chocolate factory and so it's like

96:07

modular um I don't want to give up

96:10

control to an AI but maybe people do I

96:13

don't I can only assume they're so into

96:14

this because of like the way they're

96:16

making some sort of affiliate money on

96:18

all of these different travel things

96:21

like I don't yeah I don't want to just

96:23

like say plan me a trip here and then I

96:26

have I know nothing about the trip I'm

96:27

going to yeah and that's what it wants

96:29

to do there are some psychos who would

96:31

do that I guess there are you've seen

96:33

those Tik toks of like I spun around and

96:35

threw a dart at a globe and when it

96:37

landed on Madagascar I booked my flight

96:39

like that's but then you want to go and

96:41

have a good time off the cuff you don't

96:42

want it to plan your entire itinerary

96:45

somebody would do it it's funny cuz when

96:49

video actually that honestly would be an

96:52

awesome Studio aome I liked the old chat

96:55

GPT version of just like help me find

96:57

like I had to give you the place and

96:59

specifics of it and like you would help

97:01

throw a few suggestions like a hike at

97:03

Rocky Mountain National Park tell me the

97:05

top five that are over five miles that

97:07

felt good now we're at the point where

97:09

it's like can you walk me down this

97:11

hiking trail and tell me what I'm going

97:13

to see I don't like this part I like the

97:16

old not this new B GPT oh my God uh okay

97:21

I think we're losing Marquez so we got

97:22

to wrap this up so CL we're giving him

97:24

the full experience six more

97:27

features I might die can we use Gemini

97:29

to summarize this event if you're bored

97:32

of this episode use Gemini to summarize

97:35

this episode but then leave it running

97:36

in the background I you I can summarize

97:39

this episode the rest of this for you

97:41

ready okay stuff that Google already did

97:45

is now powered by Machine learning

97:47

that's it

97:48

well more different types of machine

97:51

learning basically sure cool

97:54

yeah now it works 15% worse and more

97:58

neurally neurally with the brain stuff look

98:00

like like like what we're going to get

98:02

to next Gemini on Android what are they

98:04

using it for scam detection going to

98:06

work 15% worse uh yeah Circle they're

98:10

doing like a circle to search hover sort

98:12

of

98:13

thing 15% worse yeah I want to say I'm

98:16

pretty sure they showed how you can buy

98:19

shoes with Gemini five different times

98:22

during this event they're very into live

98:24

shopping yeah yeah I think Google's

98:26

really trying to amp up the live

98:28

shopping affiliate stuff because I think

98:29

they see the writing on the wall for

98:31

like search not making money anymore in

98:33

the distant future so they're like how

98:35

can we just like create new revenue

98:36

streams well they're killing off their

98:39

their ad streams by summarizing all the

98:41

websites that they sell ads on so now

98:43

they're just going to make the affiliate

98:45

marketing through the right the the

98:47

e-commerce that just appears directly

98:49

under the search cuz Google AdSense is

98:51

like oh my God y they're the backbone of

98:54

the web and they're also throttling it

98:57

and making sure it doesn't work anymore

98:59

all right Gemini and Android cool

99:02

there's this Gemini hover window now

99:04

that can hover above apps so you can

99:06

like ask it questions about the app and

99:08

interact with the app and do that kind

99:09

of stuff um seems helpful sometimes

99:13

annoying other times if it was one of

99:15

those Android things where it's like

99:16

kind of pinned to the right corner and

99:18

you could press it and then appear when

99:20

you want it to be yep that'd be nice I'm

99:22

not really sure how they're integrating

99:23

it but you know what this is giving me

99:25

the vibe up I feel like Google doesn't

99:28

know what to use AI for and it reminds

99:31

me of like that too it reminds me of

99:34

like when Apple makes a new thing and

99:36

they're like developers will figure out

99:37

what to do with it yeah like Google

99:39

doesn't know what to show the AI doing

99:40

so it's just doing a bunch of things it

99:42

already does and they're just like

99:44

treating it as a new language almost

99:46

where they're like developers and you

99:47

guys will eventually figure it out and

99:50

make it useful can I have a Counterpoint

99:52

to that go I Google knows exactly what

99:54

their AI can do and just like with Bing

99:58

and like old chatbots that went off the

100:01

rails Google is way too scared of the

100:03

power that they have and what people

100:05

could do with it that they have to show

100:06

the most minuscule safe examples of

100:09

things and release it in a very safe way

100:12

because they know how much power they

100:13

have buying sneakers buying sneakers

100:15

yeah make us some money don't do

100:18

anything that would hurt the entire our

100:20

entire brand I think that's that's my

100:23

take on yeah but at this point everyone

100:25

already kind of sees them as losing to

100:27

open AI so like why not just do it they

100:30

know that they actually are winning and

100:32

it's only a matter of time and they know

100:35

they're not behind so much better

100:37

positioned than OpenAI and and this

100:39

event really kind of showcased that

100:41

because it showcased all the Google

100:42

workspace stuff that they could actually

100:44

use Gemini in um this isn't a joke

100:46

though we still do have five more things

100:49

to cover I'm going to go quick promise

100:52

we get through okay so in the Gemini and

100:54

Android floating window thing you could

100:55

be watching a YouTube video you can be

100:56

asking questions about the YouTube video

100:58

while you're watching it which is semi

101:01

cool however it hallucinates and gets

101:04

stuff wrong um someone gave an example

101:07

they talked about pickle ball again

101:09

because that's all the tech Bros talk

101:10

about and

101:12

uh the Google executive who is now into

101:15

the pickle ball because of course you

101:17

are um asked was like is it illegal to

101:20

put a spin on the pickle ball and it was

101:22

like yes you cannot do spin on the

101:23

pickle ball and some pickle ball person

101:25

on Twitter was like what what yes you

101:29

can um so that's fun uh it now has AI

101:33

powered scam detection which is kind of

101:36

creepy uh because it is consist it's

101:39

listening to your phone call when you

101:42

pick it up and it's listening to what

101:44

the other thing on the end of the line

101:47

is saying because it might not be a

101:48

person might be a robot by then it's too

101:50

late and it will tell you it will so the

101:52

example they gave was like someone who

101:55

had answered a call and was like we just

101:56

need you to like click this link and

101:58

blah blah blah and then on the phone

102:00

there was a little popup that says like

102:02

this is probably a scam you should hang

102:04

up now I actually think that's awesome

102:06

great for old people except that it's

102:08

listening to you all the time which is

102:10

weird yeah which is weird and I imagine

102:12

they're going to probably do something

102:13

about like it's only on device and we

102:15

don't do anything with the information

102:16

and we get rid of it immediately after

102:19

mhm I think it would be super useful for

102:21

people who would fall for those scams

102:22

because so many of them it takes 30

102:25

minutes to an hour some of them have to

102:27

go out and buy gift cards and stuff like

102:29

that and at that point it could be like

102:30

this is a scam hang up that could save

102:33

people like literally life savings

102:35

totally I think that's awesome yeah um
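
You can think of the scam warning as a classifier running over the live call transcript and flagging known pressure patterns. The toy below does that with plain keyword matching on a made-up transcript; Google's feature presumably runs Gemini Nano on device rather than anything this crude.

```python
# Toy stand-in for on-device scam detection: scan a running call transcript
# for common pressure patterns and surface a warning.
RED_FLAGS = [
    "gift card", "wire transfer", "act now", "click this link",
    "verify your account", "keep this call secret",
]

def check_transcript(transcript: str) -> list:
    lowered = transcript.lower()
    return [flag for flag in RED_FLAGS if flag in lowered]

live_text = ("Hi, this is your bank's security team. We just need you to "
             "click this link and buy a gift card to verify your account.")

hits = check_transcript(live_text)
if hits:
    print("Likely scam - consider hanging up. Flags:", hits)
```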

102:37

they're calling all of these new Gemini

102:40

in Android features Gemini Nano with

102:42

multimodality because we need more

102:44

branding um and it's yeah starting on

102:48

Pixel later this year uh I imagine it'll

102:50

get moved out to more Android phones

102:52

later they're definitely trying to like

102:54

build a fortress against apple and be

102:56

like look at all our AI stuff that only

102:59

Android phones have uh Google Chrome is

103:02

also getting Gemini Nano I'm not really

103:04

sure what that means you can do basic

103:07

stuff in it that you can do with Gemini

103:09

Nano now if you use Chrome that was sort

103:12

of passive they're not that scared of

103:14

other browsers right now they still have

103:15

way too much market share so last year

103:17

they also introduced this thing called

103:18

SynthID because when they showed off

103:21

the image generation they basically

103:23

talked about this open standard that

103:25

they want to create where it bakes in

103:27

this hidden Watermark within an AI

103:29

generated image because we all used to

103:31

be worried about Dolly stuff and now

103:32

we're worried about other AI things now

103:35

but now they're expanding SynthID to text

103:38

somehow um no idea how they're going to

103:40

do that but that's interesting and also

103:42

video uh so I don't think it's going to

103:45

be an interesting standard but it's just

103:46

their version of figuring out whether or

103:48

not something is AI generated
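
Google did not explain here how SynthID for text works, but the general published idea for watermarking model output is to nudge sampling toward a pseudo-random "green list" of tokens keyed on the preceding token and a secret key, then detect by counting how many tokens land on their green lists. The sketch below shows only that counting step on toy data, with a made-up key; it is not SynthID.

```python
import hashlib

SECRET_KEY = b"demo-key"  # made-up key; real schemes keep this private

def is_green(prev_token: str, token: str) -> bool:
    """Deterministically put roughly half of all tokens on a 'green list'
    that depends on the previous token and the secret key."""
    digest = hashlib.sha256(SECRET_KEY + prev_token.encode() + token.encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens: list) -> float:
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

# Ordinary, unwatermarked text should land near 0.5; text generated while
# favoring green tokens would score well above that.
sample = "the quick brown fox jumps over the lazy dog".split()
print(round(green_fraction(sample), 2))
```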

103:49

they made a joke at the end of

103:51

IO where they were like we're going to

103:54

save you the trouble with how many times

103:56

we said AI okay we said it 120 times and

104:00

they were like ha haha funny and then

104:02

right as Sundar was getting off the

104:04

stage he said it one more time and the

104:05

ticker went up to 121 but then they said

104:07

it again so it was 122 I'm just saying

104:10

also I think that sure they can like

104:13

make fun of that and say all that stuff

104:15

but they said Gemini I think I think the

104:18

final count on Gemini was 160 times um

104:23

and my theory is that they're just

104:24

replacing AI with Gemini because now

104:28

they don't need to convince the stock

104:30

market that they're an AI company

104:32

anymore now they just want to bake

104:34

Gemini into your brain as hard as

104:36

physically possible so they just said it

104:37

over and over and over again um which

104:40

was funny it was funny yeah yeah
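
The running tally is just word counting over the keynote transcript, which anyone can reproduce; the snippet below does it on a stand-in string.

```python
import re

transcript = "AI this, AI that, Gemini everywhere: Gemini, Gemini, AI."  # sample text
print("AI:", len(re.findall(r"\bAI\b", transcript)))
print("Gemini:", len(re.findall(r"\bGemini\b", transcript)))
```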

104:44

so I think um just to wrap this all up

104:46

because I'm sure everyone listening or

104:48

watching is begging for us to be done

104:50

with this I'm sorry um um Matt anini

104:53

from uh this is and Austin Evans Channel

104:56

I think had a great tweet to kind of

104:58

Explain why both of these events felt

105:01

not great um he said the problem with

105:03

overhyping AI is that all the stuff

105:04

we're seeing today doesn't feel new this

105:06

stuff is cool it's impressive but it's

105:08

barely catching up to what we were told

105:10

AI was potentially capable of two years

105:12

ago so it feels almost like no

105:14

advancements have been made I thought

105:16

that was a pretty good summary of

105:18

everything like we've watched so much in

105:20

the past of like this is the stuff we're

105:22

going to do look how amazing it is and

105:24

now we're getting the stuff it actually

105:25

is doing and it hasn't made it to that

105:27

point I also think they just throw

105:28

around technical jargon with like model

105:31

sizes and token like parameterization

105:34

and that yeah they had a whole thing they

105:35

talked about water cooling they're like

105:37

we have state-of-the-art water cooling

105:39

in our new TPU Center and it was like

105:43

who cares like who are you trying to

105:45

tell this to yeah big water big one yeah

105:49

um so yeah we can we can we can start to

105:52

wrap it up so sorry so Marquez are you

105:54

going to go back and watch the events I

105:55

don't think I'm going to watch

105:56

it I do think I got the idea that like

105:59

Google is trying to make sure everyone

106:01

knows that they've been an AI company

106:03

for super long and then on the other

106:05

side of the spectrum is Apple where

106:06

they're like okay we don't need to say

106:10

this over and over and over and over

106:11

again but we do need people to

106:14

understand that we have at least been

106:16

doing something AI related and they're

106:18

like these are opposite ends of the

106:20

spectrum of like how in your face they

106:22

want to be about that and both are fine

106:28

both are fine I don't really have a a

106:31

horse in the race I'm just hoping they

106:32

just keep making their stuff better well

106:34

that is a great place to end this

106:36

podcast which means it's a great place

106:38

to do our trivia

106:42

answers how do you have all of them I

106:44

don't know wait for

106:47

trivia but before we get started quick

106:50

update on the score one thing at a time

106:54

in second place we have a tie with eight

106:58

points between Andrew who's

107:01

currently carrying the one and what does

107:04

that mean David uh they both have eight

107:07

points wait Marquez got something right

107:10

last week yeah I didn't watch it oh

107:12

right we didn't update you Marquez

107:15

technically was closer than you last

107:18

week on the uh how many times did Apple

107:21

say AI so he actually

107:23

stole your points he actually got the

107:26

exact number yeah so you it did you get

107:28

it from Quinn's tweet yeah he did I'm

107:29

pretty

107:30

sure right thanks Quinn um so uh yes

107:34

unfortunately Andrew your lead has been

107:36

temporarily taken by Marquez brownley um

107:40

uh who has nine points is this the

107:42

closest trivia we've had this deep into

107:44

a trivia I think so yeah yeah it's wow

107:47

it's usually my fault that it's not

107:49

close but but question one Google IO

107:52

happened yesterday uh a few days from

107:54

the time this is going to get published

107:56

but yesterday from recording and we all

107:59

know IO stands for input output classic

108:03

uh Tech thing we also know it sometimes

108:05

stands for Indian Ocean if you watched

108:07

any of our uh domain name specials yeah

108:10

um but according to Google's Blog the

108:12

keyword IO actually has two more

108:15

alternate meanings to

108:17

Google what are they and I'm giving one

108:21

point

108:23

for correct answer two possible

108:28

points I have zero

108:35

idea I feel like one of them you would

108:37

never get the other one you might get I

108:40

think both of them you'll both

108:41

definitely

108:43

get not if I don't write it think oh

108:46

interesting interesting

108:49

strategy all right Andrew what you put

108:55

audio listeners something happened uh uh

108:59

who should go next D David David all

109:02

right I wrote two answers information

109:05

operation oh clearly and I also wrote

109:08

I'm out but that is hilarious thanks

109:12

Marquez what do you got I wrote IO being

109:15

like the one and zero of

109:17

binary oh it's so you're so close like

109:21

so close yeah but his second answer is

109:23

the best and Google IO for In-N-Out

109:26

because every time we go we got to

109:28

go I almost want to give him a point for

109:30

that because that's better than whatever

109:31

they could have thought of no um so uh

109:34

the first thing they listed is the

109:36

acronym innovation in the

109:39

open sure yeah right um but according

109:43

according according to the keyword what

109:46

first prompted them to use IO in

109:50

the original Google IO was they wanted

109:53

it to reference the number googol which

109:56

begins with a one and looks like an i

109:58

and is followed by a zero it's

110:00

technically followed by 99 more zeros

110:03

after that zero but that would make for

110:05

a really long event

110:08

title Fair funny so now you guys know that
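
That claim is easy to check: a googol is 10^100, so written out it is 101 digits, the leading 1 plus the zero in "I/O" plus 99 more zeros.

```python
googol = 10 ** 100
digits = str(googol)
print(len(digits))        # 101 characters: a 1 followed by 100 zeros
print(digits.count("0"))  # 100 zeros, i.e. the "O" plus 99 more
```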

110:13

okay cool okay next question what

110:17

was the name of the AI in the movie Her

110:21

which we recently learned Andrew have

110:23

not seen so this is going to go

110:25

lovely vr10 Creator be me 2013 midnight

110:30

Santa Cruz California you're a hippie

110:33

you're at a movie theater by yourself

110:35

sobbing at 1:30 in the morning when the

110:37

movie gets out actually it was probably

110:39

closer to 215 what is happening you sit

110:41

on a bus for 45 minutes on the way back

110:43

to campus and your entire life has

110:45

changed and now you're sitting in Kearny

110:47

New Jersey and you could never see Joaquin

110:49

Phoenix in the same light

110:53

Marquez what' you write

110:55

Siri nope you wrote Alexa I wrote Paula

110:59

Paula yeah no uh I wrote Samantha

111:03

correct let's go

111:06

Samantha who would have guessed the

111:08

person who saw the movie knew the answer

111:09

to that I actually was thinking really

111:11

hard I would have thought that you guys

111:13

would have seen the movie I don't know I

111:14

thought it was a pretty

111:17

stovie dude you don't know me very Sam

111:19

Sam Altman has seen this movie there was

111:22

a funny uh article that was written

111:25

about the open AI event that said I it

111:30

was called like I'm once again

111:32

pleading that our AI overlords watch the

111:36

rest of the movie yeah be and it was

111:39

about how they were like referencing her

111:40

at the open AI event but the ending of

111:43

her is supposed to be

111:47

like no spoilers anyway it was funny it

111:50

was haha we're wrapping this up because

111:53

as little do you know we're recording

111:54

another episode directly after this I'm

111:56

hot and sweaty so yeah so okay yeah

111:59

thanks for watching and subscribing of

112:02

course and for sticking with us and for

112:03

learning about Google IO alongside us

112:06

and we'll catch you guys next week did

112:07

all the people who subscribed last week

112:09

make you feel

112:10

better yes it worked no I feel I feel

112:13

pretty good it worked yeah yeah the io

112:15

and Google IO also stands for like And

112:18

[Music]

112:19

subscribe don't ask Waveform was

112:22

produced by Adam Molina and Ellis Rovin

112:24

we're partnered with the Vox Media Podcast

112:25

Network and our music was by V

112:43

Sil I don't think there's no you might

112:45

be the only person in the world who and

112:48

who like doesn't even know the plot of

112:50

this movie though like you know what

112:51

this movie is about right neither of you

112:53

oh my God oh my God you guys this is

112:56

like you are the two perfect people to

112:58

watch this movie with no context I want

113:00

to stay as the perfect person to watch

113:02

this with no context my all of the AI

113:05

things we've been talking about would

113:06

make so much can you picture can you

113:07

picture Andrew coming in after sitting

113:09

through yesterday's Google IO and then

113:11

watching her for the first time and

113:13

being like I get it bro her formed 90%

113:17

of my personality


Related Tags
Artificial Intelligence, Google I/O, Apple Technology, AI Innovations, Tech Trends, Future Predictions, User Experience, Search Engine Optimization, Machine Learning, Tech Podcast