Hey ChatGPT, Summarize Google I/O
Summary
TL;DR: The transcript captures a lively discussion from the Waveform podcast, where hosts Marques, Andrew, and David dig into the latest tech news. They critique Apple's new iPad Pro, focusing on its thinner design and unusual camera bump, and discuss the potential of the new tandem OLED display. The hosts also explore OpenAI's GPT-4o and Google's AI advancements, including Google I/O's updates to Google Photos, Search, and Workspace. They express mixed opinions on the practicality of AI's current capabilities and its future implications for content creation and user experience.
Takeaways
- The new iPad Pro is significantly thinner and lighter, making it the thinnest Apple device ever made, measuring 5.1 mm in thickness.
- The iPad Pro features a new tandem OLED display, which offers both the benefits of OLED and super high brightness, a first for tablets.
- The new Apple Pencil Pro is only compatible with the latest iPad Pro and iPad Air, potentially due to the redesigned charging area that accommodates the relocated front-facing webcam.
- The iPad Pro's camera bump is non-uniform, with lenses of different sizes, a departure from Apple's usual design language.
- The podcast hosts discuss the potential impact of AI on content creation, including concerns about AI-generated content replacing human-written material.
- OpenAI's event introduced GPT-4o, a multimodal model that can respond faster and understand context from both text and images, although it still struggles with interruptions and conversational nuances.
- Google I/O focused on integrating AI into various Google services, such as Google Photos and Gmail, aiming to provide more contextual and generative search results.
- Google introduced new AI models like Gemini 1.5 Pro, which doubles the context window to 2 million tokens for more accurate responses, and Imagen 3 for better image generation.
- Google is working on bringing AI capabilities to Android with features like scam detection and a floating window for real-time interaction with apps.
- The potential for AI to transform e-commerce and content creation is highlighted, with Google showcasing how it can help users find products and information more efficiently.
- Concerns are raised about the future of web content and SEO, as AI-generated summaries could reduce the need for users to visit websites directly.
Q & A
What is the new iPad Pro's biggest change in design according to the podcast hosts?
-The new iPad Pro is substantially thinner, making it the thinnest Apple device ever made with a screen, at just 5.1 mm.
What is the new feature of the iPad Pro that is also affecting the new Apple Pencil Pro compatibility?
-The front-facing webcam has been moved from the narrow side to the long side of the iPad Pro, which is also where the Apple Pencil Pro is supposed to charge, leading to a redesign of both the charging mechanism and the pencil itself.
What is the name of the new AI model introduced by OpenAI, and what does the 'o' in the name stand for?
-The new AI model introduced by OpenAI is called GPT-4o, where the 'o' stands for Omni, signifying its multimodal capabilities.
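For readers who want to see what "multimodal" means in practice, here is a minimal sketch of a text-plus-image GPT-4o request, assuming the official OpenAI Python SDK (openai>=1.0); the image URL and prompt are illustrative, not something shown at the event.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # One request that mixes text and an image, the kind of input GPT-4o ("o" for Omni) accepts.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "What facial expression is the person in this photo making?"},
                    {"type": "image_url", "image_url": {"url": "https://example.com/selfie.jpg"}},  # placeholder URL
                ],
            }
        ],
    )
    print(response.choices[0].message.content)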
How does the new iPad Pro's display technology differ from its predecessors?
-The new iPad Pro features a tandem OLED display, which is a technology that provides both the benefits of OLED and super high brightness, offering deeper blacks and better outdoor visibility.
What is the purpose of the tandem OLED display in the new iPad Pro?
-The purpose of the tandem OLED display is to provide a combination of brightness and contrast ratios that have not been available in tablets before, offering better performance outdoors and in bright conditions.
What is the new feature in Google Photos that allows users to ask contextual questions about their photos?
-Google Photos now allows users to ask contextual questions, such as 'What's my license plate number?' or 'Show me Lucia's swimming progress,' and the AI will understand the context and display relevant photos or information.
What is the new generative AI feature in Google Search that is being rolled out to all users?
-The new generative AI feature in Google Search creates a personalized experience by generating extra information, tiles, and pages based on the user's search queries, aiming to provide a more interactive and contextual search experience.
What is the update to Gmail that was introduced during Google I/O, and how does it utilize Gemini?
-The update to Gmail introduces the ability to ask questions about emails and receive generated responses, as well as suggested replies powered by Gemini, allowing for better email organization and tracking.
What is the new AI model introduced by Google called, and what is its main purpose?
-Google introduced a new model called Gemini 1.5 Pro, which has doubled the context window to 2 million tokens, allowing for more in-depth and contextual understanding and responses.
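As a rough illustration of what a 2-million-token context window enables, here is a minimal sketch assuming Google's google-generativeai Python SDK; the file name and prompt are hypothetical and the snippet is not from the keynote.

    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")  # placeholder key

    # Gemini 1.5 Pro's large context window lets an entire document ride along in a single prompt.
    model = genai.GenerativeModel("gemini-1.5-pro")

    with open("meeting_notes.txt") as f:  # hypothetical local file
        notes = f.read()

    response = model.generate_content(
        [notes, "Summarize the action items and who owns each one."]
    )
    print(response.text)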
What is the new feature in Google Maps that combines search, maps, and reviews to provide more specific suggestions?
-Google Maps now offers a feature that allows users to ask multi-step questions, such as 'Find the best yoga or pilates studio in Boston and show the details on their intro offers and walking distance from Beacon Hill,' providing a more tailored and convenient search experience.
Outlines
Introduction and First Impressions of the iPad
The hosts introduce themselves and react to the new 13-inch iPad. They discuss its bright screen, thin design, and express excitement about starting the podcast. They joke about using AI to create podcast content and mention recording two episodes. The iPad's new features, such as its lightweight design and lack of an ultra-wide camera, are highlighted.
Detailed Discussion on iPad Features
The hosts delve deeper into the iPad's features, noting its thinner design and the absence of the ultra-wide camera. They express mixed feelings about the new design choices, particularly the uneven camera bump, which they find uncharacteristic of Apple's usual aesthetics. They compare the new iPad to previous models and discuss its improved display and overall performance.
iPad's Enhanced Display and Practical Use
The conversation continues with a focus on the iPad's display improvements. The new OLED display is praised for its brightness and contrast, making it ideal for various uses like emails, web browsing, and streaming. The hosts share personal experiences using the iPad and discuss the significance of the display's advancements in the tech industry.
Music Industry Insights and Logic Pro Update
The hosts address the Logic Pro update for iPad, specifically the new stem splitter feature. They discuss its potential impact on music production and share feedback from a music industry professional who finds it beneficial. The conversation includes thoughts on the AI's role in enhancing creative tools and the practical applications of such technology.
OpenAI Event and GPT-4o Overview
The hosts review the recent OpenAI event, focusing on the announcement of GPT-4o. They discuss its multimodal capabilities, speed improvements, and conversational nature. The event's demos, including real-time interactions and the AI's ability to understand facial expressions, are analyzed. The potential and limitations of AI in mimicking human conversation are debated.
AI's Conversational Abilities and Future Prospects
Further discussion of GPT-4o's conversational abilities, including its handling of interruptions and inflection recognition. The hosts highlight the AI's strengths and weaknesses in mimicking human interactions. They explore the broader implications of AI in daily life and its potential to revolutionize how we interact with technology.
Real-World AI Applications and Concerns
The hosts explore practical AI applications, such as assisting with math problems and providing real-time information. They discuss the challenges of making AI feel natural and the importance of fact-checking AI outputs. Concerns about AI's ability to understand and verify information are raised, emphasizing the need for continuous improvement in AI models.
Personal AI Use Cases and Reflections
The conversation shifts to personal use cases for AI, such as using it as a virtual book club partner. The hosts reflect on the benefits and limitations of AI in providing companionship and assistance. They consider the broader impact of AI on social interactions and the importance of human connections in the digital age.
AI's Practicality in Everyday Tasks
The hosts discuss AI's practical applications in everyday tasks, such as planning trips and managing daily activities. They highlight the convenience of AI in providing quick and accurate information, while also considering the potential downsides of over-reliance on technology. The balance between human effort and AI assistance is debated.
Google I/O Event Analysis and Reflections
The hosts review the Google I/O event, noting its lack of excitement compared to previous years. They recall past events that featured groundbreaking announcements and discuss the shift in focus towards AI and enterprise solutions. The hosts express nostalgia for Google's earlier, more innovative approach to tech conferences.
Google Photos and AI Integration Enhancements
The hosts discuss updates to Google Photos, including the ability to ask contextual questions and receive relevant photos. They appreciate the improved search functionality and the potential for AI to enhance photo management. The conversation covers practical examples, such as finding specific photos quickly and efficiently.
Google Search Generative Experience
The hosts analyze the new Google Search Generative Experience, which provides detailed answers and suggestions. They express concerns about the potential impact on traditional websites and the accuracy of AI-generated content. The shift from keyword-based search to more natural language interactions is explored.
The Future of Search and Content Creation
The conversation continues with a discussion on the future of search engines and content creation. The hosts debate the implications of AI-generated search results and the potential decline of traditional websites. They consider the importance of fact-checking and the role of AI in shaping how we access information.
Gemini 1.5 Pro and Enhanced AI Models
The hosts discuss the introduction of Gemini 1.5 Pro, highlighting its expanded context window and faster processing. They explore its integration into Google Workspace and the potential for AI to improve productivity tools. The conversation includes thoughts on the future of AI-powered assistants and their role in various industries.
Project Astra and AI-Powered Glasses
The hosts review Project Astra, which demonstrates real-time image and video analysis using AI. They discuss the potential applications of AI-powered glasses and the implications for privacy and security. The conversation covers the benefits of live video feed analysis and the challenges of integrating AI into everyday devices.
AI in Google Workspace and Email Management
The hosts explore new AI features in Google Workspace, including email summarization and receipt organization. They discuss the potential for AI to streamline administrative tasks and improve efficiency. The conversation highlights the benefits and limitations of AI-powered productivity tools.
Personalized AI Assistants and Customization
The hosts discuss the introduction of personalized AI assistants, or 'Gems,' in Gemini. They debate the merits of specialized AI models versus a single omniscient AI. The potential for customized AI to enhance user experiences is explored, along with the challenges of managing multiple AI agents.
Multi-Step Reasoning and Trip Planning with AI
The conversation shifts to multi-step reasoning in Google Search and AI-powered trip planning. The hosts appreciate the ability to receive detailed, contextual recommendations but express concerns about losing control over the planning process. The balance between convenience and personalization in AI-driven services is discussed.
Enhancements and Frustrations with Gmail's AI Features
The hosts discuss recent updates to Gmail, including AI-powered features like email summarization and receipt tracking. They express frustration with Gmail's search functionality and the limitations of AI-generated responses. The need for improved contextual search and user control in email management is emphasized.
AI Integration in Google Chrome and Android
The hosts review the integration of Gemini Nano into Google Chrome and Android, highlighting features like scam detection and contextual assistance. They discuss the potential benefits and privacy concerns associated with AI monitoring and real-time interaction. The balance between convenience and user privacy is considered.
Google's Strategic Position in the AI Race
The hosts reflect on Google's strategic approach to AI, emphasizing their extensive integration of AI into existing products. They discuss the company's cautious rollout of AI features and the challenges of balancing innovation with user trust. The conversation includes thoughts on the future of AI and Google's role in shaping it.
Reflections on AI and Technological Progress
The hosts conclude with reflections on the broader implications of AI in technology. They discuss the potential for AI to revolutionize various industries and the importance of responsible development. The conversation emphasizes the need for continuous improvement and thoughtful integration of AI into everyday life.
Mindmap
Keywords
AI
Google I/O
Gemini
Multimodal
Tokenization
AI-generated content
Scam Detection
Google Photos
Google Workspace
Live Shopping
AI Overlords
Highlights
Introduction of a new iPad Pro with a thinner and lighter design, being the thinnest Apple device ever made at 5.1 mm.
The new iPad Pro features a brighter display, utilizing tandem OLED technology for better outdoor visibility and contrast.
The iPad Pro's new Apple Pencil Pro is only compatible with the latest iPad Pro and iPad Air, possibly as a strategic move to encourage upgrades.
Discussion of the non-uniform camera bump design on the iPad Pro, which deviates from Apple's typical aesthetic.
The iPad Air receives updates with features previously exclusive to iPad Pros, such as the M2 chip and relocated webcam.
OpenAI's event introduced GPT-4o, a multimodal model capable of processing various types of data and interactions.
GPT-4o is designed to have faster response times and can understand context from both text and images.
Google I/O focused on AI advancements, with an emphasis on machine learning and its integration into various Google services.
Google introduced updates to Google Photos that allow for more contextual searching and understanding of photo content.
The integration of Gemini, Google's AI model, into Google Workspace apps aims to enhance productivity and user experience.
Google's new music generation tools and the Imagen 3 image generation model demonstrate advancements in creative AI capabilities.
Google's AI developments include the ability for multi-step reasoning in Google Search, providing more direct and useful answers.
Google's efforts in generative AI are positioned as a shift towards a future where users receive direct answers rather than navigating through search results.
The potential impact of generative AI on content creation and the sustainability of ad revenue for websites is discussed.
Google's AI updates include features for Android, such as Gemini Nano with multimodality, and scam detection in phone calls.
The debate over the use of AI in trip planning and the preference for user control versus AI-generated itineraries.
Google's introduction of specialized AI agents, Gems, which have ultra-deep knowledge of specific topics, versus a single omniscient agent.
Transcripts
the screen is pretty dope it's really
bright it is no that's the actual
biggest this is the 13 in right Yep this
is pretty sick we should go live we
should start the Pod damn it's good
first reaction for David damn are we
recording this okay well there it is
okay that's the that's the intro the
exactly can you guys wow this is really
freaking
[Music]
thin yo what is up people of the
internet welcome back to the waveform
podcast we're your hosts I'm Marques I'm
Andrew and I'm David and we're all
actually three
ai ai ai and we're going to be doing
this podcast uh by training it on all
previous waveform episodes
so whatever whatever we say will just be
a remix of things we've already said and
uh it'll be great thanks for tuning in
just kidding uh wish we have two
episodes to record today and that would
take a lot of no we're we got AI to
talk about today there's a lot of
acronyms that I'm that I'm mixing up but
IO happened open AI had an event a lot
of AI things were talked about and uh we
should we should summarize it yeah cuz I
didn't watch any we can just use Gemini
to summarize it oh yeah true yeah that
would actually be perfect but first wait
I have the iPad here it was iPad week
last week oh right uh I was away so I
didn't get to fully uh do my review yet
but I have been using it and well this
is the 13 I just hit the mic really hard
this is the it would have been harder
with the old heavy one though true yeah
this is thinner and lighter than ever it
is uh you this is the first time you
guys are like seeing it in person cuz
you like left immediately after it came
in basically yeah I like took it out the
box and then like packed up and flew to
Colombia so like you guys didn't get to
see it guys look at this cool iPad I
have okay bye see you later but um you
know getting it back I have been I one
of those people who has been using an M1
iPad Pro for how how years however long
it's been since it came out and set this
one up mirror it to the old one so it
literally has that old wallpaper I don't
know if you guys remember this wallpaper
from M1 it's like years old and I've
been using it the exact same way and
doing all the exact same things which is
which is emails web browsing YouTube wow
uh some Netflix offline Netflix on the
plane oh wait what were you watching um
drive to survive I'm very behind don't
judge me uh and yeah Twitter buying
things on Amazon like I got some widgets
going it's just the same thing as before
powerful and it's just thinner and
that's that's mainly you crushing a
bunch of artistic tools with it oh of
course yeah everything just shoved into
this iPad no it's that's here I'll let
you guys hold it if you guys don't want
to actually experience I know David
wanted to see it I still have a huge
issue this is the most oh you weren't
here last week when we talked about it
right it is the most like non-uniform
camera bump I feel like I've ever seen
out of Apple and it feels so un-Apple
like not a single circle on there is the
same diameter as a different one yeah I
never thought I didn't think about it
that much so they got rid of the ultra
wide camera didn't really talk about it
g they didn't even mention that at all
whenever they get rid of something oh
they they never want want to say it out
loud and then so now we just have the
regular the lar and the The Flash and
then whatever that tiny cutout is maybe
that's a there is a microphone mic oh
the mic other one this is lar I think
yeah big one is lar is this big Flash
and then I don't even know what that is
yeah none of them are the same size
though correct it feels like the models
that we get yeah there's usually at
least one that's the same size it could
have at least gone with the LG velvet
thing and made it like big medium small
you know that Co but instead it's like
medium large small it's damn it just
doesn't feel like an apple camera bump
yeah it's unaesthetic the screen does
look dope though yeah okay so the actual
biggest differences so if you've seen
The Impressions video already you know
the new differences but there are some
interesting yeah there's some
interesting choices with this iPad one
of them being that it is substantially
thinner and it's the thinnest Apple
device ever made with a screen that's my
oh yeah cuz no the Apple card or yeah
they've made thinner objects before yeah
but this is the thinnest device yeah the
thinnest device they've ever made and it
is 5.1 mm thin do you know how much
thinner it is than the last one probably
about 2 mm thinner 2 mm I don't know
what I'm I'm American so that means
nothing it's not a lot but it is it is
visually noticeable what is that in
Subway sandwiches I I'm American so I I
can't do it's one piece of lettuce one
piece of lettuce I have the actual
answer it's hilarious it's a 20th of an
inch thinner a a 20th of an
inch I have a do you know they keep
comparing it to the iPod nano clip I
think it was and they're saying it is
thinner not the clip one they're
comparing it to the long one the long
Nano the long Nano I thought I saw a
different one where they were comparing
it to the clips both they're comparing
it to the seventh gen the clip I believe
was the Sixth Gen yeah the seventh gen I
saw something the other day but are they
including the camera bump
in the uh on the iPad or are they only
counting the sides I don't think they're
including the camera bump yeah wait here
iPod Shuffle third gen oh it does have a
clip this has a clip but they counting
the clip no I don't think you count well
there's no screen on it but that's not
what they
said I don't think they counted the clip
in all the videos it's just the aluminum
housing up against it that's the body of
it mhm I just think it's an interesting
choice because they decided to actually
make it thinner again like we got years
and years of iPads that were totally
thin enough right and then this year
coming out and making a big deal about
it being thinner was really fascinating
to me cuz I thought we were done with
that yeah Jony Ive's gone Jony Ive's
gone but here we are and to me there's a
lot of choices about that this iPad that
feel Tim Cook and I'm going to try to
explain this in the review but here's
maybe the most Tim Cook choice of them
all there's a new Apple pencil Pro right
mhm the new Apple pencil Pro is only
compatible with the newest iPad Pro that
you're holding or iPad Air yeah now why
is that why is that true because they
moved the webcam the front facing webcam
from the narrow side to the long side
which is where it belongs but now that's
write exactly where that pencil is
supposed to charge so now there's both
the webcam and the apple pencil what are
you trying to do intive the brightness
oh it's on the right you have to pull
from the from the right pull pull from
the battery oh yeah okay so so okay so
the apple pencil now charges in the same
exact spot but there's also now webcam
there so they had to arrange the magnets
and the way it charges those components
differently on that side right so it
doesn't work the same way as the old
ones why didn't they just move the apple
pencil to charge on the other side of
the iPad like the bottom yeah because of
the the folio the pogo pins for the why
why just move it why might put like one
to the left and one to the right on the
top magnets actually it'd be pretty yeah
that's a good idea I think it's a sneaky
great smart way of making sure if you
want to use any features of this new
apple pencil you need to buy the newest
iPad I think that's true so yeah that
happened it's like one step more than at
least the like new pixel coming out and
being like only available on Pixel and
then 6 months later being like okay it's
actually available on all at least this
has somewhat of a hardware difference
which is why it strikes me as very Tim
Cook cuz supply chain guy would find a
way to make sure nothing changes and the
other ones and now it just works with
the new ones it's sort of like uh how
the M2 MacBook Pros sold really really
badly because the M1 was so good it's
similar in that the iPad is so iPad and
what more iPad could you add to the iPad
that they have to like have a better
accessory that you have to have the new
iPad to be able to use yeah cuz you know
it's thinner cool but no one's going to
buy it cuz it's thinner exactly and the
good display on the last generation was
almost as good as this display yeah so I
can talk about the display in a second
the other Tim Cook thing is the new iPad
Air
uh is just a bunch of newer parts that
used to be an iPad Pros it's like it has
the M2 now it has like the webcam on the
side again and all these things that are
like oh we we we now have a better iPad
Air full of things that we already saw
in iPads so that's very efficient very
efficient but yeah the new display it's
just brighter that's the main thing
you'll notice it is obviously a really
interesting Tech to make it happen it's
this this tandem OLED display which is
not something I've ever seen in a a
shipping Gadget before which is cool
it's in cars I in cars yeah it's in car
displays yeah we we went down this
Rabbit Hole a little bit ago because
Apple really wanted everyone to believe
like this is we we just invented the
tandem OLED display but if you go on
like display makers websites and display
industry journals like the first concept
displays by LG were coming out in like
2017 2018 they entered mass production
in with LG
in
2019 of a specific of a specific stacked
OLED yeah referred to as tandem OLED
and it's the same thing and then Samsung
just began as far as I could tell
Samsung just began their mass production
for iPad this year um and I couldn't
find any like leaks or like confirmation
like this product uses an LG tandem OLED
display but in LG's marketing materials
on the industrial side of things they're
very clear like we are shipping these
things like hot cakes in our Automotive
departments we also had a viewer sorry
Zach reach out and say that honor has a
product from a few months ago with a
durable double layer OLED screen yeah
it's it's so in the industry that LG
actually makes a 17in tandem OLED
folding touchscreen that I wouldn't be
surprised if is in products like the the
okay I could not find any sources for
this but there's only one thing out
there with a 17inch folding touch OLED
screen yeah the Asus Zenbook fold
whatever and I'm like is that is that a
tandem OLED was it Asus that is interesting
anyway this is all like Ellis's tinfoil
hat conspiracy display industry nonsense
so take it all the great can that be a
new segment of the podcast yeah tinfoil
hat segment I will say the purpose of
the tandem OLED display is to give you
both the benefits of OLED and super High
brightness so in tablets we straight up
have not had this combination of
brightness and contrast ratios yet so
I'll give them credit for that like the
we've had oleds in that huge also super
thin what is it called Galaxy tab
pro4 or whatever just gigantic tablet we
reviewed and it's awesome inside it's
this super huge bright OLED and it feels
like you're holding this magical piece
of glass but it has like 500 600 in its
Max brightness cuz it's not as bright
get bright so here we are we have this
new awesome thing that is better at Max
brightness better Outdoors better on a
plane with the window open right that
type of thing great yeah it's the exact
same brightness as the mini LED display
on the last iPad Pro 12.9 in model and
now deeper blacks yes 0.1 in bigger because
they're off love that because they're
pixels that are off so infinite contrast
ratio yeah so I've been using it I don't
know I'm going to put my review together
I don't think it's going to be a
standard review I think it's more just
me diving into my maybe tinfoil hat
theories about some of the weird choices
why is it thinner this year why is the
apple pencil only compatible with this
one this year why do they do all these
things why they get rid of the ultra
wide camera there's a bunch of
interesting choices that they made with
I think that is a better angle to go
with for the review because what else
are you going to say exactly yeah it's
the same like I saw someone did a review
that wasn't at all about the device
really it was like what does iPad OS
need for me to actually want to buy the
M4 iPad Pro I would also be into that
which is a better video yeah like and
it's also such an evergreen that's been
like the question on our minds for like
four years now it's like this is a
really powerful thing that's still an
iPad so what do we want it to do mhm okay
sick that's the iPad I just wanted to
throw one more thing in from last
week I did because last week I very
dismissively was like stem splitter is
boring and I uh sick got drinks with a
friend of mine who is you know working
in the upper echelons of the music
industry which I am not um and he
graciously informed me that I was uh
pretty wrong and that stem splitter uh
works way better than previous attempts
and it's integrated really well into
logic and as like a utility um it's
actually like a really cool thing to
have built into logic maybe not as a
creative tool for making things
necessarily but if you're a professional
he was like this is very very cool so
thank you unnamed friend yeah but yeah
so sorry I wrote it off if you're
working in music maybe you should check
this out and not listen to me yeah also
for those that maybe didn't watch the
podcast last week or or forgot about it
it's basically that during the iPad
event they announced logic pro two for
iPad yeah and also an update to logic I
think it's called logic 11 now something
like that but they're uh they're
introducing a stem splitter feature
where you can add any music file and it
will separate the tracks I got a live
demo of it and this is one of those
things where the demo was so good that I
was like this is definitely tailored
exactly for this feature and I don't
know how well it will work but in that
demo it blew my mind at how well it
worked and it was like here we gave it
this song and we added we did the stem
splitter thing and it separated it out
and here's the drums and here's the
vocals and here's the bass and here's
the different instruments all split out
through the power of AI and apple
silicon and you play back just the
vocals and it's like it sounds like I'm
just listening to a recording of just
the vocals and yeah super cool tool to
use and to me it always struck me like I
don't use logic but it struck me as just
like getting rid of some plugins that
we're doing that in the past it's just
like yeah we just built it in now it's
like when Final Cut does a better
tracking tool you're like oh well that's
better than I was using so great so I
just I I would love to see it on other
tracks also a question could you just
remove one of the stems and then content
ID wouldn't like go off I think well
Content ID is kind of like
sophisticatedly Trend uh trained on more
than just like the exact set of sounds I
think it would still go off I think it
would just because of some of the
examples I've seen in the past of like a
car driving by and like the radio is
playing and they've gotten hit with
stuff like that I've heard people have
hummed songs and it got copyright claimed
really yeah that's I think it's a great
question but I have a feeling it goes on
the it's a little more cautious when it
comes to it and would probably still hit
it okay that could be a short how many
parts of a song can we delete before we
get maybe we don't want to like destroy
the channel testing make a new channel
to test that yeah cool jury is still
out on uh ChromaGlow uh I haven't I
haven't used like I don't have any
projects here where I'm like doing stuff
like that so I haven't used it everyone
I've talked to who is using it for work
says the CPU usage is higher than they
expected for like a apple native thing
uh which might mean there actually is
some AI stuff happening yeah but uh that
would be my guess cuz it's probably
really uh AI focused and M4 is
specifically tailored to that because
they have all the neural cores there's I
there's an email in uh in Apple's inbox
that's me being like please explain this
to me okay yeah and ChromaGlow was the
AI feature where they basically sampled
a bunch of old really famous instruments
and tried to extract the vibes from it
so that you can apply the vibes to allegedly
straight vibe extractor vibe extractor
that's what they should have called it
dude way much more tell yeah telling
yeah well speaking of an event that was
lacking a lot of
Vibes open AI had their event on Tuesday
so I didn't get Monday Monday it's
Wednesday yeah thank you so I didn't get
to watch it no how did it go
well it was interesting it was
interesting yeah I would say the the
high level stuff uh they announced a new
version of GPT 4 there was this big
thing where everyone was thinking they
were going to unveil their own search
engine the day before google.io that
ended up not being true wild um Sam also
tweeted that's not true like the day
before which is funny but they did
unveil something called GPT
4o which I think is a terrible name okay
uh also looks weird does the o stand for
something yes it stands for Omni I had
to do a lot of digging to find this on
the day of okay and it's because it's
now a multimodal model so Omni is
supposed to mean everything it does
everything Omni yeah cool but it it's
kind of weird cuz it looks like 40 it's
a lowercase o it's a lowercase o that feels
weird it can't like smell it's not on me
yet that's true they just didn't go over
that part yet yeah so it does native
multimodal which is good because the only
other model I believe that was natively
multimodal was Gemini mhm um it is much
faster than GPT-4 they say that it can
respond in as fast as 232 milliseconds
which is probably like the most ideal
possible case during the event they were
sort of talking to it naturally and
talking back and forth and you know had
that Scarlett Johansson voice when and
when asked when they asked Mira Murati
like is this Scar Jo's voice she's like
what are you talking about we what no
did you see the tweet that was like we
know this isn't Scarlett Johansson's
voice because none of her movies are on
YouTube for it to rip from bur yeah um
and so they did like this demo where
they were talking back and forth to it
pretty naturally and it works pretty
well it definitely feels a lot more like
the way that a human would speak to you
oh a bedtime story about robots and love
I got you covered gather around Barrett
they did have to like interrupt it
multiple times because it has the same
problem that all assistants have where
it doesn't just tell you your answer and
move and stop it just tells you your
answer and then keeps talking a bunch I
would say the like the interrupting
thing was also something that was like a
feature that they were describing which
I I think at the core is pretty cool
like in a general conversation it's a
good bandaid yeah people get interrupted
or like sometimes you chime in on things
people say we interrupt each other all
the time so you know that's just part of
natural say real quick disagree disagree
real quick real and it did a pretty good
job at it except for sometimes I felt
like when they were talking to it was a
little too cautious of thinking it was
getting interrupted where there were
legit Parts where it sounded like the
audio was cutting out because they would
ask it a question it would start
answering and then I don't know if they
like moved or like somebody in the
audience said something but it would
like cut out and then stop and they
would have to ask it again or it would
cut out then nothing would happen and it
would start talking again it felt like
it was trying to like
be courteous of the person asking it and
be like too conversational to the
detriment of itself which was like cool
but not great yeah this is this all
comes back to the super highlevel
original theory that I've said many
times before about these AI chat models
is they don't know what they're saying
they are saying things because they are
trained on things and can say things
that are similar that look like they
should be correct but once they have the
sentence that they spit out they have no
way of knowing they don't have a way of
like understanding if they just said a
true fact or or not they don't have any
way of verifying they don't know what
they're saying so if you were
conversational then you could actually
look at what you're about to say and
then trim it because you realize oh this
is like extra and we're in a
conversation but it doesn't have any of
that ability so I feel like the best the
biggest advantage to these like okay 4o
is cool and it's like faster and it's
multimodal and all that but the ADV the
the the Advan that I'm looking forward
to the most is an llm that can actually
fact check itself and understand what
it's saying and do something about that
so the funny thing is in Gemini there's
a button that you can press like a
Google button that it gives you an
output and then you can press the Google
thing fact no it fact checks itself does
it yeah like cuz it's it goes straight
from what the model thinks from the
internet and there's like a double check
button where it references more sources
and then gives you a bunch of links and
then can verify if it was true or not
well it's supposed to I think it's going
to still be on the person I think it is
too because like when you have Gemini on
the phone now there's a Google it button
so if I if I go like what is Marques
Brownlee's podcast called and it says it's
called The Verge cast the it could just
tell you that and then give you a Google
button and then you hit Google and then
the results show up and it's like oh he
has been on the vergecast but the it's
called Waveform like you don't it's the
human that still has to the fact check
yeah so if it could if it could fact
check itself and be like oh my bad I
remember I just told you that it was
Waveform or one thing it's actually the
other thing that that's what I want but
I don't think any of them do that yeah
and I really want I don't know if
they're ever going to be able I mean
maybe eventually it feels like it's just
a layer on top of the llm the llm is
already doing a lot of work and it's
super cool what it can do but I just
want that like okay now take your output
feed it back into another layer where I
can go okay I told you the truth yeah
like how high can you get that
confidence interval right yeah some
other very weird things that it can do
um there's like a vision version where
you can talk to it while it uses your
front-facing camera and it's supposed to
be able to be able to understand your
facial expressions as well as the
intonation in your voice to better
understand what you're asking in the
context of what you're asking and they
did a bunch of demos that were very like
what is my facial expression which is
like you know it's kind of like pushing
what do I look like the borders of
I feel like a lot of those examples were
like it did a pretty good job of doing
what they wanted but they always had to
ask it specifically like if we were
having a
conversation mid conversation I wouldn't
be like David look at my facial
expression and get the hint here like
inste they're like it can see your
facial expression while we're talking
and the guy goes look at me and look at
my facial expression what is it and it's
like that's not conversation I think
that so we saw this issue with like the
rabbit R1 as well right like all of the
reviews are like what is this what is
this and what is that like with the D
perhaps what is this perhaps what is
this uh and nobody's gonna do that in
the real world but the way that everyone
thinks that you need to test it is like
if this doesn't work then the whole
thing falls apart so you have to test
like the least common denominator at
first so I think that's why they were
showing it is they proving it can
understand your facial expressions but I
still want to see more of a real world
use case where my facial expression
changing changes the output of the model
yeah is that like it uses it as context
or something that's what it says but
they never showed any demos where it
should and it's whole point one of the
big things they talked about which I
actually think it did a pretty good job
with was like inflection in the voice
and so like through quicker responses
interrupting an inflection that's
supposed to make it feel more
conversationally based and and one thing
I was thinking about was like sometimes
it did a really good job but the problem
with trying to replicate a human it just
reminds me of like walking in video
games like we have been doing that for
so long and Graphics have gotten so good
and it's still so obvious when like a
video game character is walking and it
doesn't look anything like a human right
so like this thing will hit a couple
little marks where it's like oh that
kind of did sound like less of a robot
but more and then it'll just do
something that sounds exactly like an AI
and it will I don't I think we're so far
away from it being a true like to really
confuse someone I'm picturing it like
you ask it for the weather and it sees
that you're happy and it goes it's
raining outside bring an umbrella and it
says sees someone who's like kind of sad
and says what's the weather and it goes
it's a little worse than normal out like
it's totally fine like it tries to
comfort you yeah I'll say one thing I
thought that was impressive was like the
way it fixed its mistakes sometimes or
or like the way it had I'll use an
example so like normally when you ask an
llm or whatever something and it's wrong
it'll just go like my mistake or just
like the most canned response is of like my
bad the same question and it gets it
wrong again yeah yeah so there's a point
in this where he asks it to look at his
facial expression and its response was
uh seems like I'm looking at a picture
of a wooden surface oh you know what
that was the thing I sent you before
don't worry I'm not actually a table um
okay so so take a take another look uh
that makes more
sense look again and tell me my facial
expression where normally it would just
be like my mistake and it said but
instead it responded with like oh that
makes more sense that makes more sense
like I get it now and then answered so
like it is semi- using the context of
what's going on and making up these
little things that it doesn't need to
but that is a little tiny step that does
make it feel more conversational I feel
like they didn't harp on that because
every time it did that was when it was
screwing something up which they
probably didn't want to linger on but
like that actually to me was the most
impressive that said like I feel like
people are so used to them screwing
things up that if you can screw things
up in a more natural human way then
that's kind of impressive it does feel
more like the conversation and not just
like ask you question you respond kind
of crap I also think that they took like
I don't think they took this from rabbit
because they probably don't give a crap
about rabbit at all um I don't even
think about you well yeah something the
R1 did that the Humane AI Pin didn't do
was it would go like hm I guess let me
look that up for you and it saying words
the filler time
it so what in the back does seem it Tak
that and it did that a lot every time
they would ask it a question it would be
like so they'd be like how tall is the
Empire State Building whatever and like
oh you're asking about the Empire State
Building sure it's blah blah blah blah
and it's like if you just said nothing
and just had a waiting time that would
feel that tension there would be like H
but because it's repeating it back to
you you don't feel any delay but then it
so you feel less delay but it feels more
AI
like that's that's that to me is like a
giveaway if I'm talking to a human and
I'm like bro how tall is the Empire
State Building and he goes you're asking
about the Empire State Building the
Empire State Building is and I'm like
why are you saying all this just say
them over you're stalling I see I think
if you sorry well yeah that more human
there's a part in there that is doing
what you're saying but I think they did
an even better job which was anytime
they asked a question that had multiple
questions or like points of reference in
the question you could almost tell it
was think of how to respond and then
could
respond respond while it was thinking of
the next response it was going to tack
onto that so this wasn't in the uh actual
event but they did a bunch of test
videos on their YouTube channel and one
of them was a guy saying he says like
also they all say hey chat GPT which I
hate I would like a name or something
like that's a long wake word um but he
said hey chat GPT I'm about to become a
father and I need some dad jokes for
like the future can I tell you some and
you can tell me if it's funny and you're
asking the AI jokes are funny bad
example but you could tell the two
responses it had cooked up ready to go
which made it feel quicker so after he
said that it responded with oh that's so
exciting congratulations on becoming a
new parent and you could tell that was
one response and then the next response
was like sure I'm happy to listen to a
joke and tell you if it's funny so you
could tell that while it was waiting to
figure out response to yeah had response
one loaded already and I think that that
is how that they're able to claim like
such faster models is that they're like
is is that they just use filler content
like they take the the important
information that they actually have to
parse through the the model and they
crunch that while they take the very
simple things and can fire that off to
you rapidly with like a conversational
that's way better filler though than
just like hm you want to know what the
Brooklyn Bridge is like I can tell you
what that is like it is blah blah blah
congratulations on becoming a father even
though I don't really want to hear that
from an AI yeah it's a little weird I'm
proud of
you dad
oh God yeah yeah so yeah I mean I
actually I thought it was uh the event
in general was pretty good I'll give
them a which nice huge points for that
was 26 minutes I think it was way
shorter they didn't try to just like
keep it going
um they did a bunch of math with it
which was kind of cool because that's a
big thing that you know they've been
trying to do like if you're just
predicting then you're not actually
doing reasoning right and I'm sure it's
still just doing that but with higher
accuracy it does seem like every example
is like most like childlike task ever
like name this one thing or just like
read all of my code and tell me exactly
what it does there doesn't seem to be
the in between I think that mostly comes
down to poor examples that
all them you
said all the demos are like hey you show
this by looking like a total and
asking it the most basic question ever I
was thinking about this during the the
Google IO conference yesterday and I I
think this is every AI demo that you do
it's like you make your engineers like
look like total idiots because they have
to ask the most basic questions just
because they have to show that the AI
can do it they're like oh so like for
example that they had it work it they
had GPT-4o work it through a math
problem that was like 3x + 1 = 4
and it was like
how do I solve this can you work it
through me step by step and it was like
well first what do we do when we want to
figure out an an exponent move all the
exponents to one side and so it
basically like just took it through the
basic math and it's funny because Google
at I/O did a lot of the same stuff where it
was like this could be useful for kids
this could useful for kids which
reminded me a lot of the the rabbit
thing that we talked about yeah um it's
not just that it's like cuz this has
been on my mind like it seems like as we
get deeper and deeper into this AI
assistant nightmare the the use case
examples are getting worse like like
they're like not only they're making
their own employees look kind of dumb
sometimes but then they'll try to like
balance it with the sort of like human
example like the hey can you teach me
some dad jokes like I know I say this
before on the pot but it's like what a
sad reality when you're like assuming
you you I'm assuming you have a spouse
you know you're about to welcome your
first child that's who I guess like you
want to impressed with it's like but
it's like what's the like oh no my
spouse like isn't funny at all like like
they they can't help me like Workshop
these jokes I need stupid computer or
like I actually kind of hate them and
would rather talk to them it's like what
are you talking about one of the Gemini
examples that they showed in the
original Gemini demo that they showed
again yesterday in like a Sizzle reel
was write a cute social media post for
backer and it which is their dog and it
was like baer's having a great Saturday
#k and I'm like are you that Den okay
yeah I see are you really Outsourcing
that to an AI know what is wrong with
you that I think these these individual
examples have to be bad because there's
no one example you can give that will
apply to everyone watching so they're
trying to give like a Halo example that
app
no one individually it applies to no one
but the concepts of it are maybe
applicable because that's when you see
all the LinkedIn posts about like oh
this changed my like when you see write
a cute social media post about my dog on
one half of LinkedIn people are going it
can do copywriting oh cool and on the
other half it's people going it can
write my school essay for me oh cool and
on the other half there's people going
it can write a letter for me like when I
need a condolences letter there there
are a bunch feel what you would
Outsource a cond a condolences like some
people would bro I just I just got back
from a tournament with I have a teammate
who is a ghostwriter for executives at
large corporations so I'm not going to
say their name or the corporations but
many of these presidents of schools and
CEOs have to write letters and things
all the time to congratulate someone on
something or to write back to an alumni
to convince them to donate or all these
other
and they just don't have time to have
all that bandwidth and so they hire a
person to do it or there's someone going
oh the AI can do that now yes that's
super useful I like when I wrote to
Obama in 2012 and I got a letter back
that was signed by Barack Obama I was
like what yeah that's somebody's job
right now guess how many more letters
that person's going to that's one thing
but being like yo I'm sorry this person
you like died like having the computer
was like no I have a teammate I thought
he was going to be like who just
recently lost something and I really
didn't feel like writing this
no but no that's true I mean it's I'll
the examples are are very specific that
I'm giving but it's like they're trying
to give you like one Ultra generic
example that applies to no one so that
everyone else's more specific examples
that they wouldn't use on stage because
it's too small a group can kind of be
tied into it in a way so yes you're
going to get some really brutally
generic like write a caption for my dog
and they're like okay I just learned
that it can write I just learned that it
knows the subjects and it can just think
that I think that's what they're I guess
you're right
but I mean I think it can be that and
there still should be some someone
making more creative examples because
both of these events and we'll get into
the Google one later were just felt like
there was no wow factor it felt so
simple and it made all of the AI stuff
feel really boring and tedious okay I
got an example for you because we were
talking earlier and I was telling you
that I feel like after the last two
events I'm actually more optimistic
about the state of AI than I was before
these events because everything was so
mundane so because there was because
everything you're more optimistic about
Humanity living with AI and yes yes
exactly I'm more optimistic that it's
not going to take us over and like Dy oh
yeah wor because I feel like the trend
that they're all moving towards is
having the like broad example and then
everyone can kind of have their own
little slice of AI for their own
personal use case so for example this
got me thinking
I read books like I'm sure many people
do I know I'm flexing um but I like
doing book clubs but I don't know anyone
that's reading my same books at the same
time can I go online and find people
that are doing this sure but then David
and I have tried to have book clubs like
three times it's like when we were
reading the AI book The Google book like
two years ago yeah so it's like some
sometimes exactly that is that is my
problem sorry also no one else in my
life is really like super into fantasy
books like that can I find one person
and make sure they're reading the same
chapter that I am every week sure that's
annoying I just want to talk to someone
and be like that would be cool right and
having an AI know exactly where I am in
the book and being able to have a
conversation about characters and things
I was doing it the other day with Gemini
just to test it out because this thought
came to me and I'm reading a book right
now so I was like oh without spoiling
anything tell me about blah blah blah
and this character and what you think
and they came back with like legitimate
answers which was like pretty
interesting are you not worried it's
going to mess up the spoil part and just
be like oh great uh character a was so
good and then like but it's really said
when they die 25 pages later I have to
say that now but at some point and I
think in the near future it can easily
know where I am in the book and know not
to you really want to have a book club
with a computer I don't want to have a
book club period I just sometimes want
to like talk about characters and stuff
with the computer with anyone I don't
know man I hate to break it to you
there's this thing called Reddit and any
discussion you want to have about a book
is already on there they're all spoilers
or spoiler-free but like it's not synced
up exactly the page that I'm up to
there's also and I have a I have a
tangent example that's like when you're
in an extremely specific case like when
you have sometimes it's tech support or
just like a product that you use and
you're like digging through forums like
I need to find someone with my exact
issue with these exact symptoms and
these exact bugs and like you can go to
a forum or Reddit and like type it out
and like wait for people or you can go
hey computer like look at all of the
people who have ever had any issues
related to this and then let me have a
conversation with you about those issues
and maybe I can figure this one out
because then you're sort of like
bringing in hopefully the most relevant
information and instead of having to
click click click through every single
comment you can sort of like talk to the
person who knows all the comments and
then when new issues get populated
throughout the Universe and they don't
get populated onto the internet the AIS
will no longer be able to solve your
problems I I agree like I think Marques
is really right like that is like kind
of part of the dream of the AI the a
future what's so concerning is
increasingly and I I do also agree with
you Marques like they have to give these
sort of weird broad examples to satisfy
all these different camps but it does
feel like increasingly there's this
message being subtly projected at us at
the events that's like you know what's
so
exhausting sympathy love uh being an
emotional being that let's f***ing have
the computer do it I think that's just
again poor creative examples like they
could think of so many better examples
where this would actually be like useful
and you know how the iPad Mini
specifically targets Pilots yes and like
I'm listening you don't really know
about that except I'm sure that the
pilot Community is always like super we
love iPad minis yeah but if the whole
event was about Pilots you tune out I
don't know like I I feel like I'm
interested in how can A specific group
of people use this in a specific way you
know because like I can at least
sympathize I can at least empathize well
I guess empathize is not the right word
but I can understand like oh yeah that
is being helpful to part of the human
race if you're a company like apple you
need everyone to picture themselves as
that part I was just going to use apple
as an example for not doing Apple watch
Ultra their examples were like scuba
diver extreme like hiker or Trail Runner
and like yeah and that's still sold to
hundreds of thousands of people who will
never do any of that because it's
aspirational marketing yeah that's
that's the pickup truck effect that's
like this thing is
built for everything yeah whereas I think
yeah the pickup truck effect 100% people
like but what if I need it at some point
what if I want to help my friends
move like driving over rocks like built tough nobody you live in Brooklyn hey man I have a gravel driveway there are leaves on my street
sometimes it snowed once I need the
clearance yeah that's that is very much
the challenge the prepper mentality of
America we should we have to take a
break I think we do okay I just need to
finish the event real quick okay uh
irony super ironic um there's a Mac app
coming for chat GPT only a Mac app which
is hilarious cuz Microsoft basically
kind of owns open AI not really but sort
of and they also sorry I'm going to butt
in just because of that sentence they
open the whole thing with like we are
trying our goal is to bring chat GPT
anywhere you want except unless you have
a Windows computer I guess which I think
is because Microsoft like their whole
next big move is co-pilot everywhere
like there's literally a co-pilot key on
all the new Windows computers like that is their move already I think that
basically like whatever open AI does
with chat GPT is just going to be
co-pilot with a it's it's a skin that's
it's called co-pilot but it's basically
just chat GPT so but it is awkward that
they have all this marketing that's like
chat GPT is everywhere except for our biggest funder um they said it's coming to
Windows later this year which is going
to be super awkward it's basically going
to be like the Google assistant in
Gemini thing because there's going to be
co-pilot and then there's going to be an
open like chat GPT app on Windows as
well right which they're the same
product basically
it's a branding issue um that was
basically it that's all I wanted to say
okay I thought it was funny well we got
we got another event to talk about but
we'll we'll get to that after the break
basically the same event with a
different
name and eight times longer yeah yeah
but uh since we are taking a break we
should also take some time for
[Music]
trivia the lights work again are they
different colors they go by our voice
depend on how
loud right the the lights have a mind of
their own honestly they AI run they are
AI run uh
and in this case uh AI stands for
Marquez can you please
stop right now people in their cars are
really mad sorry about that and the
people watching the dishes sh anyway so
after the break we're going to get into
all of our good old Google IO discussion
but I was reading the Google Blog as I
do from time to time called the keyword
very silly um and I learned something
even sillier than the name the keyword
and that's we all like know what Google
I/O like the I/O stands for like input
output yeah Google on the keyword
actually lists two alternate
explanations for why they chose IO and I
will give you a point for each of those
explanations that you can give me is
each explanation just what the I and
what the O is or is there like more to
it it's there's more they're not
acronyms there's more to it but not like
that much they're they're backronyms
right uh wait like I/O is the backronym like
does it stand for something uh one of
one of them yes I think I know one of
them and then the other one is more gray
than that you guys shouldn't have asked
you shouldn't have asked it that's your
you're the trivia Master you got to
decide what questions yours stands for
I'm
out which is exactly how I felt when it
started
yesterday well that's a good segue we'll
take a quick ad break we'll be right
back I'm out
[Music]
support for waveform comes from Coda are
all the tools and tabs bouncing around
on your desktop stressing you out Coda
can help you get organized with an
all-in-one workspace so it Blends the
flexibility of docs the structure of
spreadsheets the power of applications
and the intelligence of AI to make work
a little less work Coda is designed to
help your remote colleagues get on the
same page no matter what time zone
you're in and if you feel like you're
always playing catchup with Coda's
extensive planning capabilities you can
stay aligned by managing your planning
Cycles in one location while setting and
measuring objectives and key results
with full visibility across teams plus
you can customize the look and feel of
Coda with hundreds of templates to get inspired by in Coda's gallery so if you
want a platform that can Empower your
team to collaborate effectively and
focus on shared goals you can get
started with Coda today for free so head
over to coda.io/wave that's C-O-D-A dot I-O slash wave to get started for free coda.io/wave welcome back everybody as you may
have noticed we just kind of alluded to
the fact that Google I/O was
Tuesday this was uh arguably in my most
humble opinion because I'm the humblest
person on this podcast in this current
moment nice correct no I'm anyway uh one
of the most soulless Google I/Os I have
ever watched W um I was thinking about
this this morning I was like remember
when Google I/O was Sergey Brin jumping
out of a helicopter with Google Glass on
and Landing yeah in San Francisco live
and remember that live and then we got
Project Ara with the which was the
modular smartphone with all the bits and
pizza I mean we got IO announcement we
got like Project Loon which was like
bringing internet to like random
countries are all these things that you
so far dead yeah yeah it's still it's
still fun at iio though yeah like I
remember being like year Starlet Starin
Starline we at least got that yeah I
just remember being like in in high
school and being like wow Google is such
a cool company I'm so I want to work
there so I wanted to work there so bad I
was like everything they do is just a
moonshot idea it might not work out but
it doesn't matter because it's all
funded by search which is cool it's like
they have an infinite money machine that
keeps turning and because of that they
can just do whatever the heck they want
and this year it kind of felt like they
were pivoting to be a B2B company I'm not
going to lie they talked about AI models
constantly they talked about the price
that they're charging developers to
access their AI models which is
something Microsoft would do which is
something open AI would do but that is
not something that Google used to do at
Google IO yeah IO io's changed I was
definitely this year felt like this it
felt like you know the like part in
every IO where they talk about all their
servers and tpus and all that and
there's like that exact same graphic
every single year first of all that
graphic got used probably 45 times
whoever made that is not getting paid
enough for how much they use the
likeness of it but it felt like that
like end 25 minutes of every IO where
where you're like all right all the cool
stuff happened like when are we getting
out of there that felt like the entire
event it was like the most low
energy IO I've ever seen I mean I've
only been here covering it for like
seven years but just all the things
nothing they announced had this like
real wow factor there was very few times
where my Twitter timeline was all like
this is so cool they just announced this
like nobody really had that one thing
where it was really and we've had like
last year visualized routes on maps was
this really cool visual example we've
had like the chain link fence that I
reference all the time like yes there
are things that did not come out I that
was IO but it was cool and it had that
like wow moment the crowd seemed out of
it this year almost all of the
announcers I felt just also felt low
energy except for Samir he did a great
job and felt as high energy as normal
but like yeah I don't know and the whole
event felt like it was dragging from the
first terrible Taylor Swift joke they
made in like the first one minute and
then they proceeded to make a bunch of
other bad Taylor Swift jokes that really
felt like Gemini wrote it but yeah this
might be a silly question cuz I didn't
watch it when you said Samir do you mean
like from YouTube no um I'm forgetting
is like yeah the guy from Android and
what was a bummer was he was basically
like new things coming to Android and
then just had to say the exact same
Gemini things just in like basically a
mobile version Sameer Samat he's the president of the Android ecosystem and he
had his like same energy which just made
all of the presenters around him feel
low energy everyone felt really low I
don't know what the what was going on
but it felt low energy I think a perfect
way to wrap up kind of that we weren't
the same people feeling this is Ben at 9to5Google posted an article this morning so Google made a
10-minute recap mhm and his article says
Google's 10-minute iio recap is somehow
just as tedious as the full event and IO
usually isn't tedious until the last
like 20 to 30 minutes like it's usually
like cool cool cool wow I didn't even
think of the fact that you could do that
with machine learning wow you can get
rid of a chain link fence you can't but
still like all of this stuff that
genuinely blew my mind that I feel like
we also used to see when every pixel
would drop yeah there would always be
one or two cool AI features that were
like wow this is I'm so excited for this
year and there was almost there was like
a couple things that were like I'm glad
Gemini's doing that now I can name like
three which we'll get into which we'll
get into but everything else felt really
corporate it felt B2B which was really
weird it was surprising to me because
they also made the distinct choice to
separate the pixel 8A announcement out
of IO right so we had a separate pixel
8A happen before like a week two weeks
before IO yeah and then to me that was
like oh io's stacked this year we don't
even have room for the Pixel 8a and
so that's why it's surprising and I I a
lot of people say on Twitter that like
oh it's just because there's not a lot
of Hardware stuff that's tomorrow but
like they have done cool visually
interesting things software wise before
and with AI before that's not the reason for it so it was just not a good I/O this
year but I want to do two cool things
about it while we then continue to um be
mean about it for the rest of this
episode um I'm sad we didn't go because
Google iio swag is always fantastic and
the swag they had this year looked great
um I posted some photos of it so like uh
I think they did a great design this
year the tote looks awesome the kwck
looks awesome the water bottle sick I'm
really sad we missed that um sad if
that's the most exciting part well no
the most exciting part was Marc Rebillet
opening as the DJ um he did a fantastic
job if anything his Vibes are too
Immaculate that everything after him
feels boring he tried to do what he
always does in in at the beginning of
shows and he was just trying to bring
his same energy and I texted him
afterwards I'm like bro I'm so sorry you
had to deal with that crowd and he was
like I did what I could my favorite part
is that again as an American units of
measurement really confuse me and so
when they when they measured when they
measured Gemini's contextual ability in
tokens no in number of Cheesecake
Factory menus worth of words right I was
like Gemini can hold 95 Cheesecake
Factory menus worth of context at one
time I was like that's it that seems
like have you been to the Cheesecake Factory is the menu really big
it's a book it's a book Adam could have
a book club that no one would join him
with I need to go to a restaurant that
has like three options I don't want I
want never go to menu for three weeks
now I'm on the 70th page yeah yeah okay
yeah okay so besides Mark's Immaculate
entrance in which he had a for viewers
that and listeners that don't know who
Mark is he's like he makes music on the
spot sort of person and they had him use
the music LM thing to try to like
generate music and then play with it
he's like an improv genius yeah he's an
improv it was very tough uh cuz the
crowd was tough the music LM didn't work
that well and the beats that it was
giving him were not that funky a lot of
problems he wears a lot of robes they
had custom Google I/O Marc Rebillet robes
and he shot them out of a cannon um and
then and then Sundar came on stage and the energy went boom into the
ground uh okay so there were actually
some interesting things and they
actually did start off with a couple
pretty interesting things they started
off with which with what I think was one
of the most interesting things which was
an update to Google photos where now you
can just contextually ask Google photos
to show you certain pictures and also
ask questions about your photos uh so
you can say and I actually used Google
photos to find my license plate a
million times last year all the time all
the I've never now in Google photos you
now I memorize my license plate but in
Google photos you can now say what's my
license plate number again and it'll
bring up pictures of your license plate
yeah which is cool you can say show me
how Lucia's swimming has
progressed and it'll bring up a bunch of
photos and videos you took of your
daughter swimming so it can understand
the context of okay this is what you're
asking and then map that to what it
tagged the photos as of being and I
think that's actually really sick I do
think the license plate example was
perfect because they were like normally
you would just search license plate and
now every license plate in your every
car you've ever taken a photo of shows
of and then you say my license plate
number it's like oh this is a car that I
see pretty often so it must be your
license plate let me find one picture
that has it and only show you that one
picture so you don't have to scroll
through a bunch mhm that's cool.
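To make that concrete, here's a minimal sketch of the heuristic described above: across a hypothetical tagged photo library, the plate that shows up most often is probably yours, and you hand back a single photo that contains it. All the data and field names here are invented for illustration; this is not Google Photos' actual pipeline.

```python
from collections import Counter

# Hypothetical photo library: each entry carries machine-generated tags,
# including any license plate text detected in the image.
photos = [
    {"id": "IMG_0193", "plate": "ABC-1234"},  # your car in the driveway
    {"id": "IMG_0241", "plate": "XYZ-9876"},  # a friend's car, seen once
    {"id": "IMG_0312", "plate": "ABC-1234"},  # your car at the airport
    {"id": "IMG_0457", "plate": "ABC-1234"},  # your car on a road trip
]

def whats_my_license_plate(library):
    """Guess the user's own plate as the one seen most often,
    then return a single photo that shows it."""
    counts = Counter(p["plate"] for p in library if p.get("plate"))
    if not counts:
        return None
    my_plate, _ = counts.most_common(1)[0]
    one_photo = next(p for p in library if p["plate"] == my_plate)
    return my_plate, one_photo

print(whats_my_license_plate(photos))
# ('ABC-1234', {'id': 'IMG_0193', 'plate': 'ABC-1234'})
```

The point of the "seen most often" scoring is exactly what the hosts describe: one picture of your plate instead of a scroll through every car you've ever photographed.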
yeah I like this and I think because the
results are specifically pulling from
your photos it can avoid
hallucinating because it's going to give
you the exact Source it also because
yeah it just has to show you an image
it's not generating something cuz before
like so if I wanted to find my license
plate like I had this actually for
example I was in the back of a car and I
was going to go to the airport and
they're like I just need your passport
number and your signature and I was like
okay here's my signature what is I don't
have my it's in the trunk but I just
pulled up Google photos and I just
searched passport and I got the latest
photo that I took of my passport and I
just got it from there yeah instead
theoretically I I would just ask Google
photos what's my passport number and it
would give me my passport number and as
long as I also can see that it's
referencing an image of my passport and
not some other random photo I have of a
passport I think I'm good it doesn't it
doesn't give you a text output it just
shows you the photo the one picture so I
think that actually solves the is I
think you could probably ask Gemini at
some point what's my passport number it
would pull it up and then it would
probably potentially reference the photo
but right now this Google photos update
it's just an easier way to like ask
Google photos to show you specific
pictures that's neat it kind of feels like
which I like more than generative Ai No
I agree it feels like the magic of what
Google search has been where everyone
makes the joke of like what's that song
that goes ba ba ba ba ba it's like you
can ask it some pretty crazy questions
and now in Google photos you can just be
way more specific and it can find things
easier for you Google photo search has
always been one of my favorite features
ever just like being able to search
through different things and it can tell
kind of what they are and create albums
or look at places that you've been to
and now being able to go a little more
into that where maybe I'm like what was
that trail I I hiked in Glacier National
Park in 2019 and like if I took a
picture of the trail sign it'll probably
come up as like Cracker Lake Trail and
then that's awesome and I don't have to
just search through every single thing
that was in Montana yeah I often will
think oh yeah I went on that trip I
think it was in 2019 so I'll search like
mountains 2019 and I have to sort
through all the mountains that I took a
picture of in 2019 until I find it maybe
it's not that year so now I can ask it
like was show me the mountains when I
was in Italy the last time and yeah it
can still do sort of those things right
now but it's just a lot more contextual
which is beneficial it's it's helping
because a lot of people didn't even know
you could search Google photos for
things and it would find them very
specifically so it's just making this
prompt engineering less of a skill you
just type what you're thinking and it
can find it which is why I think they
specifically use the prompt what's my
license plate number again because it
sounds more human like the way that I
would probably prompt that is my license
plate and it would bring it up you know
whereas a normal person says what's my
license plate because I think the
ultimate goal is like be able to have
natural speech Computing with these
computers with every prompt yeah I think
our generation grew up learning how to
speak Google which is keyword it's a
keyword language you have to know how to
Google and you're basically just picking
out keywords we've talked about this
like prompt engineering is the skill
that we all learned yeah and now it's it
wants the old people to be able to do it
yeah where young people just go like
write me some code and it just does it
you just don't have to know how to do it
right there's a lot of other stuff so
we're just going to go through it a
little bit randomly I kind of went
linearly through the keynote so if it
jumps around a little bit that's
Google's fault no one remembers exactly
how the keynote went so everyone
hallucinate that the AI hallucinate that
uh okay Google search generative
experience which was a Google Labs
feature that you could opt into for the
last year since Google I/O 2023 which I
have been opted into for a while and has
annoyed the heck out of me I've been
using it as well I had an example
recently where the AI thing gave me one
answer and then the top suggested Google
thing told me a different answer yeah
thaten to me yeah so so that was an
optin feature that I forgot was op in
because I would have turned it off a
long time ago if I remembered that uh it
is now rolling out for everybody and it
also now generates a bunch of extra
information and tiles and Pages for you
and in my opinion it was a very bad look
because basically the entire visible
screen was all generative Ai and I
didn't see any links
anywhere yeah which is bad yeah it was
basically like creating a bunch of
almost like Windows 8 tiles of like all
potentially different things you might
want whether it's the sale link for
something with prices or the reviews of
a restaurant just like all in these
different tiles and you didn't see a
single link on the page I also just want
to like paint Marquez the picture of
when they announced it the way they
announced it as you know the huge screen
on the io stage yeah it was all white
and had the Google search bar but it
didn't say the word Google above it and
all of us kept looking at each other
like are they going to rename search are
they going to change or is this going to
say like I don't think any of us thought
it would actually but we we were all
pretty sure it was going to say Google
powered by Gemini and I think part of
that hype is why none of this stuff felt
that cool cuz we're all like they're
going to do something big they're going
to do something big they're going to
rename it they're going to change how
this looks and then they just didn't
yeah um I think they're trying to
completely reframe in your mind what it
means to to search something now you
they don't want you to have to search
something and then find the information
yourself and actually what they kept
saying over and over again was let
Google do the Googling for you which was
super
weird it's like so like never leave
Google it's yeah it was basic because I
think that what they were afraid of is a
lot of people use chat GPT as a search
engine even though it's not a search
engine I've heard crazier things yeah
you know how many people use Tik Tok as
a search engine a lot of people more
than you I mean it's like using I I use
YouTube as a search engine cuz it's
better YouTube is the second largest search
engine in the world yeah I like watching
videos of people answering my questions
and doing things specifically I like
people dancing to my
questions this is how you change a tire
forget Quora dances yeah yeah that that
feels like a big part of it is is Google
has historically been kicking people
into links and then they leave Google
and they go find their answer somewhere
else but if Google can just like siphon
all the content out and service the
thing that you were looking for in the
first place then Google helped you not
the site yeah and then that site never
gets any credit even though the site is
the one that had the information which I
just want to point to the fact that like
not that long ago like two to three
years ago Google got in serious trouble
for like scraping some information and
putting it on the sidebar of Google just
saying like these sites say this this
this and everybody freaked out and then
Google got in trouble for it and now
they're just completely siphoning away
sites together to the point where there
is now a button called Web so you know
how when you Google something it'll be
like images videos news there's now
going to be a button called Web where
you press the web button and it actually
shows you the links oh wait you it
wasn't a scroll away
before uh I you probably can keep
scrolling but there's so many things
there but like do you know how there
yeah there's shopping Maps photos now
there's one that's web so it's only web
links and hate to say I'm excited for
this because Google search has become
such a pain in the ass that I like just
want to look at different links it
basically gets rid of everything that
Google has added in the last five years
just it's like old Google it's the opt
out of the Reddit redesign for Google
Enhancement Suite
yeah yeah uh that's hilarious I find
that hilarious somebody tweeted uh that
that's the funniest April Fool's joke
that Google has done
um yeah it yeah it's just a weird thing
um Nilay from The Verge refers to this
whole redesign as Google Day Zero which
is basically like what happens when a
search is just a generative like
everything is generated and there is
just zero incentive to go to websites
like we've talked about this forever
because we always were like this is
eventually going to become the thing and
currently right now because they haven't
like fully rolled out this thing it'll
give the little generative AI preview
but you still have links but all most of
the examples that they showed at IO were
like the full page was generated and
once you hit a point where the full page
is generated that's when you start to
hit the existential threat of like how
are websites going to make money anymore
yeah this is again why uh I think this
this LLM stuff needs fact checking built in uh I just Googled it real quick
Google's mission statement is to
organize the world's information
and make it universally accessible and
useful and if you think about it if
you're Google yeah for years and years
and years you've been collecting and
organizing all of these links and then
people go to
you and they say I want to know where
this thing is on the internet and then
you show them the list and then they
leave and you've been doing that for so
so long that you have all this
information if you're Google probably
what seems like a natural graduation is
all right the Internet it's here it is
what it is it's this giant thing now we
can do is learn all of it summarize it
all for ourselves and now when you ask
me a question I can just reference it
and hand you exactly the information you
need and you never even have to go
through that mess of links ever again
people will just this is a crazy
statement kids will not really learn the
skill of navigating through search
results yeah not at all that's another
thing that they won't really that's also
scary cuz that makes me think that kids
are just going to believe the first
thing they see made by a generative Ai
and that's why you need it to be fact
checkable it needs to be verifying
itself because yeah the skill still of
like seeing the Google It button and
going yeah I'm going to verify this one
and like looking through the results
that's still a skill that we have but if
you never have that skill and you just
go to Google the thing and it just
surfaces the thing you just believe it
that could be pretty dangerous here's a
question for everyone just on topic of
this when's the last time you went to
the second page of a Google search
all the time I do it all the time it's
pretty you know you're in for one yeah
it's like 50% of the time 50% of the
time I wouldn't say 50 for me but when
I'm doing research I go through like
five pages I got a long tail I get a lot
of stuff on the first page but then
every once in a while I get something on
the like 36th page and it's wild that
there's probably people who don't
realize that there's like the 10 and
then like it keeps going after that like
yeah keeps going wait quick question
Marquez with what you were saying isn't
that the best case well I guess user experience customer UI in theory it is
if it's correct yeah and who's going to
write content this is if you have a
magic box that tells you all the answers
nobody can make the content that it's
scraping from this is the we've been
talking about this for months I know
that's the problem that's the fun part
but nobody thought about this the incentive to create content could disappear completely when people stop going to websites and they don't make any ad revenue then yeah then things go under is the fastest way also
the best way for consumers like yeah
you'll get the information faster
because you don't have to click through
websites but also sometimes I think it's
I think it's also a what about the
journey I think there's also a tale to
this where like At first the fastest way
is best but then when it gets to a more
indepth like think of someone who's
about to buy something yeah exactly like
if you're I'm about to make a big
purchase I'm to buy a laptop real quick
if I just Google like what are some good
laptops for $1,000 then I just set my
frame for the first page but then on the
second or third weekend where I'm like
I'm about to spend the money now I'm
going to really dive in no there it's
like something I Google almost every
week chicken drumsticks oven temp time
don't need don't need more than 15
seconds for that one you know what I
mean but yeah like if I if I wanted if I
wanted to go on a little little internet
journey I think there's times though
when also you don't have time for the
journey if that makes sense like chicken
drumsticks oven temp time like I'm
hungry I think about the time I was out
and CLA calls me it's like a pipe burst
in the downstairs and we need something
what do we do with it and I just ran to
Lowe's and I was like what's the best
thing and instead I'm sitting there
where like I wish it could just be like
is it a small leak is it this and I can
it'll give me that temporary fix right
there and I don't have to go through 20
Pages inside of Lowe's while my
basement's flooding yeah I think there's
some fast things where that is the ideal
scenario but yes the journey sometimes
is fun because you learn about things on
the journey that turns into actual
usable information in your own brain
yeah that we get to use right imagine
that imagine that okay imagine
experiences we got to move on cuz
there's so much there's so much stuff um
Gemini 1.5 Pro which you and I have been
beta testing for what feels like months
at this point they doubled the context
window to 2 million tokens uh and now
they're just spouting millions of
Cheesecake Factory menus yeah they're
just flexing on every single other
company that they have the most tokens
which y wow I still don't understand
tokens tokens at all they're vbucks a
word is like a token it's like
tokenization of a word so you can map it
to other words and they just cost money
Transformers Transformers Transformers
cuz people make fun of me for saying
that a lot um Power which costs money
they cost power money they're called
tokens cuz it's like it's the smallest
you and this is like the dumbest
possible way of explaining it but it's
like it's the smallest you can break
down a piece of information in a data
set to then have it be evaluated against every other token exactly so like in
a sentence like you would break down
each word into a token and then analyze
those words as independent variables
tokenization CU you're like in an image
like a pixel could be a token or a
cluster of pixels could be a token okay
so then quick question when they say
when they say 2 million tokens do they
mean that I can like do a 2 million word
word yes okay got it oh so it's per
individual query it can take up to 2
million tokens yes okay that's the
context so the window is basically like
how much information can I throw at this
because theoretically in these models
the more information you give it the
more accurate they can be, okay.
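To put rough numbers on that, here's a toy sketch of what a context window cap means. It assumes a naive whitespace tokenizer purely for illustration; real models like Gemini use subword tokenizers, so a word is often more than one token, and the 2 million figure is just the per-query limit announced for 1.5 Pro.

```python
def tokenize(text):
    # Naive whitespace tokenizer for illustration only; production models
    # use subword tokenizers, so real counts will differ.
    return text.split()

CONTEXT_WINDOW = 2_000_000  # tokens a single Gemini 1.5 Pro query can carry

def fits_in_context(documents):
    """Total up the tokens you want to stuff into one prompt and check
    whether they fit under the context window."""
    total = sum(len(tokenize(doc)) for doc in documents)
    return total, total <= CONTEXT_WINDOW

# 95 "Cheesecake Factory menus" of text, assuming ~6,000 words per menu
menus = ["grilled chicken pasta cheesecake " * 1500] * 95
print(fits_in_context(menus))  # (570000, True)
```

The window is per query, so the whole pile of documents has to fit in one prompt for the model to reason over all of it at once.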
okay remember the dolly prompts that
were like give me a man in an astronaut
suit with a red handkerchief around his
neck be more blah blah you can just keep
going okay right yeah cool okay um now
they are also embedding Gemini into a
lot of Google workspace stuff so you can
have Gemini as like an additional person
in your meeting that's like taking notes
for you you and you can interact with
during Google meet meetings they should
call it Go
pilot why cuz it's Google like it's like
co-pilot but Google oh oh come on was it
that
bad was that bad I'm picturing on a
killed by Google website in like three
months yeah what did sorry that just
reminded me what did Mark call
music gos oh yeah Google Loops gloops
gloops yeah yeah he called it gloops at
one point which they should was the best
part of a yeah uh they introduced a new
model called Gemini 1.5 flash which is a
lighter weight faster cheaper model that
handles lighter weight queries
hooray Microsoft is so scared um we got
project okay so project Astra is what I
think is basically their answer to like
the Humane and rabbit thing except it's
better because we always knew it would
be the demo they
showed the demo they show was basically
on a pixel it has a live feed of your
surroundings so on Humane or rabbit you
have to take a photo and then it
analyzes the photo and talks about it on
this one it was basically a real time
image intake where it was taking in a
video feed with this person walking
around with their pixel and they could
just be like they were just kind of like
what does this code do and it'd be like
oh it does this does this okay cool uh
well what what what where could I buy
this teddy bear oh the teddy bear can be
bought on Amazon for $14.99 cool cool
cool uh over and then they did this
casual thing where they like switched to
these smart glasses that had uh cameras
on them which was also strange cuz they
were like where did I leave my glasses
and it remembered where but they never
showed it in that initial feed so are
they remembering it previously or so
here was the weird thing yeah they they
said like where did I leave my glasses I
was like it's on the side table it only
knew that because the person was walking
around with their pixel camera open like
for 5 minutes and it happened to see it
in the corner while they were walking
around
obviously in the real world I think this
was basically the same thing where it's
like in the far off future if you had a
Humane AI pin that was constantly taking
in all of your video feed information
all the time it would always know where
you left all your stuff because it was
constantly watching which nobody wants
um so that's the convenience though
think of the convenience the storage
yeah remembering everything that Google
just put on their servers yeah threw it on
YouTube so I think this is just a demo
to show that yeah they they can do what
human and rabbit are doing but way
faster and way better and it's a video
feed instead of a photo and because it's
the live video feed they also did this
thing where it's like you could draw in
the video feed and be like what is this
thing and like an arrow to it so if like
for some reason you can't just describe
the thing in the viewfinder as well you
can it's basically Circle to search
through live multimodal like which is
something that open AI was basically
demoing too on Monday um cuz you could
you could Point your phone at things and
it would help you through math problems
as you were solving it yeah so it was
cool uh they didn't talk about you know
if it's ever going to come to anything
ever they just demoed it they said it
was a project kind of like those like
translation glasses that just never
became a thing and I think they were
trying to make like a nod to those by
saying like yeah we've got we're working
on stuff but they're probably never
going to release anything it kind of
also made me feel like because this was
just like another blip on the radar
during all of
made me kind of feel like the Humane pin
and the rabbit pin needed to make the
hardware versions because everyone just
like kind of moved right past this even
though it's doing the exact same things
better than both of those but since
they're not a shiny product everyone's
like cool it's just something in I/O
they're basically like yeah you can do
this on a phone moving on better yeah I
don't know about you Child's Play yeah
yeah all right so that was already a lot
um there's a lot more Google IO stuff uh
either you're welcome or I'm sorry depending on whether you read or listened to
all that but we got to do trivia because
that'll help your brain break up this
episode
so hey what is this is this on purpose
it is yeah I tried to use Google's music
effects to recreate our normal trivia
music um fun it got none of the things I
put in my prompt in here like not a
single one oh I guess I asked for
drums terrible sounds like is that drums
or snapping I asked for 6 seconds of
interstitial music intended to
transition from the main segment of a
podcast to the trivia segment that happens before an ad the track should feature a Hammond organ and
drums and be bright Punchy and fun uh
that's not wrong I would say bright
Punchy and fun yeah but where's the
organ where's the six second let's chill
out a little bit give it a chance this
is Google small startup company I was
wondering he told me earlier like let me
handle the trivia music and I got
something cooked up and I was like what
what is he going to do and now we know
all right second trivia question so a
bunch of people were talking about how
the open AI voice sounded like Scarlett
Johansson in the movie Her Like David
mentioned what was the name of the AI
that she voiced in that movie I don't
watch movies
okay well you guys are going to get some
points uh yeah it's been a while since
I've seen this film so much for that
yeah cool all right I've never even
heard of that movie what perfect me neither serious serious Andrew go home we can't
we can't tell Andrew anything about this
movie we have to get through the the the
trivia answers and not spoil a single
thing and then Andrew you need to go
home and you need to watch
her no
[Music]
um okay notebook LM which used to be
called project Tailwind which Adam was
particularly excited about it's now more
focused towards education but
effectively it can take in information
from your Google Drive like photos and
documents and videos and basically
create like a model of these AIS that
can help you with specific things
related to your specific information and
the way that they showed it was
basically a live podcast um where they
had these two virtual tutors they were
kind of like both talking separately
being like okay Jimmy so we're going to
be talking about gravity today so you
know how something's 1.9.8 m/ second and
then the other AI squared and the other
AI would be like not only is it that but
if you dropped an apple it would drop at
the same speed as a rock um and then you
can like call in almost and then ask
then you become the new member of this
conversation ask questions and they'll
respond to you interesting it was some
virtual tutors and they were very
realistic very similarly to OpenAI's 4o
model that felt very realistic I felt
like this week is the week where all of
the companies were like we are no longer
robotic we can have humanoid-like voices that talk in humanoid-like ways
great so that was interesting okay uh I
would actually love to play with that to
see if it's any good it was kind of cool
that there was like two virtual AIS that
were sort of like talking to you at the
same time but also interacting with each
other didn't it also just like pause and
not finish the
question um um I think that he
interrupted it oh he interrupted CU he
was basically saying I'm helping Jimmy
with his math homework hey Gemini 1 like
what do you think about this and be like
wow great question and then it just like
paused and didn't finish answering the
question yeah okay probably just a bad
uh demo on on stage yeah um okay they
also introduced I thought it was called 'Im-a-gen' but it's Imagen 3 pronounced 'imagine' which I thought was 'image-gen' cuz image generation but it's 'imagine' which is more like imagine it probably still is that but just a funnier better way of saying it
yeah uh basically their third generation
Dolly esque model with better photo
creation yay cool um Music AI Sandbox which they had Marc Rebillet at the beginning use the Music AI Sandbox to
create these beats and they also had
like a childish Gambino ad where he was
talking about it I think uh he was
talking about video stuff later wasn't
he talking about Veo yeah he was talking about Veo cuz he was put up
there as a
uh why can't I think of the word film
like he was doing movies not music which
I thought was funny but got it got it
okay yeah so the music generation was
you know better music yay they basically
are just like updating all these small
AI things that they were like I can do
this too and better than last time yes
which is where we go with Veo or yeah I think it's Veo um Veo Veo I don't know I don't know it probably is Veo I'm just
saying that because it's already a word
in Spanish so oh what does it mean in
Spanish like 'I see' basically 'I see' like from 'ver' like 'to see' I oh okay well that would make sense
yeah yeah it can create 1080p video from
text image and video prompts uh you can
further edit those videos with
additional prompts they had testing
extending the scenes which was cool that
was the coolest part of it I think which
I think was like a direct shot at uh Runway because that can only do
like 30 second video clips right now I
think possibly 1 minute well I thought
it was cool because it wasn't just that
it extends that is I think it was you
could put a clip in and say make this
longer right and then it would make the
clip longer by just like it it basically
is like content aware fill where you
need a photo that's bigger but it does
it on a video I think that's awesome
there are so many times where you're
like found footage this doesn't really
roll long enough the b-roll that I
need here like if this could be five
more seconds I do remember I remember
right after Dolly came out people were
basically making these videos of like
how to make your your a roll setup
cooler with AI and it was basically like
they just had a very contained version
and then they generated a filled out
image of their office to make it look
like they were in a bigger space that's
sick which is kind of cool but now vo
can just do do it for you uh the nice
thing too is it maintains consistency
over time so if you have like a
character the character will look the
same and it doesn't do that random
stable diffusion thing that uh AI
generated video used to do where it
would like flash in and out and like
change shape slightly and keep moving
around it feels like we're only a year
or two away from like being able to
remove a fence from the foreground of a
photo I don't know dude only God can do
that I don't think that's that's nobody
knows how to do that uh and then wait
sorry real quick Adam and I were just
like factchecking some stuff and we
found something that's like too funny
not to share with you guys like
immediately this is um when Google
announced last year Lyria uh which is the
actual model that does a lot of their
generative audio look at this image that
they use I just sent it to you guys in
slack wait that's just the waveform logo
that's just like our it's literally our
waveform Clips background the colors are
the same and like gradients the same and
like it's kind of the same that's
exactly the same yo Google send us some
IO swag from this year to make up for
this cuz they stole this from us you
stole this from us it's like right on
the top of the page wow I'm tagging Tim
in this and seeing it's literally the
purple into scarlet color we'll get back
to that and see C okay um all right uh
with the vo thing they said it's going
to be available to select creators over
the coming weeks which is weird that you
have to to be a Creator to apparently
use it and I think what they're trying
to do is control the positive narrative
by giving limited access to to artists
who will carefully say
positive things about it which is fun
uh super fun okay the pain in your voice
as you said
that this actually is kind of cool we
found it there's some nuggets in this
needle stack you know what I'm saying so
nugget we wait you mean there's some
needles in this hay stack needles in
this nugget stack I don't mean that
there's some nuggets in this in this
needle stack if you like hey that's why
I said nuggets like chicken nuggets so
they have multi-step reasoning in Google
Search now which is cool this is
actually something that Andrew was
specifically calling out a couple weeks
ago this is so close to my Google Maps
idea I feel like but it's not quite yeah
okay do you want to describe it yeah
it's pretty much it's kind of like using
Google search and maps and reviews and
putting it all together so you can have
a more specific question or suggestion
that you want so theirs was like I want
a yoga studio that's within walking
distance of Beacon Hill is what they
said within a half an hour walk of
Beacon Hill and it's going to bring up
different yoga studios that are within
those parameters that you set and then
based on the ratings the higher ratings
in there so it's an easier thing for you
to choose rather than just being a I
guess it's not that much different than
just searching on maps where but in maps
you would have to see like oh that's 20
blocks away from me this is actually
like Gathering that into what it deems
as walkable in half an hour so yeah the
the specific example they used was find
the best yoga or pilates studio in
Boston and show the details on their
intro offers and walking distance from
Beacon Hill and so it generate again
this is like the generated page so it's
like kind of scary in a lot of ways but
it pulls up like four that are within
the 30 minute walking distance from
Beacon Hill and it shows the distance
and it shows their intro offers and it
shows their ratings and that's that's
really cool I can't deny that I will
will say like a real world example of
this is last Saturday Saturday cuz
Friday night there was an insane Eclipse
all over North America not an eclipse there was an aurora borealis cuz apparently like a geomagnetic storm you didn't everyone in New York and
New Jersey miss it man you would have
had to get anywhere else in the United
States yeah we tried I was depressed I
got I like got into bed after getting
dinner with Ellis on Friday night and it
was 12:30 in the morning and I opened
Instagram and I was like what the heck
is happening and what did we do the next
day we drove to Monto but the whole
morning I was like Googling like uh
what's the weather here okay and then I
had to go on Google Maps and I'd be like
how far is that okay and I had to look
at the radar of it again and so I was
jumping back and forth between these
tabs like crazy I was going back Google
Maps and like these weather radar sites
and like how clear is the sky during the
that part of the year so if I was able
to just if you could trust these
generative uh answers ask the question
like what is the closest dark sky Park
that won't rain today that I'm most
likely to see the aurora borealis from
and it would just tell me the number one
answer and I could trust it which is the
biggest thing that would have been a lot
easier than me jumping and forth between
all these apps cuz this legit took me
like 3 hours to figure out where I
wanted to go this poses an interesting
question on if it would be able to use
real time information and real-time
inform information that Google provides
but they didn't show examples like that
because what you're saying would need to
know weather at a certain time that it's
knowing and updating where everything
they showed is stuff that could have
been posted months ago like yoga studio
walk intro offers I guess is more closer
to real time but like if you wanted to
do that or if I wanted to say and this
is something that's available in Google
what's the the closest four plus Star
restaurant in a 20-minute walk that is
not busy right now MH that like I
wouldn't have to wait for yeah could it
pull the busy time thing that they have
in Google in reviews and would it pull
that I would hope but I don't know they
should have showed it if it can pull
that live in there but I don't know if
they would be able to do this but yeah
this is so this is so close to me being
able I just want this to be able to use
in Android auto on maps with my voice
and say stuff back to me through
the car speakers and be able to do a
little more within a route um I feel
like this feels like a step close to
that I'm excited for it this was very
close to the thing that you mentioned
the other yeah yeah yeah we're almost
there I'm almost able to find the
closest Taco Bell serving breakfast on
my route next next to a third wave
coffee shop that doesn't add 30 minutes
yeah that doesn't which is pretty much
exactly what you asked for I know it is
I know so close yeah we'll see if that works.
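As a rough illustration of the multi-step search idea from earlier (the yoga studios within a 30-minute walk of Beacon Hill, ranked by rating, with their intro offers pulled in), here's a hedged sketch of those filter-then-rank steps chained into one query. The place data, field names, and thresholds are invented for the example; this is not how Google's pipeline actually works, just the shape of the behavior they demoed.

```python
# Hypothetical candidates a maps/reviews backend might return for
# "yoga or pilates studio near Beacon Hill".
studios = [
    {"name": "Beacon Hill Yoga", "walk_min": 12, "rating": 4.8, "intro": "first week free"},
    {"name": "Charles St Pilates", "walk_min": 25, "rating": 4.6, "intro": "$20 first class"},
    {"name": "Back Bay Flow", "walk_min": 45, "rating": 4.9, "intro": "3 classes for $30"},
]

def multi_step_search(candidates, max_walk_min=30):
    """Step 1: keep places that satisfy the walking-distance constraint.
    Step 2: rank what's left by rating.
    Step 3: surface the details (intro offer, distance) in one answer."""
    walkable = [c for c in candidates if c["walk_min"] <= max_walk_min]
    ranked = sorted(walkable, key=lambda c: c["rating"], reverse=True)
    return [
        f'{c["name"]}: {c["rating"]} stars, {c["walk_min"]} min walk, intro offer: {c["intro"]}'
        for c in ranked
    ]

for line in multi_step_search(studios):
    print(line)
```

The open question raised here still stands: this only works for slow-changing data; constraints like "not busy right now" or "won't rain today" would need live signals the demos didn't show.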
um Gmail now you might
think that the updates to Gmail would be
things that I would care about
mhm no no so all I want in Gmail is
better contextual search because right
now you can Search keywords and that's
basically it and you have to sort
through all of the different things and
you have to sometimes it just doesn't
show the email even though it has the
exact words that you're searching for is
it safe to say gmail search function is
Google's worst search function out of
anything that they do I would say so I
think it is impossible do you know how
many times I try and find my Mileage
Plus number in my Gmail but I just get
400 United promotional emails that have
never been opened it's like if I'm
searching for something I probably want
it to be something that's been opened
and read before this needs the exact
same update as Google photos just got
yes yeah which is well tell me my
Mileage Plus number and then it goes
here's what it is and here's the email
that showed it okay so that's a perfect
analogy because the Google The Google
photos thing you ask it a question and
then it brings you the photo that you
want to see right the update to Gmail
that they just added is you ask you ask
it a question about your emails and it
generates you a response it doesn't
bring you to the email that you need to
see it just tells you about your emails
which I don't trust because I just want
to see the email email yeah yeah so it
can summarize an email chain which is
like
I guess sure maybe how long your email
chains I know Corporate email chains are
probably really long and possibly
annoying still don't trust a generated
answer of things you can ask Gemini
questions about your email um okay uh it
has suggested replies that are generated
by Gemini which is not that different
from the suggested replies it already
has right now except that now it's
suggesting like full replies instead of
just like hey Martha as like the
beginning of the reply or whatever um
one of the examples that they got on
something you can do in Gmail with
Gemini is they asked it to organize and
track their receipts so it extracted
receipts from emails that they got from
a specific person put all the receipts
in a folder and drive and then created a
Google Sheets document that in a bunch
of cells like organized the receipts by
category damn that was awesome that was
cool it was cool but it was like so
specific and neat that it can do all
that and it still probably can't find my
Mileage Plus number yes I bet the rabbit
could do that if I took a picture of
every receipt I've ever had I think this
is cooler though because it can
show sorry I missed the sarcasm
completely there yeah I love that that
was the number one thing that people
were Amazed by with the rabbit they were
like it can make tables spreadsheets of
simple spreadsheets hey the Humane pin
can do that soon sometime in sometime in
the fut I think the biggest problem with
with Gmail's contextual search right now
is the signal to noise ratio is terrible
like you were saying like there's one
email that says your Mileage Plus number
all of the other ones are promotional
yeah signal is one noise is
999,000.
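For what it's worth, the fix being described is basically a re-ranking problem: boost the one opened, person-to-person email over the pile of unread promos. Here's a hedged sketch of that heuristic with invented inbox data; it's not how Gmail search actually works, just the behavior being asked for.

```python
# Hypothetical matches for the query "MileagePlus number".
emails = [
    {"subject": "Earn 2x MileagePlus miles this week", "opened": False, "promotional": True},
    {"subject": "Your MileagePlus statement is ready", "opened": False, "promotional": True},
    {"subject": "Your MileagePlus number and receipt", "opened": True, "promotional": False},
]

def rank_matches(matches):
    """Score each match so opened, non-promotional mail floats to the top:
    the one 'signal' email should beat hundreds of unread promos."""
    def score(email):
        return (2 if email["opened"] else 0) + (1 if not email["promotional"] else 0)
    return sorted(matches, key=score, reverse=True)

for email in rank_matches(emails):
    print(email["subject"])
# "Your MileagePlus number and receipt" prints first — the one you actually wanted
```

Pair that with the Photos-style behavior of showing the source email alongside the answer and you get something verifiable instead of a generated summary you have to trust.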
yeah maybe they just did a bad job at explaining it I'm hoping Gemini
for Gmail search could be really awesome
cuz I need it so bad I'm so sick of
digging through my email and just not
being able to find things that are
literally things I or a friend typed me
I've legitimately just thought about
just nuking my whole email and just
starting a new one start over because
I'm like I can't find that's not a bad
idea I think about it a lot but it's and
I wish you could do that with phone
numbers but you're just getting someone
else's like phone number that they're
reusing so kind of pointless you're
going to get spam anyway yeah yeah I
remember like sorry this is random
moving moving a couple years ago and
getting solicitation mail being like how
is this possible that I'm getting mail
this is a new address it doesn't make
any sense anyway yeah yeah okay I would
like that to happen so they're they're
creating this new workspace side panel
that's going to be going into a lot of
the Google workspace apps like Gmail and
like Google meet and like Google Chat
was a which is part of meet but no one I
don't think ever has used I take a
minute and be like what's Google Chat I
forgot that was I forgot it existed yeah
um because it's a chat client by Google
and you just don't use them because they
oh Hangouts oh right yeah oh no sorry wait was it yeah Allo wait was it Google Messenger that's what it was oh
messages by Google oh messages or is it
Android messages I thought it was
[Music]
inbox all right uh yeah so so that side
panel is how you're going to interact
with Gemini in order to interact with
all of your Google stuff what I found
kind of frustrating about this is that
it's only in Google workspace stuff oh
and to me this is their version of Apple
lockin of like we're giving you all
these Gemini features but you can only
really use it in Google
products so but also like if you have
like a regular account that's not a
workspace account is that does that
still work no you can still use it like
that's part of Google workspace I
believe as me as a person with Google
Calendar and Gmail and three or four
other Google services I can you can use
Gemini through that okay yeah yeah um
they introduced this thing called a
Gemini powered teammate uh if you use
Google Chat forgot about this chip yeah
chip so why do they name it stop stop
naming I actually named it Chad before
they named it chip and then they were
like chip and I was like Chad to be fair it wasn't that's not what its name is you can name it and they named
it chip as just a joke in the thing got
it yeah so the way this worked is
imagine a slack Channel where you're all
talking about stuff you're all like
sending files and documents and stuff
you have an additional teammate that you
can name whatever you want we should
name ours Chad if we ever do it uhhuh
but you can say like hey Chad what was
the PDF that I sent to Tim last month I
can't really find it and Chad goes I'll
go find that for you and then it comes
back and it gives it to you and it can
ask information about it so it's
basically an employee a personal
assistant I'm on board yeah yeah so but
that's like in only in Google Chat which
literally no one has ever used in
history this is going to be dead in a
week I did not know that this was a
product yeah like did most of them yeah
on Gmail you can see the little chat
like you can rotate between Gmail and
Google Chat like whenever you want which
is a thing that you can like get rid of
or po out of that I always I I've never
I know oh look invite everybody in our
work yeah I'm pretty sure only Google
uses this so I'm not sure how helpful
this is going to be okay but it's an
interesting idea marz the last time we
talked and it was
2020 I didn't know I use this yeah this
isn't Hangouts what it probably was
Hangouts when we were talking chats yeah
oh H did we say anything fun well this
one says I haven't used this since 2013
so oh my that means it was just Gchat
right just I don't know anymore I don't
know who knows I can't believe we will
live in this hellscape um it's basically
like a a live like slackbot doing things
for you yeah yeah it's a slackbot that
is like contextual and can grab
different files that you've used which
pretty cool yeah it's really sad that
it's only in Google chat but yeah you
know but also makes sense I guess
they're not going to build it for slack
correct first to show off at IO yeah uh
okay um they have a new thing called
Gems which is basically GPTs
you know how chat GPT has a custom GPT
you can build that's like this is the
math tutor GPT and it only talks about
math which like there's an existential
question here of like would you rather
do have like one AI agent that knows
literally everything and is omniscient
or would you have individualized agents
I'd rather just have an omniscient agent
that's just me but you can have Gems which uh are specialized Google Geminis
with ultra deep knowledge of specific
topics and yes well sort of no it's the
same knowledge as regular Gemini it's
not deeper than regular Gemini knowledge
I don't understand the I still don't
understand this that much yeah maybe
it's less prone to hallucinating if it
doesn't have a bunch of extra input
maybe maybe there's more well it has the
same input though cuz it's the same
model but it doesn't but it doesn't have
all the other knowledge oh of other
topics to hallucinate about I don't know
it's it's it's the same as the custom
gpts where you use chat GPT to create a
custom GPT so it's like it that input at
the beginning has to be correct and for
I don't it's I don't know the funny
thing about this was
when the lady on stage announced
it it was really awkward because she was
like now you don't use Gemini in the
same way that I use Gemini and we
understand that and that's why we're
introducing gems and there was like a 4
second silence and she like laughed
there had to have clearly been on the
teleprompter like hold for ause it was
just too deep into like you said this
soulless event it was not her fault it
was just like it was not a great naming
scheme everyone was out of it at this
point there were multiple camera cuts of
people yawning in the audience and I
just think she was waiting for an
Applause and it did not come she had
like an awkward chuckle I felt really
bad about that yeah yeah uh
okay some more stuff trip planning which
is the stupidest I I just h ah they're
building out the trip planning stuff for
some reason um and now you can like talk
to Gemini and tell it about the trip
that you want it'll make you a trip and
then you say oh but I'm allergic to
peanuts so don't bring me to a
restaurant that serves peanuts it goes oh
okay and it swaps it out for you for a
chocolate factory and so it's like
modular um I don't want to give up
control to an AI but maybe people do I
don't I can only assume they're so into
this because of like the way they're
making some sort of affiliate money on
all of these different travel things
like I don't yeah I don't want to just
like say plan me a trip here and then I
have I know nothing about the trip I'm
going to yeah and that's what it wants
to do there are some psychos who would
do that I guess there are you've seen
those Tik toks of like I spun around and
threw a dart at a globe and when it
landed on Madagascar I booked my flight
like that's but then you want to go and
have a good time off the cuff you don't
want it to plan your entire itinerary
somebody would do it it's funny cuz that video actually that honestly would be an awesome studio video I liked the old chat
GPT version of just like help me find
like I had to give you the place and
specifics of it and like you would help
throw a few suggestions like a hike at
Rocky Mountain National Park tell me the
top five that are over five miles that
felt good now we're at the point where
it's like can you walk me down this
hiking trail and tell me what I'm going
to see I don't like this part I like the
old not this new B GPT oh my God uh okay
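The "modular" trip-planning behavior described above is essentially a multi-turn chat where constraints arrive after the first draft and the conversation history carries the rest of the context. A minimal sketch, again assuming the google-generativeai Python SDK; the prompts and API key are placeholders:

```python
# Minimal sketch of the "modular" trip-planning flow: draft an itinerary,
# then revise it when a new constraint arrives, all in one chat session.
# Assumes the google-generativeai SDK; the prompts are placeholders.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder

model = genai.GenerativeModel("gemini-1.5-pro")
chat = model.start_chat()

draft = chat.send_message("Plan a three-day food-focused trip to Chicago.")
print(draft.text)

# A later constraint only needs to be stated once; the chat history carries
# the earlier itinerary, and the model swaps out the offending stops.
revised = chat.send_message(
    "Actually, I'm allergic to peanuts, so swap out any spot that uses them."
)
print(revised.text)
```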
I think we're losing Marquez so we got
to wrap this up so CL we're giving him
the full experience six more
features I might die can we use Gemini
to summarize this event if you're bored
of this episode use Gemini to summarize
this episode but then leave it running
in the background I can summarize the rest of this episode for you
ready okay stuff that Google already did
is now powered by Machine learning
that's it
well more different types of machine
learning basically sure cool
yeah now it works 15% worse and more
neurally neurally with the brain stuff look
like like like what we're going to get
to next Gemini on Android what are they
using it for scam detection going to
work 15% worse uh yeah Circle they're
doing like a circle to search hover sort
of
thing 15% worse yeah I want to say I'm
pretty sure they showed how you can buy
shoes with Gemini five different times
during this event they're very into live
shopping yeah yeah I think Google's
really trying to amp up the live
shopping affiliate stuff because I think
they see the writing on the wall for
like search not making money anymore in
the distant future so they're like how
can we just like create new revenue
streams well they're killing off their
their ad streams by summarizing all the
websites that they sell ads on so now
they're just going to make the affiliate
marketing through the right the the
e-commerce that just appears directly
under the search cuz Google AdSense is
like oh my God y they're the backbone of
the web and they're also throttling it
and making sure it doesn't work anymore
all right Gemini and Android cool
there's this Gemini hover window now
that can hover above apps so you can
like ask it questions about the app and
interact with the app and do that kind
of stuff um seems helpful sometimes
annoying other times if it was one of
those Android things where it's like
kind of pinned to the right corner and
you could press it and then appear when
you want it to be yep that'd be nice I'm
not really sure how they're integrating
it but you know what this is giving me
the vibe of I feel like Google doesn't
know what to use AI for and it reminds
me of like that too it reminds me of
like when Apple makes a new thing and
they're like developers will figure out
what to do with it yeah like Google
doesn't know what to show the AI doing
so it's just doing a bunch of things it
already does and they're just like
treating it as a new language almost
where they're like developers and you
guys will eventually figure it out and
make it useful can I have a Counterpoint
to that go I Google knows exactly what
their AI can do and just like with Bing
and like old chatbots that went off the
rails Google is way too scared of the
power that they have and what people
could do with it that they have to show
the most minuscule safe examples of
things and release it in a very safe way
because they know how much power they
have buying sneakers buying sneakers
yeah make us some money don't do
anything that would hurt the entire our
entire brand I think that's that's my
take on yeah but at this point everyone
already kind of sees them as losing to
open AI so like why not just do it they
know that they actually are winning and
it's only a matter of time and they know
they're not behind so much better
positioned than OpenAI and and this
event really kind of showcased that
because it showcased all the Google
workspace stuff that they could actually
use Gemini in um this isn't a joke
though we still do have five more things
to cover I'm going to go quick promise
we get through okay so in the Gemini and
Android floating window thing you could
be watching a YouTube video you can be
asking questions about the YouTube video
while you're watching it which is semi
cool however it hallucinates and gets
stuff wrong um someone gave an example
they talked about pickle ball again
because that's all the tech Bros talk
about and
uh the Google executive who is now into pickle ball because of course you
are um asked was like is it illegal to
put a spin on the pickle ball and it was
like yes you cannot do spin on the
pickle ball and some pickle ball person
on Twitter was like what what yes you
can um so that's fun uh it now has AI
powered scam detection which is kind of
creepy uh because it's constantly
listening to your phone call when you
pick it up and it's listening to what
the other thing on the end of the line
is saying because it might not be a
person might be a robot by then it's too
late and it will tell you it will so the
example they gave was like someone who
had answered a call and was like we just
need you to like click this link and
blah blah blah and then on the phone
there was a little popup that says like
this is probably a scam you should hang
up now I actually think that's awesome
great for old people except that it's
listening to you all the time which is
weird yeah which is weird and I imagine
they're going to probably do something
about like it's only on device and we
don't do anything with the information
and we get rid of it immediately after
mhm I think it would be super useful for
people who would fall for those scams
because so many of them it takes 30
minutes to an hour some of them have to
go out and buy gift cards and stuff like
that and at that point it could be like
this is a scam hang up that could save
people like literally life savings
totally I think that's awesome yeah um
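Nothing in the keynote spelled out how the scam detection works beyond it running on-device, so the sketch below is only a toy stand-in for the idea: score the rolling call transcript against known scam patterns and surface a warning past a threshold. The phrases, threshold, and function names are invented for illustration; a real system would presumably use an on-device model rather than keyword matching:

```python
# Toy sketch of on-device scam detection: score a rolling call transcript
# against known scam patterns and warn past a threshold. The phrases,
# threshold, and function names are invented for illustration only.
SCAM_PHRASES = [
    "gift card", "wire transfer", "click this link",
    "verify your account", "act immediately", "warranty",
]

def scam_score(transcript: str) -> float:
    """Return the fraction of known scam phrases present in the transcript."""
    text = transcript.lower()
    hits = sum(1 for phrase in SCAM_PHRASES if phrase in text)
    return hits / len(SCAM_PHRASES)

def maybe_warn(transcript: str, threshold: float = 0.25) -> None:
    """Print a warning when the running transcript looks scam-like."""
    if scam_score(transcript) >= threshold:
        print("This call looks like a scam. Consider hanging up.")

maybe_warn(
    "We just need you to click this link and buy a gift card to verify your account."
)
```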
they're calling all of these new Gemini
in Android features Gemini Nano with
multimodality because we need more
branding um and it's yeah starting on
Pixel later this year uh I imagine it'll
get moved out to more Android phones
later they're definitely trying to like
build a fortress against apple and be
like look at all our AI stuff that only
Android phones have uh Google Chrome is
also getting Gemini Nano I'm not really
sure what that means you can do basic
stuff in it that you can do with Gemini
Nano now if you use Chrome that was sort
of passive they're not that scared of
other browsers right now they still have
way too much market share so last year
they also introduced this thing called
SynthID because when they showed off
the image generation they basically
talked about this open standard that
they want to create where it bakes in
this hidden Watermark within an AI
generated image because we all used to
be worried about DALL-E stuff and now
we're worried about other AI things now
but now they're expanding SynthID to text
somehow um no idea how they're going to
do that but that's interesting and also
video uh so I don't think it's going to
be an interesting standard but it's just
their version of figuring out whether or
not something was AI generated
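Google didn't explain how SynthID extends to text, but published text-watermarking schemes generally work by nudging token choice toward a secret, keyed "green list" during generation and then detecting that statistical bias afterward. A toy sketch of that general idea, not Google's actual algorithm, with made-up key and helper names:

```python
# Toy sketch of a keyed text watermark: bias word choice toward a secret
# "green list" during generation, then detect by measuring that bias.
# This mirrors published green-list watermark schemes, not SynthID itself.
import hashlib

def is_green(word: str, key: str = "secret-key") -> bool:
    """Deterministically assign roughly half of all words to the keyed green list."""
    digest = hashlib.sha256((key + word.lower()).encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str, key: str = "secret-key") -> float:
    """Fraction of words on the green list; watermarked text drifts above ~0.5."""
    words = text.split()
    if not words:
        return 0.0
    return sum(is_green(w, key) for w in words) / len(words)

# Unwatermarked text hovers near 0.5; generation that prefers green words
# (when quality allows) pushes this statistic noticeably higher.
print(green_fraction("the quick brown fox jumps over the lazy dog"))
```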
they made a joke at the end of
IO where they were like we're going to
save you the trouble with how many times
we said AI okay we said it 120 times and
they were like ha haha funny and then
right as Sundar was getting off the
stage he said it one more time and the
ticker went up to 121 but then they said
it again so it was 122 I'm just saying
also I think that sure they can like
make fun of that and say all that stuff
but they said Gemini I think I think the
final count on Gemini was 160 times um
and my theory is that they're just
replacing AI with Gemini because now
they don't need to convince the stock
market that they're an AI company
anymore now they just want to bake
Gemini into your brain as hard as
physically possible so they just said it
over and over and over again um which
was funny it was funny yeah yeah so I
think um just to wrap this all up
because I'm sure everyone listening or
watching is begging for us to be done
with this I'm sorry um um Matt Anini from uh the This Is and Austin Evans channel
I think had a great tweet to kind of
Explain why both of these events felt
not great um he said the problem with
overhyping AI is that all the stuff
we're seeing today doesn't feel new this
stuff is cool it's impressive but it's
barely catching up to what we were told
AI was potentially capable of two years
ago so it feels almost like no
advancements have been made I thought
that was a pretty good summary of
everything like we've watched so much in
the past of like this is the stuff we're
going to do look how amazing it is and
now we're getting the stuff it actually
is doing and it hasn't made it to that
point I also think they just throw
around technical jargon with like model
sizes and token like parameterization
and that yeah they had a whole thing they
talked about water cooling they're like
we have state-of-the-art water cooling
in our new TPU Center and it was like
who cares like who are you trying to
tell this to yeah big water big one yeah
um so yeah we can we can we can start to
wrap it up so sorry so Marquez are you
going to go back and watch the events I
don't think I'm going to watch
it I do think I got the idea that like
Google is trying to make sure everyone
knows that they've been an AI company
for super long and then on the other
side of the spectrum is Apple where
they're like okay we don't need to say
this over and over and over and over
again but we do need people to
understand that we have at least been
doing something AI related and they're
like these are opposite ends of the
spectrum of like how in your face they
want to be about that and both are fine
both are fine I don't really have a a
horse in the race I'm just hoping they
just keep making their stuff better well
that is a great place to end this
podcast which means it's a great place
to do our trivia
answers how do you have all of them I
don't know Waveform trivia but before we get started quick
update on the score one thing at a time
in second place we have a tie with eight
points between Andrew who's
currently carrying the one and what does
that mean David uh they both have eight
points wait Marquez got something right
last week yeah I didn't watch it oh
right we didn't update you Marquez
technically was closer than you last
week on the uh how many times did Apple
say AI so he actually
stole your points he actually got the
exact number yeah so you it did you get
it from Quinn's tweet yeah he did I'm
pretty
sure right thanks Quinn um so uh yes
unfortunately Andrew your lead has been
temporarily taken by Marquez brownley um
uh who has nine points is this the
closest trivia we've had this deep into
a trivia I think so yeah yeah it's wow
it's usually my fault that it's not
close but but question one Google IO
happened yesterday uh a few days from
the time this is going to get published
but yesterday from recording and we all
know IO stands for input output classic
uh Tech thing we also know it sometimes
stands for Indian Ocean if you watched
any of our uh domain name specials yeah
um but according to Google's Blog the
keyword IO actually has two more
alternate meanings to
Google what are they and I'm giving one
point
for correct answer two possible
points I have zero
idea I feel like one of them you would
never get the other one you might get I
think both of them you'll both
definitely
get not if I don't write it think oh
interesting interesting
strategy all right Andrew what you put
audio listeners something happened uh uh
who should go next D David David all
right I wrote two answers information
operation oh clearly and I also wrote
I'm out but that is hilarious thanks
Marquez what do you got I wrote IO being
like the one and zero of
binary oh it's so you're so close like
so close yeah but his second answer is
the best and Google IO for In-N-Out because every time we go we got to
go I almost want to give him a point for
that because that's better than whatever
they could have thought of no um so uh
the first thing they listed is the
acronym innovation in the
open sure yeah right um but according
according according to the keyword what
first prompted them to use IO in
the original Google IO was they wanted
it to reference the number googol which begins with a one that looks like an I
and is followed by a zero it's
technically followed by 99 more zeros
after that zero but that would make for
a really long event
title Fair funny so now you guys know
that okay cool okay next question what
was the name of the AI in the movie Her
which we recently learned Andrew has
not seen so this is going to go
lovely vr10 Creator be me 2013 midnight
Santa Cruz California you're a hippie
you're at a movie theater by yourself
sobbing at 1:30 in the morning when the
movie gets out actually it was probably
closer to 2:15 what is happening you sit
on a bus for 45 minutes on the way back
to campus and your entire life has
changed and now you're sitting in Carney
New Jersey and you could never see Joaquin Phoenix in the same light
Marquez what'd you write
Siri nope you wrote Alexa I wrote Paula
Paula yeah no uh I wrote Samantha
correct let's go
Samantha who would have guessed the
person who saw the movie knew the answer
to that I actually was thinking really
hard I would have thought that you guys
would have seen the movie I don't know I
thought it was a pretty
stovie dude you don't know me very well Sam Altman has seen this movie there was
a funny uh article that was written
about the OpenAI event that said it was called like I am once again
pleading that our AI overlords watch the
rest of the movie yeah be and it was
about how they were like referencing her
at the open AI event but the ending of
her is supposed to be
like no spoilers anyway it was funny it
was haha we're wrapping this up because
as little do you know we're recording
another episode directly after this I'm
hot and sweaty so yeah so okay yeah
thanks for watching and subscribing of
course and for sticking with us and for
learning about Google IO alongside us
and we'll catch you guys next week did
all the people who subscribed last week
make you feel
better yes it worked no I feel I feel
pretty good it worked yeah yeah the IO
in Google IO also stands for like and
[Music]
subscribe don't ask Waveform was
produced by Adam Molina and Ellis Rovin
we're partnered with the Vox Media Podcast
Network and our music was by V
Sil I don't think there's no you might
be the only person in the world who and
who like doesn't even know the plot of
this movie though like you know what
this movie is about right neither of you
oh my God oh my God you guys this is
like you are the two perfect people to
watch this movie with no context I want
to stay as the perfect person to watch
this with no context my all of the AI
things we've been talking about would
make so much can you picture can you
picture Andrew coming in after sitting
through yesterday's Google IO and then
watching her for the first time and
being like I get it bro her formed 90%
of my personality