I Used AI To Expand Myself 100 TIMES On Photoshop. Here's What Happened.

Joe Scott
15 Apr 2024 · 22:36

Summary

TL;DR: The video explores the generative AI tools added to Photoshop in the 2024 update, specifically Generative Fill and Generative Expand. The creator shares his experience using these tools for thumbnail design, runs an experiment applying generative expansion repeatedly, and reflects on the potential and ethical considerations of AI in creativity and productivity.

Takeaways

  • 🎹 The 2024 update to Photoshop introduced generative AI tools like Generative Fill and Generative Expand, which assist in image editing tasks.
  • đŸ–Œïž Generative AI in Photoshop can predict and fill in missing parts of an image based on the surrounding context, saving time for designers.
  • 📈 The use of AI in tools like Photoshop is a game changer, significantly improving productivity and efficiency in image editing.
  • 👹‍🎹 The speaker, while not a Photoshop expert, has used it extensively for creating thumbnails for over 800 videos on his channel.
  • 📚 The speaker discusses the broader impact of AI tools on productivity and how they integrate into existing processes, potentially leading to an AI-driven singularity in various industries.
  • 🧠 The concept of 'activation energy' from Mel Robbins' 5-Second Rule is introduced as a productivity hack.
  • 📱 The Imprint app is mentioned as a tool for learning and applying key concepts from books in a quick and interactive manner.
  • 🚀 The speaker experimented with generative AI to create an 'infinite zoom' effect by repeatedly expanding an image, resulting in a visually interesting outcome.
  • 🎄 The process of creating a seamless 'infinite zoom' video is challenging and requires advanced editing skills, which the speaker lacks in Adobe After Effects.
  • đŸ€– AI-generated images are becoming prevalent, raising ethical concerns about the use of copyrighted work without artists' permission.
  • 🌐 The speaker suggests that AI tools are not just a one-time revolution but are gradually integrating into our lives and work processes.

Q & A

  • What new feature did the 2024 update to Photoshop introduce?

    -The 2024 update to Photoshop introduced several generative AI tools, including Generative Fill and Generative Expand, which allow users to fill in selected areas of an image with content that the AI predicts based on the surrounding visual context.

  • How does Generative Fill work in Photoshop?

    -Generative Fill works by allowing the user to select an area of an image that needs to be filled in. The tool then automatically fills in the selected area with content that matches the surrounding context, making it a powerful tool for quickly completing tasks like creating thumbnails or filling in gaps in an image.

  • What is the main purpose of the Generative Expand tool?

    -The Generative Expand tool is designed to help users expand the canvas of an image beyond the subject, filling in the new space with content that complements the existing image. This is particularly useful for creating thumbnails or expanding images to fit specific aspect ratios, like 16:9 for video thumbnails.

  • What challenges did the speaker face when trying to create an infinite zoom effect using Photoshop?

    -The speaker faced challenges with timing and scaling when trying to create an infinite zoom effect. The velocity of scaling changed depending on the percentage of scale, making it difficult to synchronize the expansions across multiple images. Additionally, the expansions created warping effects on the image edges, which made perfect alignment difficult.

  • How did the speaker address the challenges of creating an infinite zoom effect?

    -The speaker initially attempted to solve the problem within Adobe Premiere and later Adobe After Effects, but faced difficulties due to the complexity of the task and personal skill level. Eventually, the speaker enlisted the help of a friend named Mark, who was proficient in After Effects, to manually adjust each frame and timing to create the desired effect.

  • What ethical concerns are raised by the use of generative AI in image creation?

    -The use of generative AI in image creation raises concerns about copyright infringement, as these tools are often trained on images from the internet, which may include copyrighted work used without the artist's permission. Additionally, there are concerns about the potential loss of revenue for artists and photographers whose work is used in the training data, as well as the impact on industries like stock image libraries.

  • How does Adobe's Firefly differ from other generative AI models like DALL·E and Midjourney?

    -Adobe's Firefly is unique in that it is trained only on images from the Adobe stock catalog, which means Adobe already has licensing rights to all the images used in training. This approach is considered more ethical because it ensures that artists receive some compensation for their work being used in the AI training data.

  • What is the significance of the speaker's experiment with generative AI in the context of AI integration into everyday tools?

    -The speaker's experiment demonstrates how AI tools are not just about creating big, disruptive changes but are also about integrating into existing programs and processes to increase productivity and efficiency. It shows how AI can become a part of our everyday lives, enhancing our capabilities and saving time in tasks like image editing.

  • What book did the speaker buy to improve productivity, but hasn't read yet?

    -The speaker bought a book that is supposed to teach how to build the habit of reading a number of pages every day, but admitted to not having read it yet because he never developed the reading habit in the first place.

  • How does the Imprint app help users learn and apply concepts from books?

    -The Imprint app distills the essential points of books from popular thinkers and authors into interactive courses that can be completed in less than 5 minutes. It presents the information visually with graphics and reinforces learning through questions, making the concepts actionable and easy to apply to users' lives.

  • What is the 5-Second Rule taught by Mel Robbins, as mentioned in the video?

    -The 5-Second Rule is a productivity technique where, whenever you find yourself procrastinating or hesitating on something, you count backward from five and then immediately do the thing you're procrastinating on. It provides activation energy and helps overcome the psychological barriers to taking action.
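The scaling-velocity problem described in the Q&A above (a clip growing from 30% to 50% appears to zoom much faster than one going from 200% to 250%) comes down to linear versus geometric interpolation: perceived zoom speed tracks the *ratio* of growth per frame, not the absolute increment. A minimal Python sketch, with illustrative numbers that are not taken from the video:

```python
# Why linear scale keyframes feel uneven, and how a geometric
# (exponential) schedule keeps the perceived zoom speed constant.

def linear_scale(t, start=100.0, end=300.0):
    """Scale grows by a constant amount per unit of time (t in [0, 1])."""
    return start + (end - start) * t

def exponential_scale(t, start=100.0, end=300.0):
    """Scale grows by a constant *ratio* per unit of time, so the
    apparent zoom speed (growth relative to current scale) is the
    same at every frame."""
    return start * (end / start) ** t

def relative_growth(f, t, dt=0.01):
    """Perceived zoom speed ~ relative growth between adjacent frames."""
    return (f(t + dt) - f(t)) / f(t)

early_lin = relative_growth(linear_scale, 0.0)    # fast at the start
late_lin = relative_growth(linear_scale, 0.99)    # slow at the end
early_exp = relative_growth(exponential_scale, 0.0)
late_exp = relative_growth(exponential_scale, 0.99)

print(f"linear:      early {early_lin:.4f}, late {late_lin:.4f}")   # uneven
print(f"exponential: early {early_exp:.4f}, late {late_exp:.4f}")   # ~equal
```

This is why a simple linear scale animation in Premiere stutters at the seams between clips, and why the speaker resorted to easing curves and eventually Z-space compositing.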

Outlines

00:00

🎹 Introducing AI in Photoshop

This paragraph introduces the integration of generative AI tools in Photoshop, specifically highlighting the 2024 update. It discusses the new features such as Generative Fill and Generative Expand, which allow users to easily manipulate images by adding or altering elements based on the surrounding context. The speaker shares their experience with Photoshop and how they've utilized these AI tools, particularly for creating thumbnails for their videos. They also express their curiosity about the potential of these tools when used repeatedly and in various combinations.

05:03

🚀 Exploring AI's Creative Potential

The speaker delves into an experimental project where they used the Generative Expand tool repeatedly to watch the image's progression. They describe the surprising results, such as the image transforming into a melting ceiling and a strange landscape. The paragraph also touches on the concept of AI as a tool, emphasizing the time savings and creative possibilities it adds to traditional software. The speaker reflects on the evolution of AI image generation, mentioning notable programs like DALL·E, Midjourney, and Firefly, and the ethical considerations surrounding AI-generated content.

10:04

🌐 Navigating the AI Art Landscape

This section discusses the impact of generative AI on the art and stock image industry. It highlights the concerns of artists and photographers whose work is used in AI training without consent, potentially affecting their income from traditional sources. Adobe's introduction of Firefly, a generative AI model trained only on Adobe stock images, is presented as a more ethical alternative. The speaker shares their personal experience with Adobe Premiere and After Effects, illustrating the challenges of integrating AI-generated images into video editing processes.

15:34

📚 Leveraging AI for Personal Growth

The speaker transitions from discussing AI in image editing to its application in personal development. They share their experience with an app called Imprint, which condenses book insights into interactive, quick-to-digest courses. The speaker emphasizes the app's effectiveness in learning and applying concepts from books like 'Atomic Habits' and Mel Robbins' 5-Second Rule. They also mention the variety of topics covered by the app, from productivity to social issues, and encourage viewers to try it for its potential to positively impact their lives.

20:35

🙌 Reflecting on AI's Impact and Future

In the concluding paragraph, the speaker reflects on the overall impact of AI tools on productivity and creativity. They argue that the technological singularity is not a future event but a gradual integration of AI into our daily lives and tools. The speaker acknowledges the potential dangers of AI, such as the ease of manipulating images, but emphasizes the benefits in productivity enhancement. They share their personal productivity struggles and how the Imprint app offers a solution. The speaker invites viewer engagement and concludes with a reminder of their regular video schedule.

Keywords

💡Generative AI

Generative AI refers to artificial intelligence systems that can create new content, such as images, text, or music. In the context of the video, it is used to describe the new features in the 2024 update of Photoshop, which allow users to generate content within images. For example, 'generative expand' fills in the background of an image based on the surrounding content.

💡Photoshop

Photoshop is a widely used image editing software developed by Adobe Inc. The video discusses the integration of generative AI tools into Photoshop, which allows for tasks like expanding images or generating new content within an image. It is central to the video's theme as it is the primary tool used for the experiments and demonstrations.

💡Content-Aware Fill

Content-Aware Fill is a feature in Photoshop that automatically fills in selected areas of an image with content that matches the surrounding pixels. It is mentioned in the script as a precursor to the more advanced generative AI tools, which have improved upon this technology.

💡Adobe Stock

Adobe Stock is a library of licensed images that can be used for creative projects. The video discusses the use of an image from Adobe Stock in creating a 16:9 thumbnail for a video, highlighting the integration of Adobe's services and the ethical considerations of using AI trained on licensed content.

💡Firefly

Firefly is Adobe's generative AI model, which is integrated into Photoshop as of the 2024 update. It is significant because it is trained using images from the Adobe Stock catalog, ensuring that the artists whose work is used in the training data are compensated. This is contrasted with other AI models that have faced criticism for using copyrighted work without permission.

💡Thumbnails

Thumbnails are small, representative images used to preview videos or other content. In the video, the creation of thumbnails for YouTube videos is a primary use case for the Photoshop features discussed. The script mentions the challenges of creating thumbnails from vertical images and how the new AI tools simplify this process.

💡Infinite Zoom

Infinite Zoom is an experimental technique explored in the video where an image is repeatedly expanded using generative AI, creating a visually intriguing effect of endlessly zooming out. The video demonstrates this using Photoshop's Generative Expand tool and later compares it with an app called VideoLeap that achieves similar results more quickly.

💡AI Ethics

AI Ethics refers to the moral principles and guidelines that should govern the development and use of artificial intelligence. The video touches on this topic by discussing the use of copyrighted images in training AI models and the potential impact on artists and photographers. It raises questions about the sustainability and fairness of AI-generated content.

💡Adobe Premiere

Adobe Premiere is a video editing software used for editing and producing videos. In the script, it is mentioned as the initial tool the creator attempted to use for the 'Infinite Zoom' effect before realizing the need for more advanced compositing in Adobe After Effects.

💡Adobe After Effects

Adobe After Effects is a digital visual effects, motion graphics, and compositing application used for video post-production. The video discusses the use of After Effects to achieve the desired 'Infinite Zoom' effect after encountering limitations in Adobe Premiere, highlighting the complexity of the task.

💡Imprint App

Imprint App is a learning application that distills key insights from popular books into interactive, short courses. It is mentioned in the video as a tool that helps the creator work around his undeveloped reading habit by providing a more digestible and interactive way to learn from books, which ties into the theme of using technology to improve productivity and learning.

Highlights

Generative AI tools are now integrated into Photoshop, enhancing its capabilities for users.

The 2024 update to Photoshop introduced features like generative fill and generative expand, which use AI to predict and fill in image areas.

Generative AI in Photoshop can be used to create thumbnails and expand images to desired aspect ratios, such as 16:9.

Content-aware fill was a previous tool that attempted to fill in image areas, but generative AI tools have improved upon this functionality.

The speaker has experience in using Photoshop for thumbnail creation and advertising, giving context to their familiarity with the software.

An experiment was conducted by the speaker to see what would happen if generative expand was applied repeatedly to an image.

The generative AI tools in Photoshop are based on Adobe's Firefly, their proprietary AI model trained on Adobe Stock images.

Firefly was introduced to address ethical concerns about AI training on copyrighted images without artist consent.

The speaker's experiment with generative expand resulted in an interesting and unpredictable visual journey.

The concept of an AI 'singularity' is discussed, suggesting that AI integration into everyday tools is a gradual process rather than a sudden event.

The speaker's attempt to create a smooth infinite zoom effect using Adobe Premiere and After Effects proved challenging due to velocity issues with scaling.

An app called VideoLeap was mentioned, which can create an infinite zoom effect more quickly and easily than manual editing.

The potential dangers of AI tools, such as the ease of manipulating images and the impact on industries like stock photography, are briefly discussed.

AI tools are designed to increase productivity, and the speaker shares their positive experience with using these tools in Photoshop.

The speaker promotes an app called Imprint, which condenses book insights into interactive, short courses for learning on-the-go.

Imprint uses visual aids and a Q&A format to make learning engaging and memorable, with a focus on actionable knowledge.

The speaker's overall message is that AI tools are becoming increasingly integrated into our lives, offering both benefits and challenges.
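The expand-and-downscale loop from the experiment highlighted above can be sketched as follows. This is a geometry-only illustration with hypothetical function names: the actual pixel generation is done by Photoshop's Generative Expand, while the roughly 300% expansion factor and the resize back down to 4K come from the video.

```python
# Track how resolution and cumulative zoom evolve when you repeatedly
# expand the canvas ~300% and then resize back down to 4K so the file
# size stays bounded. Only the geometry is modeled here.

MAX_WIDTH = 3840  # 4K UHD width the creator resized back down to

def expand_step(width, height, factor=3.0):
    """One Generative Expand: the canvas grows by `factor` in each
    dimension (the AI fills in the new border region)."""
    return int(width * factor), int(height * factor)

def downscale_to_4k(width, height):
    """Resize so the width is back at 4K, preserving aspect ratio."""
    if width <= MAX_WIDTH:
        return width, height
    scale = MAX_WIDTH / width
    return MAX_WIDTH, int(height * scale)

w, h = 3840, 2160
total_zoom = 1.0   # how far we have "zoomed out" from the first frame
for step in range(5):
    w, h = expand_step(w, h)
    total_zoom *= 3.0
    w, h = downscale_to_4k(w, h)

print(w, h, total_zoom)  # resolution stays at 4K; zoom grows geometrically
```

After 100 expansions the cumulative zoom factor would be 3^100, on the order of 10^47, while the stored resolution never exceeds 4K, which is why the repeated downscale was necessary.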

Transcripts

00:00

did you know that you can use generative

00:01

AI directly in Photoshop if you use

00:03

Photoshop on a regular basis this is not

00:05

news to you if you don't use Photoshop

00:07

on a regular basis you might not know

00:09

about this but also um you might not

00:12

care because you don't use Photoshop

00:15

either way stick around because this is

00:16

not just a tutorial there's a there's a

00:17

bigger point to this but then the 2024

00:19

update to photoshop they included

00:21

several uh generative AI tools like

00:23

generative Phill where if you select a

00:25

part of a photo and then tell it what

00:26

you want it to put there it just puts it

00:29

there

00:30

that and generative expand that lets you

00:32

expand out from your subject and it just

00:34

fills in what's around them it's not

00:36

perfect and sometimes it makes weird

00:38

choices but it kind of works like chaty

00:40

PT by just sort of like how chaty PT

00:42

just predicts what the next word should

00:43

be this just sort of predicts what

00:45

should be there based off of what it

00:47

sees around the edges of the images and

00:49

I use this a lot in making my thumbnails

00:51

and stuff so I got curious like what

00:53

would happen if you did that generative

00:55

expand thing over and over and over

00:56

again like like like what if you did it

00:58

a 100 times how many times would it take

01:00

before things started to get weird the

01:03

answer not very

01:06

many oh

01:10

man so I should start by saying that I

01:12

am not a Photoshop expert by any means

01:14

but I have used it a lot over the years

01:17

like I worked as a copywriter in

01:18

advertising uh for about 15 years uh and

01:21

obviously I was not a designer I was a

01:23

copywriter but I worked with designers

01:25

and there was a lot of mocking up stuff

01:27

in Photoshop that way I've also made

01:29

thumb nails for over 800 videos on this

01:32

channel yes I have done over 800 videos

01:34

on this channel and by the way each

01:35

video I do a few different thumbnails so

01:38

it's well into the thousands at this

01:40

point I would say on a scale of 1 to 10

01:42

I'm maybe a six maybe a seven in in

01:45

Photoshop skills um I'm I'm good enough

01:48

to mock up ideas but then if I want it

01:49

to really look good I send it to an

01:51

actual designer but what I've easily

01:53

spent most of my time on Photoshop doing

01:55

is making thumbnails for these videos

01:57

and all videos for thumbnails are 16 by9

02:00

kind of like what you're looking at

02:01

right here and vertical photos are the

02:05

bane of my existence so here's what I

02:07

mean by this let's just say you find the

02:09

perfect image but it's not 16 by9 so the

02:12

first thing you got to do is create a

02:13

little template it's a 16x9 template and

02:16

then you have to pull this out drop it

02:18

in this is what I had to do about 5

02:20

years ago so you had two options here

02:22

one is you scale it way up so that it

02:25

fills the entire width and then you have

02:28

to just sort of frame it ever is best

02:30

for that frame but you don't get to see

02:32

the entire thing this is especially bad

02:35

when you're doing you know people with

02:36

their faces and stuff like that or you

02:38

can scale it to about you know the right

02:41

size but then you got to fill in these

02:43

things on the side now again like just

02:45

five years ago the best option that you

02:47

had here would be to do something like

02:49

select the stamp tool and that's where

02:51

you uh select something right there and

02:54

then you paint over to the side and it

02:56

just basically covers what was there in

02:58

that little stamp that you did

03:00

this is very beginner uh Photoshop stuff

03:03

as you can see you can only do a little

03:04

bit at a time and it's it's uh it's very

03:09

timec consuming now just a few years ago

03:11

they introduced something called content

03:12

aware fill and what you would do there

03:15

is you would select the area that you

03:16

need to fill in and then you hit content

03:19

aware fill and it would basically fill

03:22

it in with whatever is Clos it you can

03:24

see where it's green right there it

03:25

would kind of show you what it's pulling

03:26

from and uh it's to say that it it

03:30

wasn't perfect but it would fill it in

03:32

with something for some reason this this

03:35

stable background became bricks in its

03:37

estimation but now with the 2024 model

03:39

they have this generative expand tool

03:42

and it's so freaking simple you hit the

03:44

crop tool then you come down here into

03:46

this little bar that they've created

03:48

select 16 by9 so it creates the size

03:50

that you want and then just expand it

03:52

out to the size that you want it to to

03:54

be you create the space that it needs to

03:57

fill in and then uh you just select

04:01

generate and you got a little task tool

04:04

up here it takes a second for it to to

04:06

load up but look at what happens it's

04:09

done it's almost I mean unless you're

04:11

really looking for it you cannot hardly

04:12

even

04:13

tell that there was anything change to

04:16

it it does give you a few different

04:17

options to choose from here uh but

04:20

usually usually one of them works

04:23

perfectly so here's another example of

04:25

how I used one of these I did a video a

04:26

while back on U how they're bringing

04:28

back the woolly mammoth de extincting

04:30

animals that kind of thing well I found

04:32

this photo uh in the Adobe stock catalog

04:35

so again I need to make this 16 by9 so

04:37

you go to crop it's 16 by9 in the

04:41

ratio make it a little bit bigger so

04:44

it's the right

04:48

size generate and now you've got this

04:51

background and it looks perfect doesn't

04:53

it so I wanted to show that there's

04:56

human beings in here with the mammoth so

04:59

what I did was I select the Marquee tool

05:03

and just kind of I want a person

05:04

standing right here on this rock right

05:06

here so I just kind of did a little

05:08

thing kind of selected a little thing

05:09

like that create a space now you hit

05:12

generative fill and you type in uh a

05:15

person looking up at the

05:19

mammoth

05:22

in

05:24

cold weather

05:28

clothing and hit gener

05:30

and there it is you want a different

05:32

person you also got that person who's

05:34

kind of growing out of the rock which is

05:36

weird that might be my favorite look at

05:40

that it's magic so like when we talk

05:42

about AI being a tool it's it's kind of

05:45

an abstract concept until you get to

05:47

things like this and and it's just these

05:49

very incredibly time-saving features

05:51

that AI has brought in to tools that

05:53

we've been using for a long time that

05:55

are the real game changer to me but

05:57

seeing this at work got me thinking like

05:59

what if I took a picture of myself and

06:01

did a generative expand on it and then I

06:03

expanded out from that and then out from

06:05

that and out from that what if I did

06:07

that a 100 times and then maybe I could

06:09

Stitch all that together like an

06:10

infinite Zoom thing so I decided to give

06:12

it a try so I took a shot of myself

06:15

which you can see here and I expanded it

06:17

and I decided to use the grid that the

06:19

crop tool creates and made it so the

06:20

size of the photo was the same as one of

06:22

the cells created by the grid that way

06:24

all the expansions would be the same I

06:26

think it came out to about 300% did the

06:28

generative expand and and got this oh

06:32

man it looks like the ceiling is melting

06:35

look how long my arm is I realized

06:38

pretty quickly that if I just kept

06:39

expanding the image over and over that

06:40

the size of the image was going to

06:42

become unsustainably large so I reduced

06:44

the image size back down to 4K

06:46

resolution and then expanded it again oh

06:49

this is interesting this is actually a

06:51

lot more interesting than what I got

06:52

yesterday

06:54

[Music]

07:03

you see now I'm in like a weird

07:05

cave I should probably mention that I

07:08

did all this on a live stream that I

07:09

made available for members so we got to

07:11

hang out and chat while I did this it

07:13

took a few hours hit the join button

07:15

down below if you want to be invited my

07:17

next weird idea

07:20

anyway oh we're coming out of the cave

07:23

now wasn't expecting that the expansion

07:26

seemed to put me in a cave for some

07:28

reason or maybe something like like a

07:29

bomb shelter built into a cave there

07:32

maybe a story behind this but within

07:33

maybe about five expansions it took me

07:35

out of the cave and then we were just

07:37

off across an endless

07:47

landscape so part of this little

07:48

experiment was that I wanted to just let

07:50

it be random um remember it gives you

07:52

three choices and obviously whichever

07:54

choice you choose will affect the Next

07:56

Generation and so on and so on so if I

07:57

wanted to direct where it went I could

07:59

do so by picking a different choice or

08:01

for that matter I could have just

08:03

entered a text prompt to change it

08:04

however I wanted but I chose to just let

08:06

Randomness do its

08:07

thing for a while

08:34

yeah after about an hour and a half of

08:35

this endless landscape that wasn't

08:37

changing much it just all kind of

08:38

started to feel kind of syy so I decided

08:40

to nudge it a little bit to see if I can

08:42

make it go in a different direction and

08:44

go in a different direction it

08:46

did o

08:50

oh so how does this work without going

08:53

into a whole Deep dive on generative Ai

08:55

and how it works because that's totally

08:56

out of scope for this video and it's

08:58

been covered a million times before the

08:59

first of these programs to really make

09:01

some ways was Dolly from open AI which

09:04

was only just revealed in January of

09:05

2021 yeah believe it or not this is only

09:07

about 3 years old but doly was a

09:09

revelation because it combined natural

09:11

language processing with image

09:12

generation making it possible to just

09:14

enter a text prompt and get a fully

09:16

realized image out of it it wasn't

09:18

perfect uh especially with hands it was

09:20

notoriously bad with hands at first but

09:22

it was a proof of concept um also the

09:25

original DOI wasn't really available to

09:27

the public it was more of a research

09:28

project DOI to made significant

09:30

improvements to the original and it was

09:31

available to the public in April of 2024

09:34

followed soon after by mid journey in

09:36

July of that year and stable defusion

09:37

one month later in August yeah I think

09:39

someday we're going to look back on 2022

09:41

as the year when this all started

09:42

happening one of the big problems with

09:44

these image generators was that they

09:46

were basically trained on all the images

09:48

on the internet meaning artists

09:50

photographers illustrators they were all

09:52

having their copyrighted work being used

09:54

without their permission which is still

09:56

going on but to add insult to injury

09:58

these same artists and photographers had

10:00

to face the possibility of losing work

10:02

because people can now just make

10:04

whatever images they want instead of

10:06

paying an artist to do it for them but

10:08

even more so they stood to lose Revenue

10:10

they normally made by licensing their

10:11

work to stock image libraries stock

10:13

image sites have been a whole industry

10:14

for a while now for decades actually

10:17

they were used by designers and agencies

10:19

to mock up campaigns and websites and

10:21

they were really one of the first

10:22

Industries in the shopping block once

10:24

the gener to AI art thing became a thing

10:26

and one of the largest stock libraries

10:28

in the world was was owned by Adobe

10:30

itself they stood to lose Millions maybe

10:33

billions of dollars as people turn to

10:34

these generative AI models so in March

10:36

of last year Adobe introduced Firefly so

10:39

Firefly is adobe's generative AI model

10:41

it works just like Dolly and mid Journey

10:43

the difference is that Firefly is only

10:45

trained using the images in Adobe stock

10:47

catalog which means Adobe already had

10:49

the licensing rights to all these images

10:51

so it feels a bit more ethical because

10:53

at least the artists got paid something

10:55

for their work being used in training

10:56

data now I tried Firefly when came out

10:59

honestly I was kind of underwhelmed by

11:02

it compared to say mid Journey who I

11:03

still think has the best image quality

11:05

and apparently I'm not alone in that

11:07

opinion because Firefly has never really

11:09

taken off as a standalone image

11:10

generator but in 2024 Adobe Incorporated

11:14

Firefly into Photoshop so yeah these

11:16

generative expand these generative fill

11:18

tools that's Firefly and as a tool

11:20

inside of Photoshop it's been a game

11:22

changer and it's made possible all the

11:23

things that I'm talking about in this

11:27

video okay so you know how I said I was

11:30

maybe a six or a seven on Photoshop a

11:31

second ago I would give myself the same

11:34

rating of proficiency when it comes to

11:35

editing I've done tons of editing but

11:38

yeah I would still rate myself as only

11:39

maybe a six or a seven but when it comes

11:41

to graphics and like effects and stuff

11:43

like that um I maybe a

11:46

three as you're about to find out so

11:49

here's how I was hoping this would work

11:51

I was hoping that I could just use the

11:52

scale function in Adobe Premiere and

11:56

time it up just right have it kind of

11:57

just fade in from one picture to another

12:00

and time the the expansions just right

12:03

so that all I would have to do is apply

12:05

that to one clip and then I could just

12:07

copy that property and apply to all the

12:09

other Clips line them up Bob's your

12:11

uncle um turns out Bob was not my uncle

12:14

Because as you can see, this first one looks fine, but I just couldn't quite get the timing right on the second one. I played with it and played with it, trying to make it work, and it turns out the problem is that the velocity of the scaling, the speed of it, changes depending on what percentage of scale the image is at. An image that's expanding from, say, 30% to 50% is going to move a lot faster than one going from 200% to 250%, and that made timing up those clips really difficult. I even came over here in the Effects Controls panel and applied a curve to the scaling, trying to make it work that way, and as you can see, I played with this for a couple of hours and never could quite get it right. Did I mention I'm about a three when it comes to graphics and effects?

What I realized I needed was to be able to move the image in z-space. If you know anything about editing and compositing, you've got the x and the y in the parameters you see, but there's also a depth, a z-space, and I figured if I could line things up in z-space, I wouldn't be dealing with those weird velocity issues that come with scaling. The problem is you can't really do that in Premiere; you need something like Adobe After Effects. And if I'm maybe a six or a seven editor in Premiere, I am like a -2 editor in After Effects. I am not good at it. Luckily, I have a good friend named Mark who is, so after I finally gave up on this, I handed it over to him and he did his thing in After Effects.
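As it happens, the scaling-speed problem has a workaround that doesn't strictly require z-space: interpolate the scale exponentially rather than linearly, so every frame multiplies the scale by the same factor and the zoom reads as a constant speed. Here's a minimal sketch of the difference (the frame count and scale percentages are made up for illustration):

```python
# Exponential vs. linear scale keyframes for a zoom that should look
# uniform. With linear interpolation, the per-frame growth ratio
# shrinks as the scale rises, which reads on screen as a slowdown;
# with exponential (geometric) interpolation, the ratio stays constant.

def linear_scale(start, end, frames):
    step = (end - start) / (frames - 1)
    return [start + step * i for i in range(frames)]

def exponential_scale(start, end, frames):
    ratio = (end / start) ** (1 / (frames - 1))
    return [start * ratio ** i for i in range(frames)]

lin = linear_scale(30, 250, 12)
exp = exponential_scale(30, 250, 12)

# Per-frame growth ratios: linear drifts, exponential is constant.
lin_ratios = [b / a for a, b in zip(lin, lin[1:])]
exp_ratios = [b / a for a, b in zip(exp, exp[1:])]
print(lin_ratios[0], lin_ratios[-1])   # noticeably different
print(exp_ratios[0], exp_ratios[-1])   # the same factor every frame
```

In Premiere terms, this is roughly what easing the scale keyframes onto an exponential curve buys you: with plain linear keyframes, the early frames cover far more relative ground than the late ones.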

So, it turns out even using z-space in After Effects wasn't quite as easy as I was hoping. There was a lot of positioning the images to line up just right, so instead of getting one clip right and then copying and pasting that property to all the others like I'd hoped, he actually had to painstakingly take each shot, line it up, adjust its timing, and do that one after another, a hundred times. I owe him a few beers for this. And even then, he struggled to make the transitions as smooth as I was hoping for. It turns out the expansions themselves created a bit of a warp around the outside of the images that kept them from lining up perfectly. So yeah, we still get a little bit of a stutter effect, which I wish we didn't have, but in the end we came out of it with this.

[Music]

[Applause]

[Music]

Is my face all swirly now?

So overall, I'm pretty happy with how this came out. I would like it to be smoother; that kind of wah-wah effect isn't really what I was going for, but apparently Mark did everything he could and it still came out like that, so I think it's something in the expansion itself. Maybe if I made the expansions a little smaller it would be smoother, I don't know. If any of you out there have a better idea, I'd love to hear it, but this was just an experiment; I wanted to see what would happen. I would be interested in repeating it to see if it goes in a different direction every time. I did test this a little before that full livestream, with only maybe five to seven expansions, and that one got a lot more abstract. It was a lot weirder. So I was actually kind of surprised when I was doing it and it just kept giving me this infinite landscape forever, because the first time I did it, it immediately went into weird shapes everywhere. So again, I'm curious what would happen if I did it again, but this took hours of my time and of Mark's time, so I'm not sure I really want to go through all that again.

But somebody did point me to an app from a company called Videoleap that apparently does this infinite-zoom thing in a matter of minutes, so I downloaded it. Let's check it out. This is one of those AI apps that does all kinds of cool things: it replaces backgrounds, does before-and-afters, voice swaps, face swaps, that kind of thing, and right there in the middle they've got Infinite Zoom. All right, so I've uploaded the photo, and you can write a description to have it do something specific, but I'm just going to let it do the random thing, because I've been doing the random thing. So, hit continue. It's zooming out. Let's go.

[Music]

Okay, this one went a bit more cyberpunk. It even has music on it; I didn't expect that. So that was clearly a lot faster than what I did with the Photoshop thing, but it also didn't go nearly as far. It only goes back to a certain point and then zooms out, or zooms back. I do kind of wonder if I could make several of these, take the last frame of one and make it the starting point for the next, and just let it zoom forever.
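One thing worth noting about chaining expansions, whether in Photoshop or by feeding an app its own last frame, is how fast the zoom compounds: each step multiplies the total by a fixed ratio, so the depth grows exponentially with the number of steps. A back-of-the-envelope sketch (the 1.5× and 1.1× ratios are assumptions for illustration, not Photoshop's actual numbers):

```python
# Each expansion multiplies the visible "world" by a fixed ratio r,
# so n chained expansions give a total zoom-out factor of r**n.
import math

def total_zoom(ratio, steps):
    return ratio ** steps

def steps_needed(ratio, target):
    # Smallest number of expansions whose compound zoom reaches target.
    return math.ceil(math.log(target) / math.log(ratio))

print(total_zoom(1.5, 100))    # astronomically deep after 100 steps
print(steps_needed(1.1, 100))  # gentler steps need far more of them
```

This is also one argument for the smaller-expansions idea: a gentler ratio changes less per transition, so it should cut less visibly, at the cost of needing many more generations to reach the same depth.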

I'm not going to do that now. All right, so the point of this whole video is basically to show how all these AI tools are meant to save time, and that this is how AI is going to enter our lives. We think of the technological singularity as this big event, this big thing that's just going to explode and happen, but I've been making the point for a while that the singularity is something we've been living in probably since the beginning of the Industrial Revolution. And these AI tools are how it's going to happen: they just integrate, piece by piece, into programs and processes we already use.

There are dangers in this, though. There's the obvious danger that you can just select a little circle and put a person into a photo, or manipulate things in that way, but we've been doing that for a long time. The problem I see now is that when you go through the Adobe Stock library, most of the images in there are just AI-generated. Like the woolly mammoth photo I was playing with earlier: that was an AI-generated image. It was not a real photograph of a mammoth, obviously. But it gets me wondering: if the Firefly model is training off of the Adobe Stock image gallery, is it just training off of itself at this point? The great AI ouroboros, if you will.

But the point of all these AI tools is to increase productivity, and that was the point of Photoshop from the very beginning. I'm not kidding: the ability to expand a vertical image to 16:9 at the push of a button has saved me countless hours of thumbnail editing.
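For what it's worth, the arithmetic behind that vertical-to-16:9 expansion is simple: keep the height, widen the canvas to height × 16/9, and let the generative fill paint the two new side strips. A sketch with example dimensions (the 1080×1920 input is just an illustration):

```python
# Canvas math for expanding a vertical image to 16:9 without
# touching the original pixels: keep the height, pad the width
# symmetrically, and generate content for the two new strips.

def expand_to_16x9(width, height):
    target_width = round(height * 16 / 9)
    pad_total = max(0, target_width - width)
    left = pad_total // 2
    right = pad_total - left          # absorbs any odd pixel
    return target_width, left, right

# A 1080x1920 phone shot needs a 3413-pixel-wide canvas:
print(expand_to_16x9(1080, 1920))
```

The same arithmetic applies to any target aspect ratio; only the 16/9 constant changes.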

And I could use all the help I can get in the productivity space, which is why I bought this book. Which I've never gotten around to reading. I mean, what could possibly have kept me from getting to all these pages, he says, gesturing wildly at everything.

Yeah, I haven't developed the habit of reading a bunch of pages every day, which has kept me from getting to this book that's supposed to teach you how to build the habit of reading a bunch of pages every day. It's a vicious cycle. But I did find a way around this problem, and it's an app called Imprint. And look, I'll be honest, I didn't know anything about this app until they reached out to me, but I'm really glad they did, because I actually really like it. One of the perks of doing what I do is that sometimes something really cool drops in your lap.

So here's how it works: Imprint pulls insights from some of the most popular thinkers and authors today, and, well, across all time actually, and distills each book's essential points into fun, interactive courses you can take in less than five minutes. Think about all the times you've been mindlessly scrolling on your phone; you could be learning concepts that could change your life, like the concepts from Atomic Habits, which is on here, and I learned all the key concepts in like 20 minutes over three days. And there are all kinds of other books in here on everything from sleep with Matthew Walker to building wealth with Ramit Sethi, even The Art of War by Sun Tzu. They also have learning paths depending on what you want to improve about yourself. I did the productivity path, and that's where I learned about Mel Robbins' 5 Second Rule, which is basically: whenever you find yourself procrastinating or hesitating on something, count backwards from five and then just do the thing you're procrastinating on. That sounds incredibly simple, but psychologically it gives you what they call activation energy, which gives you the willpower to do the thing. It's a simple trick, but it works.

And the reason I remembered that is because Imprint lessons are designed around the science of learning, meaning each chapter is broken down and explained visually, with graphics that bring the concept home, and at the end it asks you questions to reinforce what you just learned. I especially like that the things you learn here are actionable, things you can actually apply to your life, but they also have books on social issues and history if you just want to absorb the knowledge. Anyway, I've really been enjoying it, and I'm happy to put this out there for you guys. Just go to imprintapp.com/joescott for a 7-day free trial and 20% off your subscription. Honestly, if you do that 7-day trial, I think you'll be hooked by day three; I liked it immediately. It's like a cheat code for improving your life. So one more time, that's imprintapp.com/joescott, or just click the link down in the description.

Thank you guys so much for watching. I would love to hear your opinions in the comments about the dangers of all this, and maybe there are better ways of doing what I did here; again, it was just kind of a big experiment. If this is your first time watching my channel, I invite you to watch this video right here, because Google thinks it might be right up your alley, or you can look at any of the videos in the sidebar if you're in your web browser. Give them a click, and if you enjoy them and you're not subscribed, I invite you to subscribe; I come back with videos every Monday. But that's it for now. This was a very different kind of video, and we'll be getting back to the normal type soon enough, but thank you guys so much for watching. Now go out there, have an eye-opening rest of the week, stay safe, and I'll see you next Monday. Love you guys, take care.
