Did AI Just End Music? (Now it’s Personal) ft. Rick Beato

ColdFusion
30 Apr 2024 · 25:46

Summary

TLDR: The video from ColdFusion explores the advent of AI-generated music, a technology with the potential to reshape the music industry. It discusses the capabilities of platforms like Suno and Udio, which can create music from simple text prompts, and the implications this technology has for musicians and the future of music creation. The script also touches on the historical context of computer-aided music composition and the ethical and legal challenges posed by AI music, including copyright issues and the potential devaluation of human artistry. The host interviews musician Rick Beato and artist Taryn Southern, who has commercially released music using AI. The video concludes with a reflection on the personal and emotional aspects of music creation that AI may not be able to replicate, emphasizing the irreplaceable value of human creativity and expression in art.

Takeaways

  • 🎵 AI-generated music is becoming increasingly sophisticated, with platforms like Udio and Suno AI allowing users to create music by simply typing in a text prompt.
  • 👩‍💻 AI music systems are leveraging large language models and neural networks to understand complex patterns in music and generate original compositions.
  • 🎼 The process of creating music with AI involves inputting text prompts that include desired genres, instruments, and moods, which the AI then uses to produce a track.
  • 🚀 AI music generation is being seen as a potential game-changer by some, while others in the music industry express concerns about its impact on human musicians and the uniqueness of human-created music.
  • 🤖 AI music platforms can produce a wide range of music styles, from classical to R&B, and even create album covers, showcasing the versatility of AI in creative fields.
  • 📈 There are concerns about the potential for AI to replace human jobs in the music industry, as AI systems become more capable of producing high-quality, genre-spanning music.
  • 💡 Ethical considerations are arising, particularly regarding the training of AI on copyrighted material and the potential for AI to infringe on the rights of human artists.
  • 📉 The influx of AI-generated music could lead to a devaluation of music as a commodity, making it harder for human musicians to earn a stable income from their work.
  • 🎧 Despite the advancements in AI music generation, some believe that the emotional and personal connection inherent in human music creation will always set it apart from AI.
  • 🌐 The music industry is likely to face new challenges and opportunities as AI technology continues to evolve, potentially reshaping the way music is created, distributed, and consumed.
  • ⚖️ There is a call for new copyright legislation to address the role of AI in music creation, reflecting the changing landscape of artistic production and intellectual property rights.

Q & A

  • What is the significance of the AI-generated music in the video?

    -The AI-generated music signifies a watershed moment for AI music, showcasing the ability of AI to not only create vocals but also the backing music, which is considered more complex for AI to generate coherently.

  • What are the names of the two AI music platforms mentioned in the video?

    -The two AI music platforms mentioned are Udio and Suno AI.

  • What is the process like for a human musician to create a song traditionally?

    -Traditionally, a human musician would start with a guitar riff or a melody, then build a track with bass, drums, and atmosphere around it. This process involves many substeps including equalization, compression, adding effects, and resolving frequency clashes. It can take days or weeks to complete a song.

  • What are some of the technical features that Udio's AI includes in its music generation?

    -Udio's AI includes production techniques like side-chaining, tape effects, reverb and delay on vocals in appropriate areas, and it can mimic the sounds of different electric guitar pickups.
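One of those production techniques, side-chaining, is easy to sketch: a trigger signal (typically the kick drum) drives an envelope follower, and that envelope ducks the gain of another track. The snippet below is a stand-alone illustration of the idea (the function name and parameters are invented for this example), not how Udio implements it:

```python
import numpy as np

def sidechain_duck(track, trigger, rate=44100, amount=0.8, release_s=0.2):
    """Duck `track` whenever `trigger` (e.g. a kick drum) is loud.

    A crude envelope follower: the trigger's rectified level decays
    with a one-pole release, and the result attenuates the track.
    """
    coeff = np.exp(-1.0 / (release_s * rate))  # per-sample release decay
    env = np.zeros_like(trigger)
    level = 0.0
    for i, x in enumerate(np.abs(trigger)):
        level = max(x, level * coeff)  # instant attack, slow release
        env[i] = level
    gain = 1.0 - amount * np.clip(env / (env.max() + 1e-9), 0.0, 1.0)
    return track * gain

# One second of a sustained pad ducked by four kick-drum impulses.
rate = 44100
t = np.linspace(0, 1, rate, endpoint=False)
pad = 0.5 * np.sin(2 * np.pi * 220 * t)   # the track to be ducked
kick = np.zeros(rate)
kick[::rate // 4] = 1.0                   # one impulse per beat
ducked = sidechain_duck(pad, kick, rate)
```

The pad is quietest at each kick and swells back as the envelope releases, which is the characteristic "pumping" sound of side-chained compression.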

  • What are some criticisms or concerns raised about AI music generation?

    -Concerns include the potential for AI to make mistakes, limited flexibility once an output is generated, low fidelity on some tracks, and weaknesses in certain music genres. Additionally, there are ethical concerns about the use of copyrighted material in training AI systems.

  • How did the AI music generation start and evolve over time?

    -AI music generation started with computer-aided compositions in the 1950s and evolved through various stages, including the development of tools like Emmy in the 1980s, which could generate music in the styles of different composers. Modern AI music generation began with neural networks in 2012 and has advanced to the point where it can generate music based on simple text queries.

  • What is the potential impact of AI music on the music industry?

    -AI music could disrupt the industry by providing an unlimited supply of music, which could devalue human-made music and make it harder for musicians to make a stable income. It could also change the landscape for professional musicians and the way music is perceived by the public.

  • What are the views of musician and YouTuber Rick Beato on AI music?

    -Rick Beato believes that AI will not completely replace musicians because people enjoy playing real instruments. However, he acknowledges that AI could impact the income of artists, especially those creating stock or royalty-free music.

  • What is the role of neural networks in modern AI music generation?

    -Neural networks are used to understand patterns in vast amounts of data and create outputs based on user input. They predict the next note or sound in a sequence, considering variables like instrument tone, tempo, rhythm, and sound design choices.
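The "predict the next note" idea can be sketched with a deliberately tiny model: count pitch-to-pitch transitions in a training melody, normalize them into probabilities, and sample one note at a time. Real systems replace this bigram table with deep neural networks conditioned on far more variables (timbre, tempo, the text prompt), so treat this purely as an illustration; the data and names here are invented:

```python
import numpy as np

# Toy autoregressive note model: like a language model predicting the next
# word, it predicts the next pitch from counts of pitch-to-pitch transitions
# observed in a "training" melody.
rng = np.random.default_rng(0)

training_melody = [60, 62, 64, 62, 60, 62, 64, 65, 64, 62, 60]  # MIDI pitches
pitches = sorted(set(training_melody))
idx = {p: i for i, p in enumerate(pitches)}

# Count bigram transitions, then normalize each row into probabilities.
counts = np.ones((len(pitches), len(pitches)))  # add-one smoothing
for a, b in zip(training_melody, training_melody[1:]):
    counts[idx[a], idx[b]] += 1
probs = counts / counts.sum(axis=1, keepdims=True)

def generate(start, n):
    """Sample n next notes, one at a time, from the transition model."""
    out = [start]
    for _ in range(n):
        out.append(int(rng.choice(pitches, p=probs[idx[out[-1]]])))
    return out

melody = generate(60, 8)
```

Each sampled note depends on the one before it, which is the same autoregressive loop that modern systems run, just with a vastly richer model in place of the count table.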

  • How does audio diffusion contribute to the generation of music by AI?

    -Audio diffusion is a process of adding and removing noise from a signal until the desired output is achieved. It starts with random noise and refines it into a coherent piece of music based on the interpretation of the input prompt.
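The add-noise/remove-noise loop can be sketched in a few lines. In this toy version the "denoiser" is a stand-in that blends toward a fixed target waveform while the injected noise shrinks each step; in a real system the denoiser is a trained neural network conditioned on the text prompt, so everything below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

t = np.linspace(0, 1, 1000, endpoint=False)
target = np.sin(2 * np.pi * 5 * t)  # stands in for the "music" to be produced

def denoise_step(x, step, total):
    # Stand-in denoiser: pull toward the target, injecting less noise each step.
    noise_scale = 1.0 - (step + 1) / total
    return 0.7 * x + 0.3 * target + noise_scale * 0.1 * rng.standard_normal(len(x))

x = rng.standard_normal(len(t))  # begin with pure random noise
for step in range(50):
    x = denoise_step(x, step, 50)

error = np.mean((x - target) ** 2)
```

Starting from pure noise, the loop converges toward the target waveform: after 50 steps the mean squared error is far below that of the initial random signal.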

  • What are the potential legal and ethical issues with AI music generation?

    -Potential issues include the use of copyrighted material in training AI systems and the inability to copyright AI-generated artwork. There is also concern about the devaluation of human artistry and the need for new copyright legislation around AI-generated content.

  • What is the future outlook on AI-generated music according to the video?

    -The future is expected to see more polished, higher-fidelity AI-generated music that is easily accessible. However, there is a concern about AI fatigue and the diminishing value of human-made music. Live music performed by humans may become more valuable, and the personal journey of music creation may be affected.

Outlines

00:00

🎵 AI and the Future of Music Creation

This paragraph introduces the topic of AI-generated music, highlighting the capabilities of AI to create music across various genres from a simple text prompt. It discusses the mixed feelings of a musician and producer towards AI's encroachment on music creation and mentions platforms like Udio and Suno AI, which are revolutionizing the industry by making music creation accessible to anyone, regardless of musical knowledge.

05:01

🎶 The Process and History of AI Music

The second paragraph delves into the process of creating music traditionally and contrasts it with the AI music generation process. It outlines the historical context of computer-aided music composition, from the 1957 Illiac Suite to modern applications like Suno and Udio. The paragraph also discusses the technical aspects behind these AI music applications, including the use of vast amounts of data and neural networks to generate music.

10:09

🚀 AI Music Generation: Technical Insights and Concerns

This section provides an in-depth look at the technology behind AI music generation, including the use of neural networks and audio diffusion. It raises ethical concerns regarding the training data used by these AI systems and the potential for copyright infringement. The paragraph also discusses the potential impact of AI-generated music on the music industry and the concerns of artists about their work being undervalued or replaced by AI compositions.

15:10

🤔 The Impact of AI on Musicians and the Music Industry

The fourth paragraph discusses the potential impact of AI-generated music on musicians, particularly those who are not in the top 1% of successful musicians. It includes a conversation with musician Rick Beato about the future of music in the age of AI, the challenges of competing with AI-generated music, and the excitement around the possibilities AI presents for making certain aspects of music creation easier.

20:12

🎧 The Role of AI in Music Production

This paragraph explores the role of AI in the music production process, touching on the potential for AI to improve tasks such as mastering and mixing. It also discusses the concerns about the authenticity and emotional connection of AI-generated music compared to human-created music. The paragraph ends with a reflection on the changing nature of music creation and the importance of human elements in art.

25:12

📚 Learning About Neural Networks and AI

The final paragraph promotes an educational resource, Brilliant, for learning about artificial neural networks and other scientific topics. It emphasizes the convenience and interactive nature of the courses, which can be accessed from anywhere and at any time. The paragraph also includes a brief mention of a podcast interview with Rick Beato for further insights into the topic of AI and music.

🎉 Conclusion and Call to Action

The concluding paragraph summarizes the episode's discussion on AI-generated music, inviting viewers to subscribe for more content on science, technology, and business. It also provides a link to the full interview with Rick Beato and ends with a teaser for the next episode of ColdFusion.

Keywords

💡AI Music Generation

AI Music Generation refers to the process where artificial intelligence algorithms create original music and lyrics. In the video, it is the central theme, showcasing how AI can produce songs across various genres by analyzing patterns in data. Examples include the AI-generated classical piece and R&B song created after the user typed in text prompts.

💡Udio

Udio is a platform, mentioned in the video, created by a group of ex-DeepMind engineers. It represents the cutting edge of AI music generation, allowing users to input text prompts and receive a complete musical composition in return. It signifies a watershed moment for AI in music, as it can produce not just vocals but also the instrumental backing.

💡Suno AI

Suno AI is another application in the field of AI music generation. It is highlighted as an example of how accessible music creation has become with AI, allowing even those without musical knowledge to produce tracks. The video discusses its capabilities and the ethical considerations it raises regarding the use of AI in music creation.

💡Musical Knowledge

Musical knowledge traditionally refers to the understanding of musical theory, composition, and performance. In the context of the video, it contrasts with the use of AI music generation platforms that enable users without such knowledge to create music. The video suggests a future where the need for deep musical understanding may be diminished due to AI's capabilities.

💡Copyrighted Material

Copyrighted material is creative work that is legally protected from unauthorized use. The video raises concerns about whether AI music generation systems like Udio and Suno AI were trained on copyrighted material, which could lead to legal and ethical issues. An experiment in the video showed that inputting lyrics similar to a known song resulted in AI-generated music that closely resembled the original, indicating potential copyright infringement.

💡Neural Networks

Neural networks are a cornerstone of modern AI, inspired by the human brain's network of neurons. They are used in AI music generation to understand complex patterns in music data and create new compositions. The video explains that these networks predict and generate music by analyzing vast amounts of musical data, learning the relationships between different musical elements.

💡Audio Diffusion

Audio diffusion is a technique used in generative music where noise is added and removed from a signal to create a coherent musical output. It is likened to image diffusion, where random noise is refined into a clear image. In the video, it is presented as a method that underlies the creation of AI-generated music, contributing to its increasingly polished and high-fidelity sound.

💡Ethical Concerns

Ethical concerns in the video pertain to the impact of AI on the music industry and the potential devaluation of human artistry. The discussion includes the potential for AI to replace human musicians, the use of copyrighted material in training AI systems, and the future of music creation and copyright law in the age of AI.

💡Music Industry Disruption

Music Industry Disruption refers to the significant changes brought about by AI music generation systems. The video explores how these systems could affect musicians' livelihoods, the value of music, and the nature of music consumption. It suggests that an influx of AI-generated music could devalue human-created music and challenge traditional business models in the music industry.

💡Human Element in Art

The human element in art is the emotional and personal touch that artists contribute to their work. The video emphasizes that despite the advancements in AI music generation, the human aspect of creating and experiencing music remains irreplaceable. It suggests that the imperfections and personal journeys of human musicians are what make music profoundly relatable and valuable.

💡AI Fatigue

AI fatigue is a concept discussed in the video that refers to the potential future state where AI-generated content becomes so prevalent that it may diminish the appreciation for human-made art. The concern is that people may begin to doubt the authenticity of music they hear, suspecting it to be AI-generated, which could undermine the value and enjoyment of human composition and performance.

Highlights

AI has generated a song that is 100% AI-made, including lyrics, rhythm, and vocals, showcasing the capabilities of modern AI in music creation.

The AI music generation platform Udio was created by a group of ex-DeepMind engineers, indicating a shift towards more sophisticated AI music systems.

AI music systems like Udio and Suno AI are making music creation accessible to anyone, regardless of musical knowledge, by using text prompts to generate tracks.

AI-generated music is not just limited to vocals but also includes the creation of backing music, which is considered more challenging for AI.

The video discusses the potential impact of AI on the music industry, including concerns over job displacement and the future of music creation.

AI music generators use vast amounts of data to understand patterns and create outputs based on user input, similar to how large language models generate text.

The technology behind AI music generation includes audio diffusion, which refines noise into a coherent musical piece based on the user's prompt.

There are ethical concerns regarding the training of AI music systems on copyrighted material and the potential for infringement.

Over 200 artists signed an open letter to AI developers and tech companies asking them to stop using AI to infringe upon and devalue the rights of human artists.

The future of the music industry may involve an unlimited supply of AI music, which could devalue human-made music and impact musicians' incomes.

Interview with musician Rick Beato discusses the potential for AI to replace certain aspects of music production, such as mastering and mixing.

AI-generated music could lead to an 'AI fatigue' where human-made music may be undervalued due to the abundance of AI-generated content.

Live music performed by humans may become increasingly valuable in an era inundated with AI music.

The essence of music creation lies in the personal journey of discovery and the human element, which AI cannot replicate.

AI music generators could serve as a great sampling tool for musicians, offering a new dimension to music production.

The video concludes with a story from 1997, highlighting the ongoing debate about the role of AI in music composition and the value of human creativity.

Transcripts

00:00

this video was brought to you by

00:03

brilliant hi welcome to another episode

00:05

of Cold Fusion The Following song is

00:08

100% generated by AI

00:12

[Music]

00:17

ColdFusion where

00:19

curiosity ignites and Minds take

00:23

[Music]

00:26

life a better future

00:30

can help us make a better future

00:32

everything you hear is all artificially

00:34

generated by AI generated I mean the

00:37

lyrics which was ChatGPT the rhythm the

00:41

vocals to get this output all I did was

00:44

type in a text prompt of what I wanted

00:46

there are a few revisions but still the

00:48

result is honestly insane and if you

00:50

think that's a one-off here's another

00:52

example I wanted a classical piece so I

00:54

type this

00:56

[Music]

01:03

how about some guitar driven

01:08

stuff

01:09

so you

01:11

cra a wild ride awesome

01:17

[Music]

01:24

[Applause]

01:27

R&B our souls please remember me

01:30

[Music]

01:31

awesome UK

01:33

[Music]

01:37

[Applause]

01:40

garage all very impressive even the

01:43

album covers were made using Ai and

01:45

that's just to let you know what we're

01:46

dealing with

01:47

here as some of you know I've been a

01:49

musician for the better part of 20 years

01:51

and a producer for a decade if you've

01:54

watched a cold fusion video before

01:55

you've heard some of my music so being a

01:57

technology enthusiast as well I do have

02:00

mixed feelings about this AI has finally

02:02

come to something very close to my heart

02:04

we've seen AI voice swapping that could

02:06

get artists to perform songs in

02:08

different genres but this is something

02:10

totally different with this latest

02:11

generation of AI music it's not just the

02:14

vocals but the backing music which is

02:16

artificial which in my view is much more

02:18

difficult for AI to do coherently what

02:21

you just heard previously was from a

02:23

platform called udio and it was created

02:25

by a group of ex-DeepMind engineers it's

02:27

a watershed moment for AI music and

02:29

they're not the only ones a few months

02:31

ago there was a similar application

02:33

called sunno AI in a recent video I

02:36

looked at the AI deception and it was

02:38

about how some companies are lying and

02:40

over promising the capabilities of AI

02:42

but on the flip side I said that there's

02:44

also some companies that are obfuscating

02:46

the true extent of AI replacing jobs

02:49

today's episode falls in the second

02:51

category first was images graphic design

02:54

writing videos and now finally music

02:57

it's clear that AI music systems

02:59

will eventually impact those in the

03:01

music industry but how in this episode

03:04

we explore AI music apps what they're

03:06

all about how it works its criticisms

03:09

and how AI could disrupt the music

03:11

industry I also had the opportunity to

03:13

speak to the amazing musician and

03:15

YouTuber Rick Beato to get his thoughts

03:17

on this and I also spoke to Taryn

03:19

Southern who's considered to be one of

03:21

the first artists to commercially

03:22

release music using AI all right so on

03:25

the count of three let's

03:28

go you are watching ColdFusion

03:33

TV quote creating music is an inherently

03:37

complex process but we're streamlining

03:39

that making it more accessible and

03:41

intuitive than ever before end quote

03:44

that's from Suno's co-founder Mikey Shulman

03:47

and it just about summarizes what these

03:48

AI companies are trying to achieve in

03:51

fact the premise of Suno and Udio is

03:53

basically the same now anybody can

03:55

create music even though they have no

03:57

prior musical knowledge just type a text

03:59

prompt and Away you go users can provide

04:02

subjects instruments feelings and or

04:05

their custom lyrics and in just a minute

04:07

a track is ready to play 30 Seconds for

04:10

Udio and 2 minutes for Suno both

04:12

platforms can extend tracks and provide

04:14

different variations and both are free

04:16

to use right now some say that Udio's

04:19

first version is already pretty good

04:21

better than Suno's version 3 I've used

04:23

both of them to gain Insight but I ended

04:25

up liking Udio's output more personally

04:28

because it was cleaner and possessed a

04:29

better understanding of less typical

04:31

genres so even though the examples I've

04:33

shown you and will show you are through

04:35

using Udio keep in mind that a lot of the

04:37

content in this video also applies to

04:39

Suno Suno version 3 is now available

04:42

to everyone including those on the free

04:44

plan it is the best AI music generator

04:47

by far and it just got even better with

04:49

V3 a completely free AI music generator

04:52

launched called udio people have been

04:54

saying this is the chat GPT moment for

04:56

AI music or calling it The Sora of Music

04:59

a lot of the time people massively

05:01

overhype new AI launches that's how

05:03

everyone gets clicks but there's some

05:05

truth to this one to understand just how

05:07

revolutionary this Tech is it helps to

05:10

know how modern music is made before AI

05:13

the first step is to come up with a

05:14

guitar riff I made this one a few years

05:16

ago when mucking around on my acoustic

05:17

guitar but for the final track I used a

05:19

Gretsch electric guitar with a delay and

05:21

Reverb pedal and a distortion pedal I

05:24

was going for a calm 3/4 waltz time

05:26

signature with a post Rock feel

05:34

[Music]

05:39

after I recorded this I hopped into

05:41

Ableton and built some bass and drums and

05:43

atmosphere around the guitar riff even

05:45

for this process every step involves

05:47

many substeps including EQing

05:49

compressing adding effects and just

05:52

listening to the components in isolation

05:54

and

05:54

[Music]

05:56

together as the song is being created

05:58

you'll definitely come across some

06:00

production problems where frequencies

06:01

clash solving these issues is just a

06:04

part of the nature of mixing sounds

06:06

together finally I added some vocals and

06:08

then arranged the

06:13

track so in total a finished song can

06:16

take days or even weeks to complete and

06:19

sometimes the songs just don't work out

06:20

so you have to try again but with all of

06:23

that effort exploring the Sonic stage

06:25

and pulling a song out of the ether is

06:27

half the fun but now in contrast with AI

06:30

all people have to do is type in some

06:32

words and then get a song I hope you can

06:34

see the difference clearly now here's

06:36

some observations that I found with Udio

06:38

it includes production techniques like

06:40

side chaining tape effects Reverb and

06:42

delay on vocals in appropriate areas in

06:45

some outputs the vocals seem extremely

06:47

real there's also harmonies in there and

06:50

even for electric guitar it mimics the

06:52

sounds of different pickups all just so

06:54

insane some weaknesses include sometimes

06:57

it messes up big time and I'll show a

06:58

couple of examples of

07:04

[Music]

07:08

that it's very limited in flexibility

07:11

once you get an output you can't really

07:13

change it there's low Fidelity on some

07:14

tracks and it has some weaknesses in

07:16

some genres such as UK jungle okay so

07:19

what's the story behind these

07:21

applications sunno is a venture-backed

07:23

company founded in late 2023 all four of

07:26

the founders previously worked at Kensho

07:28

an AI tech startup for financial data

07:31

which was later acquired by S&P Global

07:33

sunno even partnered with Microsoft

07:35

co-pilot and is up to version 3 like

07:38

Suno Udio was founded in late 2023 but

07:41

the company only recently went out of

07:43

stealth mode and made their application

07:45

public a couple of weeks ago it was

07:47

founded by an ensemble of former

07:49

researchers from Google Deep Mind and

07:51

has the financial backing of popular

07:53

Silicon Valley venture capital firm

07:55

Andreessen Horowitz and also from musicians like

07:57

Common and will.i.am as you've seen well

08:01

heard the output is a rich combination

08:03

of all sorts of instruments so the

08:05

training data must have been significant

08:07

and that leads us to the next section

08:09

how do these applications work well

08:11

we'll dive in in a bit but first let's

08:13

rewind the tape a bit this isn't

08:15

strictly the first time this has been

08:19

tried making music with the aid of

08:21

computers goes back to 1957 Lejaren

08:24

Hiller and Leonard Isaacson would compose

08:27

a piece called the Illiac Suite this

08:29

piece of music is often considered to be

08:31

the first created with the aid of a

08:32

computer there were notable efforts in

08:34

the 1980s to point computers into a new

08:37

Direction generative music in 1984

08:40

scientist and composer David cope

08:42

created Emmy which stands for

08:44

experiments in musical intelligence it

08:47

was an interactive software tool that

08:49

could generate music in the styles of

08:50

different composers from short pieces to

08:53

entire operas and I said you know what

08:54

if I could create a

08:56

program that would create Bach chorales and

08:59

Cope and Stravinsky and Mozart and

09:02

anybody and the only way I could think

09:04

of doing that was to create a program

09:06

that was built around a database of

09:08

Music let's say I had a database of Bach

09:12

chorales all 371 of them and he had a

09:15

little program that was sat on top of

09:16

this smallest program I could make that

09:19

would analyze that music and then create

09:22

a new Bach chorale that wasn't one of the Bach

09:24

chorales that was in the database but

09:26

sounded like them all and some of the

09:28

outputs had been used in commercial

09:30

settings over the years there's a very

09:32

interesting story regarding Emmy and

09:34

I'll get into that later in the episode

09:36

so stick around for that and then there

09:38

was the computer accompaniment system in

09:40

1985 it used algorithms to generate a

09:43

complementary music track based on a

09:45

user's live input the computer

09:47

accompaniment system is based on a very

09:49

robust pattern matcher that compares the

09:53

notes of the live performance as I play

09:55

them to the notes that are stored in the

09:57

score

10:09

even David Bowie developed a tool in the

10:11

'90s it was called the verbasizer a

10:13

digital approach to lyrical writing well

10:16

that's all well and good you might be

10:17

thinking but what about the modern stuff

10:19

well as you probably know from 2012 the

10:21

world switched from hard-coded narrow

10:23

algorithms to neural networks and with

10:26

that the modern AI boom had officially

10:28

started but the question has to be

10:30

asked how are neural nets being used in

10:32

music Generation Well in 2016 Google's

10:35

Project magenta turned a few heads when

10:37

they released an AI generated piano

10:39

piece made by Deep learning algorithms

10:42

ultimately the next big step was to

10:44

generate music just based on a simple

10:46

text query in natural human language

10:48

Google's MusicLM did just that and

10:50

honestly it was impressive for the time

10:52

but far from perfect this was followed

10:54

by open ai's jukebox stable audio

10:57

adobe's music generation gen and

10:59

YouTube's dream track now these were all

11:02

Valiant efforts but they all had the

11:03

same problem they were ultimately fairly

11:05

limited they worked to a degree but more

11:08

often than not the composition sounded

11:10

rigid and just not very human to put it

11:12

bluntly but even with such limitations

11:14

there were some trying to push the

11:16

boundaries Taryn Southern is often

11:18

considered to be one of the first

11:19

musicians to use modern AI to release

11:21

commercial music and this was all the

11:23

way back in 2017 if you can believe it I

11:26

asked her all about it I was having

11:28

trouble finding really high quality

11:30

music for the documentary that was

11:33

relatively inexpensive and so I was

11:35

looking for alternate options and I came

11:38

up upon this article in the New York

11:39

Times that was looking at artificial

11:41

intelligence as a way to compose music

11:44

and I thought well that's really

11:45

interesting so I started experimenting

11:47

with it was blown away by what was

11:50

possible you know this was even before

11:52

llms hit the scene and I think where we

11:56

were at with music creation and AI back

11:59

in in 2017 was actually pretty far along

12:02

um and so I was so excited by what I was

12:05

hearing that I just decided to create an

12:07

entire experiment out of the

12:11

project and made an album using I think

12:13

four different AI Technologies keep in

12:16

mind that what Taran did back in 2017

12:18

was quite different to what's going on

12:19

today there was still a lot of work that

12:21

had to go into the compositions but now

12:23

anyone can type in some text and get a

12:25

decent output so how does it all work

12:28

what's the tech behind it

12:32

like most modern generative AI

12:33

applications for example llms these

12:36

systems use vast amounts of data to

12:38

understand patterns and then create an

12:40

output based on the user's input for

12:42

chat GPT the output is text and for

12:45

these AIs it's original songs and lyrics

12:47

we'll touch on the ethical concerns a

12:49

bit later but you can probably realize

12:51

something a large language model like

12:53

chat GPT for example generates text by

12:55

predicting the next word in a text

12:57

sequence but composing music is

12:59

significantly more complex due to a lot

13:01

of variables instrument tone Tempo

13:03

Rhythm sound design choices volume of

13:06

different components compression and EQ

13:09

are just some of the variables that have

13:11

to be considered in addition the system

13:13

must understand what the user wants and

13:15

how that relates to genres and

13:16

particular sounds in a coherent yet

13:19

pleasing way not easy by any means but

13:22

what about the audio synthesis itself

13:24

many point to audio diffusion as the

13:26

secret sauce behind generative music in

13:28

simple terms it is the process of

13:30

adding and removing noise from a signal

13:32

until you get the desired output we

13:34

first saw similar methods used in images

13:36

where image diffusion starts off with

13:38

random noise which is then refined to a

13:40

final image based on the interpretation

13:42

of the prompt audio diffusion Works in a

13:44

similar way this is a very high level

13:46

breakdown just to save time but if you

13:48

want to get into the depths of the

13:49

technicalities I'll leave a link to a

13:51

few articles in the source document in

13:53

the video description as always
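As a rough sketch of that add-noise-then-remove-noise loop, here is a toy example. The "denoiser" is just a moving-average filter standing in for the trained neural network a real diffusion model would use, and the signal is a plain sine wave rather than actual audio; everything here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_diffuse(signal, steps, noise_scale=0.1):
    """Gradually corrupt a clean signal with Gaussian noise (training-time view)."""
    noisy = signal.copy()
    for _ in range(steps):
        noisy = noisy + rng.normal(0.0, noise_scale, size=signal.shape)
    return noisy

def reverse_denoise(x, denoise_fn, steps):
    """Repeatedly apply a denoiser to pull a noisy signal back toward structure."""
    for _ in range(steps):
        x = denoise_fn(x)  # in a real system this is a trained neural network
    return x

def moving_average(x, k=5):
    """Toy denoiser: smooth the signal with a length-k averaging kernel."""
    return np.convolve(x, np.ones(k) / k, mode="same")

t = np.linspace(0.0, 1.0, 800)
clean = np.sin(2 * np.pi * 5 * t)            # a 5 Hz "audio" tone
noisy = forward_diffuse(clean, steps=20)     # corrupt it with noise
restored = reverse_denoise(noisy, moving_average, steps=10)

err_before = np.mean((noisy - clean) ** 2)
err_after = np.mean((restored - clean) ** 2)
print(err_after < err_before)  # True: denoising recovers the tone
```

Real audio diffusion models learn what to remove at each step from training data, which is how the "noise" resolves into drums, vocals and melody rather than a bare sine wave.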

If you've been following the generative AI space for the last two years, you already knew this part of the video was coming. OpenAI CTO Mira Murati came under public fire following an interview about the training data of their video generation tool, Sora. When asked if the impressive video output from Sora was the result of training on videos from YouTube, Facebook and Instagram, she said that she, quote, "was not sure about that". To many online, the answer was suspicious.

With only a few weeks since the launch of Udio and Suno V3, there's a question that has to be asked: were these systems trained on copyrighted material? One user set out to investigate. They ran their test by entering lyrics similar to "Dancing Queen" by ABBA into the prompt. There was no direct mention of the band or the song, but the outputs were very close to the original, including the basic melody, rhythm and even the cadence of the vocals. Now, even human musicians are continuously accused and taken to court for a lot less. As the user later points out, it's not impossible to achieve these results without the original song being present in the dataset, but the similarity is striking. This result was also solidified by other experiments carried out by the same user. For obvious reasons, I can't play any of the copyrighted music here to show you the comparisons, but as always, I'll link the original article in the show notes.

But here's the thing: it seems like the founders already knew this was coming, and they accepted it as part and parcel of the new AI startup culture. Antonio Rodriguez, one of Suno AI's investors, told Rolling Stone that he was aware of potential lawsuits from record labels, but he still decided to invest in the company. Suno AI has stated that it's having regular conversations with major record labels and is committed to respecting the work of artists. Whether that's actually true is questionable, especially when, a few weeks ago, over 200 artists wrote an open letter asking AI developers and tech companies to, quote, "cease the use of AI to infringe upon and devalue the rights of human artists", end quote. The letter was signed by some big names, old and new, from Bon Jovi to Billie Eilish and Metro Boomin. All of which begs the question: what does this mean for the future of the music industry?

One way to think of this: AI, in the broadest sense, is like an out-of-control central bank. Its monetary policy is the equivalent of printing money, leading to inflation, but this time it's diluting and devaluing the supply of music instead of money. If you're not in the top 1% of successful musicians, the music industry is already a hard place to make a consistent and stable income. From TikTok transforming how songs are consumed and discovered, to digital music platforms' relationships with musicians, the industry is already difficult. But now there's an unlimited supply of AI music coming right at you like a tsunami, and it's going to be hard to stay afloat. If you're a musician who exclusively makes stock music or royalty-free sounds for commercial purposes, none of this is good news. But what about other musicians?

I sat down with Rick Beato to discuss what this means for the future.

"So, do you think AI will completely replace musicians one day, or do you think that's not going to happen?"

"No, I don't think so. People have too much fun playing real instruments. I'm not going to stop playing just because there's AI guitar things."

"Say Spotify, in the future, has their AI versions of songs, and then you have people using the models that already exist to make their own music and uploading that to Spotify. What do you think that would do to the potential income of the real artists who are making and composing their own music?"

"Oh, it'll definitely have an impact, yes. It's just diluting, you know, because there's no way real artists could compete with generative creations, right? I mean, how many things can you put up there in a day? How many things can you generate? It'd be very difficult to compete with that."

"This is an interesting question: do you think that one day we'll actually have a number one AI Billboard song?"

"Yes. Probably two years from now."

"Okay, you heard it here first, everyone."

"There will be a lot of news stories about it, and people will say, 'Boy, I really like this', and then they'll just create more, and there will be more. I can see a time when nine out of the top 10 songs are AI-generated."

"Wow."

"Within 10 years. I said in one of my videos, a year and a half ago or so, that people won't care if it's AI if they like the song. And I firmly believe that. It's just a matter of how people are being compensated."

"Are you excited about any aspect of this at all?"

"Totally excited. I mean, I think there are jobs that can be made easier. Can mastering be done better by AI? Probably. Can mixing be done better by AI? Probably. There's a lot of things: vocal editing, picking between takes, at least doing rough edits, rough edits for YouTube videos. Yes, I'm excited about it. You know, when drum machines came around in the 80s, it was 'oh, they're going to take drummers' jobs away'. Then drummers back in the 80s emulated drum machines, playing drum machine fills, doing all those same kinds of fills, and then people would bring in real drummers to emulate drum machines. They'd have their parts programmed."

"Do you think becoming a professional musician, or even surviving or thriving as one, will be more difficult because of AI in that case?"

"Not sure about that. I don't know if becoming a professional musician will be affected by it. I think people enjoy playing music, regardless of whether there are AI versions out there that people are listening to. You know, the rise of autotune and things like that, where you have a synthetic sound enabled by these programs. I mean, once you have a pitch-corrected voice, how far of a stretch is it to an AI-created voice that actually has tuning imperfections in it, like a real person? That's not that different, honestly. How many things are actually generated by computers? Grab some samples off Splice, create the hi-hat track with an 808 hi-hat, you've got your kick, and you're making a hip-hop tune. Then you grab some keyboard sounds, and you don't even know how to play the keyboards, you hold down some things, you get some samples from here and there, you put them in, and then you create your vocal over that. You autotune it, and then you move notes around. It's pretty synthetic at that point. What's the difference between that and AI?"
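The pitch correction Rick describes reduces to a simple idea: detect a note's frequency and snap it to the nearest semitone of the equal-tempered scale. Here's a minimal sketch of just that snapping step; real autotune tools do far more, including pitch detection and smooth retuning over time.

```python
import math

A4 = 440.0  # reference pitch in Hz

def snap_to_semitone(freq_hz):
    """Snap a detected frequency to the nearest equal-tempered semitone.

    Equal temperament spaces notes by factors of 2**(1/12), so we round
    the distance from A4 (measured in semitones) to the nearest integer.
    """
    n = round(12 * math.log2(freq_hz / A4))  # nearest semitone offset from A4
    return A4 * 2 ** (n / 12)

# A slightly flat A4 (435 Hz) gets pulled up to exactly 440 Hz.
print(round(snap_to_semitone(435.0), 2))  # 440.0
```

Once every sung note lands exactly on the grid like this, the voice already has a synthetic quality, which is Rick's point: the gap to a fully AI-generated voice with deliberately imperfect tuning is smaller than it seems.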

The current wave of AI, including music generators, is made possible by neural networks. But what do they do, and how do they work? Fortunately, there's a fun and easy way to learn about that: Brilliant. I've talked about how I've used Brilliant's courses on artificial neural networks before, and for good reason, but they also have great interactive STEM courses on anything from maths to computer science and general science. It's convenient because you can learn wherever you like, whenever you like, and at your own pace. There are also follow-up questions to make sure you've digested what you've learned. So whether you just want to brush up on your learning or need a refresher for your career, Brilliant has you covered. You can get started free for 30 days, and for ColdFusion viewers, Brilliant is offering 20% off an annual plan. Visit brilliant.org/ColdFusion to get started.

Okay, so back to the video. Last year, a US federal judge ruled that AI artwork can't be copyrighted. As these apps gain popularity, we need to protect artists, but how this is done and how it will be implemented is still up for debate. Things are uncertain, but one thing is for sure: we're heading into a new era of copyright legislation around art and artificial intelligence. If you're more interested in the law of copyright and artificial intelligence, check out my conversation with Devin from LegalEagle.

In the future, we're going to be inundated with AI music, but at that stage, live music performed by humans will become increasingly valuable. Another thing I'm worried about is a further form of AI fatigue, where any amazing human-made music you hear could be diminished in a few years, because those who hear it might just assume it's AI-generated. As for me personally, I think AI music generators could make for a great sampling tool. For example, I turned that classical piece generated by AI at the beginning of the video into a full track. I'll play part of it as an outro to this episode, and I'll leave a link to the full version below. But in saying that, I can't help but feel a little sense of loss. That joy that comes from having an idea in your head and turning it into musical form is no longer strictly human. Music creation is a personal journey of discovery, and it's unsettling that the art form is changing. On the flip side, I can understand how freeing this is for those who have no musical knowledge and just want to create something. I'm not blind to that.

I know we covered a fair bit in this episode, but let me tell you a little story that I think perfectly encapsulates it all. We have to go back to 1997, to the University of Oregon, where a small audience is patiently waiting to see a battle between a musician and a computer. Dr. Steve Larson is the musician in question, and he also teaches music theory at the university. He's ready to go up against a computer to compose a piece of music in the style of the famous Johann Sebastian Bach. The idea is simple: three pieces will be played live, one composed by Bach himself, one by Dr. Larson, and one by Emmy, the computer music program I mentioned at the start of the episode. All three entries will be performed live, and the audience has to guess which piece was composed by whom. Once the performances ended, the audience incorrectly thought that Larson's piece was made by the computer, and the one composed by Emmy they took for the original Bach composition. Dr. Larson was genuinely upset about this, quote: "Bach is absolutely one of my favorite composers. My admiration for his music is deep and cosmic. That people could be duped by a computer program was very disconcerting", end quote.

Now, you could take that quote and put it in any conversation about AI music in 2024, and it would be just as relevant. The circumstances have changed, but as humans we still perceive art the same way. David Cope, the inventor of Emmy, once said that his program can make, quote, "beautiful music, but maybe not profound music", and I think I agree with that. Ultimately, AI generations are going to get more polished, better sounding, higher fidelity, and it's all going to be at our fingertips. But we have to remember: it's the messiness of our human selves that informs the art, and that's the part we relate to most. What makes us human is that we can listen to music, not just hear it.

And that's where we're at with AI-generated music. I hope you enjoyed that episode. If you did, feel free to subscribe; there's plenty of other interesting stuff here on ColdFusion on science, technology and business. Anyway, that's about it from me. My name is Dagogo, and you've been watching ColdFusion. I'll catch you again soon for the next episode. Oh, and if you want to check out my full interview with Rick Beato, it's on the second podcast channel; I'll leave a link below.

[Music]

ColdFusion. It's new thinking.
