Did AI Just End Music? (Now It's Personal) ft. Rick Beato
Summary
TLDR: This ColdFusion video explores the advent of AI-generated music, a technology with the potential to revolutionize the music industry. It examines the capabilities of platforms like Suno and Udio, which can create music from simple text prompts, and the implications this technology has for musicians and the future of music creation. It also covers the historical context of computer-aided music composition and the ethical and legal challenges posed by AI music, including copyright issues and the potential devaluation of human artistry. The host interviews musician Rick Beato and artist Taryn Southern, who has commercially released music made with AI. The video concludes with a reflection on the personal and emotional aspects of music creation that AI may not be able to replicate, emphasizing the irreplaceable value of human creativity and expression in art.
Takeaways
- AI-generated music is becoming increasingly sophisticated, with platforms like Udio and Suno AI allowing users to create music by simply typing in a text prompt.
- AI music systems leverage large language models and neural networks to understand complex patterns in music and generate original compositions.
- The process of creating music with AI involves inputting text prompts that include desired genres, instruments, and moods, which the AI then uses to produce a track.
- AI music generation is seen as a potential game-changer by some, while others in the music industry express concerns about its impact on human musicians and the uniqueness of human-created music.
- AI music platforms can produce a wide range of music styles, from classical to R&B, and even create album covers, showcasing the versatility of AI in creative fields.
- There are concerns about the potential for AI to replace human jobs in the music industry as AI systems become more capable of producing high-quality, genre-spanning music.
- Ethical questions are arising, particularly around the training of AI on copyrighted material and the potential for AI to infringe on the rights of human artists.
- The influx of AI-generated music could lead to a devaluation of music as a commodity, making it harder for human musicians to earn a stable income from their work.
- Despite the advancements in AI music generation, some believe that the emotional and personal connection inherent in human music creation will always set it apart from AI.
- The music industry is likely to face new challenges and opportunities as AI technology continues to evolve, potentially reshaping the way music is created, distributed, and consumed.
- There is a call for new copyright legislation to address the role of AI in music creation, reflecting the changing landscape of artistic production and intellectual property rights.
Q & A
What is the significance of the AI-generated music in the video?
-The AI-generated music signifies a watershed moment for AI music, showcasing the ability of AI to not only create vocals but also the backing music, which is considered more complex for AI to generate coherently.
What are the names of the two AI music platforms mentioned in the video?
-The two AI music platforms mentioned are Udio and Suno AI.
What is the process like for a human musician to create a song traditionally?
-Traditionally, a human musician would start with a guitar riff or a melody, then build a track with bass, drums, and atmosphere around it. This process involves many substeps including equalization, compression, adding effects, and resolving frequency clashes. It can take days or weeks to complete a song.
What are some of the technical features that Udio's AI includes in its music generation?
-Udio's AI includes production techniques like side-chaining, tape effects, reverb and delay on vocals in appropriate areas, and it can mimic the sounds of different electric guitar pickups.
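To make the "side-chaining" jargon concrete, here is a minimal ducking sketch in Python/NumPy. It is not how Udio works internally (that isn't public); the `sidechain_duck` function, its parameter values, and the test signals are hypothetical choices for illustration only.

```python
import numpy as np

def sidechain_duck(music: np.ndarray, trigger: np.ndarray,
                   sr: int = 44100, depth: float = 0.7,
                   release_ms: float = 120.0) -> np.ndarray:
    """Duck `music` whenever `trigger` (e.g. a kick drum) is loud."""
    # Envelope follower: rectify the trigger, then let the level decay
    # slowly (one-pole release) so the gain recovers gradually.
    alpha = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros_like(trigger)
    level = 0.0
    for i, x in enumerate(np.abs(trigger)):
        level = max(float(x), alpha * level)   # fast attack, slow release
        env[i] = level
    env /= max(float(env.max()), 1e-9)         # normalise to 0..1
    gain = 1.0 - depth * env                   # loud trigger -> lower gain
    return music * gain

# Usage: a sustained pad ducked by a sparse kick-like pulse train.
sr = 44100
t = np.linspace(0, 2, 2 * sr, endpoint=False)
pad = 0.5 * np.sin(2 * np.pi * 220 * t)
kick = np.sin(2 * np.pi * 60 * t) * (np.sin(2 * np.pi * 2 * t) > 0.99)
ducked = sidechain_duck(pad, kick)
```

The design choice worth noting is the asymmetric envelope: the attack is instantaneous so the duck starts the moment the kick hits, while the release is smoothed so the backing swells back in, producing the "pumping" effect the term side-chaining usually refers to.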
What are some criticisms or concerns raised about AI music generation?
-Concerns include the potential for AI to make mistakes, limited flexibility once an output is generated, low fidelity on some tracks, and weaknesses in certain music genres. Additionally, there are ethical concerns about the use of copyrighted material in training AI systems.
How did the AI music generation start and evolve over time?
-AI music generation started with computer-aided compositions in the 1950s and evolved through various stages, including the development of tools like Emmy in the 1980s, which could generate music in the styles of different composers. Modern AI music generation began with neural networks in 2012 and has advanced to the point where it can generate music based on simple text queries.
What is the potential impact of AI music on the music industry?
-AI music could disrupt the industry by providing an unlimited supply of music, which could devalue human-made music and make it harder for musicians to make a stable income. It could also change the landscape for professional musicians and the way music is perceived by the public.
What are the views of musician and YouTuber Rick Beato on AI music?
-Rick Beato believes that AI will not completely replace musicians because people enjoy playing real instruments. However, he acknowledges that AI could impact the income of artists, especially those creating stock or royalty-free music.
What is the role of neural networks in modern AI music generation?
-Neural networks are used to understand patterns in vast amounts of data and create outputs based on user input. They predict the next note or sound in a sequence, considering variables like instrument tone, tempo, rhythm, and sound design choices.
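As a rough illustration of that "predict the next note or sound" loop, here is a toy autoregressive sketch in Python/NumPy. The weights are random and untrained, and the five-event vocabulary is invented; real systems learn their weights from enormous datasets over far richer vocabularies covering pitch, timing, instrument, dynamics and more.

```python
import numpy as np

rng = np.random.default_rng(42)
vocab = ["C4", "E4", "G4", "C5", "rest"]    # hypothetical toy event vocabulary
V, D = len(vocab), 8                         # vocab size, embedding width

E = rng.standard_normal((V, D)) * 0.1        # event embeddings (untrained)
W = rng.standard_normal((D, V)) * 0.1        # output projection (untrained)

def next_event_distribution(context: list) -> np.ndarray:
    """Embed the context, project to logits, softmax into probabilities."""
    idx = [vocab.index(e) for e in context]
    h = E[idx].mean(axis=0)                  # crude stand-in for attention
    logits = h @ W
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

# Autoregressive generation: each sampled event is fed back into the context.
seq = ["C4"]
for _ in range(7):
    p = next_event_distribution(seq)
    seq.append(vocab[rng.choice(V, p=p)])
print(seq)
```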
How does audio diffusion contribute to the generation of music by AI?
-Audio diffusion is a process of adding and removing noise from a signal until the desired output is achieved. It starts with random noise and refines it into a coherent piece of music based on the interpretation of the input prompt.
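Here is a deliberately simplified version of that denoise-from-noise loop in Python/NumPy. A real audio-diffusion model replaces the hand-coded "noise prediction" below with a large neural network conditioned on the text prompt; the sine-wave target and step count are arbitrary stand-ins chosen so the mechanics are visible.

```python
import numpy as np

rng = np.random.default_rng(0)
sr, steps = 8000, 50
t = np.linspace(0, 1, sr, endpoint=False)
target = np.sin(2 * np.pi * 440 * t)      # stand-in for "what the prompt means"

x = rng.standard_normal(sr)               # step 0: pure random noise
for step in range(steps):
    # A trained model would *predict* the noise to remove; here we cheat
    # and compute it directly from the known target.
    predicted_noise = x - target
    x = x - predicted_noise / (steps - step)   # strip away a fraction of it
    # (Real samplers also re-inject a little noise at each step.)

print(f"final error: {np.abs(x - target).max():.6f}")   # ~0: noise became signal
```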
What are the potential legal and ethical issues with AI music generation?
-Potential issues include the use of copyrighted material in training AI systems and the inability to copyright AI-generated artwork. There is also concern about the devaluation of human artistry and the need for new copyright legislation around AI-generated content.
What is the future outlook on AI-generated music according to the video?
-The future is expected to see more polished, higher-fidelity AI-generated music that is easily accessible. However, there is a concern about AI fatigue and the diminishing value of human-made music. Live music performed by humans may become more valuable, and the personal journey of music creation may be affected.
Outlines
AI and the Future of Music Creation
This paragraph introduces the topic of AI-generated music, highlighting the capability of AI to create music across various genres from a simple text prompt. It discusses the mixed feelings of a musician and producer towards AI's encroachment on music creation and mentions platforms like Udio and Suno AI, which are revolutionizing the industry by making music creation accessible to anyone, regardless of musical knowledge.
The Process and History of AI Music
The second paragraph delves into the traditional process of creating music and contrasts it with AI music generation. It outlines the historical context of computer-aided music composition, from the 1957 Illiac Suite to modern applications like Suno and Udio, and discusses the technical aspects behind these AI music applications, including the use of vast amounts of data and neural networks to generate music.
AI Music Generation: Technical Insights and Concerns
This section provides an in-depth look at the technology behind AI music generation, including neural networks and audio diffusion. It raises ethical concerns regarding the training data used by these AI systems and the potential for copyright infringement, and discusses the potential impact of AI-generated music on the music industry and artists' fears of their work being undervalued or replaced by AI compositions.
The Impact of AI on Musicians and the Music Industry
The fourth paragraph discusses the potential impact of AI-generated music on musicians, particularly those outside the top 1% of successful acts. It includes a conversation with musician Rick Beato about the future of music in the age of AI, the challenge of competing with AI-generated music, and the excitement around the possibilities AI presents for making certain aspects of music creation easier.
The Role of AI in Music Production
This paragraph explores the role of AI in the music production process, touching on the potential for AI to improve tasks such as mastering and mixing. It also discusses concerns about the authenticity and emotional connection of AI-generated music compared to human-created music, ending with a reflection on the changing nature of music creation and the importance of human elements in art.
Learning About Neural Networks and AI
The penultimate paragraph promotes an educational resource, Brilliant, for learning about artificial neural networks and other scientific topics. It emphasizes the convenience and interactive nature of the courses, which can be accessed from anywhere at any time, and briefly mentions a podcast interview with Rick Beato for further insights into the topic of AI and music.
Conclusion and Call to Action
The concluding paragraph summarizes the episode's discussion of AI-generated music, inviting viewers to subscribe for more content on science, technology, and business. It also provides a link to the full interview with Rick Beato and ends with a teaser for the next episode of ColdFusion.
Keywords
AI Music Generation
Udio
Suno AI
Musical Knowledge
Copyrighted Material
Neural Networks
Audio Diffusion
Ethical Concerns
Music Industry Disruption
Human Element in Art
AI Fatigue
Highlights
AI has generated a song that is 100% AI-made, including lyrics, rhythm, and vocals, showcasing the capabilities of modern AI in music creation.
The AI music generation platform Udio was created by a group of ex-DeepMind engineers, indicating a shift towards more sophisticated AI music systems.
AI music systems like Udio and Suno AI are making music creation accessible to anyone, regardless of musical knowledge, by using text prompts to generate tracks.
AI-generated music is not just limited to vocals but also includes the creation of backing music, which is considered more challenging for AI.
The video discusses the potential impact of AI on the music industry, including concerns over job displacement and the future of music creation.
AI music generators use vast amounts of data to understand patterns and create outputs based on user input, similar to how large language models generate text.
The technology behind AI music generation includes audio diffusion, which refines noise into a coherent musical piece based on the user's prompt.
There are ethical concerns regarding the training of AI music systems on copyrighted material and the potential for infringement.
Over 200 artists signed an open letter to AI developers and tech companies asking them to stop using AI to infringe upon and devalue the rights of human artists.
The future of the music industry may involve an unlimited supply of AI music, which could devalue human-made music and impact musicians' incomes.
An interview with musician Rick Beato discusses the potential for AI to handle certain aspects of music production, such as mastering and mixing.
AI-generated music could lead to an 'AI fatigue' where human-made music may be undervalued due to the abundance of AI-generated content.
Live music performed by humans may become increasingly valuable in an era inundated with AI music.
The essence of music creation lies in the personal journey of discovery and the human element, which AI cannot replicate.
AI music generators could serve as a great sampling tool for musicians, offering a new dimension to music production.
The video concludes with a story from 1997, highlighting the ongoing debate about the role of AI in music composition and the value of human creativity.
Transcripts
This video was brought to you by Brilliant.

Hi, welcome to another episode of ColdFusion. The following song is 100% generated by AI.

[Music: "ColdFusion, where curiosity ignites and minds take life... can help us make a better future"]

Everything you hear is all artificially generated. By generated I mean the lyrics, which were ChatGPT, the rhythm, the vocals. To get this output, all I did was type in a text prompt of what I wanted. There were a few revisions, but still, the result is honestly insane. And if you think that's a one-off, here's another example. I wanted a classical piece, so I typed this.

[Music]

How about some guitar-driven stuff?

[Music: "...a wild ride, awesome"]

R&B?

[Music: "...our souls, please remember me"]

Awesome. UK garage?

[Music]

All very impressive. Even the album covers were made using AI, and that's just to let you know what we're dealing with here.
As some of you know, I've been a musician for the better part of 20 years and a producer for a decade. If you've watched a ColdFusion video before, you've heard some of my music. So, being a technology enthusiast as well, I do have mixed feelings about this: AI has finally come to something very close to my heart. We've seen AI voice swapping that could get artists to perform songs in different genres, but this is something totally different. With this latest generation of AI music it's not just the vocals but the backing music which is artificial, which in my view is much more difficult for AI to do coherently. What you just heard previously was from a platform called Udio, and it was created by a group of ex-DeepMind engineers. It's a watershed moment for AI music, and they're not the only ones; a few months ago there was a similar application called Suno AI. In a recent video I looked at the AI deception: how some companies are lying and overpromising the capabilities of AI. But on the flip side, I said that there are also some companies obfuscating the true extent of AI replacing jobs. Today's episode falls in the second category. First it was images, graphic design, writing, videos, and now, finally, music. It's clear that AI music systems will eventually impact those in the music industry, but how? In this episode we explore AI music apps: what they're all about, how the technology works, its criticisms, and how AI could disrupt the music industry. I also had the opportunity to speak to the amazing musician and YouTuber Rick Beato to get his thoughts on this, and I also spoke to Taryn Southern, who's considered to be one of the first artists to commercially release music using AI. All right, so on the count of three, let's go.

You are watching ColdFusion TV.
"Creating music is an inherently complex process, but we're streamlining that, making it more accessible and intuitive than ever before." That quote is from Suno's co-founder Mikey Shulman, and it just about summarizes what these AI companies are trying to achieve. In fact, the premise of Suno and Udio is basically the same: now anybody can create music, even with no prior musical knowledge. Just type a text prompt, and away you go. Users can provide subjects, instruments, feelings, and/or their custom lyrics, and in just a minute a track is ready to play: 30 seconds for Udio and 2 minutes for Suno. Both platforms can extend tracks and provide different variations, and both are free to use right now. Some say that Udio's first version is already pretty good, better than Suno's version 3. I've used both of them to gain insight, but I ended up liking Udio's output more, personally, because it was cleaner and possessed a better understanding of less typical genres. So even though the examples I've shown you, and will show you, were made using Udio, keep in mind that a lot of the content in this video also applies to Suno.

"Suno version 3 is now available to everyone, including those on the free plan. It is the best AI music generator by far, and it just got even better with V3." "A completely free AI music generator launched, called Udio." People have been saying this is the ChatGPT moment for AI music, or calling it the Sora of music. A lot of the time people massively overhype new AI launches (that's how everyone gets clicks), but there's some truth to this one.
To understand just how revolutionary this tech is, it helps to know how modern music is made before AI. The first step is to come up with a guitar riff. I made this one a few years ago when mucking around on my acoustic guitar, but for the final track I used a Gretsch electric guitar with a delay and reverb pedal and a distortion pedal. I was going for a calm 3/4 waltz time signature with a post-rock feel.

[Music]

After I recorded this, I hopped into Ableton and built some bass, drums and atmosphere around the guitar riff. Even for this process, every step involves many substeps, including EQing, compressing, adding effects, and just listening to the components in isolation and together.

[Music]

As the song is being created, you'll definitely come across some production problems where frequencies clash; solving these issues is just part of the nature of mixing sounds together. Finally, I added some vocals and then arranged the track. So in total, a finished song can take days or even weeks to complete, and sometimes the songs just don't work out, so you have to try again. But with all of that effort, exploring the sonic stage and pulling a song out of the ether is half the fun. Now, in contrast, with AI all people have to do is type in some words and then get a song. I hope you can see the difference clearly.
Now, here are some observations that I found with Udio. It includes production techniques like side-chaining, tape effects, and reverb and delay on vocals in appropriate areas. In some outputs the vocals seem extremely real, there are harmonies in there, and even for electric guitar it mimics the sounds of different pickups. All just so insane. Some weaknesses: sometimes it messes up big time, and I'll show a couple of examples of that.

[Music]

It's very limited in flexibility; once you get an output, you can't really change it. There's low fidelity on some tracks, and it has weaknesses in some genres, such as UK jungle.
Okay, so what's the story behind these applications? Suno is a venture-backed company founded in late 2023. All four of the founders previously worked at Kensho, an AI tech startup for financial data which was later acquired by S&P Global. Suno even partnered with Microsoft Copilot, and is up to version 3. Like Suno, Udio was founded in late 2023, but the company only recently went out of stealth mode and made its application public a couple of weeks ago. It was founded by an ensemble of former researchers from Google DeepMind, and has the financial backing of the popular Silicon Valley venture capital firm Andreessen Horowitz, and also of musicians like Common and will.i.am. As you've seen, well, heard, the output is a rich combination of all sorts of instruments, so the training data must have been significant. And that leads us to the next section: how do these applications work?
Well, we'll dive in in a bit, but first let's rewind the tape. This isn't strictly the first time this has been tried; making music with the aid of computers goes back to 1957, when Lejaren Hiller and Leonard Isaacson composed a piece called the Illiac Suite. This piece of music is often considered to be the first created with the aid of a computer. There were notable efforts in the 1980s to point computers in a new direction: generative music. In 1984, scientist and composer David Cope created Emmy, which stands for Experiments in Musical Intelligence. It was an interactive software tool that could generate music in the styles of different composers, from short pieces to entire operas.

"And I said, you know, what if I could create a program that would create Bach chorales and Cope and Stravinsky and Mozart and anybody? The only way I could think of doing that was to create a program that was built around a database of music. Let's say I had a database of Bach chorales, all 371 of them, and I had a little program that sat on top of this, the smallest program I could make, that would analyze that music and then create a new Bach chorale that wasn't one of the Bach chorales in the database, but sounded like them all."

Some of the outputs have been used in commercial settings over the years. There's a very interesting story regarding Emmy, and I'll get into it later in the episode, so stick around for that.
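Cope's description above, analyzing a database of chorales and generating a new one that "sounded like them all", can be sketched with a toy recombination program in Python. Emmy's real analysis was far more sophisticated; the phrase data and the stitching rule below are invented purely for illustration.

```python
import random

# A tiny stand-in "database": note sequences in the spirit of chorale phrases.
chorales = [
    ["C", "E", "G", "F", "E", "D", "C"],
    ["C", "D", "E", "G", "F", "D", "C"],
    ["E", "G", "A", "G", "F", "E", "D"],
]

# Analysis: for each note, record everything that ever followed it.
followers = {}
for chorale in chorales:
    for a, b in zip(chorale, chorale[1:]):
        followers.setdefault(a, []).append(b)

# Generation: start the way the sources start, then only take steps the
# database has seen, so the result is new but "sounds like them all".
random.seed(1)
new_chorale = [random.choice([c[0] for c in chorales])]
for _ in range(6):
    options = followers.get(new_chorale[-1])
    if not options:
        break
    new_chorale.append(random.choice(options))
print(new_chorale)
```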
Then there was the Computer Accompaniment System in 1985. It used algorithms to generate a complementary music track based on a user's live input. "The computer accompaniment system is based on a very robust pattern matcher that compares the notes of the live performance, as I play them, to the notes that are stored in the score."
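To make the quoted pattern matcher concrete, here is a toy score-follower sketch in Python. The actual 1985 system was far more robust (handling timing, insertions and omissions); the `match_position` function and its plain-string note representation are hypothetical simplifications.

```python
def match_position(score: list, played: list) -> int:
    """Return the score index best matching the last few performed notes,
    tolerating a wrong or missed note here and there."""
    window = played[-4:]                       # compare the recent notes only
    best_idx, best_hits = 0, -1
    for i in range(len(score) - len(window) + 1):
        hits = sum(a == b for a, b in zip(score[i:i + len(window)], window))
        if hits > best_hits:                   # keep the best-aligned offset
            best_idx, best_hits = i + len(window), hits
    return best_idx                            # where the accompaniment should be

score = ["C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5"]
played = ["C4", "D4", "E4", "F#4"]             # performer fumbles one note
print(match_position(score, played))           # -> 4: keep accompanying at G4
```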
Even David Bowie developed a tool in the '90s. It was called the Verbasizer: a digital approach to lyrical writing. Well, that's all well and good, you might be thinking, but what about the modern stuff? As you probably know, from 2012 the world switched from hard-coded, narrow algorithms to neural networks, and with that the modern AI boom had officially started. But the question has to be asked: how are neural nets being used in music generation? Well, in 2016 Google's Project Magenta turned a few heads when they released an AI-generated piano piece made by deep learning algorithms. Ultimately, the next big step was to generate music based on a simple text query in natural human language. Google's MusicLM did just that, and honestly it was impressive for the time, but far from perfect. This was followed by OpenAI's Jukebox, Stable Audio, Adobe's generative music project, and YouTube's Dream Track. Now, these were all valiant efforts, but they all had the same problem: they were ultimately fairly limited. They worked to a degree, but more often than not the compositions sounded rigid and just not very human, to put it bluntly. But even with such limitations, there were some trying to push the boundaries. Taryn Southern is often considered to be one of the first musicians to use modern AI to release commercial music, and this was all the way back in 2017, if you can believe it. I asked her all about it.

"I was having trouble finding really high-quality music for the documentary that was relatively inexpensive, and so I was looking for alternate options, and I came upon this article in the New York Times that was looking at artificial intelligence as a way to compose music. And I thought, well, that's really interesting. So I started experimenting with it and was blown away by what was possible. You know, this was even before LLMs hit the scene, and I think where we were at with music creation and AI back in 2017 was actually pretty far along. And so I was so excited by what I was hearing that I just decided to create an entire experiment out of the project, and made an album using, I think, four different AI technologies."

Keep in mind that what Taryn did back in 2017 was quite different to what's going on today. There was still a lot of work that had to go into the compositions, but now anyone can type in some text and get a decent output. So how does it all work? What's the tech behind it?
Like most modern generative AI applications, LLMs for example, these systems use vast amounts of data to understand patterns and then create an output based on the user's input. For ChatGPT the output is text; for these AIs it's original songs and lyrics. We'll touch on the ethical concerns a bit later, but you can probably already realize something: a large language model like ChatGPT generates text by predicting the next word in a text sequence, but composing music is significantly more complex due to the number of variables. Instrument tone, tempo, rhythm, sound design choices, the volume of different components, compression and EQ are just some of the variables that have to be considered. In addition, the system must understand what the user wants and how that relates to genres and particular sounds in a coherent yet pleasing way. Not easy by any means. But what about the audio synthesis itself? Many point to audio diffusion as the secret sauce behind generative music. In simple terms, it's the process of adding and removing noise from a signal until you get the desired output. We first saw similar methods used in images, where image diffusion starts off with random noise which is then refined into a final image based on the interpretation of the prompt; audio diffusion works in a similar way. This is a very high-level breakdown just to save time, but if you want to get into the depths of the technicalities, I'll leave a link to a few articles in the source document in the video description, as always.
If you've been following the generative AI space for the last two years, you already knew this part of the video was coming. OpenAI CTO Mira Murati came under public fire following an interview about the training data of their video generation tool, Sora. When asked if the impressive video output from Sora was the result of training on videos from YouTube, Facebook and Instagram, she said that she was, quote, "not sure about that". To many online, the answer was suspicious. With only a few weeks since the launch of Udio and Suno version 3, there's a question that has to be asked: were these systems trained on copyrighted material? One user set out to investigate. They ran their test by entering lyrics similar to "Dancing Queen" by ABBA into the prompt, with no direct mention of the band or the song, but the outputs were very close to the original, including the basic melody, rhythm, and even the cadence of the vocals. Now, even human musicians are continuously accused and taken to court for a lot less. As the user later points out, it's not impossible to achieve these results without the original song being present in the dataset, but the similarity is striking. This result was also solidified by other experiments carried out by the user. For obvious reasons I can't play any of the copyrighted music here to show you the comparisons, but as always, I'll link the original article in the show notes. But here's the thing: it seems like the founders already knew this was coming, and they accepted it as part and parcel of the new AI startup culture. Antonio Rodriguez, one of Suno AI's investors, told Rolling Stone that he was aware of potential lawsuits from record labels, but he still decided to invest in the company. Suno AI has stated that it's having regular conversations with major record labels and is committed to respecting the work of artists. Whether that's actually true is questionable, especially when, a few weeks ago, over 200 artists wrote an open letter to AI developers and tech companies to, quote, "cease the use of AI to infringe upon and devalue the rights of human artists", end quote. The letter was signed by some big names, old and new, from Bon Jovi to Billie Eilish and Metro Boomin. All of that begs the question: what does this mean for the future of the music industry?
One way to think of this: AI in the broadest sense is like an out-of-control central bank, where loose monetary policy is the equivalent of printing money, leading to inflation; but this time it's diluting and devaluing the supply of music instead of money. If you're not in the top 1% of successful musicians, the music industry is already a hard place to make a consistent and stable income. From TikTok transforming how songs are consumed and discovered, to digital music platforms' relationships with musicians, the industry is already difficult, but now there's an unlimited supply of AI music coming right at you like a tsunami, and it's going to be hard to stay afloat. If you're a musician who exclusively makes stock music or royalty-free sounds for commercial purposes, none of this is good news. But what about other musicians?
I sat down with Rick Beato to discuss what this means for the future.

"So, do you think AI will completely replace musicians one day, or do you think that's not going to happen?"

"No, I don't think so. People have too much fun playing real instruments. I'm not going to stop playing just because there's AI guitar things."

"Say Spotify in the future has their AI versions of songs, and then you have people using the models that already exist to make their own music and uploading that to Spotify. What do you think that would do to the potential income of the real artists who are, you know, making and composing their own music?"

"Oh, it'd definitely have an impact on it, yes. It's just diluting, you know, because there's no way real artists could compete with generative creations, right? I mean, how many things can you put up there in a day? How many things can you generate? It'd be very difficult to compete with that."

"This is an interesting question: do you think that one day we'll actually have a number one AI Billboard song?"

"Yes. Probably two years from now."

"Okay, you heard it here first, everyone."

"There will be a lot of news stories about it, and people will say, boy, I really like this, and then they'll just create more, and there will be more. I can see a time when, you know, nine out of the top 10 songs are AI-generated."

"Wow. Within...?"

"Within 10 years. I said in one of my videos, a year and a half ago I think it was, that people won't care if it's AI if they like the song, and I firmly believe that. It's just a matter of how people are being compensated."

"Now, are you excited about any aspect of this at all?"

"Totally excited. I mean, I think there are jobs that can be made easier. Can mastering be done better by AI? Probably. Can mixing be done better by AI? Probably. There's a lot of things: vocal editing, picking between takes, at least doing rough edits; rough edits for YouTube videos, yes. I'm excited about it. You know, when drum machines came around in the '80s, it was 'oh, they're going to take drummers' jobs away'. Then drummers back in the '80s emulated drum machines: they'd play drum machine fills, they'd do all these same kinds of fills. And then people would bring in real drummers to emulate drum machines; they'd have their things programmed."

"Do you think becoming a professional musician, or even surviving as one or thriving as one, will be more difficult because of AI in that case?"

"Not sure about that. I don't know if becoming a professional musician will be affected by it. I think people enjoy playing music, regardless of whether there are AI versions out there that people are listening to. You know, the rise of autotune and things like that, where you have a synthetic sound enabled by these programs; I mean, once you have a pitch-corrected voice, how far of a stretch is it from an AI-created voice that actually has tuning imperfections in it, like a real person? That's not that different, honestly. How many things are actually generated by computers? Grab some samples off Splice, you create the hi-hat track with an 808 hi-hat, and you've got your kick, and you're making a hip-hop tune. Then you grab, you know, some keyboard sounds, and you don't even know how to play the keyboards, and you hold down some things, and you get some samples from here and there, you put them in, and then you create your vocal over that, you autotune it, and then you move notes around. It's pretty synthetic at that point, you know. What's the difference between that and AI?"
The current wave of AI, including music generators, is made possible by neural networks. But what do they do, and how do they work? Fortunately, there's a fun and easy way to learn about that: Brilliant. I've talked about how I've used Brilliant's courses on artificial neural networks before, and for good reason, but they also have great interactive STEM courses in anything from maths to computer science and general science. It's convenient because you can learn wherever you like, whenever you like, and at your own pace. There are also follow-up questions to make sure you've digested what you've learned. So whether you just want to brush up on your learning or need a refresher for your career, Brilliant has you covered. You can get started free for 30 days, and for ColdFusion viewers, Brilliant is offering 20% off an annual plan. Visit the URL brilliant.org/coldfusion to get started.
Okay, so back to the video. Last year, a US federal judge ruled that AI artwork can't be copyrighted. As these apps gain popularity we need to protect artists, but how this is done and how it will be implemented is still up for debate. Things are uncertain, but one thing is for sure: we're heading into a new era of copyright legislation around art and artificial intelligence. If you're more interested in the law of copyright and artificial intelligence, check out my conversation with Devin from LegalEagle. In the future we're going to be inundated with AI music, but at that stage live music performed by humans will become increasingly valuable. Another thing that I'm worried about is a form of AI fatigue, where any amazing human-made music that you hear could be diminished in a few years, because those who hear it could just assume it's AI-generated. As for me personally, I think music AI generators could make for a great sampling tool. For example, I turned that classical piece generated by AI at the beginning of the video into a full track; I'll play part of it as an outro to this episode, and I'll leave a link to the full version below. But in saying that, I can't help but feel a little sense of loss. That joy that comes from having an idea in your head and turning it into musical form is no longer strictly human. Music creation is a personal journey of discovery, and it's unsettling that the art form is changing. But on the flip side, I can understand how freeing this is for those who have no musical knowledge and just want to create something; I'm not blind to that.
I know we covered a fair bit in this episode, but let me tell you a little story that I think perfectly encapsulates it all. We have to go back to 1997, to the University of Oregon, where a small audience is patiently waiting to see a battle between a musician and a computer. Dr. Steve Larson is the musician in question, and he also teaches music theory at the university. He's ready to go up against a computer to compose a piece of music in the style of the famous Johann Sebastian Bach. The idea is simple: three pieces will be played live, one composed by the original Bach, one by Dr. Larson, and one by Emmy, the computer music program which I mentioned at the start of the episode. All three entries will be performed live to an audience, and the audience has to guess which piece was composed by whom. Once all the performances ended, the audience incorrectly thought that Larson's piece was made by the computer, and the one composed by Emmy they thought was the original Bach composition. Dr. Larson was genuinely upset about this. Quote: "Bach is absolutely one of my favorite composers. My admiration for his music is deep and cosmic. That people could be duped by a computer program was very disconcerting." End quote.

Now, you could take that quote, put it in any conversation in 2024 about AI music, and it would be just as relevant. The circumstances have changed, but as humans we still perceive art the same. David Cope, the inventor of Emmy, once revealed that his artificial program can make, quote, "beautiful music, but maybe not profound music", and I think I agree with that. Ultimately, AI generations are going to get more polished, better sounding, higher fidelity, and it's all going to be at our fingertips. But we have to remember: it's the messiness of our human selves that informs the art, and that's the part we relate to most. What makes us human is that we can listen to music, not just hear it.

And that is where we're at with AI-generated music. So, I hope you enjoyed that episode. If you did, feel free to subscribe; there's plenty of other interesting stuff here on ColdFusion on science, technology and business. Anyway, that's about it from me. My name is Dagogo, and you've been watching ColdFusion, and I'll catch you again soon for the next episode. Oh yeah, and if you want to check out my full interview with Rick Beato, it's on the second podcast channel; I'll leave a link below.

[Music]

ColdFusion. It's new thinking.