Free AI Audio Tools You Won't Believe Exist
Summary
TLDR: This video introduces four incredible free AI audio tools, with a special focus on AI music generators that run locally on your computer, no internet connection required. It highlights the AI features in Adobe Audition, Premiere Pro and DaVinci Resolve, along with Audacity's AI plug-ins, which handle speech transcription, noise suppression, music stem separation and music generation. Hands-on demos show how powerful these tools are and how simple they are to install, leaving viewers excited about the potential and future of AI audio tools.
Takeaways
- 🎉 The video introduces four incredible free AI audio tools, ideal for anyone interested in AI music generators.
- 🚀 These AI tools run locally with no internet connection required, which is a technical breakthrough.
- 🎙️ Adobe Audition and Premiere Pro now integrate speech enhancement that can improve recording quality.
- 🎶 DaVinci Resolve offers tools such as vocal isolation, strengthening its audio processing capabilities.
- 🔧 Audacity, in partnership with Intel, makes its AI plug-ins easy to install; the latest version at the time of recording is 3.4.2.
- 📝 With Audacity's AI plug-ins, users can transcribe audio locally on their own computer, no cloud service needed.
- 🔇 The AI tools also handle noise suppression, offering more options than the traditional noise reduction tool.
- 🎵 AI-powered local stem separation can split a music track into its different components.
- 🎹 AI music generation can create brand new music tracks from a text prompt.
- 🔧 Tweaking the settings and using more powerful hardware improves the quality and speed of AI music generation.
- 🌟 Future Audacity updates will bring even more AI tools and features to take audio processing to a level never seen before.
Q & A
Which free AI audio tools does the video introduce?
-The video introduces four free AI audio tools. The main focus is the AI plug-ins in Audacity, alongside the speech enhancement features in Adobe Audition and Adobe Premiere Pro and the vocal isolation feature in DaVinci Resolve.
How do you install Audacity's AI plug-ins?
-First make sure you're running the latest version of Audacity (3.4.2 at the time of recording). Then download the files from the link in the video description, extract them, and drag them into the folder where Audacity is installed. Open Audacity, go to the Edit menu, choose Preferences, enable mod-openvino under Modules, and restart Audacity to activate the plug-ins.
What features do Audacity's AI plug-ins provide?
-Transcription, noise suppression, music separation and music generation.
What options are available when transcribing with Audacity's AI plug-in?
-You can choose CPU or GPU processing, pick transcribe or translate mode, and set the source and target language.
How do you use Audacity's AI plug-in for noise suppression?
-In the Effect menu choose the OpenVINO effects, then Noise Suppression; pick CPU or GPU processing and apply the plug-in to reduce background noise.
Which stems can the music separation feature split out?
-Music separation can split out the instrumental backing and vocals, or break a track down into drums, bass, vocals and other instruments.
How do you generate new music with Audacity's AI plug-in?
-In the Generate menu choose Music Generation, enter a text prompt (such as a music style and instruments), set the parameters (duration, seed, inference steps, etc.), then apply to generate new music.
What is the AI music generation technology mentioned in the video based on?
-The video suggests it is likely based on Riffusion, an audio model that works like Stable Diffusion for audio.
What are the advantages of Audacity's AI plug-ins?
-They are completely free, run locally without an internet connection, and are easy to install and use.
What does the creator expect from future Audacity updates?
-The creator hopes Audacity will keep partnering with Intel to develop more new AI tools and plug-ins, taking Audacity to a level never seen before.
What do these AI audio tools mean for audio creators?
-They give audio creators powerful local processing without relying on cloud services or third-party platforms, making audio editing and creation more convenient and efficient.
Outlines
🎵 Introducing free AI audio tools
This section introduces four free AI audio tools, stressing that they run locally on your computer with no internet connection. It starts with the integrated speech enhancement in Adobe Audition and Premiere Pro, then highlights DaVinci Resolve's audio processing features. It then moves to Audacity and how to install the AI plug-ins, and shows how to use them for transcription, noise suppression and audio generation.
🔊 AI noise reduction and separation
This part goes into detail on AI-based noise reduction and music separation. It first shows how the AI noise suppression quickly cleans up background noise, then compares the AI approach with the traditional noise reduction effect. It then introduces AI music separation, which can split out the vocals, drums, bass and other instruments of a track, which is very useful for music production and editing.
🎶 AI music generation and style remixing
This section explores AI for music generation and style remixing. It first shows how to remix existing music into a different style, for example turning a synth track into a piano mix, then demonstrates generating a brand new music track from nothing but a text prompt. The generated music may still need tweaking and refinement, but the technology shows huge potential for the future.
🚀 The future of AI audio tools
The final part looks at where AI audio tools are heading. It covers Audacity's partnership with Intel and their plan to keep developing new AI tools to push audio editing and creation to a level never seen before, underlined by a statement from product manager Martin Keery. The video closes by encouraging viewers to follow how these tools develop and to share how they would use them in their own work.
Keywords
💡AI tools
💡Audio transcription
💡Noise reduction
💡Music style remix
💡Music generation
💡Runs locally
💡Audacity
💡Plug-ins
💡OpenVINO
💡Whisper model
💡Music separation
Highlights
Introduces four incredible free AI audio tools.
The AI music generator runs locally on your computer with no internet connection.
Adobe Premiere Pro has integrated speech enhancement.
DaVinci Resolve offers features such as vocal isolation.
Introduces brand new AI audio tools built in partnership with Intel.
Audacity makes the AI plug-ins easy to install.
Audacity's AI plug-ins can transcribe audio.
Shows how to transcribe locally on your computer using the Whisper model.
Audacity's AI plug-ins also include noise suppression.
Compares the AI tool with the traditional tool for noise reduction.
Introduces AI-powered local stem separation.
Shows how to separate music and vocals with the AI tool.
Shows how to split out drums, bass, other instruments and vocals.
Discusses the hot topic of AI music generation and shows how to generate music inside Audacity.
Shows how to generate a brand new music track in Audacity from a text prompt.
Audacity product manager Martin Keery talks about plans for future AI tools.
The video wraps up with why these free tools are so valuable and unique.
Transcripts
In this video, I'm going to show you four AI tools for audio that I cannot believe
are free.
And you're going to want to stick around to the end
if you're interested in AI music generators that run locally
on your computer with no need to be connected to the Internet.
This is groundbreaking.
Let me show you what it is.
Now, you might say this is the dark horse of A.I.
in Audio.
There's been Adobe Audition and Adobe Premiere Pro. Now Premiere has integrated
speech enhancement.
If I enhance the speech, does it get any better?
Do I sound like I'm in a pro studio without a big, noisy, blowing fan?
This is a pretty cool feature. It makes things sound better.
DaVinci Resolve has done things such as vocal isolation.
"This is Mike Russell on MusicRadioCreative.com!"
But this editor completely threw me out of nowhere
with a load of a surprise: it introduced four tools in partnership with Intel
to get you sounding great and making new audio using AI
on your computer with no internet. It's incredible.
Audacity is the tool I'm talking about,
and it's incredibly easy to install these A.I.
plugins.
All you need to do is make sure you're running the latest version of Audacity,
which at the time of recording this video is 3.4.2.
And then you go to the web link that I'll link down below
in the description to this video.
Download a couple of files.
Once you've done that, extract them, go into the folders
and drag them into the folder on your computer where Audacity is installed.
Once you've done that, you'll open up Audacity.
You'll go into the Edit menu, you'll look for Preferences,
and then when Preferences loads, go to Modules and make sure you find
mod-openvino and change it from New to Enabled.
After that, you're going to need to restart Audacity
and you'll then find that the plug-ins are available and enabled under Effect.
They're here at the bottom.
You'll also find them under Generate and also Analyze as well.
So let's get started and look at the first AI plug-in, which is transcription.
Now, this is incredible.
Usually you'd have to use a cloud service.
In fact, I've been happily paying money to have my stuff transcribed
using things such as OpenAI's Whisper and other
great tools that I've mentioned in other videos on my channel.
But now you can do it locally on your computer using audacity.
And I'll show you how.
Let me open an audio file.
Here is a podcast that I've been working with recently,
and you'll see I'm
just going to transcribe a little bit, so I'm actually going to zoom
in a little bit closer and we'll maybe just take this little sample here.
Let's play it back for good measure.
All right, James, so you are an urban planner
and a skater, and I need the design guide from you.
I need to know what cities.
Okay, perfect.
That's a nice little sample of audio, so I can work with that.
Now, all I need to do is click Analyze, and then I'll click OpenVINO
Whisper Transcription.
Now, if I'm not mistaken, Whisper is the model that OpenAI created.
It's one of the best transcription models that exists in the present day.
So this is using the whisper model locally on your computer.
That is insane
because I've been happily paying for API credits to do this in the past.
So what do we got here?
Well, we've got the inference device so you can use your CPU
or if you've got a nice fat GPU, you can of course select that.
There's only one Whisper model to pick from at the moment.
The mode is obviously transcribe,
but you could also use this to translate it into another language.
So I could click, translate and select any language that's there.
But let's start off with transcribe, and source language we'll just leave
as auto. Click apply.
And that literally just took seconds to process.
Obviously I sped that up for this video and then you'll see.
Alright James, are you an urban planner and a skater?
And I need the design guide from you.
I need to know the cities. And it even puts the punctuation in there,
so incredibly well.
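As an aside, if you want the same kind of local Whisper transcription outside of Audacity, here is a minimal Python sketch using the open-source openai-whisper package. That library choice is my own assumption for illustration, not the OpenVINO plug-in itself, and the file name is a placeholder.

```python
# Minimal sketch: local Whisper transcription in Python (not the Audacity
# OpenVINO plug-in, just the same model family run locally).
# pip install openai-whisper
import whisper

model = whisper.load_model("base")  # small models run fine on CPU

# task can be "transcribe" or "translate"; language=None auto-detects,
# mirroring the plug-in's "auto" source language setting.
result = model.transcribe("podcast_clip.wav", task="transcribe", language=None)

for seg in result["segments"]:
    # each segment carries start/end times in seconds plus the text
    print(f'{seg["start"]:7.2f}  {seg["end"]:7.2f}  {seg["text"].strip()}')
```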
If I want to take this transcript out, all I need to do
is go to file export other you'll see export labels is grayed out.
That is because I need to select the track there with my transcript on.
Then I can go file export other export labels.
And there we go.
I can save it and I can actually go into my downloads folder.
Boom, open the transcription and you'll see it's right here.
I've got that transcription
with some timestamps that seemed meaningless to me at first.
They might actually be the point
in the original recording at which this audio was said.
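Those exported labels are just a plain tab-separated text file, with a start time, end time and label text on each line, which is what those timestamps are. A tiny sketch for turning that file into readable text, with a placeholder file name:

```python
# Minimal sketch: read an Audacity label export (start<TAB>end<TAB>text
# per line) and print it as readable text.
# "transcription.txt" is just a placeholder file name.
with open("transcription.txt", encoding="utf-8") as f:
    for line in f:
        start, end, text = line.rstrip("\n").split("\t", 2)
        print(f"[{float(start):.2f}s - {float(end):.2f}s] {text}")
```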
So there is the Transcribe feature inside Audacity with the new AI plug-ins.
If you like what you see so far, throw a like, also subscribe to my channel
because I'm always covering stuff like this, and leave a comment.
Let me know how you'll use it.
It's actually good to have the local ability to transcribe:
maybe you're an enterprise and you don't want to
throw your audio up into the cloud or share it with third-party companies.
This is something you can keep completely local, completely on your own computer.
So let's go in and have a look at something else.
This is the ability to do some noise suppression with AI.
Now, if I play this clip I've selected here.
Okay, James, now I've spoken to.
So you can hear there's a lot of background noise going on now.
If I go into Effect, OpenVINO effects, and I look for noise suppression,
I can again instantly using either CPU or GPU,
use this noise suppression model to remove that background noise.
Let's give it a try.
And there you go.
In a matter of a few seconds, we've actually got a much cleaner track.
Let's listen to the noise suppression tool.
Okay, James.
Now I've spoken to a skater in Paris.
This could be a great plug-in.
I'm really curious though, how it stacks up
against the OG effect: Noise Removal and Repair, Noise Reduction.
Get the noise profile.
Once we've done that, I'll then select that existing clip,
go into Effect, Noise Removal and Repair, Noise Reduction,
and we'll just run it on default settings, which should be reasonable.
Now, that took just like literally one second as opposed to like a minute
or so using the AI tool. Let's listen to the difference.
Okay, James, Now I've spoken to a skater.
Okay?
So it's kind of moved the noise down in the background,
but it hasn't totally removed it.
I can just undo,
go back in again and try one more time, maybe make the settings more aggressive.
The noise reduction in decibels, we'll make that harsher so it removes
more of the noise, and we'll crank the sensitivity up a little bit.
We'll leave the smoothing as is.
Okay, James.
Now I've spoken to a skater in Paris.
Okay, so with some more aggressive settings on the non-AI tool,
the noise reduction tool inside Audacity,
we are getting quite a nice clean piece of audio.
We're retaining more of the fidelity, some more of the frequencies in the audio,
but I am starting to notice
some artifacting in the audio that I wasn't getting with the
AI tools, so both are worth trying and running on your audio.
But now audacity has both the
original noise reduction and AI powered noise reduction.
I think it's brilliant and the fact that this runs
totally locally on your computer is incredible.
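If you want to run a similar comparison outside Audacity, the open-source noisereduce package does spectral noise reduction in Python. This is my own stand-in for illustration, not the OpenVINO suppression model, but it exposes knobs comparable to the classic Noise Reduction effect; the file names are placeholders.

```python
# Minimal sketch: spectral noise reduction with the noisereduce package.
# Not the OpenVINO suppression model from the plug-in, just a comparable
# local approach. "noisy_clip.wav" is a placeholder file name.
# pip install noisereduce scipy
import noisereduce as nr
from scipy.io import wavfile

rate, data = wavfile.read("noisy_clip.wav")
if data.ndim > 1:
    data = data.mean(axis=1)       # mix down to mono for simplicity
data = data.astype("float32")

# prop_decrease plays a role similar to Audacity's noise reduction (dB)
# slider: 1.0 removes the estimated noise fully, lower values are gentler.
cleaned = nr.reduce_noise(y=data, sr=rate, prop_decrease=0.8, stationary=True)

wavfile.write("cleaned_clip.wav", rate, cleaned)
```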
Now let's move on to the next two tools.
And these are the real dark horses, the ones that you're really going to want
to try out.
And I think it's incredible that these run 100% on your own computer
without needing to use anything that's in the cloud.
First of all, I'm going to open a new file
and it's actually a jingle I've got here from a while ago,
and I'll play it to you so you can hear it in its entirety.
Be gonna the
one for you.
Right?
So this is now AI-powered local stem separation.
What that means is you can split up a music track
into its different components.
This is great if you want to get just the music backing of a piece of music
or if you want to get just the vocals from the music, you can isolate
the background music or the sung vocals and it's pretty incredible.
So I'm just going to double click
this track to make sure it's selected, and then it should work.
Now we get this pop-up and I'm just going to leave it to separate
the instruments and the vocals, and I'm going to set it to my CPU.
But if you have a GPU, of course you can do that too.
So let's solo the music.
I'm absolutely blown away by that.
That is 100% local on device music separation.
It's pulled the vocals right out of it. Let's go and solo
the vocals, see how good they are.
I'd be Poppy country.
Hey. Yeah.
94.3 right off the bat.
Okay. Yeah.
We're definitely losing some fidelity in the vocals.
They're not as clean and bright, but a little bit of EQ could probably fix
that ever so slightly and get you a really nice a cappella, and I'm sure
it'll vary from track to track that you're putting into Audacity as well.
But for a free open source on device tool that uses A.I.
to analyze your tracks and split them like that, that's incredible.
But it doesn't stop there. Again, like and subscribe
if you're enjoying these tools because, like, I'm blown away, these are insane.
I can see a million use cases.
So let's go back in because there's one thing I didn't show you.
When we go into that music separation effect, you can actually pop down
the separation mode and it gives you another version
which gives you drums, bass, vocals and other, which is insane.
Let's apply this and see what kind of a job
it does on this very short piece of audio with music and vocals.
Okay, And here we go.
So we've got the drums here.
Let's just start playing from where the drum beats are.
That is nuts.
That is totally insane.
So that's a mix of drums.
It's split the drums perfectly out of that music track.
Let's go to the mix with the bass now.
This is so incredibly cool.
Okay, so if you're remixing tracks,
or doing anything like that,
this is going to be an absolutely insane addition to your audio tools.
We've got the other instruments.
Okay,
So it's basically pulled the synths out of that track, which is incredible.
And I think the final track here is just going to be the vocals only part.
Be Funky.
Gonna be okay?
Yeah.
94.3 rather than right.
I am just incredibly stunned by the great job
that that tool has done with splitting it all up, so I can split just vocals and music,
or I can split it down
into like the bass, the drums, the other instruments and the vocals.
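For anyone who wants this kind of split from a script, the open-source Demucs project does the same two-stem and four-stem separation locally. Whether the Audacity plug-in wraps the same model is an assumption on my part, and the file name below is a placeholder.

```python
# Minimal sketch: local stem separation with the open-source Demucs package.
# pip install demucs
import demucs.separate

# Four stems (drums, bass, other, vocals); output lands under
# ./separated/<model name>/jingle/
demucs.separate.main(["--device", "cpu", "jingle.wav"])

# Two-stem variant: vocals vs. everything else
demucs.separate.main(["--two-stems", "vocals", "--device", "cpu", "jingle.wav"])
```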
But there is one more tool
that I want to show you, and I think I've saved the best till last.
AI music generation is a hot topic right now.
People are using tools such as MOBA to do this.
This is a very popular tool.
There are also other tools that can generatively make audio tracks. Meta
have got MusicGen, Google have been getting into that space as well,
but this tool allows you to do it right here inside Audacity.
Now there are two ways you can use this tool.
First of all, I'm just going to solo these other instruments and select them all.
You can use
original music that you've already got and remix that to be something else,
or you can generate something completely new using a text prompt.
So first I'm just going to remix this little piece of music
and then we'll attempt to use it to generate a completely new music
track, like Out of the Blue, just using Artificial intelligence.
So here we go. Just to remind you,
it sounds like that.
So I'm going to double click, select everything,
go to Effect, OpenVINO effects, and we'll go for Music Style Remix.
And here you'll see there's a bunch of different settings.
Now I'm going to set these all to CPU, but of course if you've got a GPU,
you can absolutely use your GPU for this, it'll probably be quicker.
And then the prompt here is just at the top:
what to remix it to.
So I'm just going to put piano, okay, I want to remix those synths
into a kind of piano mix.
I'm going to leave everything else as is
to see exactly what this does just on its default settings.
Let's apply. We might need to wait a little while.
Of course I'll speed that wait up and then we'll listen to the result.
This is
okay.
I've got to say, I'm not super thrilled with that remix.
It was like a bit weird and boxy,
but who knows, maybe I prompted it wrong or had some settings wrong there.
Now obviously I could try with other prompts: a kick
drum, big bass.
You don't?
Well, if you're going for psychedelic sounds,
then that's definitely going to work.
So okay, maybe I haven't quite mastered the music style generation plug in
and I probably need to spend some more time refining my prompts
and also the other settings to figure out how it works.
But I did want to show you how the completely brand new music Generation
feature works by starting a brand new Audacity window and going to generate.
So this is music generation. That's right.
AI music generation on your computer.
Let's see how this one works.
Now, immediately when you open it up, it gives you the opportunity
to set a plethora of settings and choose again between CPU and GPU,
if you've got it. I'm
just going to stick myself on the CPU for all of this generation.
And the box I'm going to focus on here really is the kind of music,
so essentially the text prompt we use to make something new. Duration
here I've set to 10 seconds so it can be generated in a relatively short time
to test this feature out.
Everything else I'm going to leave on the default settings for you.
So we'll just type in something like Future Trance
electric guitar solo.
Let's try that.
Okay.
It's okay as a music track, but I'm not quite sure it got my prompt.
So this time I've typed in relaxing piano music.
Let's generate
again.
We're getting very interesting creations here,
so I guess the better the quality of the prompt
and the better the settings you use, the better the audio is going to be.
So I'm asking for tropical house music this time instead.
Let's go for something else, like vibes, and the strength I'm going to change to one.
The seed,
well, that would just be a custom seed to start the music generation from,
so I'll leave that. Down to the guidance scale,
let's turn that to five
and see if that makes a difference, and we'll leave the inference steps at 20.
In fact, actually, I'm
going to try going out on a limb here and increase them to 30.
We've also got a scheduler so we can mess about and try something else.
Let's try a different scheduler and see if that one makes a difference.
See what we get from this Generation.
Okay, so a little bit of tweaking,
a few more inference steps, and it came out with a result like that.
So you can see a lot of playing, a lot of tweaking, and also maybe
using a meaty GPU can make that generation time a little bit faster.
For me, those tracks were generated in just around 5 minutes for 10 seconds,
so you can see how it would take quite a long time to produce a full track
using just a CPU alone, maybe faster with the GPU.
Just a little note: under the hood, I think it's using Riffusion,
which is kind of like a Stable Diffusion for audio.
So this technology is literally the worst it's ever going to be.
It's going to get much, much better.
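To get a feel for the same knobs (prompt, inference steps, guidance scale, seed, duration) in code, here is a sketch using the AudioLDM pipeline from Hugging Face diffusers. That's my own substitute for illustration, not Riffusion or whatever the plug-in actually ships, but the parameters map almost one-to-one onto the plug-in's settings.

```python
# Minimal sketch: text-to-audio with diffusers' AudioLDM pipeline.
# A stand-in for illustration, not the model the Audacity plug-in uses;
# the parameters mirror the plug-in's settings.
# pip install diffusers transformers scipy torch
import torch
from diffusers import AudioLDMPipeline
from scipy.io import wavfile

pipe = AudioLDMPipeline.from_pretrained("cvssp/audioldm-s-full-v2")

generator = torch.Generator().manual_seed(42)   # fixed seed, like the Seed box
audio = pipe(
    "tropical house music, relaxed summer vibes",
    num_inference_steps=30,        # more steps: slower but usually cleaner
    guidance_scale=5.0,            # how strongly to follow the prompt
    audio_length_in_s=10.0,        # duration, as in the plug-in
    generator=generator,
).audios[0]

wavfile.write("generated.wav", rate=16000, data=audio)  # AudioLDM outputs 16 kHz
```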
And I do think that Audacity have great plans.
But don't let me tell you that,
I'll let the product manager, Martin Keery, speak in his very own words.
this is just a first step.
We hope to continue partnering with Intel to develop all kinds of new A.I.
tools in the future to help take audacity to a level no one's ever seen before.
So just listening to what Martin said there alone, I think there are some really
good things in store.
If you're a fan of AI and audio generation and creation and using Audacity,
I'm definitely going to be watching this editor in the future like a hawk
to see what new features, plug-ins and AI tools are introduced to it.
I hope you found value in this video and found four tools that you didn't know existed
that work for free
and on your computer without the need for an internet connection.
That is pretty cool.
If you've enjoyed it, do like this video.
Subscribe to my channel and leave a comment down below.
Let me know how you'll be using these tools in your own work.