Free AI Audio Tools You Won't Believe Exist
Summary
TLDR: This video introduces four free AI audio tools, the standout being an AI music generator that runs locally with no internet connection. Adobe Audition, DaVinci Resolve, and Audacity all use AI for speech enhancement, source separation, and noise suppression. Audacity in particular is free and open source, and offers remarkable features such as local speech recognition with the Whisper model, stem separation, style remixing, and the generation of entirely new music. All of these tools run locally, keeping your data private while opening up new ways to produce audio.
Takeaways
- 🎶 You can edit audio using AI tools
- 🚀 These AI audio tools are free to use
- 💻 There is an AI music generator that runs locally with no internet connection
- 🎧 Adobe Audition and DaVinci Resolve also offer AI-powered audio editing features
- 🔧 The AI plugins are easy to install in Audacity
- 📝 Speech recognition can transcribe audio to text locally on your computer
- 🔇 AI-powered noise suppression makes it possible to extract cleaner speech
- 🎼 AI can separate a song into stems, making karaoke or backing tracks possible
- 🎵 Audacity's AI plugins can remix existing music or generate entirely new music
- 🛠️ These AI tools keep improving, with more advanced features planned for the future
- 🔗 These AI audio tools expand the possibilities of audio editing and creation
Q & A
What audio tools are introduced in the video?
-The audio tools introduced in the video are Adobe Audition, Adobe Premiere Pro, DaVinci Resolve, and Audacity.
What feature has been newly integrated into Adobe Premiere Pro?
-The feature newly integrated into Adobe Premiere Pro is speech enhancement.
What AI tools are used in DaVinci Resolve?
-DaVinci Resolve provides AI-powered vocal isolation, demonstrated in the video on a "Mike Russell on Music Radio Creative Dot Com" jingle.
How do you install the AI plugins in Audacity?
-To install the AI plugins, run the latest version of Audacity (3.4.2 at the time of recording), download the files from the link provided with the video, extract them, and drag them into the folder where Audacity is installed. Then open Audacity, go to Preferences from the Edit menu, select Modules, and enable mod-openvino.
How is Audacity's transcription plugin used?
-With an audio file open, click Analyze and choose OpenVINO Whisper Transcription. Select the inference device (CPU or GPU), choose the model, and set the mode to transcribe or translate as needed.
How does Audacity's noise suppression feature work?
-The noise suppression feature is found under the Effect menu. Choose Noise Suppression from the OpenVINO effects and run it on CPU or GPU to remove background noise.
What about the AI-powered music separation feature?
-The AI-powered music separation feature can split a track into its individual components, making it possible to extract just the backing music or just the vocals.
How can you generate new music in Audacity?
-You can either remix existing audio with the music style remix effect, or generate completely new music from a text prompt.
What is the outlook for Audacity's AI plugins?
-The outlook is very bright: through the collaboration with Intel, new AI tools will be developed to take Audacity to a level no one has seen before.
What are the final considerations regarding audio generation?
-The final considerations are generation time and quality. Generating a full track on a CPU can take a long time, while a GPU speeds things up considerably. The quality of the prompt and the settings also make a big difference to the quality of the generated audio.
How did the creator of the video feel about Audacity and the AI tools?
-The creator found Audacity and its AI tools extremely powerful, highlighting the ability to do audio editing and music generation locally. They also stressed that no internet connection is needed and that privacy is preserved.
Outlines
🎵 Introducing the AI audio tools
This video introduces four AI tools for working with audio, with a particular focus on an AI music generator that runs locally with no internet connection. Alongside well-known tools such as Adobe Audition, Premiere Pro, and DaVinci Resolve, it shows how to use AI plugins in Audacity.
🔊 AI-powered speech transcription
Audacity's AI plugins can transcribe speech on your own computer without any cloud service. The OpenVINO Whisper plugin is used to transcribe an audio file and export the result.
🎶 AI separates music into stems
With the help of AI, a music track can be separated into its component parts. Audacity's AI plugins can extract the vocals, bass, drums, and other instruments individually, which opens the door to remixing and creating new songs.
🎵 AI music generation
New music can be generated inside Audacity from nothing but a text prompt. The video also covers how the AI remixes music styles, the generation process, and how to adjust the settings.
🌟 The future of AI audio tools
The video discusses the future development of Audacity's AI tools and plugins, predicting that more AI audio tools will appear through the partnership with Intel. It aims to spread the word about the value of these free AI audio tools.
Keywords
💡AI tools
💡audio transcription
💡noise suppression
💡music separation
💡AI music generation
💡local processing
💡Adobe Audition
💡DaVinci Resolve
💡Audacity
💡OpenVINO
💡Whisper
Highlights
The video introduces four AI tools for audio that are free and can run locally on your computer without the need for an internet connection.
Adobe Audition and Adobe Premiere Pro have integrated speech enhancement features, allowing users to improve audio quality and make it sound like it was recorded in a professional studio.
DaVinci Resolve has introduced tools such as vocal isolation, enabling users to create high-quality audio without the need for expensive equipment.
Audacity, a widely used audio editing tool, has introduced AI plugins that can be easily installed to enhance its capabilities.
The OpenVINO Whisper Transcription plugin allows for local transcription of audio, eliminating the need for cloud services and API credits.
The AI-powered noise suppression tool in Audacity can quickly remove background noise from audio clips, improving audio quality.
Audacity's AI tools can also perform source separation, allowing users to isolate vocals, music, and other individual components of a track.
The music style remix feature can transform existing music tracks into new styles, such as turning synth music into a piano mix.
Audacity's music generation feature enables users to create completely new music tracks using only a text prompt, without any existing music as a base.
The AI tools in Audacity are powered by OpenVINO and Intel, indicating a collaboration that has led to the development of these innovative audio processing capabilities.
The video demonstrates the practical applications of these AI tools, such as improving the quality of podcasts, music production, and audio editing for various purposes.
The product manager of Audacity, Martin Keery, mentions that these AI tools are just the first step, with plans for future development and collaboration with Intel to enhance Audacity's capabilities.
The video encourages viewers to experiment with the AI tools in Audacity and explore their potential uses in their own work, highlighting the versatility and accessibility of these plugins.
The video concludes by emphasizing the value of these free AI audio tools that can be used without an internet connection, offering a groundbreaking solution for audio processing.
Transcripts
In this video, I'm going to show you four AI tools for audio that I cannot believe
are free.
And you're going to want to stick around to the end
if you're interested in AI music generators that run locally
on your computer with no need to be connected to the Internet.
This is groundbreaking.
Let me show you what it is.
Now, you might say this is the dark horse of A.I.
in Audio.
There's been Adobe Audition and Adobe Premiere Pro. Now Premiere have integrated
speech enhancement.
If I enhance the speech, does it get any better?
Do I sound like I'm in a pro studio without a big, noisy, blowing fan?
This is a pretty cool feature. It makes things sound better.
DaVinci Resolve have done things such as vocal isolation
this is Mike
Russell on Music Radio Creative Dot Com!
but this editor completely threw me: out of nowhere, as a bit of a surprise,
it introduced four tools in partnership with Intel
to get you sounding great and making new audio using A.I.
on your computer with no internet. It's incredible.
Audacity is the tool I'm talking about,
and it's incredibly easy to install these A.I.
plugins.
All you need to do is make sure you're running the latest version.
At the time of recording this video, that is Audacity 3.4.2.
And then you go to the web link that I'll link down below
in the description to this video.
Download a couple of files.
Once you've done that, extract them, go into the folders
and drag them into the folder on your computer where audacity is installed.
Once you've done that, you'll open up audacity.
You'll go into the edit menu, you'll look for preferences,
and then when preferences load, go to Modules and make sure you go for
mod-openvino and change it from New to Enabled.
After that, you're going to need to restart audacity
and you'll then find that the plug ins are available and enabled under effect.
They're here at the bottom.
You'll also find them under Generate and also Analyze as well.
So let's get started and look at the first AI plugin, which is transcription.
Now, this is incredible.
Usually you'd have to use a cloud service.
In fact, I've been happily paying money to have my stuff transcribed
using things such as OpenAI's Whisper and other
great tools that I've mentioned in other videos on my channel.
But now you can do it locally on your computer using Audacity.
And I'll show you how.
Let me open an audio file.
Here is a podcast that I've been working with recently,
and you'll see I'm
just going to transcribe a little bit, so I'm actually going to zoom
in a little bit closer and we'll maybe just take this little sample here.
Let's play it back for good measure.
All right, James, so you are an urban planner
and a skater, and I need the design guide from you.
I need to know what cities.
Okay, perfect.
That's a nice little sample of audio, so I can work with that.
Now, all I need to do is click Analyze, and then I'll click OpenVINO
Whisper Transcription.
Now, if I'm not mistaken, Whisper is the model that OpenAI created.
It's one of the best transcription models that exists in the present day.
So this is using the Whisper model locally on your computer.
That is insane
because I've been happily paying for API credits to do this in the past.
So what do we got here?
Well, we've got the inference device so you can use your CPU
or if you've got a nice fat GPU, you can of course select that.
There is only one Whisper model to choose from at the moment.
The mode is obviously transcribe,
but you could also use this to translate it into another language.
So I could click, translate and select any language that's there.
But let's start off with transcribe, and source language we'll just leave
as auto. Click apply.
And that literally just took seconds to process.
Obviously I sped that up for this video and then you'll see.
Alright James, are you an urban planner and a skater?
And I need the design guide from you.
I need to know the cities. And it even puts the punctuation in there,
so incredibly well.
If I want to take this transcript out, all I need to do
is go to File > Export Other. You'll see Export Labels is grayed out.
That is because I need to select the track with my transcript on it.
Then I can go File > Export Other > Export Labels.
And there we go.
I can save it and I can actually go into my downloads folder.
Boom, Open the transcription and you'll see it's right here.
I've got that transcription
with some timestamps that seem actually meaningless to me.
That might actually be the point
in the original recording at which this audio was said.
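For anyone who wants to post-process that exported label file, here is a minimal Python sketch; it assumes the usual tab-separated Audacity label layout of start time, end time, and label text, and the file name is just an example.

```python
# Minimal sketch: reading a label file exported via File > Export Other > Export Labels.
# Assumes tab-separated lines: start_seconds <TAB> end_seconds <TAB> label text.
# "transcription.txt" is a hypothetical file name.
def read_labels(path):
    labels = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            if not line.strip():
                continue
            start, end, text = line.rstrip("\n").split("\t", 2)
            labels.append((float(start), float(end), text))
    return labels

if __name__ == "__main__":
    for start, end, text in read_labels("transcription.txt"):
        print(f"[{start:8.2f}s to {end:8.2f}s] {text}")
```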
so there is the Transcribe feature inside Audacity with the new AI plug ins,
if you like what you see so far, throw a like, also subscribe to my channel
because I'm always covering stuff like this and leave a comment.
Let me know how you'll use it.
It's actually good to have the local ability to transcribe,
maybe for an enterprise where you don't want to
throw your audio up into the cloud or share it with third party companies.
This is something you can keep completely local, completely on your own computer.
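For comparison, the same local-Whisper idea can be sketched outside Audacity with the open-source openai-whisper Python package. This is an illustration only, not the OpenVINO plugin itself, and the file name is just an example.

```python
# Minimal sketch: local, offline transcription with the openai-whisper package
# (pip install openai-whisper; ffmpeg must be installed on the system).
# "podcast_clip.wav" is a hypothetical file name.
import whisper

model = whisper.load_model("base")             # small model; larger ones are more accurate but slower
result = model.transcribe("podcast_clip.wav")  # language is auto-detected by default

print(result["text"])                          # the full transcript
for seg in result["segments"]:                 # per-segment timestamps, like Audacity's labels
    print(f'{seg["start"]:.2f}\t{seg["end"]:.2f}\t{seg["text"].strip()}')
```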
So let's go in and have a look at something else.
This is the ability to do some noise suppression with AI.
Now, if I play this clip I've selected here.
Okay, James, now I've spoken to.
So you can hear there's a lot of background noise going on now.
If I go into Effect > OpenVINO effects and I look for Noise Suppression,
I can again instantly using either CPU or GPU,
use this noise suppression model to remove that background noise.
Let's give it a try.
And there you go.
In a matter of a few seconds, we've actually got a much cleaner track.
Let's listen to the noise suppression tool.
Okay, James.
Now I've spoken to a skater in Paris.
this could be a great plug in.
I'm really curious though, how it stacks up
against the OG effect: Noise Removal and Repair > Noise Reduction.
Get the noise profile.
Once we've done that, I'll then select that existing clip,
go into Effect > Noise Removal and Repair > Noise Reduction,
and we'll just run it on default settings, which should be reasonable.
Now, that took just like literally one second as opposed to like a minute
or so using the AI tool. Let's listen to the difference.
Okay, James, Now I've spoken to a skater.
Okay?
So it's kind of moved the noise down in the background,
but it hasn't totally removed it.
I can just undo,
go back in again and try one more time, maybe make the settings more aggressive.
Increasing the noise reduction in decibels will make it harsher so it removes
more of the noise, we'll crank the sensitivity up a little bit,
and we'll leave the smoothing as is.
Okay, James.
Now I've spoken to a skater in Paris.
Okay, so with some more aggressive settings on the non
AI tool, the noise reduction tool inside audacity,
we are getting quite a nice clean piece of audio.
We're retaining more of the fidelity, some more of the frequencies in the audio,
but I am starting to notice
some artifacting in the audio that I wasn't getting with the
AI tools, so both are worth trying and running on your audio.
But now audacity has both the
original noise reduction and AI powered noise reduction.
I think it's brilliant and the fact that this runs
totally locally on your computer is incredible.
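As a rough point of comparison with the two Audacity approaches above, here is a minimal sketch of classic spectral noise reduction in Python using the noisereduce package. It is not what the OpenVINO plugin does internally, and the file names are examples.

```python
# Minimal sketch: spectral noise reduction with noisereduce
# (pip install noisereduce soundfile). File names are hypothetical.
import noisereduce as nr
import soundfile as sf

audio, rate = sf.read("noisy_clip.wav")
if audio.ndim > 1:                 # mix down to mono for simplicity
    audio = audio.mean(axis=1)

# Estimate the noise floor from the clip itself, roughly analogous to grabbing
# a noise profile in Audacity's classic Noise Reduction effect.
cleaned = nr.reduce_noise(y=audio, sr=rate, stationary=True, prop_decrease=0.9)

sf.write("cleaned_clip.wav", cleaned, rate)
```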
Now let's move on to the next two tools.
And these are the real dark horses, the ones that you're really going to want
to try out.
And I think it's incredible that these run 100% on your own computer
without needing to use anything that's in the cloud.
First of all, I'm going to open a new file
and it's actually a jingle I've got here from a while ago,
and I'll play it to you so you can hear it in its entirety.
Be gonna the
one for you.
Right?
So this is now AI-powered local stem separation.
What that means is you can split up a music track
into its different components.
This is great if you want to get just the music backing of a piece of music
or if you want to get just the vocals from the music, you can isolate
the background music or the sung vocals and it's pretty incredible.
so I'm just going to double click
this track to make sure that it's selected, and then it should work.
Now we get this pop up and I'm just going to leave it to separate
the instruments and the vocals, and I'm going to set it to my CPU.
But if you have a GPU of course you can do that too.
So let's solo the music.
I'm absolutely blown away by that.
That is 100% local, on-device music separation.
It's pulled the vocals right out of it. Let's go and solo
the vocals, see how good they are.
I'd be Poppy country.
Hey. Yeah.
94.3 right off the bat.
Okay. Yeah.
We're definitely losing some fidelity in the vocals.
They're not as clean and bright, but a little bit of EQ could probably fix
that ever so slightly and get you a really nice acapella and I'm sure
it'll vary from track to track that you're putting into audacity as well.
But for a free open source on device tool that uses A.I.
to analyze your tracks and split them like that, that's incredible.
But it doesn't stop there. Again, like and subscribe
if you're enjoying these tools because, like, I'm blown away, these are insane.
I can see a million use cases.
So let's go back in because there's one thing I didn't show you.
When we go into that music separation effect, you can actually pop down
the separation mode and it gives you another version
which gives you drums, bass, vocals and other, which is insane.
Let's apply this and see what kind of a job
it does on this very short piece of audio with music and vocals.
Okay, And here we go.
So we've got the drums here.
Let's just start playing from where the drum beats are.
That is nuts.
That is totally insane.
So that's a mix of the drums.
It's split the drums perfectly out of that music track.
Let's go to the mix with the bass now.
This is so incredibly cool.
Okay, so if you're remixing tracks,
if you're doing anything like that,
this is going to be an absolutely insane addition to your audio tools.
We've got the other instruments.
Okay,
So it's basically pulled the synths out of that track, which is incredible.
And I think the final track here is just going to be the vocals only part.
Be Funky.
Gonna be okay?
Yeah.
94.3 rather than right.
I am just incredibly stunned by the great job
that the AI has done with splitting it up, so I can split just vocals and music,
or I can split it down
into, like, the bass, the drums, the other instruments and the vocals.
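The video does not say which model the Audacity plugin uses under the hood, but the same stem-separation idea can be sketched with the open-source Demucs tool; this is an illustration only, and the file name is an example.

```python
# Minimal sketch: stem separation with Demucs (pip install demucs).
# "jingle.wav" is a hypothetical file name.
import subprocess

# Two-stem mode, like the first option shown above: vocals vs. everything else.
subprocess.run(["demucs", "--two-stems", "vocals", "jingle.wav"], check=True)

# Default four-stem mode, like the second option: drums, bass, vocals, other.
subprocess.run(["demucs", "jingle.wav"], check=True)

# Demucs writes the stems as .wav files under ./separated/<model_name>/jingle/
```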
But there is one more tool
that I want to show you, and I think I've saved the best till last.
AI music generation is a hot topic right now.
People are using tools such as MOBA to do this.
This is a very popular tool.
There are also other tools that can generatively make audio tracks.
Meta have got MusicGen, Google have been getting into that space as well,
but this tool allows you to do it right here inside Audacity.
Now there are two ways you can use this tool.
First of all, I'm just going to solo these other instruments and select them all.
You can use
original music that you've already got and remix that to be something else,
or you can generate something completely new using a text prompt.
So first I'm just going to remix this little piece of music
and then we'll attempt to use it to generate a completely new music
track, like Out of the Blue, just using Artificial intelligence.
So here we go. Just to remind you,
it sounds like that.
So I'm going to double click, select everything,
go to Effect > OpenVINO effects, and we'll go for Music Style Remix.
And here you'll see there's a bunch of different settings
now I'm going to set these all to CPU, but of course if you've got a GPU,
you can absolutely use your GPU for this, it'll probably be quicker.
And then the prompt here is just at the top:
what to remix it to.
So I'm just going to put piano, okay, I want to remix those synths
into a kind of piano mix.
I'm going to leave everything else as is
to see exactly what this does just on its default settings.
Let's apply. We might need to wait a little while.
Of course I'll speed that wait up and then we'll listen to the result.
This is
okay.
I've got to say, I'm not super thrilled with that remix.
It was like a bit weird and boxy like,
but who knows, maybe I prompted it wrong or had some settings wrong there.
Now obviously I could try with other prompts: 808 kick
drums, big bass.
You don't?
Well, if you're going for psychedelic sounds,
then that's definitely going to work.
So okay, maybe I haven't quite mastered the music style generation plug in
and I probably need to spend some more time refining my prompts
and also the other settings to figure out how it works.
But I did want to show you how the completely brand new music Generation
feature works by starting a brand new Audacity window and going to generate.
So this is music generation. That's right.
AI music generation on your computer.
Let's see how this one works.
Now, immediately when you open it up, it gives you the opportunity
to set a plethora of settings and choose again between CPU and GPU.
If you've got one, great. I'm
just going to stick myself on the CPU for all of this generation.
And the box I'm going to focus on here really is the kind of music,
so essentially the text prompt we use to make something new. The duration
here I've set to 10 seconds so it can be generated in a relatively short time
to test this feature out.
Everything else I'm going to leave on the default settings for you.
So we'll just type in something like Future Trance
electric guitar solo.
Let's try that.
Okay.
It's okay.
As a music track, I'm not quite sure it got my prompt
So this time I've typed in relaxing piano music.
Let's generate
again.
We're getting very interesting creations here,
so I guess the better, the quality of the prompt
and the better the settings you use, the better the audio is going to be.
So I'm asking for tropical house music this time instead.
Let's go for something else, like good vibes, and the strength I'm going to change to one.
The seed, well, that would just be a custom seed to start the music generation from,
so I'll leave that. Down to the guidance scale:
let's turn that to five
and see if that makes a difference, and we'll leave the inference steps at 20.
In fact, actually, I'm
going to try going out on a limb here and increase them to 30.
We've also got a scheduler so we can mess about and try something else.
Let's try a different scheduler and see if that one makes a difference.
See what we get from this Generation.
Okay, so a little bit of tweaking,
a few more inference steps, and it came out with a result like that.
So you can see a lot of playing, a lot of tweaking, and also maybe
using a meaty GPU can make that generation time a little bit faster.
For me, those tracks were generated in just around 5 minutes for 10 seconds,
so you can see how it would take quite a long time to produce a full track
using just a CPU alone, maybe faster with the GPU.
Just a little note: under the hood, I think it's using Riffusion,
which is kind of like a Stable Diffusion for audio.
So this technology is literally the worst it's ever going to be.
It's going to get much, much better.
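Since the plugin is described above as Riffusion-based, here is a minimal sketch of the same text-prompt-to-music workflow using a different open model, Meta's MusicGen via the audiocraft package, purely as an illustration; the prompt and output name are examples.

```python
# Minimal sketch: local text-to-music generation with MusicGen
# (pip install audiocraft). Prompt and output name are hypothetical.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained("facebook/musicgen-small")
model.set_generation_params(duration=10)              # seconds, like the tests above

wav = model.generate(["tropical house, good vibes"])  # one prompt in, one clip out

# Writes tropical_house.wav with loudness normalization.
audio_write("tropical_house", wav[0].cpu(), model.sample_rate, strategy="loudness")
```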
And I do think that Audacity have great plans.
But don't let me tell you that,
I'll let the product manager, Martin Keery, speak in his very own words.
this is just a first step.
We hope to continue partnering with Intel to develop all kinds of new A.I.
tools in the future to help take audacity to a level no one's ever seen before.
So just listening to what Martin said there alone, I think there are some really
good things in store.
If you're a fan of AI and audio generation and creation and using Audacity,
I'm definitely going to be watching this editor in the future like a hawk
to see what new features, plugins and AI tools are introduced to it.
I hope you found value in this video and in four tools that you didn't know existed,
that work for free
and on your computer without the need for an internet connection.
That is pretty cool.
If you've enjoyed it, do like this video.
Subscribe to my channel and leave a comment down below.
Let me know how you'll be using these tools in your own work.