The ULTIMATE Raspberry Pi 5 NAS
Summary
TLDR: The text covers my experience building NAS systems with Raspberry Pis, from simple builds up to huge ones like the Petabyte Pi project. With performance challenges and power delivery in mind, I used a Raspberry Pi 5 to build a NAS for under $150. I used Radxa's $45 SATA HAT alongside a Raspberry Pi 5 and a small set of parts to create the storage system. Performance was compared against previous builds, including an attempt to upgrade the network from 1Gbps to 2.5G using Pineberry Pi's HatNET! 2.5G. After assembly and software setup with Open Media Vault (OMV), I ran performance evaluations and checked temperatures and power consumption. Reads turned out to be more consistent than writes, and power efficiency was excellent compared to off-the-shelf devices. Further improvements may be possible with newer hardware like the RK3588 chip, but the big advantage here is ease of use and good software support.
Takeaways
- 🚀 The Raspberry Pi 5 is faster than previous models, supports PCI Express, and is now actually easy to buy.
- 💾 With Radxa's SATA HAT and a Raspberry Pi 5, you can build a small NAS for under $150.
- 🔌 The SATA HAT powers the Raspberry Pi 5 through the GPIO pins, simplifying power management.
- 🔩 During assembly, take care seating the FFC (Flat Flexible Circuit) cable; too much force will damage it.
- 📶 The Raspberry Pi 5 has gigabit Ethernet, while other NAS devices support faster 2.5Gbps networking.
- 🔥 Under sustained load the SATA controller chip can get quite hot; a fan or heat sink is recommended for cooling.
- 📈 In RAID 0, local disk benchmarks approached 900MB/s, while reads over gigabit Ethernet reached about 120MB/s.
- 🌟 Open Media Vault (OMV) 7 provides a user-friendly interface for setting up RAID and file systems.
- ⚙️ A ZFS RAIDZ1 configuration provides data protection: even if one drive fails, the data stays intact.
- 💻 File-copy tests from a Windows PC showed write speeds up to 150MB/s, while the same tests on a Mac were slower.
- 🔌 Total system power draw ranged from 8 to 22 watts, more efficient than comparable NAS devices.
Q & A
What improvements does the Raspberry Pi 5 NAS offer over its predecessors?
-It is faster, supports PCI Express, and is easier to actually buy. Although it costs slightly more than the Pi 4, with off-the-shelf 4-bay NASes typically costing $300 and up, the Pi 5 NAS is more cost-effective.
What are the advantages of the SATA HAT Radxa sent for building a Raspberry Pi NAS?
-Radxa's SATA HAT is compact, inexpensive ($45), and already shipping. It provides five SATA connections, with the fifth drive attaching through an external connector, and it can be powered via a 12V barrel jack or an ATX Molex connector, which also supplies power to the Pi 5.
How is the Raspberry Pi 5's network performance?
-The Raspberry Pi 5 has gigabit Ethernet, short of the 2.5-gigabit networking some other NASes support, but by using Pineberry Pi's HatNET! 2.5G expansion card with the HatBRICK! Commander, a 2.5-gigabit connection is achievable.
How important is cooling when building a NAS?
-Very important, since a NAS running for long stretches generates a lot of heat. In the video, the SATA controller reached 60 degrees Celsius, so the author recommends a fan or heat sink to keep the device cool.
How does the Raspberry Pi NAS perform?
-In RAID 0, local disk benchmarks approached 900MB/s sequential, with random reads of 687MB/s and random writes of 758MB/s. In real use over a Samba share, transfers reached 110MB/s, far faster than the earlier Pi 4 NASes.
What are the advantages of using ZFS on the Raspberry Pi NAS?
-ZFS provides data-integrity checks, self-healing, and advanced RAID features such as RAIDZ1, which protects data even if one drive is lost. ZFS also supports compression and caching, improving storage efficiency and read/write performance.
What is the Raspberry Pi NAS's power consumption?
-It draws between 8 and 16 watts in operation, more efficient than many off-the-shelf NAS devices, which can idle at 10 to 20 watts and use 25 to 30 watts under heavy load.
What limits the PCI Express Gen 3 x2 controller on the Raspberry Pi 5?
-Although the JMB585 controller supports PCIe Gen 3 x2, the Pi 5 exposes only a single PCIe lane, so even with the bus set to Gen 3 the controller cannot use its full Gen 3 x2 bandwidth.
How do you create and manage a RAID array on the Raspberry Pi NAS with Open Media Vault (OMV)?
-Through OMV's web interface, users can create an array such as a ZFS RAIDZ1 pool and apply the configuration to build a storage pool. OMV also lets users add file systems, enable Samba sharing, and read and write data.
What is the benefit of the eSATA-style connector on the Raspberry Pi NAS?
-The external connector lets users attach an additional hard drive, expanding the NAS's storage capacity and offering extra flexibility when internal space is limited.
Why can network file-copy tests on a Mac give inaccurate results?
-According to the video, macOS is not a good choice for benchmarking network file copies, even on a machine with 10-gigabit networking. Tests on a Windows PC nearly saturated the gigabit link on writes, a speed the Mac never reached.
Outlines
🤖 Raspberry Pi NAS builds and challenges
The first part of the video reviews attempts at building network-attached storage (NAS) with Raspberry Pis, from a tiny all-SSD NAS to the Petabyte Pi project. The Raspberry Pi 4 and Compute Module 4 were barely adequate, and network transfers never reached 100 megabytes per second. Two promising projects, the Wiretrustee SATA board and the Radxa Taco, both failed because of the Raspberry Pi shortage. With the Raspberry Pi 5 now released, the author asks whether a well-performing Pi NAS can be built at lower cost, and plans to try the SATA HAT Radxa sent.
🔧 Assembly and configuration
The second part details the assembly process, including connecting the SATA HAT, sorting out power delivery, installing cooling, and setting up the network interface. The author mentions optional add-ons such as an OLED display and fan-control board that are not yet available, and covers the power configuration and the FFC (Flat Flexible Circuit) cable that links the Pi's PCI Express connector to the SATA HAT.
💻 System testing and performance evaluation
Here the author connects to the newly built NAS over SSH and uses various commands to check whether the hardware is recognized. After some configuration the hard drives appear, and the video shows how to enable PCI Express and check the link speed. Disk benchmarks follow in a RAID 0 configuration, along with a discussion of the importance of thermal management.
📡 Network performance and file-transfer tests
The author tests file transfers over a Samba share, measuring write and read speeds. Reads were very fast, essentially at line speed, while writes hit some bottlenecks. He also tries a 2.5-gigabit network adapter, using the HatBRICK! Commander to connect multiple PCIe devices, and achieves network speeds above 1 gigabit.
🔩 Hardware compatibility and system installation
In this part the author installs Open Media Vault (OMV) 7 and explores its interface and management features. He discusses creating RAID arrays and file systems, noting problems along the way, such as the missing RAID-management panel. Even so, he successfully creates a storage pool, adds a file system, and shares it.
🔋 Efficiency and performance comparison
The author compares ZFS against RAID 0 for write and read performance and discusses the system's power efficiency. Although write speeds were slightly below RAID 0, reads held up well, and overall power draw was very low. He also notes that other hardware platforms, such as RK3588-based boards, could bring further gains in efficiency and speed.
📝 Conclusions and future plans
At the end of the video, the author sums up the build, emphasizing the cost-effectiveness and performance of a DIY NAS, and looks ahead to future experiments. He mentions problems with network file-copy tests on a Mac and advises against using one for such benchmarks. He closes with anticipation for a future Raspberry Pi Compute Module 5.
Keywords
💡Raspberry Pi NAS
💡Raspberry Pi 5
💡SATA HAT
💡PCI Express
💡DIY NAS
💡FFC (Flat Flexible Circuit board)
💡Thermal Imaging
💡RAID
💡ZFS
💡HatNET! 2.5G
💡HatBRICK! Commander
Highlights
The Raspberry Pi 5 is capable of functioning as a Network Attached Storage (NAS) device, offering a more affordable alternative to off-the-shelf 4-bay NASes costing upwards of $300.
A DIY NAS using a Raspberry Pi 5, Radxa's SATA HAT, and other components can be assembled for less than $150, presenting a cost-effective solution for data storage.
The Raspberry Pi 5's introduction of PCI Express support is a significant upgrade from the Pi 4, enabling faster data transfer rates.
The use of a 12V power supply and GPIO pins can adequately power the Pi 5 and the SATA HAT, eliminating the need for separate power sources.
The Pi 5's single gigabit Ethernet port may be a bottleneck compared to other NAS devices that support 2.5 gigabit networking.
The JMB585 PCIe Gen 3x2 controller used in the SATA HAT limits bandwidth even when the Pi 5's bus is upgraded to Gen 3.
The Raspberry Pi OS 64-bit light version and Open Media Vault (OMV) were tested as potential operating systems for the DIY NAS.
The assembled NAS demonstrated nearly 900 megabytes per second read speeds in a RAID 0 configuration using four drives.
The HatNET! 2.5G and HatBRICK! Commander from Pineberry Pi were used to achieve 2.5 gigabit networking on the Raspberry Pi 5.
The 2.5 gigabit network connection showed a steady performance with peak read speeds reaching over 250 megabytes per second.
The use of ZFS on Open Media Vault provided data redundancy with RAIDZ1, allowing for the loss of one drive without data loss.
The Raspberry Pi 5-based NAS was found to be more power-efficient compared to prebuilt NAS devices, with a consumption of 15-16 watts under load.
The final cost of the DIY NAS setup, including additional components, was under $200, offering a significant cost saving over commercial options.
The DIY approach to building a NAS requires users to manage software, maintenance, and updates, which may not be suitable for everyone.
Thermal imaging revealed that the SATA controller chip reached temperatures of up to 60 degrees Celsius, indicating the need for active cooling.
The experiment concluded that the Raspberry Pi 5-based NAS is a viable option for read-heavy applications, but may not achieve full 2.5 gigabit write speeds.
Benchmarking network file copies on a Mac was found to be less reliable, with significantly higher speeds achieved when switching to a Windows PC.
The video concludes with the potential for further experimentation with the Raspberry Pi 5's PCI Express capabilities and the anticipation of a Compute Module 5.
Transcripts
I've built a bunch of Raspberry Pi NASes, from a little tiny all-SSD NAS to the biggest
one on Earth, the Petabyte Pi project.
But the Pi 4 and Compute Module 4 were just barely adequate.
I could never get even 100 megabytes per second over the network, even with SSDs.
The two most promising projects, the Wiretrustee SATA board and Radxa Taco, were both dead
in the water.
They launched right before the great Pi shortages, when you couldn't get a Raspberry Pi for
love or money.
But the Raspberry Pi 5 is here now.
It's faster, it has PCI Express—and best of all, you can actually get one.
Yeah, it's a little more expensive than the Pi 4, but with off-the-shelf 4-bay NASes costing
$300 and up, could we actually build a Pi NAS for less?
And would it be any good?
Well, today I'm going to see.
And to do it, I'll use this tiny SATA HAT that Radxa sent.
This costs $45, and it's already shipping.
Add a 12V power supply, a Raspberry Pi 5, a fan and microSD card, and we have a tiny
NAS for less than $150.
But will bottlenecks kill this thing like they did with the Pi 4?
I mean, the Pi 5 only gets a gigabit, those other NASes can do 2.5.
And they have hot-swap drive bays...
And vendor support!
So yeah, comparing just on price alone is silly.
There's always going to be trade-offs when you go DIY.
But this thing should have a lot fewer compromises than the jankier builds I did in the past.
At least, I hope.
And 2.5 gig networking?
I might have a fix for that.
I'm going to put this thing together and see if it could be the ultimate Raspberry Pi 5
NAS.
I do not know exactly what tools will be required, and I don't know what's in the box.
Hopefully it includes everything I need.
But Radxa usually does a pretty good job including all the little bits and bobs you
need for this.
Looks like it includes this extra cable.
This is, after all, the 'Penta' SATA HAT,
so five SATA connections.
I have four drives here, but you can add on another one using this strange external– I
guess this might be eSATA or something?
But it has SATA and power from this board.
Something important to think about is how you're going to supply power to it.
I know some people in comments have mentioned, "Oh, you need to supply power to the Pi and
this board."
But no, I believe that you can just power this board through the 12-volt barrel jack
or through an ATX Molex power supply here.
So if you have it in a PC case or something, you could do it that way.
And this will supply power to the Pi 5 through the GPIO pins.
This should be able to provide adequate power as long as the power circuitry on here is
good enough to take that 12-volt signal and give a clean 3 to 5 amps on the Pi's 5-volt
rail.
This doesn't have the normal PCI Express connector that you see on the Pi 5.
So the Pi 5 has this little guy here.
This has a much larger connector with more pins.
That could be an interesting thing.
I believe that they have an adapter for it, though.
So yeah, here it is.
So this is called an FFC or Flat Flexible Circuit board.
And it looks like they've included two, which is nice because these little connectors are
a little bit delicate.
You can see how thin they are.
They're kind of like paper-thin.
But these are Flat Flexible Circuit boards or FFCs.
And they connect from the Pi's PCI Express connector here over to this guy here.
And the GPIO pins over here are going to provide power to the Pi.
At least that's my hope.
There is a getting started guide on here, but I'm going to YOLO this thing and see what
happens.
One important thing whenever you're doing these is make sure you get the connector seated
all the way.
And it should go in pretty easy.
If you're pushing hard, then you're going to break the cable.
So don't do that.
If you're pushing hard, you might need to pull this little connection up and always
do it on both sides so that it doesn't come off.
Because if it comes off, it might break and then you would not have a way to hold the
cable down.
Push down on this little top part and this cable is now affixed to the Pi very well.
And then I'm going to plug it into here.
So it looks like it goes like this.
The funny thing is these kind of connectors are often used inside of cameras and other
things that are put together at factories.
And there they're very careful.
They have their methodologies.
They even have tools to help with it.
When you give these things to people in the general public, like you and me, we tend to
break our first one.
So I guess it is a really good idea that they included a second one here.
They probably have some screws too.
Let's check.
Yeah, there's a little kit full of screws here.
There's some standoffs and things.
And then now I'm going to put this in.
I'm going to carefully put this over and plug in the GPIO pins that provide power.
But that fits nicely together.
There is a connector here for an OLED and fan control board that sits on top of the
hard drives at the top.
They don't have that available yet.
I think they used to make it.
I don't know if they needed to revise it for this or what, but I asked about it and it's
not yet available.
So it would be nice to have that, especially, these are not that hot of drives, but if you
use hard drives, if you use 2.5 inch hard drives, then those can get pretty toasty and
it's nice to have a fan blowing air over them.
I just realized I don't have any fan on the Pi itself and I probably should do that because
it could get pretty hot and toasty inside here.
Let's get our little active cooler here.
I hope this will fit.
I don't know if there was a warning against using this, but the Pi does need some sort
of cooling, whether it's a heat sink or a fan.
There's no fan built into this.
It would be cool if there was a little fan under here or an option for one, but it doesn't
seem like that's the case.
Okay, please still fit.
Looks like it will fit.
Oh no, you know what?
The barrel plug is just touching on the top of the heat sink.
There's literally just three of the fins on the heat sink.
You know what I might do?
I might see if I can bend those off.
Take this back off again.
I'm going to pull this connection off.
This is a terrible idea.
I would not recommend doing it.
Just bending this back and forth.
There's one.
Shouldn't affect the performance that badly.
I removed the middle portion from the middle point up of these three little fins on the
heat sink.
There's a side view of it.
You can kind of make it out.
It's kind of hard to make out.
Sorry about that.
Let's get this all back together now and see if it fits.
This time, if I go down, it can go down all the way.
Look at that!
That's just enough clearance.
As long as it works in the end, it's all good.
I use this huge guy.
Just give these a little snug.
Generally, I'd use a nut driver for this, but this works in a pinch.
Literally.
[voiceover Jeff] My top-down recorder decided to corrupt the rest of the video, so I lost all that footage.
But in that footage, I mentioned the board uses the JMB585 PCIe Gen 3x2 controller, which
means even if we upgrade the Pi 5's bus to Gen 3 from its normal Gen 2, we'll miss out
on a little bandwidth.
And also, the kit comes with two side supports that hold all the 2.5" drives together, though
there may be a case available at some point in the future.
They actually had one in the past when it was sold for the ROCK 4 or Pi 4, I think, but
I'm guessing that they'll have to make another batch if they get enough interest in this
new version of the Penta SATA hat.
Okay, so everything is put together now.
It's all looking nice, and I think there will be enough airflow.
There's holes in the sides, holes in the middle, so enough air will convect through for these
drives at least.
And I have a 5A 12V power supply.
This should be adequate for these drives and the Raspberry Pi 5.
I'd budget maybe 3 to 5 watts per drive, or if you have 3.5" drives, maybe a little more,
and you might want to get an 8A or maybe even 10 or 12A power supply.
But definitely don't use a 2A power supply and expect this to work.
It's going to have all kinds of issues.
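The power budgeting above is easy to sanity-check. This sketch uses ballpark wattage assumptions (the per-device numbers are my estimates, not measurements from the video):

```python
# Rough 12 V power budget for a Pi 5 + 4-drive SATA HAT build.
# PI5_PEAK_W and DRIVE_W are assumed worst-case figures, not measurements.
PI5_PEAK_W = 10        # Raspberry Pi 5 under load (assumed)
DRIVE_W = 5            # per 2.5" drive, worst case (assumed)
N_DRIVES = 4
SUPPLY_V = 12

total_w = PI5_PEAK_W + N_DRIVES * DRIVE_W   # 30 W
amps_needed = total_w / SUPPLY_V            # 2.5 A

print(f"{total_w} W total -> {amps_needed:.1f} A at {SUPPLY_V} V")
```

Under these assumptions the build needs about 2.5 A at 12 V, so the 5 A supply used here has roughly 2x headroom, while a 2 A supply would fall short, matching the warning above.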
I also have Raspberry Pi OS, 64-bit light version, and I might try Open Media Vault.
I'm going to take the microSD card and put it into the slot, and then I'll grab this
power adapter.
One other reason why I'm over at the desk is I have this Third Reality Zigbee outlet
that has power measurement built in, which is very handy for testing.
I'll go ahead and bring that up on here.
If I go to Home Assistant and then go to Power, you can see that right now there's 0 watts
because there's nothing plugged into it.
Power is going to come in.
Looks like they wanted to align the power with the USB-C port, not that that matters.
First I'm going to plug in network, and I'll plug in power and we'll see what happens.
Hopefully no sparks.
All right.
I have a green light on the board, and the Pi is booting up.
Power usage is up to 14.2 watts at boot, and now the Pi is doing its reboot, so it's going
to reboot a couple times this first time that I turn it on because it expands the file system
to fill up the microSD card, all that kind of stuff.
So we'll fast forward a bit until it's all booted up, and then we can log into it on
the network and see if it's actually working.
I don't see any lights.
There's just one green LED on the board over here, but I don't see any other lights.
So I don't know if there's lights per hard drive.
So I'm going to log into it and we'll see what we can see.
SSH pi at pi-nas.local.
There it is.
And if I say lsblk, hopefully we see those hard drives.
No, we're not seeing them.
Let's try lspci.
And I'm not seeing the device at all.
I don't see any errors in here.
Let's go to the URL on this box and see if there's any other tips that we're missing.
rock.sh/penta-sata-hat.
...penta-sata-hat.
So we did that.
We did that.
Oh. [hehe]
So maybe I should actually do that.
Let's try that.
Go in here.
You'd think it would do it automatically, but it does not.
So we're going to enable PCI Express, save and reboot.
So save that and reboot.
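For reference, the setting being enabled here amounts to a one-line addition to the Pi's boot config (the file path assumes Raspberry Pi OS Bookworm):

```ini
# /boot/firmware/config.txt: enable the Pi 5's external PCIe connector
dtparam=pciex1
```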
So let's check again.
There we go.
We have one, two, three, four hard drives.
And if I say lspci, I can see the Jmicron SATA controller.
Now, right now it should be PCI Express Gen 2.
We can check that with sudo lspci -vvvv.
This is going to give us all the information about PCI Express devices.
And if I go up to here, this is AHCI.
That's the kernel module for the SATA controller.
And we can go up to the top section.
See, it's Jmicron JMB585.
And if I go down to link capabilities, it says speed 8 gigatransfers per second width
x2.
That's PCIe Gen 3x2.
But the status says it's 5 gigatransfers x1.
So definitely less bandwidth than the chip is capable of.
So I'm going to try PCIe Gen 3.
And I can do that following my own guide.
If I go down here, turn that on like this and reboot.
And we'll see if it gives us Gen 3 speeds instead of Gen 2 speeds, which would give
us the maximum performance that we can get on the Pi 5.
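Per the guide referenced here, forcing Gen 3 is another boot-config parameter (Gen 3 on the Pi 5 is not officially certified, so treat it as an experiment):

```ini
# /boot/firmware/config.txt: request PCIe Gen 3 signaling on the external connector
dtparam=pciex1
dtparam=pciex1_gen=3
```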
I have four drives that have nothing on them.
I'm going to try-- we should probably just benchmark the drives first in like RAID 10
just to see what the maximum speed is or maybe even RAID 0.
So let's do that.
It'll take a couple minutes.
And we have blinking!
So you can see that the LEDs actually do work.
I didn't see those when I was looking earlier, but it has some LEDs.
And you can see them blinking when the drives are accessed.
So nice job.
I should check.
It does feel a little bit hot.
InfraRay, I found them at CES.
And they actually sent me home with a couple goodies.
This is the P2.
And the reason why I wanted them to send me home with one to test was it has this snap-on
macro lens that you can see individual resistors or things on your PCB very close up, which
is kind of cool.
But their software is a little bit iffy.
Not the best software that I've used for IR cameras.
But the camera itself is really good quality and works better than my old Seek thermal.
But let's check the temperatures on here.
And it looks like the drives themselves-- well, they're a little bit reflective.
So we might not be seeing the actual drive value.
But the board is up to 50 degrees or so.
The SATA controller is down there.
It looks like it's the hottest part of this thing.
And it is getting up there to 60 degrees Celsius.
So it might be good to have at least an active fan blowing down on top.
There's the cold soda can;
16 degrees Celsius.
And there's the hot SATA chip.
So I'm going to put this cover on and see up nice and close.
If I get in there, we can see that the chip itself is 60 degrees Celsius.
So it's pretty toasty in there.
I would definitely do a fan or heat sink on this if you're going to deploy this long term.
Another fun thing with thermal imaging is you can see all kinds of fun details.
Like, you can see that this is where my hand was resting.
And if I just put my hand on the table and take it off, there's a hand print.
And apparently this little screen on here also generates a teeny tiny bit of heat.
And now it has my fingerprint on it, which is also warm.
Looks like the formatting is finished.
And what's our next step here?
Mount the array.
OK, mount RAID 0.
So now let's do a disk benchmark on it.
And I'll run the disk benchmarks and see how fast this array can go.
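The exact fio job isn't shown on screen; a representative sequential-read job file with assumed parameters (mount point and sizes are placeholders) might look like this:

```ini
; seq-read.fio: 1 MiB sequential reads against the mounted array
; run with: fio seq-read.fio
[global]
ioengine=libaio
direct=1
runtime=60
time_based

[seqread]
directory=/mnt/raid0   ; assumed mount point for the array
bs=1M
rw=read
size=10g
```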
OK, here goes FIO.
Hey, that's not bad at all.
850 to [8]60 megabytes per second.
And that's mebibytes.
So let's see how fast it was in megabytes.
Almost 900 megabytes per second across all four drives in RAID 0, of course.
But random reads of 687 megabytes per second and random writes of 758.
And then we have 4K block size, 44 megs read and 152 megs write at 4K, which is not bad
at all.
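The jump from "850 to 860" to "almost 900" is just the MiB/s-to-MB/s conversion, since fio reports mebibytes per second:

```python
# fio reports MiB/s; converting to MB/s explains the "almost 900" figure.
def mib_to_mb(mib_per_s: float) -> float:
    """Convert mebibytes/s (2**20 bytes) to megabytes/s (10**6 bytes)."""
    return mib_per_s * 1024**2 / 1e6

print(f"{mib_to_mb(850):.1f} MB/s")  # 891.3 MB/s
print(f"{mib_to_mb(860):.1f} MB/s")  # 901.8 MB/s
```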
I'm interested in seeing-- I think what I'll do is I'll just put a Samba share on this,
and we'll see if we can saturate a 1 Gbps connection continuously.
Restart Samba and create a password.
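The share definition itself isn't shown; a minimal smb.conf section for this kind of setup (the mount point and username are assumptions) would be something like:

```ini
; /etc/samba/smb.conf: minimal share for the array (mount point assumed)
[shared]
   path = /mnt/raid0/shared
   read only = no
   browseable = yes
```

Then set the Samba password with `sudo smbpasswd -a pi` and restart the service with `sudo systemctl restart smbd`.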
Now I should be able to connect on my Mac.
pi-nas.local, we'll do the shared directory.
Here it is.
So I'm going to copy over a folder with all of the footage of the build.
It's 100 gigs.
And let's check this out.
Let's see how fast it is.
That is line speed.
110 megabytes per second is pretty typical.
Let's see if it can keep up that data rate.
I can smell that slight off-gassing here.
So I do think that I would put some sort of cooling on here just for that JMB585 chip.
On my other NASes, over 1 gigabit, you can just hammer it, and it'll stay 110, 115 megabytes
the entire time.
This is a lot faster than the Pi 4 NASes I've set up before, though.
And we'll just let the screen recorder keep going at 18 minutes, and we'll just keep moving.
While that's copying, I want to take a brief opportunity to tell you about Open Sauce.
Open Sauce is going to be June 15 to 16 in San Francisco, and I'll be there.
I'll be there along with a ton of other creators in the Maker areas, electronics, hacking,
all kinds of fun things.
If you want to go, there's a way that you can get in for free, and you can come to the
party that's beforehand, where all the other YouTubers and everyone will be present.
If you want to do that, you can apply to be an exhibitor.
They have tons of space for exhibits this year.
It'd be really cool to see your project.
So if you want to do that, go to opensauce.com and apply to be an exhibitor.
Otherwise you can also come as just a normal person who's not exhibiting things too.
So hopefully I'll see you there June 15 to 16.
If not, I will definitely be posting some things on Twitter and maybe something on YouTube.
I don't know.
So make sure you're subscribed.
It copied everything over to the Pi.
Now let's check the read speed.
I'm going to copy it back into a different folder on my local computer.
And we'll see if it can give me 110 megabytes per second.
Oh, look at that.
It's giving me 122, which is a little faster than the write speed.
And you can see that the drives are reading pretty much flat out right now.
I don't know if that'll fill up the cache, but you can see that the data is flowing a
lot more smoothly coming off the Pi than writing to it.
So there are some bottlenecks.
I don't think it's Samba, and I don't think it's the drives themselves.
I think there's a bottleneck somewhere in the Pi's kernel or something when it's writing
through because I had that problem on the Pi 4, but on the Pi 4, it wouldn't even hit
like 120 megabytes per second all the time.
But reading, that's not an issue at all here.
We're cranking at 120 megabytes per second.
I deleted everything off of there, and it looks like obviously read speeds are much
more consistent than write speeds.
But I'm going to try something else that I mentioned at the beginning of this video.
What about 2.5 gig networking?
Now PineBerry Pi makes the HatNET! 2.5G.
This is a 2.5 gigabit hat for the Raspberry Pi 5.
But you'll probably already notice there's a problem.
It has one PCI Express input.
There's only one PCI Express connector on the Raspberry Pi 5.
How do we solve this problem?
Because this needs that, and I want to put it on here too to see if I can get 2.5 gig
networking.
Well, I can try the HatBRICK! Commander from Pineberry Pi.
And yes, they sent me these things.
I would be buying them myself anyway, but I'm going to disclose that Radxa sent me this,
and Pineberry Pi sent me this.
I'm testing these things out to see if they can work together and do some crazy things.
But Pineberry also sent me all of these extra cables of varying lengths.
One thing that can be a problem with when you start connecting multiple things together
is the PCI Express signaling.
So I'm going to try to use the shortest cables I can for these experiments.
But I'm going to basically put this, which is a PCI Express Gen 2 switch, off of the
Pi's bus, and then connect one connector to the SATA drives and the other connector to
the HatNET! 2.5G.
The downside is this is going to make everything be PCI Express Gen 2 speed instead of 3, so
I wouldn't be able to get 800 megabytes per second on these hard drives.
But on the flip side, this is 2.5 gig networking, and if we say, let's say 2 gigs for networking
and 2 gigabits for the hard drives, we might be able to do that to almost saturate 2.5
gig network if the Pi 5 can support that.
I don't know if it can or not.
I don't think it will be able to, but we'll see if any of this even works.
It might also not have enough power.
I don't know.
But I'm going to unplug this.
Okay, we got that connector out of here.
There is some risk here.
If we are mixing these cables from different vendors and connections, there's a little
risk that something's going to go wrong, but hopefully that doesn't happen.
It's definitely not my finest work.
There's an LED on here, and I see a light on the switch, and there's a power LED on
the HatBRICK! Commander, and there's lights on here.
Let's see if this is actually going to work.
lspci...
Hey, look at that.
So we have the switch here.
We have the SATA controller here, and we have the 2.5 gig controller here.
Let's do ip a, and we have an IP address on that.
So let's do an iperf test.
Now we're getting 2 gigabits.
It's not 2.5 gigabits, but it's not nothing.
So coming back only 1.6 gigabits, that's not horrible.
It's still more than a gigabit.
This is probably going to get 2.5 gigabits if you connect it straight to the Pi.
I think that some of the overhead comes out of that packet switching that is running to
the drives as well.
So if I say lsblk, we still have the drives, and they're mounted.
So we'll see if we get any faster write speeds.
It's doing 110, 117.
That's about the same as what we were seeing before.
So we're not getting faster than a gigabit over the 2.5 gig connection, at least for
writes.
I do see a few peaks up to about 125 megabytes per second, so better than a gigabit.
And it's interesting, the overall rate seems a little steadier with the 2.5 gig.
Maybe the Pi's internal controller is a little funky, but I don't know.
But it's giving us a little bit more in the write speeds.
I'm really interested to see the read speeds, though.
Hopefully we can get more than 1 gigabit.
Let's check.
There we go.
217 megabytes, 250 megabytes per second.
That's more what I'm expecting out of a 2.5 gig connection.
So this can put through that data.
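For context, 250 megabytes per second is close to what a 2.5GbE link can deliver once protocol overhead is subtracted; a quick sanity check (the overhead percentage is an assumed typical figure):

```python
# Theoretical payload rate of a 2.5 Gbit/s link.
line_rate_mb_s = 2.5e9 / 8 / 1e6          # 312.5 MB/s raw
# Ethernet + IP + TCP framing typically eats ~5-6% (assumed figure).
usable_mb_s = line_rate_mb_s * 0.94

print(f"raw: {line_rate_mb_s:.1f} MB/s, usable ~{usable_mb_s:.1f} MB/s")
```

Against a usable ceiling of roughly 290 MB/s, the observed 250 MB/s is a healthy fraction of line rate.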
It's interesting.
I think it's pulling from RAM because I don't see the drives blinking at all here.
It's probably copying all this data from RAM, and now it's hitting the drives.
And you can see it dips a tiny bit there, so down to 230 megabytes per second.
So Linux usually caches files on the RAM as it's copying them back and forth, so that
if you have a file that you're accessing a lot, it's a lot faster.
But now that it's hitting the drives, it only dipped down 10 megabytes per second, so that's
not bad at all.
So for a read-heavy NAS, this isn't looking like that bad of a setup.
Now that I know that everything is going to work on here hardware-wise, I think it's time
to put OMV on here and see how that runs.
I haven't used OMV 7 yet, so this will be new for me.
I don't think it's that much different than OMV 5 and 6, but let me grab this script and
go over here, and this hopefully will just work.
I'm going to SSH into the Pi and just paste in their script, the installer, and here it
goes.
Let's check power consumption.
So during the install, it was used in between 8 to 10 watts, and it looks like the baseline
for this build is 8 watts with the 2.5 gig network adapter and everything else.
But let's go to pi-nas.local, and does this work?
Maybe I have to use the IP address.
Let's try that.
Well there it is.
I guess it was still booting up.
Okay, so that was not the problem there.
So 'admin' and 'openmediavault' are the password logging in.
There it is.
There's no dashboard, that's okay.
Storage is where we should see our disks.
They should show up.
Yep, 1, 2, 3, 4.
All of them are 8 terabytes.
And I want to create an array.
File systems.
Is this where we create it?
Create and mount a file system, ext4.
But I want to create a RAID array.
How do I create a RAID array?
Am I totally missing something?
I thought there was a thing over here for creating RAID, but I don't see it anymore.
What does this say?
See, this has RAID management, but I'm not seeing RAID management anywhere.
Do you see RAID management anywhere?
We could try ZFS instead of RAID, but that's instead of like mdadm RAID.
So we can try it out on openmediavault.
I've never tried it on OMV before, but we'll see how it works here.
I like this little END OF LINE here.
I guess a nod back to Tron, the 1982 version.
And we'll do RAIDZ1 since we have four drives.
A RAIDZ1 will use one drive, the equivalent of that for parity data.
That way I could lose one of these four drives and all the data would be intact.
But here we go.
It says pending changes, 21 terabytes available.
Let's apply this.
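The "21 terabytes available" figure lines up with raw capacity minus one drive's worth of parity, converted to TiB:

```python
# RAIDZ1 usable space: (n_drives - 1) * drive_size, shown in TiB.
n_drives = 4
drive_bytes = 8e12                    # 8 TB drives (decimal terabytes)

usable_bytes = (n_drives - 1) * drive_bytes
usable_tib = usable_bytes / 2**40     # binary tebibytes, as ZFS reports

print(f"{usable_tib:.1f} TiB usable")  # 21.8 TiB
```

ZFS reserves a little space for metadata, so the pool shows slightly under this theoretical figure.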
So now the tank should exist.
Compression is on.
I don't know if I would need compression on, but I'm not going to mess with any of that
right now.
If we go to pools, is there anything else I can do?
Tools, what do we got?
So you can scrub it.
I don't know if it automatically scrubs in here, but it gives us the pool information.
That's nice.
So this is a good interface.
It's not the, maybe not the best thing ever.
And I don't know if it comes with schedules and things by default, but it'd be nice to
have a scheduled snapshot and a pool scrubbing scheduled.
That might be something that you can figure under scheduled tasks.
Yeah, so you'd have to do some of these things.
You'd have to add your own scheduled tasks.
It'd be cool if that added some things by default, but I can see why they don't as well.
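A recurring scrub is a one-line cron entry; the pool name comes from the video, but the schedule here is just an example you could drop into a scheduled task or /etc/cron.d:

```
# /etc/cron.d/zfs-scrub: scrub the pool every Sunday at 03:00
0 3 * * 0 root /usr/sbin/zpool scrub tank
```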
But now let's add a file system.
So we have one tank ZFS.
I'll add a shared under tank shared, and we'll just set everyone read/write, right now,
save, turn on Samba enabled 10.0.2.2– 11.
Okay.
So it wants me to use the IP and there's our shared volume.
So let's, I'm going to copy some stuff over to it.
I have this, this folder has a hundred gigabytes.
So I'll do that.
And here it goes.
So it seems similar to the copies that we were getting with RAID 0.
It's interesting.
It goes a little bit faster sometimes than those copies were.
So I'm wondering if ZFS's caching is actually helping here.
So far, I'm pretty impressed.
I think read speeds are where this wins.
Write speeds are where this loses a little bit because you're not going to be able to
get full 2.5 gigabit networking on that.
But it's better than I was expecting.
And the big win for me, besides the fact that this can be made smaller if we reconfigure
these boards, is the power efficiency, because right now we're using 15 or 16 Watts.
The other prebuilt NASes that I've used draw 10 to 20 Watts idle and 25 to 30 Watts
when they're doing a lot of stuff.
So this little guy is only using 16 Watts doing the same amount of work which is probably
about half of what most prebuilt NASs would use.
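Back-of-envelope, that difference adds up over a year of 24/7 operation (an idealized constant-draw comparison, using the wattages quoted above):

```python
# Yearly energy at constant draw, running 24/7.
HOURS_PER_YEAR = 24 * 365  # 8760

def yearly_kwh(watts: float) -> float:
    return watts * HOURS_PER_YEAR / 1000

pi_nas = yearly_kwh(16)       # ~140 kWh/year
prebuilt = yearly_kwh(30)     # ~263 kWh/year

print(f"Pi NAS: {pi_nas:.1f} kWh, prebuilt: {prebuilt:.1f} kWh")
```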
On the flip side, if you build a NAS with the RK3588 chip, you could probably get even
more efficient and more speed.
So there's a couple boards out there that are interesting that I might take a look at
at some point.
But the nice thing is all of this, this is all really well supported.
Like the software just click some buttons and you have everything working.
I haven't always had that same kind of experience when I'm using the RockChip boards.
Some of them are getting pretty good though.
I'm going to go ahead and let this write finish and I'm going to do a full read of that 100
gigs of data and we'll see where we end up.
At the end of the copy, it looks like the system used 22 Watts for a little while while
it was doing some sort of processing.
I don't know what ZFS was doing there.
Maybe that was part of the compression.
I don't know.
It's a lot of power to use at the end there.
The actual performance was interesting.
After that initial part where it was faster than RAID 0, it actually slowed down to a
tiny bit slower than RAID 0 over that long rest of the copy.
That's why it's good to use a large, large file to test the actual performance of your
system because especially with ZFS, it's going to cache a lot in the beginning in RAM and
that throws off how fast your actual disk array is.
But the CPU usage was not too bad.
Power consumption was down around 8 to 16 Watts throughout the whole copy.
But in the end, the file copy was 74 megabytes per second with ZFS in RAID Z1 and it was almost
100 megabytes per second in RAID 0.
Now that's for the writing, which is always going to be a little bit slower with a setup
like this.
Read speeds for both are practically the same.
It's just basically line speed.
It's not hard at all to do the reads.
[interjection Jeff] So this is a little embarrassing.
All those conclusions I have are based on the fact I was benchmarking this all on a Mac.
And I switched to my Windows PC and I was able to get almost line speed for one gigabyte
file copies writing to the Pi and 150 megabytes per second writing over the 2.5 gig network.
So that changes my perspective a little bit on it.
And I think the biggest takeaway is don't use a Mac for benchmarking network file copies,
even if it has 10 gigabit networking and everything else on it seems to be fine.
Mac OS for some reason is not great with file copies.
And I have a whole blog post and I have more details in the GitHub issue linked below.
But let's get back to the video.
[normal Jeff] It's not inconceivable to build a system like this.
All in this one is still under 200 bucks total, including all these extra boards and things.
But it always goes back to this: DIY means you're responsible for the software, you're responsible
for maintenance and updates and all that kind of stuff.
So that was a fun experiment.
And I plan on doing some other fun experiments now that I have this little board
here that lets me split up PCI Express lanes.
And we'll see how we can bend the Pi 5's PCI Express bus.
It'd be really cool to see a Compute Module 5 expose even more, but we'll see what happens
whenever that comes out.
I know that that was a big change from the Pi 4 to the Compute Module 4.
It gave us PCI Express.
Now we have it on the Pi 5, but I think we might be able to do more in a Compute Module
form factor, but we'll see.
Until next time, I'm Jeff Geerling.