The ULTIMATE Raspberry Pi 5 NAS
Summary
TLDR: This video script recounts an experiment in building a Raspberry Pi NAS (network-attached storage). The author started with the Raspberry Pi 4 and Compute Module 4 but, due to their performance limits, could never reach 100 megabytes per second over the network. With the release of the Raspberry Pi 5, he tries the new hardware with a Radxa SATA HAT to build a lower-cost NAS. By adding a 12V power supply, a fan, and a microSD card, he assembles a small NAS for under $150. Despite some limitations, such as gigabit-only Ethernet and the lack of hot-swap drive bays, he achieves a 2.5G network connection using the HatBRICK! Commander and HatNET! 2.5G expansion boards. Finally, using the Open Media Vault (OMV) operating system, he sets up RAID and network file sharing, demonstrating the cost, power, and performance advantages of a DIY NAS.
Takeaways
- 🚀 The Raspberry Pi 5 brings faster processing and PCI Express support, significantly improving the performance of Pi-based NAS (network-attached storage) builds.
- 💾 With Radxa's SATA HAT and a Raspberry Pi 5, you can build a budget NAS for under $150 total.
- 🔌 The SATA HAT powers the Raspberry Pi 5 through the GPIO pins from its own 12V input, simplifying power management.
- 🔩 During assembly, make sure every connection is seated properly and securely; never force the delicate connectors.
- 📶 The Raspberry Pi 5 only has gigabit Ethernet onboard, but the HatBRICK! Commander and HatNET! 2.5G open up the possibility of 2.5-gigabit networking.
- 🔥 Under heavy load the SATA controller chip can reach 60°C, so a cooling fan or heatsink is recommended.
- 📈 In testing, the RAID 0 array reached nearly 900 MB/s sequential read/write speeds, a satisfying result.
- 🌟 Open Media Vault (OMV) 7 runs well on the Raspberry Pi 5 and provides a user-friendly interface for managing storage and file shares.
- 💻 ZFS with a RAIDZ1 layout writes slightly slower than RAID 0 but adds data protection while keeping good read performance.
- ⚙️ Power draw is relatively low, between 8 and 22 watts during heavy data transfers, making the build quite energy-efficient.
- 🛠️ Going DIY means you are responsible for software maintenance and updates yourself, but it also allows far more customization.
- 📊 Network file-copy tests ran better from a Windows PC than from a Mac, highlighting differences in transfer performance between operating systems.
Q & A
What notable improvements does the Raspberry Pi 5 offer over its predecessors?
-The Raspberry Pi 5 is faster, has a PCI Express interface, and can now actually be bought with relative ease. Although slightly pricier than the Pi 4, with off-the-shelf 4-bay NAS units typically costing $300 and up, the Pi 5 may offer a cheaper NAS solution.
What challenges did the author run into building Raspberry Pi NASes?
-Challenges included never reaching 100 megabytes per second over the network, and two earlier promising projects (the Wiretrustee SATA board and the Radxa Taco) failing because of the Raspberry Pi shortage.
How did the author try to work around the Raspberry Pi 5's gigabit Ethernet limit?
-By using Pineberry Pi's HatNET! 2.5G expansion HAT together with the HatBRICK! Commander, a PCI Express Gen 2 switch that allows the SATA drives and the 2.5G network interface to be connected at the same time.
What tools and methods did the author use to test the NAS's performance?
-The author used the fio tool for disk benchmarks, connected to the Raspberry Pi over SSH to check whether the hard drives were recognized, ran iperf tests to evaluate network performance, and tried different RAID layouts, including RAID 0 and RAIDZ1.
What factors did the author mention that could affect performance?
-Several: the speed of the Raspberry Pi 5's PCI Express interface (PCIe Gen 2 vs. Gen 3), the SATA controller's bandwidth, and the network interface speed. He also raised thermal concerns, particularly cooling for the JMB585 chip, plus possible kernel or driver bottlenecks.
Why did the author choose Open Media Vault (OMV) as the NAS operating system?
-Because it is easy to use, has a good user interface, and provides RAID management, file system creation, network shares, and Samba integration.
What installation or configuration problems came up during testing?
-The connected hard drives were not detected at first; PCI Express had to be enabled manually. In addition, Open Media Vault offered no direct option to create a RAID array, which the author eventually solved with a ZFS RAIDZ1 configuration.
What tools did the author use to test the NAS's thermal performance?
-An InfiRay P2 thermal imaging camera, plus touching components to feel their temperature.
How did the author evaluate the final performance and cost-effectiveness?
-Read speeds were good; write speeds were somewhat limited, but overall performance exceeded his expectations. The whole system cost under $200 to build, far less than off-the-shelf NAS devices, with relatively low power consumption.
What hard drives were used in testing, and what is their capacity?
-Four hard drives, each with a capacity of 8 terabytes (TB).
What possible software or hardware upgrades did the author mention?
-On the software side, trying different NAS operating systems such as Open Media Vault (OMV) and experimenting with the ZFS file system. On the hardware side, NAS devices built on the RK3588 chip, and hopes that the Raspberry Pi Compute Module 5 will expose more PCI Express capability.
Outlines
😀 Lessons from building Raspberry Pi NASes
The author recaps building a range of Raspberry Pi NAS (network-attached storage) devices, from a tiny all-SSD NAS to the Petabyte Pi project. He notes the performance limits of the Pi 4 and Compute Module 4, and two promising projects that failed because of the Pi shortage. With the Raspberry Pi 5 released, he explores building a NAS around it: cost-effectiveness, performance, and potential bottlenecks. He introduces the Radxa SATA HAT approach and how a 12V supply, fan, and microSD card yield a low-cost NAS.
🛠️ Assembly and first tests
The author walks through assembling the Raspberry Pi 5 NAS, including connecting the FFC (flexible flat cable) and powering the Pi over the GPIO pins. He discusses cooling, fitting a heatsink after resolving a clash between the barrel jack and the heatsink. He notes the missing OLED and fan control board and measures power with a Zigbee smart outlet. Finally, he boots the system and plans to log in over the network for further testing.
🔍 Hardware detection and PCIe configuration
After boot, the author checks whether the hardware is recognized. The hard drives were not detected at first; after consulting the documentation and configuring PCIe, they appeared. He checks the PCIe link speed, finds it at Gen 2, and tries raising it to Gen 3 for more performance. He then benchmarks the drives, weighing RAID 0 and RAID 10 layouts, and checks device temperatures with a thermal imaging camera.
📡 Network performance testing and Samba share setup
The author tests the NAS's network performance, transferring files over a Samba share and measuring write and read speeds. Reads reach 122 MB/s, with writes slightly lower. He also mentions the Open Sauce event and encourages viewers to attend. During testing he finds macOS performs poorly at network file copies and recommends benchmarking from a Windows PC instead.
🔌 Adding 2.5G networking
The author adds 2.5G network support using Pineberry Pi's HatNET! 2.5G adapter and the HatBRICK! Commander. Despite the single PCIe interface, the PCIe switch lets the SATA drives and the 2.5G adapter share the bus. Network tests show modestly better writes, while reads improve significantly, in line with expectations for a 2.5 Gbps link.
📚 OpenMediaVault 7 installation and RAID management
The author installs OpenMediaVault 7 (OMV 7) on the NAS and explores its RAID management features. Through the OMV interface he creates a ZFS storage pool configured as a RAIDZ1 array, providing a degree of data protection. Testing shows ZFS writes slightly slower than RAID 0 with comparable reads, and low overall power consumption.
🔧 Wrap-up and outlook
The author closes by summing up the build: the cost-effectiveness, performance, and maintenance responsibility of a DIY NAS. The total cost of the Raspberry Pi 5 NAS with all hardware came in under $200, with good power efficiency. He looks forward to further experiments, including exploring the potential of the Compute Module 5, and invites viewers to follow along.
Keywords
💡Raspberry Pi NAS
💡PCI Express
💡SATA HAT
💡FFC (Flat Flexible Circuit)
💡RAID
💡ZFS
💡HatNET! 2.5G
💡HatBRICK! Commander
💡Power consumption
💡Cooling
💡Open Media Vault
Highlights
Built a range of Raspberry Pi NAS devices, from a tiny all-SSD NAS to the biggest Pi project on Earth, the Petabyte Pi.
The Raspberry Pi 4 and Compute Module 4 were barely adequate, never reaching 100 megabytes per second over the network, even with SSDs.
Two promising projects, the Wiretrustee SATA board and the Radxa Taco, failed because of the Raspberry Pi shortage.
The Raspberry Pi 5 brings more speed and PCI Express support, and you can actually buy one.
The small SATA HAT that Radxa sent costs just $45 and is already shipping, enabling a small NAS build for under $150.
The Raspberry Pi 5 only supports gigabit networking while other NASes do 2.5 gigabit, but DIY projects always involve trade-offs.
A 2.5-gigabit networking fix may exist; the author plans to assemble and test whether this can be the ultimate Raspberry Pi 5 NAS.
Radxa's Penta SATA HAT includes five SATA connections: four drives on board, plus one more via an external connector.
Power comes in through a 12V barrel jack or an ATX Molex power supply connector, and the board feeds the Raspberry Pi 5 through the GPIO pins.
The Raspberry Pi 5 connects to the SATA HAT with an FFC (flexible flat cable), a very delicate connection.
Seat the FFC cable carefully during assembly to avoid damage.
Small parts like screws and standoffs are needed during assembly; Radxa typically includes all the required bits.
The OLED and fan control board is not yet available; a fan matters when using hotter-running 2.5-inch hard drives.
The author realized the Pi itself needed cooling, since there is no built-in fan.
By modifying the heatsink, he successfully fitted an active cooler onto the Pi.
The board uses the JMB585 PCIe Gen 3 x2 controller, so even after upgrading the Pi to Gen 3, some bandwidth is left on the table.
Open Media Vault (OMV) serves as the NAS operating system, and all connected drives were successfully recognized.
In RAID 0, the array wrote at nearly 900 megabytes per second, with 4K random writes at 152 megabytes per second.
Over a Samba share, a sustained attempt to saturate 1 Gbps held write speeds steady at 110 megabytes per second.
Tried Pineberry Pi's HatNET! 2.5G and HatBRICK! Commander to achieve 2.5-gigabit networking.
With 2.5-gigabit networking, reads reached 250 megabytes per second; writes improved slightly but stayed near gigabit speeds.
Installed Open Media Vault 7, created a ZFS storage pool, and tested it.
ZFS in RAIDZ1 performed similarly to RAID 0: slightly slower writes, essentially identical reads.
Total system power draw ranged from 8 to 16 watts, more efficient than comparable NASes.
The author advises against benchmarking network file copies from a Mac, since macOS handles file copies poorly.
The whole DIY NAS project cost under $200, including all the extra boards and components.
Transcripts
I've built a bunch of Raspberry Pi NASes, from a little tiny all-SSD NAS to the biggest
one on Earth, the Petabyte Pi project.
But the Pi 4 and Compute Module 4 were just barely adequate.
I could never get even 100 megabytes per second over the network, even with SSDs.
The two most promising projects, the Wiretrustee SATA board and Radxa Taco, were both dead
in the water.
They launched right before the great Pi shortages, when you couldn't get a Raspberry Pi for
love or money.
But the Raspberry Pi 5 is here now.
It's faster, it has PCI Express—and best of all, you can actually get one.
Yeah, it's a little more expensive than the Pi 4, but with off-the-shelf 4-bay NASes costing
$300 and up, could we actually build a Pi NAS for less?
And would it be any good?
Well, today I'm going to see.
And to do it, I'll use this tiny SATA HAT that Radxa sent.
This costs $45, and it's already shipping.
Add a 12V power supply, a Raspberry Pi 5, a fan and microSD card, and we have a tiny
NAS for less than $150.
But will bottlenecks kill this thing like they did with the Pi 4?
I mean, the Pi 5 only gets a gigabit, those other NASes can do 2.5.
And they have hot-swap drive bays...
And vendor support!
So yeah, comparing just on price alone is silly.
There's always going to be trade-offs when you go DIY.
But this thing should have a lot fewer compromises than the jankier builds I did in the past.
At least, I hope.
And 2.5 gig networking?
I might have a fix for that.
I'm going to put this thing together and see if it could be the ultimate Raspberry Pi 5
NAS.
I do not know exactly what tools will be required, and I don't know what's in the box.
Hopefully it includes everything I need.
But Radxa usually does a pretty good job including all the little bits and bobs you
need for this.
Looks like it includes this extra cable.
This is, after all, the 'Penta' SATA HAT,
so five SATA connections.
I have four drives here, but you can add on another one using this strange external– I
guess this might be eSATA or something?
But it has SATA and power from this board.
Something important to think about is how you're going to supply power to it.
I know some people in comments have mentioned, "Oh, you need to supply power to the Pi and
this board."
But no, I believe that you can just power this board through the 12-volt barrel jack
or through an ATX Molex power supply here.
So if you have it in a PC case or something, you could do it that way.
And this will supply power to the Pi 5 through the GPIO pins.
This should be able to provide adequate power as long as the power circuitry on here is
good enough to take that 12-volt signal and give a clean 3 to 5 amps on the Pi's 5-volt
rail.
This doesn't have the normal PCI Express connector that you see on the Pi 5.
So the Pi 5 has this little guy here.
This has a much larger connector with more pins.
That could be an interesting thing.
I believe that they have an adapter for it, though.
So yeah, here it is.
So this is called an FFC or Flat Flexible Circuit board.
And it looks like they've included two, which is nice because these little connectors are
a little bit delicate.
You can see how thin they are.
They're kind of like paper-thin.
But these are Flat Flexible Circuit boards or FFCs.
And they connect from the Pi's PCI Express connector here over to this guy here.
And the GPIO pins over here are going to provide power to the Pi.
At least that's my hope.
There is a getting started guide on here, but I'm going to YOLO this thing and see what
happens.
One important thing whenever you're doing these is make sure you get the connector seated
all the way.
And it should go in pretty easy.
If you're pushing hard, then you're going to break the cable.
So don't do that.
If you're pushing hard, you might need to pull this little connection up and always
do it on both sides so that it doesn't come off.
Because if it comes off, it might break and then you would not have a way to hold the
cable down.
Push down on this little top part and this cable is now affixed to the Pi very well.
And then I'm going to plug it into here.
So it looks like it goes like this.
The funny thing is these kind of connectors are often used inside of cameras and other
things that are put together at factories.
And there they're very careful.
They have their methodologies.
They even have tools to help with it.
When you give these things to people in the general public, like you and me, we tend to
break our first one.
So I guess it is a really good idea that they included a second one here.
They probably have some screws too.
Let's check.
Yeah, there's a little kit full of screws here.
There's some standoffs and things.
And then now I'm going to put this in.
I'm going to carefully put this over and plug in the GPIO pins that provide power.
But that fits nicely together.
There is a connector here for an OLED and fan control board that sits on top of the
hard drives at the top.
They don't have that available yet.
I think they used to make it.
I don't know if they needed to revise it for this or what, but I asked about it and it's
not yet available.
So it would be nice to have that, especially, these are not that hot of drives, but if you
use hard drives, if you use 2.5 inch hard drives, then those can get pretty toasty and
it's nice to have a fan blowing air over them.
I just realized I don't have any fan on the Pi itself and I probably should do that because
it could get pretty hot and toasty inside here.
Let's get our little active cooler here.
I hope this will fit.
I don't know if there was a warning against using this, but the Pi does need some sort
of cooling, whether it's a heat sink or a fan.
There's no fan built into this.
It would be cool if there was a little fan under here or an option for one, but it doesn't
seem like that's the case.
Okay, please still fit.
Looks like it will fit.
Oh no, you know what?
The barrel plug is just touching on the top of the heat sink.
There's literally just three of the fins on the heat sink.
You know what I might do?
I might see if I can bend those off.
Take this back off again.
I'm going to pull this connection off.
This is a terrible idea.
I would not recommend doing it.
Just bending this back and forth.
There's one.
Shouldn't affect the performance that badly.
I removed the middle portion from the middle point up of these three little fins on the
heat sink.
There's a side view of it.
You can kind of make it out.
It's kind of hard to make out.
Sorry about that.
Let's get this all back together now and see if it fits.
This time, if I go down, it can go down all the way.
Look at that!
That's just enough clearance.
As long as it works in the end, it's all good.
I use this huge guy.
Just give these a little snug.
Generally, I'd use a nut driver for this, but this works in a pinch.
Literally.
[voiceover Jeff] My top-down recorder decided to corrupt the rest of the video, so I lost all that footage.
But in that footage, I mentioned the board uses the JMB585 PCIe Gen 3x2 controller, which
means even if we upgrade the Pi 5's bus to Gen 3 from its normal Gen 2, we'll miss out
on a little bandwidth.
And also, the kit comes with two side supports that hold all the 2.5" drives together, though
there may be a case available at some point in the future.
They actually had one in the past when it was sold for the ROCK 4 or Pi 4, I think, but
I'm guessing that they'll have to make another batch if they get enough interest in this
new version of the Penta SATA hat.
Okay, so everything is put together now.
It's all looking nice, and I think there will be enough airflow.
There's holes in the sides, holes in the middle, so enough air will convect through for these
drives at least.
And I have a 5A 12V power supply.
This should be adequate for these drives and the Raspberry Pi 5.
I'd budget maybe 3 to 5 watts per drive, or if you have 3.5" drives, maybe a little more,
and you might want to get an 8A or maybe even 10 or 12A power supply.
But definitely don't use a 2A power supply and expect this to work.
It's going to have all kinds of issues.
I also have Raspberry Pi OS, 64-bit light version, and I might try Open Media Vault.
I'm going to take the microSD card and put it into the slot, and then I'll grab this
power adapter.
One other reason why I'm over at the desk is I have my little, this is a Zigbee– Third
Reality Zigbee outlet that has power measurement built in, which is very handy for testing.
I'll go ahead and bring that up on here.
If I go to Home Assistant and then go to Power, you can see that right now there's 0 watts
because there's nothing plugged into it.
Power is going to come in.
Looks like they wanted to align the power with the USB-C port, not that that matters.
First I'm going to plug in network, and I'll plug in power and we'll see what happens.
Hopefully no sparks.
All right.
I have a green light on the board, and the Pi is booting up.
Power usage is up to 14.2 watts at boot, and now the Pi is doing its reboot, so it's going
to reboot a couple times this first time that I turn it on because it expands the file system
to fill up the microSD card, all that kind of stuff.
So we'll fast forward a bit until it's all booted up, and then we can log into it on
the network and see if it's actually working.
I don't see any lights.
There's just one green LED on the board over here, but I don't see any other lights.
So I don't know if there's lights per hard drive.
So I'm going to log into it and we'll see what we can see.
SSH pi at pi-nas.local.
There it is.
And if I say lsblk, hopefully we see those hard drives.
No, we're not seeing them.
Let's try lspci.
And I'm not seeing the device at all.
I don't see any errors in here.
Let's go to the URL on this box and see if there's any other tips that we're missing.
rock.sh/penta-sata-hat.
...penta-sata-hat.
So we did that.
We did that.
Oh. [hehe]
So maybe I should actually do that.
Let's try that.
Go in here.
You'd think it would do it automatically, but it does not.
So we're going to enable PCI Express, save and reboot.
So save that and reboot.
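On Raspberry Pi OS Bookworm, enabling the Pi 5's external PCIe connector is a one-line boot config change; a sketch of the step being described:

```shell
# Enable the Pi 5's external PCIe connector (off by default), then reboot.
echo "dtparam=pciex1" | sudo tee -a /boot/firmware/config.txt
sudo reboot
```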
So let's check again.
There we go.
We have one, two, three, four hard drives.
And if I say lspci, I can see the Jmicron SATA controller.
Now, right now it should be PCI Express Gen 2.
We can check that with sudo lspci -vvvv.
This is going to give us all the information about PCI Express devices.
And if I go up to here, this is AHCI.
That's the kernel module for the SATA controller.
And we can go up to the top section.
See, it's Jmicron JMB585.
And if I go down to link capabilities, it says speed 8 gigatransfers per second width
x2.
That's PCIe Gen 3x2.
But the status says it's 5 gigatransfers x1.
So definitely less bandwidth than the chip is capable of.
So I'm going to try PCIe Gen 3.
And I can do that following my own guide.
If I go down here, turn that on like this and reboot.
And we'll see if it gives us Gen 3 speeds instead of Gen 2 speeds, which would give
us the maximum performance that we can get on the Pi 5.
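Forcing the link to Gen 3 is another config.txt setting, and the negotiated speed can be confirmed afterward with lspci. A sketch; note that Gen 3 is running the Pi 5's PCIe interface out of spec:

```shell
# Ask the firmware for a PCIe Gen 3 link (8 GT/s) instead of the default Gen 2.
echo "dtparam=pciex1_gen=3" | sudo tee -a /boot/firmware/config.txt
sudo reboot

# After reboot: LnkCap shows what the device supports, LnkSta what was negotiated.
sudo lspci -vvvv | grep -E "LnkCap|LnkSta"
```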
I have four drives that have nothing on them.
I'm going to try-- we should probably just benchmark the drives first in like RAID 10
just to see what the maximum speed is or maybe even RAID 0.
So let's do that.
It'll take a couple minutes.
And we have blinking!
So you can see that the LEDs actually do work.
I didn't see those when I was looking earlier, but it has some LEDs.
And you can see them blinking when the drives are accessed.
So nice job.
I should check.
It does feel a little bit hot.
InfiRay, I found them at CES.
And they actually sent me home with a couple goodies.
This is the P2.
And the reason why I wanted them to send me home with one to test was it has this snap-on
macro lens that you can see individual resistors or things on your PCB very close up, which
is kind of cool.
But their software is a little bit iffy.
Not the best software that I've used for IR cameras.
But the camera itself is really good quality and works better than my old Seek thermal.
But let's check the temperatures on here.
And it looks like the drives themselves-- well, they're a little bit reflective.
So we might not be seeing the actual drive value.
But the board is up to 50 degrees or so.
The SATA controller is down there.
It looks like it's the hottest part of this thing.
And it is getting up there to 60 degrees Celsius.
So it might be good to have at least an active fan blowing down on top.
There's the cold soda can;
16 degrees Celsius.
And there's the hot SATA chip.
So I'm going to put this cover on and see up nice and close.
If I get in there, we can see that the chip itself is 60 degrees Celsius.
So it's pretty toasty in there.
I would definitely do a fan or heat sink on this if you're going to deploy this long term.
Another fun thing with thermal imaging is you can see all kinds of fun details.
Like, you can see that this is where my hand was resting.
And if I just put my hand on the table and take it off, there's a hand print.
And apparently this little screen on here also generates a teeny tiny bit of heat.
And now it has my fingerprint on it, which is also warm.
Looks like the formatting is finished.
And what's our next step here?
Mount the array.
OK, mount RAID 0.
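The exact commands aren't shown on screen, but a RAID 0 array like this is typically created with mdadm along these lines (the device names /dev/sda through /dev/sdd and the mount point are assumptions):

```shell
# Stripe all four drives into one md device (RAID 0: speed, no redundancy!).
sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd
sudo mkfs.ext4 /dev/md0          # format the array
sudo mkdir -p /mnt/raid0
sudo mount /dev/md0 /mnt/raid0   # mount it
```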
So now let's do a disk benchmark on it.
And I'll run the disk benchmarks and see how fast this array can go.
OK, here goes FIO.
Hey, that's not bad at all.
850 to [8]60 megabytes per second.
And that's mebibytes.
So let's see how fast it was in megabytes.
Almost 900 megabytes per second across all four drives in RAID 0, of course.
But random reads of 687 megabytes per second and random writes of 758.
And then we have 4K block size, 44 megs read and 152 megs write at 4K, which is not bad
at all.
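A fio invocation along these lines reproduces that kind of test (a sketch; the test file path and sizes are assumptions):

```shell
# Sequential write test: 1 MiB blocks, bypassing the page cache with --direct.
fio --name=seq-write --filename=/mnt/raid0/fio-test --rw=write \
    --bs=1M --size=4G --direct=1 --numjobs=1 --group_reporting

# 4K random write test against the same file.
fio --name=rand-write --filename=/mnt/raid0/fio-test --rw=randwrite \
    --bs=4k --size=1G --direct=1 --numjobs=1 --group_reporting
```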
I'm interested in seeing-- I think what I'll do is I'll just put a Samba share on this,
and we'll see if we can saturate a 1 Gbps connection continuously.
Restart Samba and create a password.
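A minimal version of that setup looks something like this (the share name, path, and user are assumptions):

```shell
# Append a share definition to /etc/samba/smb.conf.
sudo tee -a /etc/samba/smb.conf <<'EOF'
[shared]
   path = /mnt/raid0/shared
   read only = no
   browseable = yes
EOF
sudo smbpasswd -a pi            # create an SMB password for user 'pi'
sudo systemctl restart smbd     # restart Samba to pick up the new share
```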
Now I should be able to connect on my Mac.
pi-nas.local, we'll do the shared directory.
Here it is.
So I'm going to copy over a folder with all of the footage of the build.
It's 100 gigs.
And let's check this out.
Let's see how fast it is.
That is line speed.
110 megabytes per second is pretty typical.
Let's see if it can keep up that data rate.
I can smell that slight off-gassing here.
So I do think that I would put some sort of cooling on here just for that JMB585 chip.
On my other NASes, over 1 gigabit, you can just hammer it, and it'll stay 110, 115 megabytes
the entire time.
This is a lot faster than the Pi 4 NASes I've set up before, though.
And we'll just let the screen recorder keep going at 18 minutes, and we'll just keep moving.
While that's copying, I want to take a brief opportunity to tell you about Open Sauce.
Open Sauce is going to be June 15 to 16 in San Francisco, and I'll be there.
I'll be there along with a ton of other creators in the Maker areas, electronics, hacking,
all kinds of fun things.
If you want to go, there's a way that you can get in for free, and you can come to the
party that's beforehand, where all the other YouTubers and everyone will be present.
If you want to do that, you can apply to be an exhibitor.
They have tons of space for exhibits this year.
It'd be really cool to see your project.
So if you want to do that, go to opensauce.com and apply to be an exhibitor.
Otherwise you can also come as just a normal person who's not exhibiting things too.
So hopefully I'll see you there June 15 to 16.
If not, I will definitely be posting some things on Twitter and maybe something on YouTube.
I don't know.
So make sure you're subscribed.
It copied everything over to the Pi.
Now let's check the read speed.
I'm going to copy it back into a different folder on my local computer.
And we'll see if it can give me 110 megabytes per second.
Oh, look at that.
It's giving me 122, which is a little faster than the write speed.
And you can see that the drives are reading pretty much flat out right now.
I don't know if that'll fill up the cache, but you can see that the data is flowing a
lot more smoothly coming off the Pi than writing to it.
So there are some bottlenecks.
I don't think it's Samba, and I don't think it's the drives themselves.
I think there's a bottleneck somewhere in the Pi's kernel or something when it's writing
through because I had that problem on the Pi 4, but on the Pi 4, it wouldn't even hit
like 120 megabytes per second all the time.
But reading, that's not an issue at all here.
We're cranking at 120 megabytes per second.
I deleted everything off of there, and it looks like obviously read speeds are much
more consistent than write speeds.
But I'm going to try something else that I mentioned at the beginning of this video.
What about 2.5 gig networking?
Now PineBerry Pi makes the HatNET! 2.5G.
This is a 2.5 gigabit hat for the Raspberry Pi 5.
But you'll probably already notice there's a problem.
It has one PCI Express input.
There's only one PCI Express connector on the Raspberry Pi 5.
How do we solve this problem?
Because this needs that, and I want to put it on here too to see if I can get 2.5 gig
networking.
Well, I can try the HatBRICK! Commander from Pineberry Pi.
And yes, they sent me these things.
I would be buying them myself anyway, but I'm going to disclose that Radxa sent me this,
and Pineberry Pi sent me this.
I'm testing these things out to see if they can work together and do some crazy things.
But Pineberry also sent me all of these extra cables of varying lengths.
One thing that can be a problem with when you start connecting multiple things together
is the PCI Express signaling.
So I'm going to try to use the shortest cables I can for these experiments.
But I'm going to basically put this, which is a PCI Express Gen 2 switch, off of the
Pi's bus, and then connect one connector to the SATA drives and the other connector to
the HatNET! 2.5G.
The downside is this is going to make everything be PCI Express Gen 2 speed instead of 3, so
I wouldn't be able to get 800 megabytes per second on these hard drives.
But on the flip side, this is 2.5 gig networking, and if we say, let's say 2 gigs for networking
and 2 gigabits for the hard drives, we might be able to do that to almost saturate 2.5
gig network if the Pi 5 can support that.
I don't know if it can or not.
I don't think it will be able to, but we'll see if any of this even works.
It might also not have enough power.
I don't know.
But I'm going to unplug this.
Okay, we got that connector out of here.
There is some risk here.
If we are mixing these cables from different vendors and connections, there's a little
risk that something's going to go wrong, but hopefully that doesn't happen.
It's definitely not my finest work.
There's an LED on here, and I see a light on the switch, and there's a power LED on
the HatBRICK! Commander, and there's lights on here.
Let's see if this is actually going to work.
lspci...
Hey, look at that.
So we have the switch here.
We have the SATA controller here, and we have the 2.5 gig controller here.
Let's do ip a, and we have an IP address on that.
So let's do an iperf test.
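With iperf3 that test looks something like this (the hostname is from the video; start the server on the Pi first):

```shell
# On the Pi: run an iperf3 server.
iperf3 -s

# On another machine: test throughput toward the Pi,
# then the reverse direction with -R.
iperf3 -c pi-nas.local
iperf3 -c pi-nas.local -R
```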
Now we're getting 2 gigabits.
It's not 2.5 gigabits, but it's not nothing.
So coming back only 1.6 gigabits, that's not horrible.
It's still more than a gigabit.
This is probably going to get 2.5 gigabits if you connect it straight to the Pi.
I think that some of the overhead comes out of that packet switching that is running to
the drives as well.
So if I say lsblk, we still have the drives, and they're mounted.
So we'll see if we get any faster write speeds.
It's doing 110, 117.
That's about the same as what we were seeing before.
So we're not getting faster than a gigabit over the 2.5 gig connection, at least for
writes.
I do see a few peaks up to about 125 megabytes per second, so better than a gigabit.
And it's interesting, the overall rate seems a little steadier with the 2.5 gig.
Maybe the Pi's internal controller is a little funky, but I don't know.
But it's giving us a little bit more in the write speeds.
I'm really interested to see the read speeds, though.
Hopefully we can get more than 1 gigabit.
Let's check.
There we go.
217 megabytes, 250 megabytes per second.
That's more what I'm expecting out of a 2.5 gig connection.
So this can put through that data.
It's interesting.
I think it's pulling from RAM because I don't see the drives blinking at all here.
It's probably copying all this data from RAM, and now it's hitting the drives.
And you can see it dips a tiny bit there, so down to 230 megabytes per second.
So Linux usually caches files on the RAM as it's copying them back and forth, so that
if you have a file that you're accessing a lot, it's a lot faster.
But now that it's hitting the drives, it only dipped down 10 megabytes per second, so that's
not bad at all.
So for a read-heavy NAS, this isn't looking like that bad of a setup.
Now that I know that everything is going to work on here hardware-wise, I think it's time
to put OMV on here and see how that runs.
I haven't used OMV 7 yet, so this will be new for me.
I don't think it's that much different than OMV 5 and 6, but let me grab this script and
go over here, and this hopefully will just work.
I'm going to SSH into the Pi and just paste in their script, the installer, and here it
goes.
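The installer being pasted in is presumably the community openmediavault install script; the commonly documented invocation looks like this (verify the URL against the OpenMediaVault-Plugin-Developers repository before piping anything into a root shell):

```shell
# Fetch and run the openmediavault install script as root.
wget -O - \
  https://github.com/OpenMediaVault-Plugin-Developers/installScript/raw/master/install \
  | sudo bash
```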
Let's check power consumption.
So during the install, it was used in between 8 to 10 watts, and it looks like the baseline
for this build is 8 watts with the 2.5 gig network adapter and everything else.
But let's go to pi-nas.local, and does this work?
Maybe I have to use the IP address.
Let's try that.
Well there it is.
I guess it was still booting up.
Okay, so that was not the problem there.
So 'admin' and 'openmediavault' are the password logging in.
There it is.
There's no dashboard, that's okay.
Storage is where we should see our disks.
They should show up.
Yep, 1, 2, 3, 4.
All of them are 8 terabytes.
And I want to create an array.
File systems.
Is this where we create it?
Create and mount a file system, ext4.
But I want to create a RAID array.
How do I create a RAID array?
Am I totally missing something?
I thought there was a thing over here for creating RAID, but I don't see it anymore.
What does this say?
See, this has RAID management, but I'm not seeing RAID management anywhere.
Do you see RAID management anywhere?
We could try ZFS instead of RAID, but that's instead of like mdadm RAID.
So we can try it out on openmediavault.
I've never tried it on OMV before, but we'll see how it works here.
I like this little END OF LINE here.
I guess a nod back to Tron, the 1982 version.
And we'll do RAIDZ1 since we have four drives.
A RAIDZ1 will use one drive, the equivalent of that for parity data.
That way I could lose one of these four drives and all the data would be intact.
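The "21 terabytes available" figure he sees checks out: RAIDZ1 across four 8 TB drives leaves three drives' worth of data capacity, and the pool size is reported in binary units (TiB). A quick check:

```shell
# Usable RAIDZ1 capacity: (4 - 1) data drives x 8 TB each, shown in TiB.
awk 'BEGIN { printf "%.1f TiB\n", (4 - 1) * 8 * 10^12 / 2^40 }'
# prints "21.8 TiB"
```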
But here we go.
It says pending changes, 21 terabytes available.
Let's apply this.
So now the tank should exist.
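Under the hood, the OMV plugin is doing roughly what this zpool command does on the CLI (a sketch; the device names are assumptions, and /dev/disk/by-id paths are generally safer than sdX names):

```shell
# Create a RAIDZ1 pool named 'tank' from four drives, then verify it.
sudo zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd
zpool status tank   # show pool health and layout
zfs list tank       # show usable capacity
```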
Compression is on.
I don't know if I would need compression on, but I'm not going to mess with any of that
right now.
If we go to pools, is there anything else I can do?
Tools, what do we got?
So you can scrub it.
I don't know if it automatically scrubs in here, but it gives us the pool information.
That's nice.
So this is a good interface.
It's not the, maybe not the best thing ever.
And I don't know if it comes with schedules and things by default, but it'd be nice to
have a scheduled snapshot and a pool scrubbing scheduled.
That might be something that you can figure under scheduled tasks.
Yeah, so you'd have to do some of these things.
You'd have to add your own scheduled tasks.
It'd be cool if that added some things by default, but I can see why they don't as well.
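Those scheduled tasks could be plain cron entries; a sketch of a monthly scrub and a daily snapshot (the file, pool, and dataset names are hypothetical):

```shell
# /etc/cron.d/zfs-maintenance (hypothetical file)
# Scrub the pool at 03:00 on the 1st of each month:
0 3 1 * * root /usr/sbin/zpool scrub tank
# Daily snapshot of the shared dataset, named by date (% must be escaped in cron):
30 2 * * * root /usr/sbin/zfs snapshot tank/shared@$(date +\%F)
```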
But now let's add a file system.
So we have one tank ZFS.
I'll add a shared under tank shared, and we'll just set everyone read/write, right now,
save, turn on Samba enabled 10.0.2.2– 11.
Okay.
So it wants me to use the IP and there's our shared volume.
So let's, I'm going to copy some stuff over to it.
I have this, this folder has a hundred gigabytes.
So I'll do that.
And here it goes.
So it seems similar to the copies that we were getting with RAID 0.
It's interesting.
It goes a little bit faster sometimes than those copies were.
So I'm wondering if ZFS's caching is actually helping here.
So far, I'm pretty impressed.
I think read speeds are where this wins.
Write speeds are where this loses a little bit because you're not going to be able to
get full 2.5 gigabit networking on that.
But it's better than I was expecting.
And the big win for me, besides the fact that this can be made smaller if we reconfigure
these boards, is the power efficiency, because right now we're using 15 or 16 watts.
The other prebuilt NASes that I've used draw 10 to 20 watts idle, and 25 to 30 watts
when they're doing a lot of stuff.
So this little guy is only using 16 watts doing the same amount of work, which is probably
about half of what most prebuilt NASes would use.
On the flip side, if you build a NAS with the RK3588 chip, you could probably get even
more efficient and more speed.
So there's a couple boards out there that are interesting that I might take a look at
at some point.
But the nice thing is all of this, this is all really well supported.
Like the software just click some buttons and you have everything working.
I haven't always had that same kind of experience when I'm using the RockChip boards.
Some of them are getting pretty good though.
I'm going to go ahead and let this write finish and I'm going to do a full read of that 100
gigs of data and we'll see where we end up.
At the end of the copy, it looks like the system used 22 Watts for a little while while
it was doing some sort of processing.
I don't know what ZFS was doing there.
Maybe that was part of the compression.
I don't know.
It's a lot of power to use at the end there.
The actual performance was interesting.
After that initial part where it was faster than RAID 0, it actually slowed down to a
tiny bit slower than RAID 0 over that long rest of the copy.
That's why it's good to use a large, large file to test the actual performance of your
system because especially with ZFS, it's going to cache a lot in the beginning in RAM and
that throws off how fast your actual disk array is.
But the CPU usage was not too bad.
Power consumption was down around 8 to 16 Watts throughout the whole copy.
But in the end, the file copy was 74 megabytes per second with ZFS in RAID Z1 and it was almost
100 megabytes per second in RAID 0.
Now that's for the writing, which is always going to be a little bit slower with a setup
like this.
Read speeds for both are practically the same.
It's just basically line speed.
It's not hard at all to do the reads.
[interjection Jeff] So this is a little embarrassing.
All those conclusions I have are based on the fact I was benchmarking this all on a Mac.
And I switched to my Windows PC and I was able to get almost line speed for one gigabyte
file copies writing to the Pi and 150 megabytes per second writing over the 2.5 gig network.
So that changes my perspective a little bit on it.
And I think the biggest takeaway is don't use a Mac for benchmarking network file copies,
even if it has 10 gigabit networking and everything else on it seems to be fine.
Mac OS for some reason is not great with file copies.
And I have a whole blog post and I have more details in the GitHub issue linked below.
But let's get back to the video.
[normal Jeff] It's not inconceivable to build a system like this.
All in this one is still under 200 bucks total, including all these extra boards and things.
But it always goes back to: DIY means you're responsible for the software, the maintenance,
the updates, and all that kind of stuff.
So that was a fun experiment.
And I plan on doing some other fun experiments now that I have this little board
here that lets me split up PCI Express lanes.
And we'll see how we can bend the Pi 5's PCI Express bus.
It'd be really cool to see a Compute Module 5 expose even more, but we'll see what happens
whenever that comes out.
I know that that was a big change from the Pi 4 to the Compute Module 4.
It gave us PCI Express.
Now we have it on the Pi 5, but I think we might be able to do more in a Compute Module
form factor, but we'll see.
Until next time, I'm Jeff Geerling.