The ULTIMATE Raspberry Pi 5 NAS
Summary
TL;DR: In this video, the creator explores the potential of building a Raspberry Pi NAS using the new Raspberry Pi 5 and a SATA HAT. Despite initial challenges with bottlenecks and power supply, the end result is a compact and power-efficient NAS setup capable of impressive read and write speeds. The experiment also delves into the possibilities of 2.5 gigabit networking and the use of the ZFS file system, all while emphasizing the trade-offs and responsibilities involved in DIY projects.
Takeaways
- The Raspberry Pi 5's increased speed and PCI Express support make it a viable option for building a cost-effective NAS (Network Attached Storage) solution.
- Despite the Pi 5's gigabit networking limitation, it can still provide competitive read and write speeds, especially for a DIY setup.
- The Radxa SATA HAT and Pineberry Pi's HatNET! 2.5G allow for a compact and efficient NAS build, although with some compromises on bandwidth.
- Power supply considerations are crucial for the setup, with a 5A 12V power supply recommended for 2.5" drives and potentially an 8A or higher for 3.5" drives.
- Thermal management is important for the longevity and performance of the NAS, with active cooling suggested for the SATA controller, which reached around 60°C.
- Raspberry Pi OS and Open Media Vault (OMV) provide a user-friendly interface for managing the NAS, although some features like RAID management may not be immediately apparent in OMV 7.
- Benchmarking network file copies can produce varying results depending on the client operating system, with Windows showing better performance than macOS in this case.
- The Pi 5 NAS achieved write speeds of around 74 MB/s with ZFS in RAIDZ1 and almost 100 MB/s with RAID 0, while reads hit line speed in both configurations, making the system well-suited for read-heavy tasks.
- DIY NAS setups offer flexibility and customization but require the user to take responsibility for software, maintenance, and updates.
- The potential for future improvements in the Compute Module form factor is hinted at, with the hope for more capabilities in the upcoming Compute Module 5.
Q & A
What is the main goal of the Raspberry Pi 5 NAS project discussed in the transcript?
-The main goal of the Raspberry Pi 5 NAS project is to build a cost-effective Network Attached Storage (NAS) solution using the Raspberry Pi 5, which is faster and has PCI Express capabilities compared to its predecessors.
What are the components required to build the described NAS for less than $150?
-The components required include a 12V power supply, a Raspberry Pi 5, a fan, and a microSD card, in addition to the $45 SATA HAT sent by Radxa.
What is the significance of the Raspberry Pi 5's PCI Express capability for this project?
-The Raspberry Pi 5's PCI Express capability allows for faster data transfer rates and improved overall performance, which is crucial for a NAS system designed to handle storage and data sharing efficiently.
How does the 'Penta' SATA HAT contribute to the NAS build?
-The 'Penta' SATA HAT provides five SATA connections, allowing for the connection of multiple drives and enhancing the storage capacity of the NAS system.
What is the purpose of the Flat Flexible Circuit (FFC) boards included in the SATA HAT package?
-The FFC boards connect the Raspberry Pi's PCI Express connector to the SATA HAT, facilitating data transfer between the drives and the Pi.
What power supply rating is recommended for the described NAS setup?
-For 2.5-inch drives, a 5A 12V power supply is adequate. If 3.5-inch drives are used, an 8A or even a 10 or 12A power supply might be more appropriate.
What is the performance difference between RAID 0 and RAID Z1 with ZFS in the context of this NAS setup?
-For write speeds, RAID 0 was faster than RAIDZ1 with ZFS (almost 100 MB/s versus about 74 MB/s). For read speeds, both configurations performed practically the same, reaching almost line speed.
What is the estimated power consumption of the NAS setup with the 2.5 gig network adapter and other components?
-The estimated power consumption with the 2.5 gig network adapter and other components is around 8 to 16 watts during normal operation, with a peak of 22 watts during certain processes.
Why is the Raspberry Pi 5's PCI Express Gen 3x2 capability not fully utilized in this setup?
-The setup routes the Raspberry Pi 5's single PCI Express connector through a PCI Express Gen 2 switch (the HatBRICK! Commander) so that both the SATA HAT and the 2.5G network adapter can share it, which limits the link to Gen 2 speeds.
What is the importance of having a cooling solution for the NAS setup?
-A cooling solution is important to prevent overheating, especially for the SATA controller chip, which can reach temperatures up to 60 degrees Celsius during operation. This ensures the longevity and reliability of the NAS system.
What is the main takeaway from the experiment with the Raspberry Pi 5 NAS setup?
-The main takeaway is that it is feasible to build a cost-effective and power-efficient NAS system using the Raspberry Pi 5. The setup offers decent performance, especially for read operations, and can be further optimized with different configurations and software setups.
Outlines
Raspberry Pi NAS Building Journey
The paragraph discusses the journey of building various Raspberry Pi Network Attached Storage (NAS) devices, from small SSD NASes to the ambitious Petabyte Pi project. It highlights the limitations of the Raspberry Pi 4 and Compute Module 4, and expresses hope for the new Raspberry Pi 5 with its faster speed and PCI Express support. The author also explores the possibility of building a cost-effective Pi NAS and shares the components needed for this project, including the Radxa SATA HAT and its features.
Assembling the Raspberry Pi 5 NAS
This paragraph details the assembly process of the Raspberry Pi 5 NAS setup, focusing on the integration of the SATA HAT, power supply considerations, and the connection of the Pi 5. It discusses the challenges of fitting a fan for cooling, the use of an active cooler, and the importance of proper cable connections. The paragraph also touches on the technical aspects of the build, such as the PCIe Gen 3x2 controller and the inclusion of side supports for the 2.5-inch drives.
Initial Setup and Troubleshooting
The paragraph describes the initial setup and booting process of the Raspberry Pi 5 NAS, including the use of Raspberry Pi OS and the challenges faced during the process. The author encounters issues with drive recognition and shares the steps taken to resolve them, such as enabling PCI Express and rebooting. It also discusses the hardware components, like the JMB585 SATA controller, and the performance expectations from the setup.
Benchmarking and Testing the NAS
This paragraph focuses on the benchmarking and testing of the Raspberry Pi 5 NAS. The author discusses the performance of the drives in RAID 0 configuration, the speed results from disk benchmarks, and the real-world usage testing through file transfers. The paragraph also addresses the thermal performance of the setup, with the use of thermal imaging to identify hot spots and the need for active cooling solutions.
Exploring 2.5 Gig Networking and File Sharing
The paragraph explores the possibility of upgrading the network speed to 2.5 gigabits per second using the HatNET! 2.5G from Pineberry Pi and the HatBRICK! Commander as a PCI Express switch. It details the process of setting up the network, the results of the iperf test, and the impact on read and write speeds. The author also shares insights on the potential benefits of using ZFS file system in a RAIDZ1 configuration for data redundancy and performance.
Reflecting on the DIY NAS Experience
In this final paragraph, the author reflects on the overall experience of building and testing the DIY Raspberry Pi NAS. It discusses the cost-effectiveness, power efficiency, and performance of the setup compared to prebuilt NAS devices. The author also shares thoughts on the software support and the potential for future improvements with new hardware like the Compute Module 5.
Keywords
Raspberry Pi
NAS (Network Attached Storage)
PCI Express
SATA (Serial ATA)
Radxa Taco
Wiretrustee SATA board
DIY (Do It Yourself)
2.5 Gig Networking
HatNET! 2.5G
ZFS (Zettabyte File System)
Open Media Vault (OMV)
Highlights
The Raspberry Pi 5's increased speed and PCI Express support make it a viable option for building a NAS (Network Attached Storage) device.
The project aims to create a cost-effective NAS alternative to commercial 4-bay NASes that typically cost $300 and up.
The use of Radxa's SATA HAT and a Raspberry Pi 5, along with other components, can result in a NAS setup for less than $150.
The Raspberry Pi 5's gigabit network speed is a potential bottleneck compared to other NAS devices that support 2.5 gigabit speeds.
The DIY nature of the project comes with trade-offs, such as lack of vendor support and the need for the builder to manage software updates and maintenance.
The assembly process involves connecting the SATA HAT to the Raspberry Pi 5 via a Flat Flexible Circuit (FFC) board.
Power supply considerations are crucial; the setup can be powered through a 12V barrel jack or an ATX Molex power supply.
The kit has no built-in fan, so an additional cooling solution is needed, especially with hard drives that can generate significant heat.
The use of a thermal imaging camera reveals that the SATA controller chip can reach temperatures up to 60 degrees Celsius, indicating the importance of active cooling.
Benchmark tests show that the RAID 0 configuration can achieve nearly 900 megabytes per second of read speed, outperforming previous Pi 4 NAS setups.
The experiment with 2.5 gigabit networking using Pineberry Pi's HatNET! 2.5G and HatBRICK! Commander results in a network speed of around 2 gigabits.
The transition to Open Media Vault (OMV) 7 for file management and the creation of a RAIDZ1 array using ZFS demonstrates the system's flexibility and compatibility with various software.
The power efficiency of the Raspberry Pi 5-based NAS is notably better than traditional prebuilt NAS devices, using only 16 Watts during operation.
The project's overall cost remains under $200, making it an affordable DIY solution for those willing to take on the responsibilities of building and maintaining their own NAS.
The experiment highlights the potential of the Raspberry Pi 5's PCI Express bus for future expansion and innovation, such as the possible release of a Compute Module 5.
The importance of using the right tools for benchmarking is emphasized: macOS was found to be less efficient for network file copies than Windows.
Transcripts
I've built a bunch of Raspberry Pi NASes, from a little tiny all-SSD NAS to the biggest
one on Earth, the Petabyte Pi project.
But the Pi 4 and Compute Module 4 were just barely adequate.
I could never get even 100 megabytes per second over the network, even with SSDs.
The two most promising projects, the Wiretrustee SATA board and Radxa Taco, were both dead
in the water.
They launched right before the great Pi shortages, when you couldn't get a Raspberry Pi for
love or money.
But the Raspberry Pi 5 is here now.
It's faster, it has PCI Express, and best of all, you can actually get one.
Yeah, it's a little more expensive than the Pi 4, but with off-the-shelf 4-bay NASes costing
$300 and up, could we actually build a Pi NAS for less?
And would it be any good?
Well, today I'm going to see.
And to do it, I'll use this tiny SATA HAT that Radxa sent.
This costs $45, and it's already shipping.
Add a 12V power supply, a Raspberry Pi 5, a fan and microSD card, and we have a tiny
NAS for less than $150.
But will bottlenecks kill this thing like they did with the Pi 4?
I mean, the Pi 5 only gets a gigabit, those other NASes can do 2.5.
And they have hot-swap drive bays...
And vendor support!
So yeah, comparing just on price alone is silly.
There's always going to be trade-offs when you go DIY.
But this thing should have a lot fewer compromises than the jankier builds I did in the past.
At least, I hope.
And 2.5 gig networking?
I might have a fix for that.
I'm going to put this thing together and see if it could be the ultimate Raspberry Pi 5
NAS.
I do not know exactly what tools will be required, and I don't know what's in the box.
Hopefully it includes everything I need.
But Radxa usually does a pretty good job including all the little bits and bobs you
need for this.
Looks like it includes this extra cable.
This is, after all, the 'Penta' SATA HAT,
so five SATA connections.
I have four drives here, but you can add on another one using this strange external connector. I guess this might be eSATA or something?
But it has SATA and power from this board.
Something important to think about is how you're going to supply power to it.
I know some people in comments have mentioned, "Oh, you need to supply power to the Pi and
this board."
But no, I believe that you can just power this board through the 12-volt barrel jack
or through an ATX Molex power supply here.
So if you have it in a PC case or something, you could do it that way.
And this will supply power to the Pi 5 through the GPIO pins.
This should be able to provide adequate power as long as the power circuitry on here is
good enough to take that 12-volt signal and give a clean 3 to 5 amps on the Pi's 5-volt
rail.
This doesn't have the normal PCI Express connector that you see on the Pi 5.
So the Pi 5 has this little guy here.
This has a much larger connector with more pins.
That could be an interesting thing.
I believe that they have an adapter for it, though.
So yeah, here it is.
So this is called an FFC or Flat Flexible Circuit board.
And it looks like they've included two, which is nice because these little connectors are
a little bit delicate.
You can see how thin they are.
They're kind of like paper-thin.
But these are Flat Flexible Circuit boards or FFCs.
And they connect from the Pi's PCI Express connector here over to this guy here.
And the GPIO pins over here are going to provide power to the Pi.
At least that's my hope.
There is a getting started guide on here, but I'm going to YOLO this thing and see what
happens.
One important thing whenever you're doing these is make sure you get the connector seated
all the way.
And it should go in pretty easy.
If you're pushing hard, then you're going to break the cable.
So don't do that.
If you're pushing hard, you might need to pull this little connection up and always
do it on both sides so that it doesn't come off.
Because if it comes off, it might break and then you would not have a way to hold the
cable down.
Push down on this little top part and this cable is now affixed to the Pi very well.
And then I'm going to plug it into here.
So it looks like it goes like this.
The funny thing is these kind of connectors are often used inside of cameras and other
things that are put together at factories.
And there they're very careful.
They have their methodologies.
They even have tools to help with it.
When you give these things to people in the general public, like you and me, we tend to
break our first one.
So I guess it is a really good idea that they included a second one here.
They probably have some screws too.
Let's check.
Yeah, there's a little kit full of screws here.
There's some standoffs and things.
And then now I'm going to put this in.
I'm going to carefully put this over and plug in the GPIO pins that provide power.
But that fits nicely together.
There is a connector here for an OLED and fan control board that sits on top of the
hard drives at the top.
They don't have that available yet.
I think they used to make it.
I don't know if they needed to revise it for this or what, but I asked about it and it's
not yet available.
So it would be nice to have that, especially, these are not that hot of drives, but if you
use hard drives, if you use 2.5 inch hard drives, then those can get pretty toasty and
it's nice to have a fan blowing air over them.
I just realized I don't have any fan on the Pi itself and I probably should do that because
it could get pretty hot and toasty inside here.
Let's get our little active cooler here.
I hope this will fit.
I don't know if there was a warning against using this, but the Pi does need some sort
of cooling, whether it's a heat sink or a fan.
There's no fan built into this.
It would be cool if there was a little fan under here or an option for one, but it doesn't
seem like that's the case.
Okay, please still fit.
Looks like it will fit.
Oh no, you know what?
The barrel plug is just touching on the top of the heat sink.
There's literally just three of the fins on the heat sink.
You know what I might do?
I might see if I can bend those off.
Take this back off again.
I'm going to pull this connection off.
This is a terrible idea.
I would not recommend doing it.
Just bending this back and forth.
There's one.
Shouldn't affect the performance that badly.
I removed the middle portion from the middle point up of these three little fins on the
heat sink.
There's a side view of it.
You can kind of make it out.
It's kind of hard to make out.
Sorry about that.
Let's get this all back together now and see if it fits.
This time, if I go down, it can go down all the way.
Look at that!
That's just enough clearance.
As long as it works in the end, it's all good.
I use this huge guy.
Just give these a little snug.
Generally, I'd use a nut driver for this, but this works in a pinch.
Literally.
[voiceover Jeff] My top-down recorder decided to corrupt the rest of the video, so I lost all that footage.
But in that footage, I mentioned the board uses the JMB585 PCIe Gen 3x2 controller, which
means even if we upgrade the Pi 5's bus to Gen 3 from its normal Gen 2, we'll miss out
on a little bandwidth.
And also, the kit comes with two side supports that hold all the 2.5" drives together, though
there may be a case available at some point in the future.
They actually had one in the past when it was sold for the ROCK 4 or Pi 4, I think, but
I'm guessing that they'll have to make another batch if they get enough interest in this
new version of the Penta SATA hat.
Okay, so everything is put together now.
It's all looking nice, and I think there will be enough airflow.
There's holes in the sides, holes in the middle, so enough air will convect through for these
drives at least.
And I have a 5A 12V power supply.
This should be adequate for these drives and the Raspberry Pi 5.
I'd budget maybe 3 to 5 watts per drive, or if you have 3.5" drives, maybe a little more,
and you might want to get an 8A or maybe even 10 or 12A power supply.
But definitely don't use a 2A power supply and expect this to work.
It's going to have all kinds of issues.
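As a rough sanity check on those numbers, assuming typical figures: four 2.5" drives at up to 5 watts each is 20 watts, plus roughly 10 watts for the Pi 5 under load, comes to about 30 watts, comfortably inside the 60 watts (12 V × 5 A) this supply can deliver. 3.5" drives can pull 20 watts or more each during spin-up, which is why an 8 A or bigger supply makes sense there.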
I also have Raspberry Pi OS, 64-bit light version, and I might try Open Media Vault.
I'm going to take the microSD card and put it into the slot, and then I'll grab this
power adapter.
One other reason why I'm over at the desk is I have this little Third Reality Zigbee outlet that has power measurement built in, which is very handy for testing.
I'll go ahead and bring that up on here.
If I go to Home Assistant and then go to Power, you can see that right now there's 0 watts
because there's nothing plugged into it.
Power is going to come in.
Looks like they wanted to align the power with the USB-C port, not that that matters.
First I'm going to plug in network, and I'll plug in power and we'll see what happens.
Hopefully no sparks.
All right.
I have a green light on the board, and the Pi is booting up.
Power usage is up to 14.2 watts at boot, and now the Pi is doing its reboot, so it's going
to reboot a couple times this first time that I turn it on because it expands the file system
to fill up the microSD card, all that kind of stuff.
So we'll fast forward a bit until it's all booted up, and then we can log into it on
the network and see if it's actually working.
I don't see any lights.
There's just one green LED on the board over here, but I don't see any other lights.
So I don't know if there's lights per hard drive.
So I'm going to log into it and we'll see what we can see.
ssh pi@pi-nas.local.
There it is.
And if I say lsblk, hopefully we see those hard drives.
No, we're not seeing them.
Let's try lspci.
And I'm not seeing the device at all.
I don't see any errors in here.
Let's go to the URL on this box and see if there's any other tips that we're missing.
rock.sh/penta-sata-hat.
...penta-sata-hat.
So we did that.
We did that.
Oh. [hehe]
So maybe I should actually do that.
Let's try that.
Go in here.
You'd think it would do it automatically, but it does not.
So we're going to enable PCI Express, save and reboot.
So save that and reboot.
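For reference, "enabling PCI Express" here boils down to one line in /boot/firmware/config.txt; this is the documented Raspberry Pi 5 parameter:

    # /boot/firmware/config.txt
    # Enable the external PCIe connector on the Pi 5
    dtparam=pciex1

It takes effect after the reboot.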
So let's check again.
There we go.
We have one, two, three, four hard drives.
And if I say lspci, I can see the JMicron SATA controller.
Now, right now it should be PCI Express Gen 2.
We can check that with sudo lspci -vvvv.
This is going to give us all the information about PCI Express devices.
And if I go up to here, this is AHCI.
That's the kernel module for the SATA controller.
And we can go up to the top section.
See, it's JMicron JMB585.
And if I go down to link capabilities, it says speed 8 gigatransfers per second width
x2.
That's PCIe Gen 3x2.
But the status says it's 5 gigatransfers x1.
So definitely less bandwidth than the chip is capable of.
So I'm going to try PCIe Gen 3.
And I can do that following my own guide.
If I go down here, turn that on like this and reboot.
And we'll see if it gives us Gen 3 speeds instead of Gen 2 speeds, which would give
us the maximum performance that we can get on the Pi 5.
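That guide comes down to one more config.txt line (again the documented parameter; note that Gen 3 on the Pi 5 is not officially certified, but it generally works):

    # /boot/firmware/config.txt
    # Force the PCIe link to Gen 3 (8 GT/s) instead of the default Gen 2
    dtparam=pciex1_gen=3

After a reboot, sudo lspci -vvvv should then show LnkSta at 8 GT/s for the JMB585.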
I have four drives that have nothing on them.
I'm going to try-- we should probably just benchmark the drives first in like RAID 10
just to see what the maximum speed is or maybe even RAID 0.
So let's do that.
It'll take a couple minutes.
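A minimal sketch of what that looks like from the command line, assuming the four drives show up as /dev/sda through /dev/sdd (check lsblk first; creating the array destroys whatever is on them):

    # Create a 4-drive RAID 0 array, format it, and mount it
    sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 \
        /dev/sda /dev/sdb /dev/sdc /dev/sdd
    sudo mkfs.ext4 /dev/md0
    sudo mkdir -p /mnt/raid0
    sudo mount /dev/md0 /mnt/raid0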
And we have blinking!
So you can see that the LEDs actually do work.
I didn't see those when I was looking earlier, but it has some LEDs.
And you can see them blinking when the drives are accessed.
So nice job.
I should check.
It does feel a little bit hot.
This camera is from InfiRay; I found them at CES.
And they actually sent me home with a couple goodies.
This is the P2.
And the reason why I wanted them to send me home with one to test was it has this snap-on
macro lens that you can see individual resistors or things on your PCB very close up, which
is kind of cool.
But their software is a little bit iffy.
Not the best software that I've used for IR cameras.
But the camera itself is really good quality and works better than my old Seek Thermal.
But let's check the temperatures on here.
And it looks like the drives themselves-- well, they're a little bit reflective.
So we might not be seeing the actual drive value.
But the board is up to 50 degrees or so.
The SATA controller is down there.
It looks like it's the hottest part of this thing.
And it is getting up there to 60 degrees Celsius.
So it might be good to have at least an active fan blowing down on top.
There's the cold soda can;
16 degrees Celsius.
And there's the hot SATA chip.
So I'm going to put this cover on and see up nice and close.
If I get in there, we can see that the chip itself is 60 degrees Celsius.
So it's pretty toasty in there.
I would definitely do a fan or heat sink on this if you're going to deploy this long term.
Another fun thing with thermal imaging is you can see all kinds of fun details.
Like, you can see that this is where my hand was resting.
And if I just put my hand on the table and take it off, there's a hand print.
And apparently this little screen on here also generates a teeny tiny bit of heat.
And now it has my fingerprint on it, which is also warm.
Looks like the formatting is finished.
And what's our next step here?
Mount the array.
OK, mount RAID 0.
So now let's do a disk benchmark on it.
And I'll run the disk benchmarks and see how fast this array can go.
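The sort of fio invocation I mean, as a sketch (the file path, sizes, and depths are just example values):

    # Sequential read test, 1 MiB blocks, bypassing the page cache
    fio --name=seq-read --filename=/mnt/raid0/fio-test --rw=read \
        --bs=1M --size=4G --direct=1 --ioengine=libaio --iodepth=16
    # Swap --rw=write / randread / randwrite for the other numbers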
OK, here goes FIO.
Hey, that's not bad at all.
850 to [8]60 megabytes per second.
And that's mebibytes.
So let's see how fast it was in megabytes.
Almost 900 megabytes per second across all four drives in RAID 0, of course.
But random reads of 687 megabytes per second and random writes of 758.
And then we have 4K block size, 44 megs read and 152 megs write at 4K, which is not bad
at all.
I'm interested in seeing-- I think what I'll do is I'll just put a Samba share on this,
and we'll see if we can saturate a 1 Gbps connection continuously.
Restart Samba and create a password.
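In outline, assuming the share lives at /mnt/raid0/shared (the path and user are examples):

    # Append a share definition to /etc/samba/smb.conf:
    #   [shared]
    #      path = /mnt/raid0/shared
    #      read only = no
    sudo smbpasswd -a pi           # set a Samba password for user 'pi'
    sudo systemctl restart smbd    # restart Samba to pick up the share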
Now I should be able to connect on my Mac.
pi-nas.local, we'll do the shared directory.
Here it is.
So I'm going to copy over a folder with all of the footage of the build.
It's 100 gigs.
And let's check this out.
Let's see how fast it is.
That is line speed.
110 megabytes per second is pretty typical.
Let's see if it can keep up that data rate.
I can smell that slight off-gassing here.
So I do think that I would put some sort of cooling on here just for that JMB585 chip.
On my other NASes, over 1 gigabit, you can just hammer it, and it'll stay 110, 115 megabytes
the entire time.
This is a lot faster than the Pi 4 NASes I've set up before, though.
And we'll just let the screen recorder keep going at 18 minutes, and we'll just keep moving.
While that's copying, I want to take a brief opportunity to tell you about Open Sauce.
Open Sauce is going to be June 15 to 16 in San Francisco, and I'll be there.
I'll be there along with a ton of other creators in the Maker areas, electronics, hacking,
all kinds of fun things.
If you want to go, there's a way that you can get in for free, and you can come to the
party that's beforehand, where all the other YouTubers and everyone will be present.
If you want to do that, you can apply to be an exhibitor.
They have tons of space for exhibits this year.
It'd be really cool to see your project.
So if you want to do that, go to opensauce.com and apply to be an exhibitor.
Otherwise you can also come as just a normal person who's not exhibiting things too.
So hopefully I'll see you there June 15 to 16.
If not, I will definitely be posting some things on Twitter and maybe something on YouTube.
I don't know.
So make sure you're subscribed.
It copied everything over to the Pi.
Now let's check the read speed.
I'm going to copy it back into a different folder on my local computer.
And we'll see if it can give me 110 megabytes per second.
Oh, look at that.
It's giving me 122, which is a little faster than the write speed.
And you can see that the drives are reading pretty much flat out right now.
I don't know if that'll fill up the cache, but you can see that the data is flowing a
lot more smoothly coming off the Pi than writing to it.
So there are some bottlenecks.
I don't think it's Samba, and I don't think it's the drives themselves.
I think there's a bottleneck somewhere in the Pi's kernel or something when it's writing
through because I had that problem on the Pi 4, but on the Pi 4, it wouldn't even hit
like 120 megabytes per second all the time.
But reading, that's not an issue at all here.
We're cranking at 120 megabytes per second.
I deleted everything off of there, and it looks like obviously read speeds are much
more consistent than write speeds.
But I'm going to try something else that I mentioned at the beginning of this video.
What about 2.5 gig networking?
Now Pineberry Pi makes the HatNET! 2.5G.
This is a 2.5 gigabit hat for the Raspberry Pi 5.
But you'll probably already notice there's a problem.
It has one PCI Express input.
There's only one PCI Express connector on the Raspberry Pi 5.
How do we solve this problem?
Because this needs that, and I want to put it on here too to see if I can get 2.5 gig
networking.
Well, I can try the HatBRICK! Commander from Pineberry Pi.
And yes, they sent me these things.
I would be buying them myself anyway, but I'm going to disclose that Radxa sent me this,
and Pineberry Pi sent me this.
I'm testing these things out to see if they can work together and do some crazy things.
But Pineberry also sent me all of these extra cables of varying lengths.
One thing that can be a problem when you start connecting multiple things together is the PCI Express signaling.
So I'm going to try to use the shortest cables I can for these experiments.
But I'm going to basically put this, which is a PCI Express Gen 2 switch, off of the
Pi's bus, and then connect one connector to the SATA drives and the other connector to
the HatNET! 2.5G.
The downside is this is going to make everything be PCI Express Gen 2 speed instead of 3, so
I wouldn't be able to get 800 megabytes per second on these hard drives.
But on the flip side, this is 2.5 gig networking, and if we say 2 gigabits for networking and 2 gigabits for the hard drives, we might be able to almost saturate a 2.5 gig network, if the Pi 5 can support that.
I don't know if it can or not.
I don't think it will be able to, but we'll see if any of this even works.
It might also not have enough power.
I don't know.
But I'm going to unplug this.
Okay, we got that connector out of here.
There is some risk here.
If we are mixing these cables from different vendors and connections, there's a little
risk that something's going to go wrong, but hopefully that doesn't happen.
It's definitely not my finest work.
There's an LED on here, and I see a light on the switch, and there's a power LED on
the HatBRICK! Commander, and there's lights on here.
Let's see if this is actually going to work.
lspci...
Hey, look at that.
So we have the switch here.
We have the SATA controller here, and we have the 2.5 gig controller here.
Let's do ip a, and we have an IP address on that.
So let's do an iperf test.
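For anyone following along, the shape of the test (the IP is an example; use whatever the other machine has):

    # On another machine with a 2.5 GbE or faster NIC:
    iperf3 -s
    # On the Pi, test both directions:
    iperf3 -c 10.0.2.100        # Pi sends
    iperf3 -c 10.0.2.100 -R     # Pi receives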
Now we're getting 2 gigabits.
It's not 2.5 gigabits, but it's not nothing.
So coming back only 1.6 gigabits, that's not horrible.
It's still more than a gigabit.
This is probably going to get 2.5 gigabits if you connect it straight to the Pi.
I think that some of the overhead comes out of that packet switching that is running to
the drives as well.
So if I say lsblk, we still have the drives, and they're mounted.
So we'll see if we get any faster write speeds.
It's doing 110, 117.
That's about the same as what we were seeing before.
So we're not getting faster than a gigabit over the 2.5 gig connection, at least for
writes.
I do see a few peaks up to about 125 megabytes per second, so better than a gigabit.
And it's interesting, the overall rate seems a little steadier with the 2.5 gig.
Maybe the Pi's internal controller is a little funky, but I don't know.
But it's giving us a little bit more in the write speeds.
I'm really interested to see the read speeds, though.
Hopefully we can get more than 1 gigabit.
Let's check.
There we go.
217 megabytes, 250 megabytes per second.
That's more what I'm expecting out of a 2.5 gig connection.
So this can put through that data.
It's interesting.
I think it's pulling from RAM because I don't see the drives blinking at all here.
It's probably copying all this data from RAM, and now it's hitting the drives.
And you can see it dips a tiny bit there, so down to 230 megabytes per second.
So Linux usually caches files on the RAM as it's copying them back and forth, so that
if you have a file that you're accessing a lot, it's a lot faster.
But now that it's hitting the drives, it only dipped down 10 megabytes per second, so that's
not bad at all.
So for a read-heavy NAS, this isn't looking like that bad of a setup.
Now that I know that everything is going to work on here hardware-wise, I think it's time
to put OMV on here and see how that runs.
I haven't used OMV 7 yet, so this will be new for me.
I don't think it's that much different than OMV 5 and 6, but let me grab this script and
go over here, and this hopefully will just work.
I'm going to SSH into the Pi and just paste in their script, the installer, and here it
goes.
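The installer is a one-liner of this general shape; I'm leaving the URL as a placeholder since the script is maintained upstream, so grab the current link from the OMV documentation:

    # Run the upstream OMV install script on Raspberry Pi OS Lite
    wget -O - <OMV install script URL> | sudo bash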
Let's check power consumption.
So during the install, it used between 8 and 10 watts, and it looks like the baseline for this build is 8 watts with the 2.5 gig network adapter and everything else.
But let's go to pi-nas.local, and does this work?
Maybe I have to use the IP address.
Let's try that.
Well there it is.
I guess it was still booting up.
Okay, so that was not the problem there.
So 'admin' and 'openmediavault' are the username and password for logging in.
There it is.
There's no dashboard, that's okay.
Storage is where we should see our disks.
They should show up.
Yep, 1, 2, 3, 4.
All of them are 8 terabytes.
And I want to create an array.
File systems.
Is this where we create it?
Create and mount a file system, ext4.
But I want to create a RAID array.
How do I create a RAID array?
Am I totally missing something?
I thought there was a thing over here for creating RAID, but I don't see it anymore.
What does this say?
See, this has RAID management, but I'm not seeing RAID management anywhere.
Do you see RAID management anywhere?
We could try ZFS instead of RAID, but that's instead of like mdadm RAID.
So we can try it out on openmediavault.
I've never tried it on OMV before, but we'll see how it works here.
I like this little END OF LINE here.
I guess a nod back to Tron, the 1982 version.
And we'll do RAIDZ1 since we have four drives.
A RAIDZ1 will use one drive, the equivalent of that for parity data.
That way I could lose one of these four drives and all the data would be intact.
But here we go.
It says pending changes, 21 terabytes available.
Let's apply this.
So now the tank should exist.
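What OMV did here is roughly equivalent to something like this on the command line (device names are examples; /dev/disk/by-id paths are safer in practice):

    # Create a 4-drive RAIDZ1 pool named 'tank'
    sudo zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd
    zpool status tank            # verify the pool and vdev layout
    zfs get compression tank     # see what compression is set to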
Compression is on.
I don't know if I would need compression on, but I'm not going to mess with any of that
right now.
If we go to pools, is there anything else I can do?
Tools, what do we got?
So you can scrub it.
I don't know if it automatically scrubs in here, but it gives us the pool information.
That's nice.
So this is a good interface.
Maybe not the best thing ever.
And I don't know if it comes with schedules and things by default, but it'd be nice to
have a scheduled snapshot and a pool scrubbing scheduled.
That might be something that you can configure under scheduled tasks.
Yeah, so you'd have to do some of these things.
You'd have to add your own scheduled tasks.
It'd be cool if that added some things by default, but I can see why they don't as well.
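As a sketch, a monthly scrub as a scheduled task could be a plain cron entry like this (the file path and schedule are just examples):

    # /etc/cron.d/zfs-scrub: scrub 'tank' at 03:00 on the 1st of each month
    0 3 1 * * root /usr/sbin/zpool scrub tank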
But now let's add a file system.
So we have one tank ZFS.
I'll add a share under tank/shared, and we'll just set everyone read/write right now, save, turn on Samba, enabled... 10.0.2.2... 11.
Okay.
So it wants me to use the IP and there's our shared volume.
So let's, I'm going to copy some stuff over to it.
I have this, this folder has a hundred gigabytes.
So I'll do that.
And here it goes.
So it seems similar to the copies that we were getting with RAID 0.
It's interesting.
It goes a little bit faster sometimes than those copies were.
So I'm wondering if ZFS's caching is actually helping here.
So far, I'm pretty impressed.
I think read speeds are where this wins.
Write speeds are where this loses a little bit, because you're not going to be able to get full 2.5 gigabit networking on that.
But it's better than I was expecting.
And the big win for me, besides the fact that this can be made smaller if we reconfigure these boards, is the power efficiency, because right now we're using 15 or 16 watts.
The prebuilt NASes that I've used, they use 10 to 20 watts idle and 25 to 30 watts when they're doing a lot of stuff.
So this little guy is only using 16 watts doing the same amount of work, which is probably about half of what most prebuilt NASes would use.
On the flip side, if you build a NAS with the RK3588 chip, you could probably get even
more efficient and more speed.
So there's a couple boards out there that are interesting that I might take a look at
at some point.
But the nice thing is all of this, this is all really well supported.
Like the software just click some buttons and you have everything working.
I haven't always had that same kind of experience when I'm using the Rockchip boards.
Some of them are getting pretty good though.
I'm going to go ahead and let this write finish and I'm going to do a full read of that 100
gigs of data and we'll see where we end up.
At the end of the copy, it looks like the system used 22 Watts for a little while while
it was doing some sort of processing.
I don't know what ZFS was doing there.
Maybe that was part of the compression.
I don't know.
It's a lot of power to use at the end there.
The actual performance was interesting.
After that initial part where it was faster than RAID 0, it actually settled in a tiny bit slower than RAID 0 over the rest of the copy.
That's why it's good to use a large, large file to test the actual performance of your
system because especially with ZFS, it's going to cache a lot in the beginning in RAM and
that throws off how fast your actual disk array is.
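One way to keep caches from flattering a local test, as a sketch (paths and size are examples, and note that /dev/zero data compresses away if ZFS compression is on, so real files make better test data; ZFS's own ARC may also still hold data, and exporting/importing the pool is the more thorough reset):

    # Write a file larger than the Pi's RAM, drop caches, then read it back
    dd if=/dev/zero of=/tank/shared/bigfile bs=1M count=16384
    sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
    dd if=/tank/shared/bigfile of=/dev/null bs=1M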
But the CPU usage was not too bad.
Power consumption was down around 8 to 16 Watts throughout the whole copy.
But in the end, the file copy was 74 megabytes per second with ZFS in RAID Z1 and it was almost
100 megabytes per second in RAID 0.
Now that's for the writing, which is always going to be a little bit slower with a setup
like this.
Read speeds for both are practically the same.
It's just basically line speed.
It's not hard at all to do the reads.
[interjection Jeff] So this is a little embarrassing.
All those conclusions I have are based on the fact I was benchmarking this all on a Mac.
And I switched to my Windows PC and I was able to get almost line speed for one gigabyte
file copies writing to the Pi and 150 megabytes per second writing over the 2.5 gig network.
So that changes my perspective a little bit on it.
And I think the biggest takeaway is don't use a Mac for benchmarking network file copies,
even if it has 10 gigabit networking and everything else on it seems to be fine.
Mac OS for some reason is not great with file copies.
And I have a whole blog post and I have more details in the GitHub issue linked below.
But let's get back to the video.
[normal Jeff] It's not inconceivable to build a system like this.
All in this one is still under 200 bucks total, including all these extra boards and things.
But it always goes back to this: DIY means you're responsible for the software, and you're responsible for maintenance and updates and all that kind of stuff.
So that was a fun experiment.
And I plan on doing some other fun experiments now that I have this little board here that lets me split up PCI Express lanes.
And we'll see how we can bend the Pi 5's PCI Express bus.
It'd be really cool to see a Compute Module 5 expose even more, but we'll see what happens
whenever that comes out.
I know that that was a big change from the Pi 4 to the Compute Module 4.
It gave us PCI Express.
Now we have it on the Pi 5, but I think we might be able to do more in a Compute Module
form factor, but we'll see.
Until next time, I'm Jeff Geerling.