At some point though, SSDs will beat hard drives on total price (including electricity). I’d like a small and efficient ECC option for then.
If you're running on consumer nvmes then mirrored is probably a better idea than raidz though. Write amplification can easily shred consumer drives.
Really hoping we see 25/40GBASE-T start to show up, so the lower market segments like this can do 10Gbit. Hopefully we see some embedded Ryzens (or other more PCIe-willing contenders) in this space, at a value-oriented price. But I'm not holding my breath.
TXT/DRTM can enable AEM (Anti Evil Maid) with Qubes, SystemGuard with Windows IoT and hopefully future support from other operating systems. It would be a valuable feature addition to Proxmox, FreeNAS and OPNsense.
Some (many?) N150 devices from Topton (China) ship without Bootguard fused, which _may_ enable coreboot to be ported to those platforms. Hopefully ODROID (Korea) will ship N150 devices. Then we could have fanless N150 devices with coreboot and DRTM for less-insecure [2] routers and storage.
[1] Gracemont (E-core): https://chipsandcheese.com/p/gracemont-revenge-of-the-atom-c... | https://youtu.be/agUwkj1qTCs (Intel Austin architect, 2021)
[2] "Xfinity using WiFi signals in your house to detect motion", 400 comments, https://news.ycombinator.com/item?id=44426726#44427986
https://www.minisforum.com/pages/n5_pro
https://store.minisforum.com/en-de/products/minisforum-n5-n5...
no RAM 1.399€
16GB RAM 1.459€
48GB RAM 1.749€
96GB RAM 2.119€
96GB DDR5 SO-DIMM costs around 200€ to 280€ in Germany. https://geizhals.de/?cat=ramddr3&xf=15903_DDR5~15903_SO-DIMM...
I wonder if that 128GB kit would work, as the CPU supports up to 256GB
https://www.amd.com/en/products/processors/laptop/ryzen-pro/...
I can't force the page to show USD prices.
This seems useful, but it's quite different from his previous (80TB) NAS.
What is the idle power draw of an SSD anyway? I guess they usually have a volatile ram cache of some sort built in (is that right?) so it must not be zero…
FLASHSTOR 6 Gen2 (FS6806X) $1000 - https://www.asustor.com/en/product?p_id=90
LOCKERSTOR 4 Gen3 (AS6804T) $1300 - https://www.asustor.com/en/product?p_id=86
Until there is something in this class with PCIe 4.0, I think we're close to maxing out the IO of these devices.
One curiosity for @geerlingguy, does the Beelink work over USB-C PD? I doubt it, but would like to know for sure.
My first experience with these cheap mini PCs was with a Beelink, and while it was very positive, it still makes me question the longevity of the hardware. For a NAS, that's important to me.
I only came across this CPU a few months ago. It is nearly in the same price class as an N100, but has a full Alder Lake P-core in addition. It is a shame it seems to only be available in six-port routers; then again, that is probably a pretty optimal application for it.
I just want a backup (with history) of the data-SSD. The backup can be a single drive + perhaps remote storage
The entire cabinet uses under 1 kWh/day, costing me under $40/year here, compared to my previous Synology and home-made NAS which used 300-500 W, costing $300+/year. Sure, I paid about $1500 in total when I bought the QNAP and the NVMe drives, but just the electricity savings made the expense worth it, let alone the performance, features, etc.
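For scale, those figures are consistent with a rough back-of-the-envelope check (assuming about $0.10/kWh, which is an assumed rate, not the commenter's actual tariff):

    # Rough yearly electricity cost; $0.10/kWh is an assumed rate for illustration.
    rate = 0.10                       # USD per kWh (assumption)
    qnap = 1.0 * 365 * rate           # ~1 kWh/day  -> ~$37/year
    old_nas = 0.4 * 24 * 365 * rate   # ~400 W avg  -> ~$350/year
    print(f"QNAP: ${qnap:.0f}/yr, old NAS: ${old_nas:.0f}/yr")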
Either way, on my most recent NAS build, I didn't bother with a server-grade motherboard, figuring that the standard consumer DDR5 ECC was probably good enough.
You can install a third-party OS on it.
https://www.phoronix.com/news/Intel-IGEN6-IBECC-Driver
Not every new CPU has it. For example, the Intel N95, N97, N100, N200, i3-N300, and i3-N305 all have it, but the N150 doesn't!
It's kind of disappointing that among the low-power NAS devices reviewed here, the only one with support for IBECC had a limited BIOS that most likely was missing this option. The ODROID H4 series, CWWK NAS products, AOOSTAR, and various N100 ITX motherboards all support it.
You give up so much by using an all-in-one mini device...
No upgrades, no ECC, harder cooling, less I/O.
I have had a Proxmox server with a used Fujitsu D3417 and 64GB ECC for roughly 5 years now, paid 350 bucks for the whole thing, and upgraded the storage once from 1TB to 2TB. It draws 12-14W in normal day use and has 10 Docker containers and 1 Windows VM running.
So I would prefer a mATX board with ECC, IPMI, 4x NVMe, and 2.5GbE over these toy boxes...
However, Jeff's content is awesome as always.
Helps a ton with response times on any NAS that's primarily spinning rust, especially if dealing with a decent amount of small files.
Small/portable low-power SSD-based NASs have been commercialized since 2016 or so. Some people call them "NASbooks", although I don't think that term ever gained critical MAS (little joke there).
Examples: https://www.qnap.com/en/product/tbs-464, https://www.qnap.com/en/product/tbs-h574tx, https://www.asustor.com/en/product?p_id=80
Just something to be aware of.
No IPMI and not very many NVMe slots. So I think you're right that a good mATX board could be better.
They're on 24/7 and run monthly scrubs, as well as monthly checksum verification of my backup images, and I've not noticed any issues so far.
I had some correctable errors which got fixed after changing the SATA cable a few times, and some from a disk that, after 7 years of 24/7 operation, developed a small run of bad sectors.
That said, you've got ECC, so you should be able to monitor corrected memory errors.
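The checksum-verification part can be as simple as the sketch below (the manifest path and sha256sum-style format are assumptions for illustration, not necessarily what the commenter uses):

    # Verify backup images against a sha256sum-style manifest
    # ("<hexdigest>  <filename>" per line). Paths are examples.
    import hashlib, os

    BACKUP_DIR = "/backups"

    def sha256(path, bufsize=1 << 20):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while chunk := f.read(bufsize):
                h.update(chunk)
        return h.hexdigest()

    with open(os.path.join(BACKUP_DIR, "manifest.sha256")) as manifest:
        for line in manifest:
            want, name = line.split(maxsplit=1)
            name = name.strip()
            got = sha256(os.path.join(BACKUP_DIR, name))
            print(("OK  " if got == want else "FAIL") + " " + name)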
Matt Ahrens himself (one of the creators of ZFS) has said there's nothing particular about ZFS in this regard:
There's nothing special about ZFS that requires/encourages the use of ECC RAM more so than any other filesystem. If you use UFS, EXT, NTFS, btrfs, etc without ECC RAM, you are just as much at risk as if you used ZFS without ECC RAM. Actually, ZFS can mitigate this risk to some degree if you enable the unsupported ZFS_DEBUG_MODIFY flag (zfs_flags=0x10). This will checksum the data while at rest in memory, and verify it before writing to disk, thus reducing the window of vulnerability from a memory error.
I would simply say: if you love your data, use ECC RAM. Additionally, use a filesystem that checksums your data, such as ZFS.
https://arstechnica.com/civis/viewtopic.php?f=2&t=1235679&p=...
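For reference, on Linux OpenZFS the flag mentioned in that quote can be toggled at runtime through the zfs_flags module parameter. A sketch (requires root, and as the quote says, it's an unsupported debug option):

    # Enable ZFS_DEBUG_MODIFY (bit 0x10 of zfs_flags) on a running Linux OpenZFS system.
    # Unsupported/debug-only, per the quote above.
    PARAM = "/sys/module/zfs/parameters/zfs_flags"

    with open(PARAM) as f:
        flags = int(f.read().strip())

    with open(PARAM, "w") as f:
        f.write(str(flags | 0x10))   # checksum in-memory buffers and verify before write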
Sun (and now Oracle) officially recommended using ECC, since ZFS was intended to be an enterprise product running on 24/7 servers, where it makes sense that anything that is going to be cached in RAM for long periods is protected by ECC.
In that sense it was a "must-have", as business-critical functions require that guarantee.
Now that you can use ZFS on a number of operating systems, on many different architectures, even a Raspberry Pi, the business-critical-only use-case is not as prevalent.
ZFS doesn't intrinsically require ECC, but it does trust that the memory functions correctly, which you have the best chance of achieving by using ECC.
(I assume M.2 cards are the same, but have not confirmed.)
If this isn’t running 24/7, I’m not sure I would trust it with my most precious data.
Also, these things are just begging for a 10Gbps Ethernet port, since you're going to lose out on a ton of bandwidth over 2.5Gbps... though I suppose you could probably use the USB-C port for that.
I was thinking of replacing it with an Asustor FLASHSTOR 12: a much more compact form factor, and it fits up to 12 NVMe drives. I will miss TrueNAS though, but it would be so much smaller.
https://www.aliexpress.com/item/1005006369887180.html
Not totally upgradable, but at least pretty low-cost and modern, with an optional SATA + NVMe combination for Proxmox. Shovel in an enterprise SATA drive and a consumer 8TB WD SN850X, and this should work pretty well. Even Optane is supported.
IPMI could be replaced with NanoKVM or JetKVM...
Running it with encrypted ZFS volumes, and even with a 5-bay 3.5-inch HDD dock attached via USB.
Not really seeing that in these minis. Either the devices under test haven't been optimized for low power, or their Linux installs have non-optimal configs for low power. My NUC 12 draws less than 4W, measured at the wall, when operating without an attached display and with Wi-Fi but no wired network link. All three of the boxes in the review use at least twice as much power at idle.
My use case is a backup server for my macs and cold storage for movies.
6x 2TB drives will give me a 9TB RAID-5 for $809 ($100 each for the drives, $209 for the NAS).
Very quiet so I can have it in my living room plugged into my TV. < 10W power.
I have no room for a big noisy server.
*Well, they allowed it on all CPUs, but after Zen 3 they saw how much money Intel was making and joined in. Now you must get a "PRO" CPU to get ECC support, even on mobile (but good luck finding ECC SO-DIMMs).
DDR5's on-die ECC is not good enough. What if you have faulty RAM and ECC is constantly correcting it without you knowing? There's no value in that. You need the OS to be informed so that you are aware of it. It also does not protect against errors that occur between the RAM and the CPU.
This is similar to HDDs using ECC. Without SMART you'd have a problem, but part of SMART is that it allows you to get a count of ECC-corrected errors so that you can be aware of the state of the drive.
True ECC takes on the role of SMART with regard to RAM; it's just that it only reports that one thing: ECC-corrected errors.
On a NAS, where you likely store important data, true ECC does add value.
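On Linux, "the OS being informed" means the EDAC subsystem exposing error counters. A small sketch to read them (whether they ever increment depends on the platform actually reporting ECC events to the kernel):

    # Print corrected/uncorrected memory error counters from the Linux EDAC sysfs tree.
    import glob, os

    mcs = sorted(glob.glob("/sys/devices/system/edac/mc/mc*"))
    if not mcs:
        print("No EDAC memory controllers found (no ECC reporting on this platform?)")
    for mc in mcs:
        with open(os.path.join(mc, "ce_count")) as f:
            ce = f.read().strip()
        with open(os.path.join(mc, "ue_count")) as f:
            ue = f.read().strip()
        print(f"{os.path.basename(mc)}: corrected={ce} uncorrected={ue}")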
Most models I find reuse the most powerful USB-C port as ... a recharging port, so it's unusable as a DC UPS.
Context: my home server is my old https://frame.work motherboard running Proxmox VE with 64GB RAM and 4TB NVMe, powered by USB-C and drawing ... 2 watts at idle.
4x 7200 RPM HDDs in RAID 5 (like WD Red Pro) can saturate a 1Gbps link at ~110 MB/s over SMB 3. But that comes with the heat and potential reliability issues of spinning disks.
I have seen consumer SSDs, namely Samsung 8xx EVO drives, have significant latency issues in a RAID config, where saturating the drives caused 1+ second latency. This was on Windows Server 2019 using either a SAS controller or JBOD + Storage Spaces. Replacing the drives with used Intel drives resolved the issue.
For instance, most reads from a media NAS will probably be biased towards both newly written files and sequential access (next episode). This is the kind of pattern a CPU cache usually deals with transparently when reading from RAM.
What in the WORLD is preventing these systems from getting at least 10gbps interfaces? I have been waiting for years and years and years and years and the only thing on the market for small systems with good networking is weird stuff that you have to email Qotom to order direct from China and _ONE_ system from Minisforum.
I'm beginning to think there is some sort of conspiracy to not allow anything smaller than a full size ATX desktop to have anything faster than 2.5gbps NICs. (10gbps nics that plug into NVMe slots are not the solution.)
I do this. One mergerfs mount with an SSD and three HDDs made to look like one disk. mergerfs is set to write to the SSD if it's not full, and to read from the SSD first.
A cron job moves the oldest files off the SSD to the HDDs once per night (via a second mergerfs mount without the SSD) if the SSD is getting full; roughly the sketch below.
I have a fourth HDD that uses SnapRAID to protect the SSD and the other HDDs.
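A minimal sketch of that nightly mover (the mount points and the 80% threshold are assumptions for illustration; the mergerfs docs ship more robust mover scripts):

    # Move oldest files off the SSD cache branch into the HDD-only mergerfs mount
    # whenever the SSD is more than ~80% full. Paths/threshold are placeholders.
    import os, shutil

    SSD, HDD_POOL, THRESHOLD = "/mnt/ssd", "/mnt/hdd-pool", 0.80

    def fill_ratio(path):
        st = os.statvfs(path)
        return 1 - st.f_bavail / st.f_blocks

    files = []
    for root, _, names in os.walk(SSD):
        for n in names:
            p = os.path.join(root, n)
            files.append((os.path.getmtime(p), p))
    files.sort()                      # oldest first

    for _, src in files:
        if fill_ratio(SSD) < THRESHOLD:
            break
        dst = os.path.join(HDD_POOL, os.path.relpath(src, SSD))
        os.makedirs(os.path.dirname(dst), exist_ok=True)
        shutil.move(src, dst)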
While my server is quite big compared to a "mini" device, it's silent. No CPU fan, only 120mm case fans spinning around 500rpm, maybe 900rpm under load - hardly noticeable. I also have a completely passive backup solution with a Streacom FC5, but I don't really trust it for the chipsets, so I installed a low-rpm 120mm fan there as well.
How did you fit 6 drives in a "mini" case? Using an Asustor Flashstor or a Beelink?
I don't really understand the general public, or even most use cases, requiring upgrade paths beyond getting a new device.
By the time the need to upgrade comes, the tech stack is likely faster and you're basically just talking about gutting the PC and doing everything over again, except maybe power supply.
Another upgrade path is to keep the case, fans, and cooling solution and only switch the mainboard, CPU, and RAM.
I'm also not a huge fan of non-x64 devices, because they still often require jumping through some hoops regarding boot order, booting from external devices, or behavior after power loss.
Something like a Ryzen 7745, 128GB ECC DDR5-5200, no less than two 10GbE ports (though unrealistic given the size; if they were SFP+ that'd be incredible), drives split across two different NVMe RAID controllers. I don't care how expensive or loud it is or how much power it uses, I just want a coffee-cup-sized cube that can handle the kind of shit you'd typically bring a rack along for. It's 2025.
Price and price. Like another commenter said, there is at least one 10Gbe mini NAS out there, but it's several times more expensive.
What's the use case for the 10GbE? Is ~200MB/sec not enough?
I think the segment for these units is low price, small size, shared connectivity. The kind of thing you tuck away in your house invisibly and silently, or throw in a bag to travel with if you have a few laptops that need shared storage. People with high performance needs probably already have fast nvme local storage is probably the thinking.
I would remove points for a built-in non-modular standardized power supply. It's not fixable, and it's not comparable to Apple in quality.
I have an 8 drive NAS running 7200 RPM drives, which is on a wall mounted shelf drilled into the studs.
On the other side of that wall is my home office.
I had to put the NAS on speaker springs [1] to not go crazy from the hum :)
[1] https://www.amazon.com.au/Nobsound-Aluminum-Isolation-Amplif...
Why not a single large-capacity M.2 SSD using 4 full lanes, and proper backup with a cheaper, larger-capacity, and more reliable spinning disk?
It’d be great if you could fully utilise the M.2 speed but they are not about that.
Why not a single large M.2? Price.
2TB ssd are super cheap. But most systems don't have the expandability to add a bunch of them. So I fully get the incentive here, being able to add multiple drives. Even if you're not reaping additional speed.
Modern power MOSFETs are cheaper and more efficient. 10 years ago, 80 Plus Gold efficiency was a bit expensive and 80 Plus Bronze was common.
Today, 80 Plus Gold is cheap and common, and only 80 Plus Platinum reaches into the exotic level.
Not the "cube" sized, but surprisingly small still. I've got one under the desk, so I don't even register it is there. Stuffed it with 4x 4TB drives for now.
Even the lower tier IronWolf drives from Seagate specify 600k load/unload cycles (not spin down, granted, but gives an idea of the longevity).
I'm hopeful 4/8 TB NVMe drives will come down in price someday but they've been remarkably steady for a few years.
No issues so far. The system is completely stable. Though, I did add a separate fan at the bottom of the Odroid case to help cool the NVMe SSDs. Even with the single lane of PCIe, the 2.5gbit/s networking gets maxed out. Maybe I could try bonding the 2 networking ports but I don't have any client devices that could use it.
I had an eye on the Beelink ME Mini too, but I don't think the NVMe disks are sufficiently cooled under load, especially on the outer side of the disks.
I know you can patch microcode at runtime/boot, but I don't think that covers all vulnerabilities.
To be sure... is the data compressible, or repeated? I have encountered an SSD that silently performed compression on the data I wrote to it (verified by counting its stats on blocks written). I don't know if there are SSDs that silently deduplicate the data.
(An obvious solution is to copy data from /dev/urandom. But beware of the CPU cost of /dev/urandom; on a recent machine, it takes 3 seconds to read 1GB from /dev/urandom, so that would be the bottleneck in a write test. But at least for a read test, it doesn't matter how long the data took to write.)
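One workaround is to pay the urandom cost once: fill a single buffer with random bytes, then stamp a counter into each block so nothing written is compressible or duplicate. A rough sketch (the file path and sizes are arbitrary examples):

    # Quick-and-dirty sequential write test with effectively incompressible,
    # non-duplicate data: one os.urandom() buffer, with a per-block counter
    # stamped in so no two written blocks are identical (defeats naive dedup).
    import os, struct, time

    BLOCK = 4 * 1024 * 1024          # 4 MiB blocks
    TOTAL = 4 * 1024 * 1024 * 1024   # write 4 GiB
    buf = bytearray(os.urandom(BLOCK))

    start = time.time()
    with open("/mnt/nas/testfile.bin", "wb") as f:
        for i in range(TOTAL // BLOCK):
            struct.pack_into("<Q", buf, 0, i)  # make each block unique
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.time() - start
    print(f"{TOTAL / elapsed / 1e6:.0f} MB/s")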
If I could get the same unit for like $299 I'd run it like that for my NAS too, as long as I could run a full backup to another device (and a 3rd on the cloud with Glacier of course).
Linux ISOs?
What would you use instead?
ZFS is better than raw RAID, but 1 parity per 5 data disks is a pretty good match for the reliability you can expect out of any one machine.
Much more important than better parity is having backups. Maybe more important than having any parity, though if you have no parity please use JBOD and not RAID-0.
I'm dreaming of this: a mini NAS connected directly to my TV via HDMI or USB. I think I'd want HDMI and let the NAS handle streaming/decoding. But if my TV can handle enough formats, maybe USB would do.
Anyone have experience with this?
I've been using a combination of a media server on my Mac with a client on Apple TV, and I have no end of glitches.
As you want to bring the data server right to the TV, and you'll output the video via HDMI, just use any PC. There are plenty of them designed for this (usually they're fanless for reducing noise)... search "home theater PC."
You can install Kodi as the interface/organizer for playing your media files. It handles all the formats... the TV is just the output.
A USB CEC adapter will also allow you to use your TV remote with Kodi.
It gets a lot of use in my household. I have my server (a headless Intel iGPU box) running it in docker with the Intel iGPU encoder passed through.
I let the iGPU encode everything in realtime by default, and now that Plex has automatic subtitle sync, my main source of complaints is gone. I end up with a wide variety of formats, as my wife enjoys obscure media.
One of the key things that helped a lot was segregating anime into its own TV collection so that anime-specific defaults can be applied there.
You can also run a client on one of these machines directly, but then you are dealing with desktop Linux.
There was some stuff in DDR5 that made ECC harder to implement (unlike DDR4, where pretty much everything AMD made supported unbuffered ECC by default), but it's still ridiculous how hard it is to find something that supports DDR5 ECC and doesn't suck down 500W at idle.
Since RAID is not meant for backup but for reliability, losing a drive while restoring will kill your storage pool, and having to restore all the data from a backup (e.g. from a cloud drive) is probably not what you want, since it takes time during which the device is offline. If you rely on RAID5 without having a backup, you're done.
So I have a RAID1, which is simple, reliable and easy to maintain. Replacing 2 drives with higher capacity ones and increasing the storage is easy.
Fujitsu D3417-B12
Intel Xeon 1225
64GB ECC
WD SN850x 2TB
mATX case
Pico PSU 150
For backup I use a 2TB enterprise HDD and ZFS send. For snapshotting I use zfs-auto-snapshot.
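That backup step is roughly the following (a sketch with placeholder pool, dataset, and snapshot names, not the commenter's actual layout):

    # Incremental ZFS backup: snapshot, then send the delta into a pool on the backup HDD.
    # "tank/data", "backup/data" and the previous snapshot name are placeholders.
    import datetime, subprocess

    today = datetime.date.today().isoformat()
    new_snap = f"tank/data@backup-{today}"
    prev_snap = "tank/data@backup-2025-01-01"   # in practice: last snapshot common to both pools

    subprocess.run(["zfs", "snapshot", new_snap], check=True)

    send = subprocess.Popen(["zfs", "send", "-i", prev_snap, new_snap], stdout=subprocess.PIPE)
    subprocess.run(["zfs", "receive", "-F", "backup/data"], stdin=send.stdout, check=True)
    send.stdout.close()
    send.wait()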
So really, nothing I'd recommend buying today. You could go for this:
https://www.aliexpress.com/item/1005006369887180.html
Or an old Fujitsu Celsius W580 Workstation with a Bojiadafast ATX Power Supply Adapter, if you need harddisks.
Unfortunately there is no silver bullet these days. The old stuff is... well, too old or no longer available, and the new stuff is either too pricey, lacks features (ECC and 2.5G mainly), or is too power-hungry.
A year ago there were bargains on Gigabyte MC12-LE0 boards available for <50 bucks, but nowadays these cost about 250 again. These boards also had the problem of drawing too much power for an ultra-low-power homelab.
If I HAD to buy one today, I'd probably go for a Ryzen Pro 5700 with a gaming board (like ASUS ROG Strix B550-F Gaming) with ECC RAM, which is supported on some boards.
If your peak power draw is <200W, I would recommend an efficient <450W power supply.
Another aspect: Buying a 120 bucks power supply that is 1.2% more efficient than a 60 bucks one is just a waste of money.
If your odds of disk failure in a rebuild are "only" 10x normal failure rate, and it takes a week, 5 disks will all survive that week 98% of the time. That's plenty for a NAS.
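That 98% falls out of simple arithmetic if you assume something like a 2% annual failure rate per disk (the AFR is an assumption here; plug in your own):

    # Chance that the 5 remaining disks all survive a week-long rebuild,
    # assuming ~2% annual failure rate per disk and a 10x elevated rate during rebuild.
    afr = 0.02                       # assumed annual failure rate
    weekly = afr / 52                # ~0.04% in a normal week
    rebuild_weekly = 10 * weekly     # ~0.4% during the rebuild week
    p_survive = (1 - rebuild_weekly) ** 5
    print(f"{p_survive:.1%}")        # ~98.1%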
There are many similar articles.
Or perhaps the fact that my IronWolf drives are 5400rpm rather than 7200rpm means they're still going strong after 4 years with no issues spinning down after 20 minutes.
Or maybe I'm just insanely lucky? Before I moved to my desktop machine being 100% SSD I used hard drives for close to 30 years and never had a drive go bad. I did tend to use drives for a max of 3-5 years though before upgrading for more space.
https://geizhals.de/?cat=ramddr3&sort=r&xf=1454_49152%7E1590...
Kingston Server Premier SO-DIMM 48GB, DDR5-5600, CL46-45-45, ECC KSM56T46BD8KM-48HM for 250€
Which then means 500€ for the 96GB
If the stuff you access often can be cached to SSDs, you rarely access the spinning disks. Depending on your file system and operating system, only drives that are in use need to be spun up. If you have multiple drive arrays with media, some of it won't be accessed as often.
In an enterprise setting it generally doesn't make sense. In a home environment you generally don't access the data that often. Automatic downloads and seeding change that.
Hence the first sentence of my three sentence post.
I currently do not have time for a clear how to, but some relevant references would be:
https://www.freedesktop.org/software/systemd/man/latest/syst...
https://www.krose.org/~krose/measured_boot
Integrating this better into Proxmox projects is definitely something I'd like to see sooner or later.
It's a file server (when did we start calling these "NAS"?) with Samba and NFS, but also some database stuff. No VMs or Docker. Just a file and database server.
It has full disk encryption with TPM unlocking using my custom keys, so it can boot unattended. I'm quite happy with it.