429 points ingve | 57 comments
1. sandreas ◴[] No.44466616[source]
While it may be tempting to go "mini" and NVMe, for a normal use case I think this is hardly cost effective.

You give up so much by using an all-in-one mini device...

No upgrades, no ECC, harder cooling, less I/O.

I have had a Proxmox server with a used Fujitsu D3417 and 64GB ECC for roughly 5 years now, paid 350 bucks for the whole thing and upgraded the storage once from 1TB to 2TB. It draws 12-14W in normal daily use and has 10 Docker containers and 1 Windows VM running.

So I would prefer a mATX board with ECC, IPMI, 4x NVMe and 2.5GbE over these toy boxes...

However, Jeff's content is awesome as always.

replies(8): >>44466782 #>>44466835 #>>44467230 #>>44467786 #>>44467994 #>>44468973 #>>44470088 #>>44475321 #
2. ◴[] No.44466782[source]
3. samhclark ◴[] No.44466835[source]
I think you're right generally, but I wanna call out the ODROID H4 models as an exception to a lot of what you said. They are mostly upgradable (SODIMM RAM, SATA ports, M.2 2280 slots), and they support in-band ECC, which kinda checks the ECC box. They've got a Mini-ITX adapter for $15 so they can fit into existing cases too.

No IPMI and not very many NVMe slots. So I think you're right that a good mATX board could be better.

replies(3): >>44467114 #>>44467141 #>>44472357 #
4. sandreas ◴[] No.44467114[source]
Well, if you would like to go mini (with ECC and 2.5G) you could take a look at this one:

https://www.aliexpress.com/item/1005006369887180.html

Not totally upgradable, but at least pretty low cost and modern, with an optional SATA + NVMe combination for Proxmox. Shovel in an enterprise SATA drive and a consumer 8TB WD SN850X and this should work pretty well. Even Optane is supported.

IPMI could be replaced with NanoKVM or JetKVM...

replies(1): >>44470227 #
5. geek_at ◴[] No.44467141[source]
Not sure about the ODROID but I got myself the NAS kit from FriendlyElec. With the largest RAM option it was about 150 bucks and comes with 2.5G Ethernet and 4 NVMe slots. No fan, and it keeps fairly cool even under load.

Running it with encrypted ZFS volumes, and even with a 5-bay 3.5" HDD dock attached via USB.

https://wiki.friendlyelec.com/wiki/index.php/CM3588_NAS_Kit

6. fnord77 ◴[] No.44467230[source]
these little boxes are perfect for my home

My use case is a backup server for my Macs and cold storage for movies.

6x 2TB drives will give me a ~9TB RAID-5 for $809 ($100 each for the drives, $209 for the NAS).

Very quiet so I can have it in my living room plugged into my TV. < 10W power.

I have no room for a big noisy server.
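As a rough sanity check of those numbers (a minimal Python sketch; it assumes the ~9TB figure is the usable space reported in TiB after one drive's worth of parity):

  drives = 6
  drive_tb = 2                     # nominal capacity per drive, TB
  drive_cost, nas_cost = 100, 209  # USD

  usable_tb = (drives - 1) * drive_tb      # RAID-5 keeps one drive's worth for parity
  usable_tib = usable_tb * 1e12 / 2**40    # decimal TB -> binary TiB
  total = drives * drive_cost + nas_cost
  print(f"{usable_tb} TB ({usable_tib:.1f} TiB) usable, ${total} total")
  # -> 10 TB (9.1 TiB) usable, $809 total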

replies(2): >>44467725 #>>44468532 #
7. sandreas ◴[] No.44467725[source]
While I get your point about size, I'd not use RAID-5 for my personal homelab. I'd also say that 6x 2TB drives are not the optimal solution for low power consumption. You're also missing out on server-quality BIOS, design/stability/x64 and remote management. However, not bad.

While my server is quite big compared to a "mini" device, it's silent. No CPU fan, only 120mm case fans spinning at around 500rpm, maybe 900rpm under load - hardly noticeable. I also have a completely passive backup solution with a Streacom FC5, but I don't really trust it for the chipsets, so I also installed a low-rpm 120mm fan.

How did you fit 6 drives in a "mini" case? Using an Asus Flashstor or a Beelink?

replies(3): >>44468123 #>>44469640 #>>44469909 #
8. cyanydeez ◴[] No.44467786[source]
I've had a Synology since 2015. Why, besides the drives themselves, would most home labs need to upgrade?

I don't really understand the general public, or even most use cases, requiring upgrade paths beyond getting a new device.

By the time the need to upgrade comes, the tech stack is likely faster and you're basically just talking about gutting the PC and doing everything over again, except maybe the power supply.

replies(3): >>44467837 #>>44468007 #>>44468548 #
9. sandreas ◴[] No.44467837[source]
Understandable... Well, the bottleneck for a Proxmox server is often RAM - sometimes CPU cores (to share between VMs). This might not be the case for a NAS-only device.

Another upgrade path is to keep the case, fans, cooling solution and only switch Mainboard, CPU and RAM.

I'm also not a huge fan of non-x64 devices, because they still often require jumping through some hoops regarding boot order, external device boot or power-loss behavior.

10. ndiddy ◴[] No.44467994[source]
Another thing is that unless you have a very specific need for SSDs (such as heavily random-access-focused workloads, very tight space constraints, or working in a bumpy environment), mechanical hard drives are still way more cost-effective for storing lots of data than NVMe. You can get a manufacturer-refurbished 12TB hard drive with a multi-year warranty for ~$120, while even an 8TB NVMe drive goes for at least $500. Of course, for general-purpose internal drives NVMe is a far better experience than a mechanical HDD, but my NAS with 6 hard drives in RAIDZ2 still gets bottlenecked by my 2.5Gbit LAN, not the speeds of the drives.
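To put that gap in per-TB terms (a minimal Python sketch using the example prices above; actual prices vary over time):

  hdd_price, hdd_tb = 120, 12    # refurbished 12TB HDD
  nvme_price, nvme_tb = 500, 8   # 8TB NVMe SSD

  print(f"HDD:  ${hdd_price / hdd_tb:.2f}/TB")    # -> HDD:  $10.00/TB
  print(f"NVMe: ${nvme_price / nvme_tb:.2f}/TB")  # -> NVMe: $62.50/TB
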
replies(4): >>44468216 #>>44469623 #>>44473236 #>>44473616 #
11. ◴[] No.44468007[source]
12. epistasis ◴[] No.44468123{3}[source]
I'm interested in learning more about your setup. What sort of system did you put together for $350? Is it a normal ATX case? I really like the idea of running Proxmox but I don't know how to get something cheap!
replies(1): >>44470274 #
13. acranox ◴[] No.44468216[source]
Don’t forget about power. If you’re trying to build a low-power NAS, those HDDs idle around 5W each, while an SSD is closer to 5mW. Once you’ve got a few disks, the HDDs can account for half the power or more. The cost penalty for 2TB or 4TB SSDs is still big, but not as bad as at the 8TB level.
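Rough idle-power math for a small array (a minimal Python sketch; the ~5W/~5mW idle figures are the ones above, and the $0.30/kWh electricity price is an assumption):

  drives = 4
  price_per_kwh = 0.30                       # assumed electricity price, USD
  for name, idle_w in [("HDD", 5.0), ("SSD", 0.005)]:
      total_w = drives * idle_w
      kwh_year = total_w * 24 * 365 / 1000
      print(f"{name}: {total_w:.2f} W idle, ~${kwh_year * price_per_kwh:.0f}/year")
  # HDD: 20.00 W idle, ~$53/year
  # SSD: 0.02 W idle, ~$0/year
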
replies(1): >>44468553 #
14. UltraSane ◴[] No.44468532[source]
Storing backups and movies on NVMe SSDs is just a waste of money.
replies(1): >>44470375 #
15. dragontamer ◴[] No.44468548[source]
> except maybe power supply.

Modern power MOSFETs are cheaper and more efficient. 10 years ago 80Gold efficiency was a bit expensive and 80Bronze was common.

Today, 80Gold is cheap and common and only 80Platinum reaches into the exotic level.

replies(1): >>44470403 #
16. markhahn ◴[] No.44468553{3}[source]
such power claims are problematic - you're not letting the HDs spin down, for instance, and not crediting the fact that an SSD may easily dissipate more power than an HD under load. (in this thread, the host and network are slow, so it's not relevant that SSDs are far faster when active.)
replies(4): >>44468862 #>>44468863 #>>44472399 #>>44473209 #
17. philjohn ◴[] No.44468862{4}[source]
There are a lot of "never let your drive spin down! They need to be running 24/7 or they'll die in no time at all!" voices in the various homelab communities, sadly.

Even the lower tier IronWolf drives from Seagate specify 600k load/unload cycles (not spin down, granted, but gives an idea of the longevity).

replies(1): >>44470211 #
18. 1over137 ◴[] No.44468863{4}[source]
Letting HDDs spin down is generally not advisable in a NAS, unless you access it really rarely, perhaps.
replies(2): >>44470213 #>>44471560 #
19. ◴[] No.44468973[source]
20. throw0101d ◴[] No.44469623[source]
> […] mechanical hard drives are still way more cost effective for storing lots of data than NVMe.

Linux ISOs?

21. Dylan16807 ◴[] No.44469640{3}[source]
> I'd not use RAID-5 for my personal homelab.

What would you use instead?

ZFS is better than raw RAID, but 1 parity per 5 data disks is a pretty good match for the reliability you can expect out of any one machine.

Much more important than better parity is having backups. Maybe more important than having any parity, though if you have no parity please use JBOD and not RAID-0.

replies(2): >>44470252 #>>44470281 #
22. j45 ◴[] No.44469909{3}[source]
I agreed with this generally until learning the long way why RAID 5 at minimum is the only way to have some peace of mind, and why you always want a NAS with at least 1-2 more bays than you need.

Storage is easier as an appliance that just runs.

23. layoric ◴[] No.44470088[source]
No ECC is the biggest trade-off for me, but the C236 chipset has very little choice of CPUs; they are all 4-core/8-thread. I've got multiple X99 platform systems and for a long time they were the king of cost efficiency, but lately the Ryzen laptop chips are becoming too good to pass up, even without ECC. E.g. Ryzen 5825U minis.
replies(1): >>44470733 #
24. sandreas ◴[] No.44470211{5}[source]
Is there any (semi-)scientific proof of that (serious question)? I searched a lot on this topic but found nothing...
replies(1): >>44470705 #
25. sandreas ◴[] No.44470213{5}[source]
Is there any (semi-)scientific proof of that (serious question)? I searched a lot on this topic but found nothing...

(see above, same question)

replies(1): >>44475243 #
26. a012 ◴[] No.44470227{3}[source]
That looks pretty slick with a standard HSF for the CPU, thanks for sharing.
27. sandreas ◴[] No.44470252{4}[source]
I'd almost always use RAID-1 or, if I had > 4 disks, maybe RAID-6. RAID-5 seems very cost-effective at first, but if you lose a drive, the probability of losing another one in the restoring process is pretty high (I don't have the numbers, but I researched that years ago). The disk-replacement process produces very high load on the non-defective disks, and the more you have, the riskier the process. Another aspect is that 5 drives draw way more power than 2 and you cannot (easily) upgrade the capacity, although ZFS now offers RAIDZ expansion.

Since RAID is not meant for backup, but for reliability, losing a drive while restoring will kill your storage pool, and having to restore all the data from a backup (e.g. from a cloud drive) is probably not what you want, since it takes time where the device is offline. If you rely on RAID-5 without having a backup you're done.

So I have a RAID1, which is simple, reliable and easy to maintain. Replacing 2 drives with higher capacity ones and increasing the storage is easy.

28. sandreas ◴[] No.44470274{4}[source]
My current config:

  Fujitsu D3417-B12
  Intel Xeon 1225
  64GB ECC
  WD SN850x 2TB
  mATX case
  Pico PSU 150
For backup I use a 2TB enterprise HDD and ZFS send

For snapshotting i use zfs-auto-snapshot

So really nothing I'd recommend buying today. You could go for this:

https://www.aliexpress.com/item/1005006369887180.html

Or an old Fujitsu Celsius W580 Workstation with a Bojiadafast ATX Power Supply Adapter, if you need harddisks.

Unfortunately there is no silver bullet these days. The old stuff is... well, too old or no longer available, and the new stuff is either too pricey, lacks features (ECC and 2.5G mainly) or too power hungry.

A year ago there were bargains on the Gigabyte MC12-LE0 board for < 50 bucks, but nowadays these cost about 250 again. These boards also had the problem of drawing too much power for an ultra-low-power homelab.

If I HAD to buy one today, I'd probably go for a Ryzen Pro 5700 with a gaming board (like ASUS ROG Strix B550-F Gaming) with ECC RAM, which is supported on some boards.

29. timc3 ◴[] No.44470281{4}[source]
I would always run 2 or more parity disks. I have had disks fail, and rebuilding with only one parity drive is scary (I have seen rebuilds go bad because a second drive failed whilst rebuilding).

But agree about backups.

replies(1): >>44470465 #
30. sandreas ◴[] No.44470375{3}[source]
Absolutely. I don't store movies at all, but if I did, I would add a USB-based solution that could be turned off remotely via a Shelly plug / Tasmota.
31. sandreas ◴[] No.44470403{3}[source]
An 80Bronze 300W unit can still be more efficient than a 750W 80Platinum one at mainly low loads. Additionally, some devices are way more efficient than they are certified for. A well-known example is the Corsair RM550x (2021).

If your peak power draw is <200W, I would recommend an efficient <450W power supply.

Another aspect: Buying a 120 bucks power supply that is 1.2% more efficient than a 60 bucks one is just a waste of money.
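To illustrate why (a minimal Python sketch with assumed, not measured, low-load efficiencies; the 80 Plus certifications only specify efficiency at 20/50/100% load, so a big PSU sitting at ~4% load can fall well below its rating):

  dc_load_w = 30                   # assumed idle draw of a small homelab box
  psus = {
      "300W Bronze":   0.82,       # ~10% load, assumed efficiency
      "750W Platinum": 0.70,       # ~4% load, assumed efficiency
  }
  for name, eff in psus.items():
      print(f"{name}: {dc_load_w / eff:.1f} W at the wall")
  # 300W Bronze:   36.6 W at the wall
  # 750W Platinum: 42.9 W at the wall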

32. Dylan16807 ◴[] No.44470465{5}[source]
Were those arrays doing regular scrubs, so that they experience rebuild-equivalent load every month or two and it's not a sudden shock to them?

If your odds of disk failure in a rebuild are "only" 10x normal failure rate, and it takes a week, 5 disks will all survive that week 98% of the time. That's plenty for a NAS.
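The arithmetic behind that figure (a minimal Python sketch; the ~2% annual failure rate per disk is an assumption, your drives will differ):

  afr = 0.02                        # assumed annual failure rate per disk
  weekly = afr / 52                 # baseline failure probability per week
  rebuild = 10 * weekly             # 10x elevated rate during the rebuild week
  survive_all = (1 - rebuild) ** 5  # all 5 remaining disks make it through
  print(f"{survive_all:.1%}")       # -> 98.1%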

replies(1): >>44471796 #
33. espadrine ◴[] No.44470705{6}[source]
Here is someone that had significant corruption until they stopped: https://www.xda-developers.com/why-not-to-spin-down-nas-hard...

There are many similar articles.

replies(2): >>44471098 #>>44473457 #
34. mytailorisrich ◴[] No.44470733[source]
For a home NAS, ECC is as needed as it is on your laptop.
replies(1): >>44474508 #
35. philjohn ◴[] No.44471098{7}[source]
I wonder if they were just hit with the bathtub curve?

Or perhaps the fact that my IronWolf drives are 5400rpm rather than 7200rpm means they're still going strong after 4 years with no issues spinning down after 20 minutes.

Or maybe I'm just insanely lucky? Before I moved to my desktop machine being 100% SSD I used hard drives for close to 30 years and never had a drive go bad. I did tend to use drives for a max of 3-5 years though before upgrading for more space.

replies(1): >>44471586 #
36. Dr4kn ◴[] No.44471560{5}[source]
Spin down isn't as problematic today. It really depends on your setup and usage.

If the stuff you access often can be cached on SSDs, the disks themselves are rarely accessed. Depending on your file system and operating system, only drives that are in use need to be spun up. If you have multiple drive arrays with media, some of it won't be accessed as often.

In an enterprise setting it generally doesn't make sense. In a home environment you generally don't access the data that often. Automatic downloads and seeding change that.

37. ◴[] No.44471586{8}[source]
38. dwedge ◴[] No.44471796{6}[source]
If the drives are the same age and large parts of the drive haven't been read from for a long time until the rebuild, you might find one has already failed. Anecdotally, around 12 years ago the chance of a second disk failing during a RAID 5 rebuild (in our setup) was probably more like 10-20%.
replies(1): >>44471914 #
39. Dylan16807 ◴[] No.44471914{7}[source]
> and large parts of the drive haven't been read from for a long time

Hence the first sentence of my three sentence post.

replies(1): >>44472596 #
40. ilkhan4 ◴[] No.44472357[source]
You can get a 1 -> 4 M.2 adapter for these as well which would give each one a 1x PCIe lane (same as all these other boards). If you still want spinning rust, these also have built-in power for those and SATA ports so you only need a 12-19v power supply. No idea why these aren't more popular as a basis for a NAS.
41. olavgg ◴[] No.44472399{4}[source]
I experimented with spindowns, but the fact is, many applications need to write to disk several times per minute. Because of this I only use SSDs now. Archived files are moved to the cloud. I think Google Drive is one of the best alternatives out there, as it has true data streaming built into the macOS and Windows clients. It feels like an external hard drive.
42. dwedge ◴[] No.44472596{8}[source]
If I wanted to deal with snark I'd reply to people on Reddit.
replies(1): >>44475109 #
43. sixothree ◴[] No.44473209{4}[source]
I've put all of my surveillance cameras on one volume in _hopes_ that I can let my other volumes spin down. But nope. They spend the vast majority of their day spinning.
replies(1): >>44473625 #
44. ThatMedicIsASpy ◴[] No.44473236[source]
Low power, low noise, low profile system, LOW ENTRY COST. I can easily get a Beelink ME mini or two and build a NAS + offsite storage. Two 1TB SSDs for a mirror are around 100€, two new 1TB HDDs are around 80€.

You are thinking in dimensions normal people have no need for. The numbers alone speak volumes: 12TB, 6 HDDs, 8TB NVMes, 2.5Gbit LAN.

45. billfor ◴[] No.44473457{7}[source]
I wonder if it has to do with the type of HDD. The Red NAS drives may not like to be spun down as much. I spin down my drives and have not had a problem, except for one drive after 10 years of continuous running, but I use consumer desktop drives which probably expect to be cycled a lot more than NAS drives.
46. kllrnohj ◴[] No.44473616[source]
It depends on what you consider "lots" of data. For >20TB, yes, absolutely, by a landslide. But if you just want self-hosted Google Drive or Dropbox, you're in the 1-4TB range, where mechanical drives are a very bad value as they have a pretty significant price floor. A WD Blue 1TB HDD is $40 while a WD Blue 1TB NVMe is $60. The HDD still has a strict price advantage, but the NVMe drive uses way less power, is more reliable, and doesn't have spin-up time (consumer usage is accessed very infrequently, so keeping the mechanical drives spinning continuously gets into that awkward zone of questionable worth).

And these prices are getting low enough, especially with these NUC-based solutions, to actually be price-competitive with the low tiers of Drive & Dropbox while also being something you actually own and control. Dropbox still charges $120/yr for the entry-level plan of just 2TB, after all. 3x WD Blue NVMes + an N150 and you're at break-even in 3 years or less.
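Rough break-even math (a minimal Python sketch using the prices above; the $150 N150 box price is an assumption):

  nvme_price = 60          # WD Blue 1TB NVMe, USD
  box_price = 150          # assumed N150 mini PC price, USD
  cloud_per_year = 120     # entry-level 2TB Dropbox plan

  upfront = 3 * nvme_price + box_price
  print(f"${upfront} upfront, break-even after {upfront / cloud_per_year:.1f} years")
  # -> $330 upfront, break-even after 2.8 years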

replies(1): >>44475314 #
47. sandreas ◴[] No.44473625{5}[source]
Did you consider ZFS with L2ARC? The extra caching device might make this possible...
replies(1): >>44474830 #
48. vbezhenar ◴[] No.44474508{3}[source]
ECC is indeed essential for any computer. But the laptop situation is truly dire, while it's at least possible to find some NAS units with ECC support.
replies(1): >>44474854 #
49. dsr_ ◴[] No.44474830{6}[source]
That's not how L2ARC works. It's not how the ZIL SLOG works, either.

If a read request can be filled by the OS cache, it will be. Then it will be filled by the ARC, if possible. Then it will be filled by the L2ARC, if it exists. Then it will be filled by the on-disk cache, if possible; finally, it will be filled by a read.

An async write will eventually be flushed to a disk write, possibly after seconds of realtime. The ack is sent after the write is complete... which may be while the drive has it in a cache but hasn't actually written it yet.

A sync write will be written to the ZIL SLOG, if it exists, while it is being written to the disk. It will be acknowledged as soon as the ZIL finishes the write. If the SLOG does not exist, the ack comes when the disk reports the write complete.
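A toy model of that read-path ordering (plain Python, not real ZFS code; the layer names just mirror the description above):

  def read_block(key, os_cache, arc, l2arc, disk_cache, disk):
      # Each layer is consulted in order; the first hit wins.
      for name, layer in [("OS cache", os_cache), ("ARC", arc),
                          ("L2ARC", l2arc), ("disk cache", disk_cache)]:
          if key in layer:
              return name, layer[key]
      return "disk read", disk[key]

  # Example: the block is only in the L2ARC, so it is served from there.
  where, _ = read_block("b2", os_cache={}, arc={}, l2arc={"b2": b"data"},
                        disk_cache={}, disk={"b2": b"data"})
  print(where)   # -> L2ARC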

50. mytailorisrich ◴[] No.44474854{4}[source]
Most computers don't have ECC. So it might be essential in theory, but in practice things work fine without it (for standard personal, even work, use cases).
51. Dylan16807 ◴[] No.44475109{9}[source]
My goal isn't to be rude, but when you skip over a critical part of what I'm saying it causes a communication issue. Are you correcting my numbers, or intentionally giving numbers for a completely different scenario, or something in between? Is it none of those and you weren't taking my comment seriously enough to read 50 words? The way you replied made it hard to tell.

So I made a simple comment to point out the conflict, a little bit rude but not intended to escalate the level of rudeness, and easier for both of us than writing out a whole big thing.

52. gosub100 ◴[] No.44475243{6}[source]
It's probably decades-old anecdata from people who recommissioned old drives that had been on the shelf for many years. The theory is that the grease on the spindle dries up and seizes the platters.
53. gknoy ◴[] No.44475314{3}[source]
I appreciate you laying it out like that. I've seen these NVMe NAS things mentioned and had been thinking that the reliability of SSDs was so much worse than HDDs.
replies(1): >>44476351 #
54. asciimov ◴[] No.44475321[source]
The selling point for the people in the Plex community is that the N100/N150 include Intel's QuickSync, which gives you hardware video transcoding without a dedicated video card. It'll handle 3 to 4 transcoded 4K streams.

There are several sub-$150 units that allow you to upgrade the RAM, limited to one 32GB stick max. You can use an NVMe-to-SATA adapter to add plenty of spinning rust or connect it to a DAS.

While I wouldn't throw any VMs on these, you have enough headroom for non-AI home server apps.

replies(2): >>44475620 #>>44476898 #
55. MrDarcy ◴[] No.44475620[source]
We've been able to buy used OptiPlex 3060s or 3070s for about $100 for years now, and they tick all the boxes for Plex and QuickSync. Only two NVMe and one SATA slot though, so maybe not ideal for a NAS, but it definitely fits the power and thermal profile, and it's nice to reuse perfectly good hardware.
56. kllrnohj ◴[] No.44476351{4}[source]
SSDs are just limited by write cycles, whereas HDDs literally spin themselves to death. In simple consumer NAS usage, like if this was just photo backup, that basically means SSDs will last forever. Meanwhile those HDDs start living on borrowed time at 5-8 years, regardless of write cycles.
57. sandreas ◴[] No.44476898[source]
The device I linked earlier (https://www.aliexpress.com/item/1005006369887180.html) has a Xeon, ECC, 2x NVMe, SATA, 2.5Gbit and everything else you need in a very small box.

Intel also means it has QuickSync, so you won't need to buy an N150. However, I tend to be sceptical about these AliExpress boxes, too. Established server manufacturers like Dell, HP, Lenovo or Fujitsu (RIP) are way more reliable.