432 points by ingve | 22 comments
sandreas ◴[] No.44466616[source]
While it may be tempting to go "mini" and NVMe, for a normal use case I think this is hardly cost-effective.

You give up so much by using an all-in-one mini device...

No upgrades, no ECC, harder cooling, less I/O.

I have had a Proxmox server with a used Fujitsu D3417 board and 64GB of ECC RAM for roughly 5 years now. I paid 350 bucks for the whole thing and upgraded the storage once from 1TB to 2TB. It draws 12-14W in normal day-to-day use and runs 10 Docker containers and 1 Windows VM.

So I would prefer a mATX board with ECC, IPMI, 4x NVMe and 2.5GbE over these toy boxes...

However, Jeff's content is awesome as always.

replies(8): >>44466782 #>>44466835 #>>44467230 #>>44467786 #>>44467994 #>>44468973 #>>44470088 #>>44475321 #
1. ndiddy ◴[] No.44467994[source]
Another thing: unless you have a very specific need for SSDs (heavily random-access workloads, very tight space constraints, or a bumpy environment), mechanical hard drives are still far more cost-effective for storing lots of data than NVMe. You can get a manufacturer-refurbished 12TB hard drive with a multi-year warranty for ~$120, while even an 8TB NVMe drive goes for at least $500. Of course, for general-purpose internal drives NVMe is a far better experience than a mechanical HDD, but my NAS with 6 hard drives in RAIDz2 is still bottlenecked by my 2.5GbE LAN, not by the speed of the drives.
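A quick back-of-the-envelope in Python, using the example prices above (illustrative only, not market data):

    # $/TB at the example prices above (illustrative only)
    refurb_hdd_per_tb = 120 / 12   # ~$10/TB for a 12TB refurb HDD
    nvme_per_tb = 500 / 8          # ~$62.50/TB for an 8TB NVMe
    print(nvme_per_tb / refurb_hdd_per_tb)  # NVMe is ~6x the $/TB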
replies(4): >>44468216 #>>44469623 #>>44473236 #>>44473616 #
2. acranox ◴[] No.44468216[source]
Don’t forget about power. If you’re trying to build a low-power NAS, those HDDs idle around 5W each, while an SSD is closer to 5mW. Once you’ve got a few disks, the HDDs can account for half the power or more. The cost penalty for 2TB or 4TB SSDs is still big, but not as bad as at the 8TB level.
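Rough yearly numbers, taking the idle figures above and an assumed $0.30/kWh (a sketch, not a measurement):

    # idle power cost for 4 HDDs vs 4 SSDs at an assumed $0.30/kWh
    hours = 24 * 365
    for name, watts in [("4x HDD", 4 * 5.0), ("4x SSD", 4 * 0.005)]:
        kwh = watts * hours / 1000
        print(name, round(kwh, 1), "kWh/yr,", round(kwh * 0.30, 2), "$/yr")
    # 4x HDD: ~175 kWh/yr (~$53); 4x SSD: ~0.2 kWh/yr (~$0.05)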
replies(1): >>44468553 #
3. markhahn ◴[] No.44468553[source]
Such power claims are problematic: you're not letting the HDDs spin down, for instance, and not crediting the fact that an SSD may easily dissipate more power than an HDD under load. (In this thread, the host and network are slow, so it's not relevant that SSDs are far faster when active.)
replies(4): >>44468862 #>>44468863 #>>44472399 #>>44473209 #
4. philjohn ◴[] No.44468862{3}[source]
Sadly, there are a lot of "never let your drives spin down! They need to be running 24/7 or they'll die in no time at all!" voices in the various homelab communities.

Even the lower-tier IronWolf drives from Seagate specify 600k load/unload cycles (not spin-downs, granted, but it gives an idea of the longevity).
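As a sketch of what 600k cycles buys you, assuming one load/unload cycle per 20-minute idle window around the clock (a pessimistic assumption about usage):

    # lifetime of a 600k load/unload rating at one cycle every 20 minutes
    rated_cycles = 600_000
    cycles_per_day = 24 * 3            # assumed worst case: 72 cycles/day
    print(rated_cycles / cycles_per_day / 365)  # ~22.8 years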

replies(1): >>44470211 #
5. 1over137 ◴[] No.44468863{3}[source]
Letting HDDs spin down is generally not advisable in a NAS, unless perhaps you access it really rarely.
replies(2): >>44470213 #>>44471560 #
6. throw0101d ◴[] No.44469623[source]
> […] mechanical hard drives are still way more cost effective for storing lots of data than NVMe.

Linux ISOs?

7. sandreas ◴[] No.44470211{4}[source]
Is there any (semi-)scientific proof of that (serious question)? I searched a lot on this topic but found nothing...
replies(1): >>44470705 #
8. sandreas ◴[] No.44470213{4}[source]
Is there any (semi-)scientific proof of that (serious question)? I searched a lot on this topic but found nothing...

(see above, same question)

replies(1): >>44475243 #
9. espadrine ◴[] No.44470705{5}[source]
Here is someone who had significant corruption until they stopped: https://www.xda-developers.com/why-not-to-spin-down-nas-hard...

There are many similar articles.

replies(2): >>44471098 #>>44473457 #
10. philjohn ◴[] No.44471098{6}[source]
I wonder if they were just hit with the bathtub curve?

Or perhaps the fact that my IronWolf drives are 5400rpm rather than 7200rpm explains why they're still going strong after 4 years, spinning down after 20 minutes of idle with no issues.

Or maybe I'm just insanely lucky? Before my desktop machine went 100% SSD, I used hard drives for close to 30 years and never had one go bad. I did tend to use drives for a max of 3-5 years, though, before upgrading for more space.

replies(1): >>44471586 #
11. Dr4kn ◴[] No.44471560{4}[source]
Spin-down isn't as problematic today. It really depends on your setup and usage.

If the stuff you access often is cached on SSDs, you rarely touch the spinning disks. Depending on your file system and operating system, only the drives actually in use need to be spun up. If you have multiple drive arrays with media, some of it won't be accessed that often.

In an enterprise setting it generally doesn't make sense. In a home environment you generally don't access the data that often. Automatic downloads and seeding change that, though.

12. ◴[] No.44471586{7}[source]
13. olavgg ◴[] No.44472399{3}[source]
I experimented with spin-downs, but the fact is, many applications need to write to disk several times per minute. Because of this I only use SSDs now. Archived files are moved to the cloud. I think Google Drive is one of the best alternatives out there, as it has true file streaming built into the macOS and Windows clients. It feels like an external hard drive.
14. sixothree ◴[] No.44473209{3}[source]
I've put all of my surveillance cameras on one volume in _hopes_ that I can let my other volumes spin down. But nope. They spend the vast majority of their day spinning.
replies(1): >>44473625 #
15. ThatMedicIsASpy ◴[] No.44473236[source]
Low power, low noise, low profile, LOW ENTRY COST. I can easily get a Beelink ME mini or two and build a NAS + offsite storage. Two 1TB SSDs for a mirror are around 100€; two new 1TB HDDs are around 80€.

You are thinking in dimensions normal people have no need for. The numbers alone speak volumes: 12TB, 6 HDDs, 8TB NVMe drives, 2.5GbE LAN.

16. billfor ◴[] No.44473457{6}[source]
I wonder if it has to do with the type of HDD. The red NAS drives may not like being spun down as much. I spin down my drives and have not had a problem, except for one drive after 10 years of continuous running; but I use consumer desktop drives, which probably expect to be cycled a lot more than NAS drives.
17. kllrnohj ◴[] No.44473616[source]
It depends on what you consider "lots" of data. For >20TB, yes, absolutely, by a landslide. But if you just want self-hosted Google Drive or Dropbox, you're in the 1-4TB range, where mechanical drives are a very bad value because they have a pretty significant price floor. A WD Blue 1TB HDD is $40 while a WD Blue 1TB NVMe is $60. The HDD still has a strict price advantage, but the NVMe drive uses way less power, is more reliable, and has no spin-up time (consumer usage is accessed so infrequently that keeping mechanical drives spinning continuously is of questionable worth).

And these prices are getting low enough, especially with these NUC-based solutions, to actually be price-competitive with the low tiers of Google Drive & Dropbox while also being something you actually own and control. Dropbox still charges $120/yr for the entry-level plan of just 2TB, after all. 3x WD Blue NVMe drives + an N150 and you're at break-even in 3 years or less.
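Sketching that break-even (the ~$150 for an N150 mini PC is my assumption; electricity ignored):

    # self-hosted 3x 1TB NVMe + N150 vs Dropbox 2TB at $120/yr
    hardware = 3 * 60 + 150      # $60/drive + assumed ~$150 N150 box
    print(hardware / 120)        # ~2.75 years to break even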

replies(1): >>44475314 #
18. sandreas ◴[] No.44473625{4}[source]
Did you consider ZFS with L2ARC? The extra caching device might make this possible...
replies(1): >>44474830 #
19. dsr_ ◴[] No.44474830{5}[source]
That's not how L2ARC works. It's not how the ZIL SLOG works, either.

If a read request can be filled by the OS cache, it will be. Then it will be filled by the ARC, if possible. Then it will be filled by the L2ARC, if it exists. Then it will be filled by the on-disk cache, if possible; finally, it will be filled by a read.

An async write will eventually be flushed to disk, possibly after seconds of real time. The ack is sent after the write is complete... which may be while the drive has it in a cache but hasn't actually written it yet.

A sync write will be written to the ZIL SLOG, if it exists, while it is being written to the disk. It will be acknowledged as soon as the ZIL finishes the write. If the SLOG does not exist, the ack comes when the disk reports the write complete.
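To make the read-side ordering concrete, a tiny illustrative sketch (the tier objects and lookup()/read() methods are made up for the example; this is not the ZFS API):

    # illustrative only: the order a ZFS read is satisfied, per the above
    def read_block(block, caches, disk):
        # caches, in order: OS cache, ARC, L2ARC, on-disk drive cache
        for cache in caches:
            data = cache.lookup(block)   # hypothetical: returns None on miss
            if data is not None:
                return data
        return disk.read(block)          # only a miss everywhere hits the platters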

20. gosub100 ◴[] No.44475243{5}[source]
It's probably decades-old anecdata from people who recommissioned old drives that had sat on the shelf for many years. The theory is that the grease on the spindle dries up and seizes the platters.
21. gknoy ◴[] No.44475314[source]
I appreciate you laying it out like that. I've seen these NVMe NAS things mentioned and had been thinking that the reliability of SSDs was much worse than that of HDDs.
replies(1): >>44476351 #
22. kllrnohj ◴[] No.44476351{3}[source]
SSDs are just limited by write cycles, whereas HDDs literally spin themselves to death. In simple consumer NAS usage, like photo backup, that basically means SSDs will last forever. Meanwhile those HDDs are on borrowed time at 5-8 years, regardless of write cycles.
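Rough sketch of "forever" (the 600 TBW rating and 10GB/day write rate are assumptions, roughly typical for a 1TB consumer NVMe and a photo-backup workload):

    # years until an assumed endurance rating is exhausted
    rated_tbw = 600              # assumed rating for a 1TB consumer NVMe
    writes_tb_per_day = 0.01     # assumed ~10GB/day of backups
    print(rated_tbw / writes_tb_per_day / 365)  # ~164 years of writes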