sandreas:
While it may be tempting to go "mini" and NVMe, for a normal use case I think this is hardly cost-effective.

You give up so much by using an all-in-one mini device...

No upgrades, no ECC, harder cooling, less I/O.

I've had a Proxmox server with a used Fujitsu D3417 board and 64 GB of ECC RAM for roughly 5 years now. I paid 350 bucks for the whole thing and upgraded the storage once, from 1 TB to 2 TB. It draws 12-14 W in normal day-to-day use and runs 10 Docker containers and 1 Windows VM.

So I would prefer a mATX board with ECC, IPMI, 4x NVMe and 2.5GbE over these toy boxes...

However, Jeff's content is awesome as always.

ndiddy:
Another thing is that unless you have a very specific need for SSDs (heavily random-access workloads, very tight space constraints, or a bumpy environment), mechanical hard drives are still way more cost-effective for storing lots of data than NVMe. You can get a manufacturer-refurbished 12 TB hard drive with a multi-year warranty for ~$120, while even an 8 TB NVMe drive goes for at least $500. For general-purpose internal drives NVMe is of course a far better experience than a mechanical HDD, but my NAS with 6 hard drives in RAIDz2 is still bottlenecked by my 2.5 Gbit LAN, not by the speed of the drives.
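To put rough numbers on it, a quick back-of-the-envelope $/TB comparison using the ballpark prices above (assumptions from this comment, not current quotes):

    # $/TB using the rough prices quoted above
    hdd_price, hdd_tb = 120, 12    # manufacturer-refurbished 12 TB HDD, ~$120
    nvme_price, nvme_tb = 500, 8   # 8 TB NVMe drive, ~$500+

    print(f"HDD:  ${hdd_price / hdd_tb:.2f}/TB")    # $10.00/TB
    print(f"NVMe: ${nvme_price / nvme_tb:.2f}/TB")  # $62.50/TB

Roughly a 6x gap per terabyte at these capacities.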
acranox:
Don’t forget about power. If you’re trying to build a low-power NAS, those HDDs idle around 5 W each, while an SSD is closer to 5 mW. Once you’ve got a few disks, the HDDs can account for half the system's power draw or more. The cost penalty for 2 TB or 4 TB SSDs is still big, but not as bad as at the 8 TB level.
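As a sketch of what that means over a year, using the ~5 W / ~5 mW idle figures above (the disk count and electricity price are assumptions for illustration):

    # Idle draw of a hypothetical 4-disk NAS, HDD vs. SSD
    disks = 4
    hdd_idle_w = 5.0      # ~5 W per idle HDD (figure from above)
    ssd_idle_w = 0.005    # ~5 mW per idle SSD (figure from above)
    price_per_kwh = 0.30  # assumed electricity price

    for name, watts in (("HDD", hdd_idle_w), ("SSD", ssd_idle_w)):
        kwh_per_year = disks * watts * 24 * 365 / 1000
        print(f"{name}: {disks * watts:g} W idle, "
              f"~{kwh_per_year * price_per_kwh:.2f}/year")
    # HDD: 20 W idle, ~52.56/year
    # SSD: 0.02 W idle, ~0.05/year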
markhahn:
Such power claims are problematic: they assume you never let the HDDs spin down, for instance, and they don't credit the fact that an SSD can easily dissipate more power than an HDD under load. (In this thread the host and network are slow, so it's not relevant that SSDs are far faster when active.)
philjohn:
Sadly, there are a lot of "never let your drives spin down! They need to run 24/7 or they'll die in no time at all!" voices in the various homelab communities.

Even the lower-tier IronWolf drives from Seagate are rated for 600k load/unload cycles (not spin-downs, granted, but it gives an idea of the longevity).
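For a sense of scale, a 600k rating takes decades to exhaust even on an aggressive schedule (one cycle every 20 minutes is an assumption for illustration):

    # How long until a 600k load/unload rating is used up?
    rated_cycles = 600_000
    cycles_per_hour = 3  # assumed: one park/unpark every 20 minutes
    hours = rated_cycles / cycles_per_hour
    print(f"{hours:.0f} h ≈ {hours / (24 * 365):.1f} years")
    # 200000 h ≈ 22.8 years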

sandreas:
Is there any (semi-)scientific proof of that (serious question)? I've searched a lot on this topic but found nothing...
espadrine:
Here is someone who had significant corruption until they stopped spinning their drives down: https://www.xda-developers.com/why-not-to-spin-down-nas-hard...

There are many similar articles.

philjohn:
I wonder if they were just hit by the bathtub curve?

Or perhaps the fact that my IronWolf drives are 5400 rpm rather than 7200 rpm is why they're still going strong after 4 years, spinning down after 20 minutes with no issues.

Or maybe I'm just insanely lucky? Before I moved my desktop machine to 100% SSD, I used hard drives for close to 30 years and never had one go bad. I did tend to use drives for a maximum of 3-5 years, though, before upgrading for more space.
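For anyone wanting to replicate that 20-minute spin-down, here's a minimal sketch using hdparm (assumes Linux, root privileges, and hypothetical device paths; -S values from 1 to 240 are multiples of 5 seconds, so 240 = 1200 s = 20 minutes):

    # Apply a 20-minute standby (spin-down) timeout to each drive
    import subprocess

    DEVICES = ["/dev/sda", "/dev/sdb"]  # hypothetical drive list

    for dev in DEVICES:
        # hdparm -S 240: standby timeout of 240 * 5 s = 20 minutes
        subprocess.run(["hdparm", "-S", "240", dev], check=True)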

billfor:
I wonder if it has to do with the type of HDD. The "red" NAS drives may not like being spun down as much. I spin down my drives and have not had a problem, except for one drive that failed after 10 years of continuous running, but I use consumer desktop drives, which probably expect to be cycled a lot more than NAS drives do.