440 points by ingve
sandreas No.44466616
While it may be tempting to go "mini" and NVMe, for a normal use case I think this is hardly cost-effective.

You give up so much with an all-in-one mini device: no upgrades, no ECC, harder cooling, less I/O.

I have had a Proxmox server with a used Fujitsu D3417 board and 64 GB of ECC RAM for roughly 5 years now. I paid 350 bucks for the whole thing and have upgraded the storage once, from 1 TB to 2 TB. It draws 12-14 W in normal day-to-day use and runs 10 Docker containers and 1 Windows VM.

So I would prefer a mATX board with ECC, IPMI, 4x NVMe and 2.5GbE over these toy boxes...

However, Jeff's content is awesome as always.

ndiddy No.44467994
Another thing is that unless you have a very specific need for SSDs (such as heavily random-access workloads, very tight space constraints, or a bumpy operating environment), mechanical hard drives are still far more cost-effective for storing lots of data than NVMe. You can get a manufacturer-refurbished 12 TB hard drive with a multi-year warranty for ~$120, while even an 8 TB NVMe drive goes for at least $500. Of course, for general-purpose internal drives NVMe is a far better experience than a mechanical HDD, but my NAS with 6 hard drives in RAIDZ2 still gets bottlenecked by my 2.5 Gbit LAN, not by the speed of the drives.
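
Back-of-the-envelope in Python, using the prices quoted above (illustrative figures, not current market data):

    # $/TB at the prices mentioned above
    hdd_price, hdd_tb = 120, 12   # ~$120 refurbished 12 TB HDD
    ssd_price, ssd_tb = 500, 8    # ~$500 8 TB NVMe drive
    print(f"HDD: ${hdd_price / hdd_tb:.2f}/TB")  # HDD: $10.00/TB
    print(f"SSD: ${ssd_price / ssd_tb:.2f}/TB")  # SSD: $62.50/TB
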
acranox No.44468216
Don't forget about power. If you're trying to build a low-power NAS, those HDDs idle at around 5 W each, while an SSD idles closer to 5 mW. Once you've got a few disks, the HDDs can account for half the system's power draw or more. The cost penalty for 2 TB or 4 TB SSDs is still big, but not as bad as at the 8 TB level.
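
To see how the disks come to dominate, here's a quick sketch assuming the ~5 W idle figure above and a hypothetical ~20 W for the rest of the system:

    # Idle power share of the spinning disks; the 20 W system
    # baseline is an assumption for illustration.
    n_disks, hdd_idle_w, base_w = 4, 5.0, 20.0
    disks_w = n_disks * hdd_idle_w
    total_w = disks_w + base_w
    print(f"{disks_w:.0f} W of {total_w:.0f} W ({disks_w / total_w:.0%})")  # 20 W of 40 W (50%)
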
markhahn No.44468553
Such power claims are problematic: you're not letting the HDs spin down, for instance, and you're not crediting the fact that an SSD may easily dissipate more power than an HD under load. (In this thread the host and network are slow, so it's not relevant that SSDs are far faster when active.)
sixothree No.44473209
I've put all of my surveillance cameras on one volume in _hopes_ that I can let my other volumes spin down. But nope. They spend the vast majority of their day spinning.
sandreas No.44473625
Did you consider ZFS with L2ARC? The extra caching device might make this possible...
dsr_ No.44474830
That's not how L2ARC works. It's not how the ZIL SLOG works, either.

If a read request can be filled by the OS cache, it will be. Then it will be filled by the ARC, if possible. Then it will be filled by the L2ARC, if it exists. Then it will be filled by the on-disk cache, if possible; finally, it will be filled by a read.
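
A minimal sketch of that fallback order (illustrative Python, not ZFS internals):

    # Read path: OS cache -> ARC -> L2ARC -> on-disk cache -> physical read.
    def read_block(key, os_cache, arc, l2arc, disk_cache, disk):
        for tier in (os_cache, arc, l2arc, disk_cache):
            if key in tier:
                return tier[key]   # served from a cache tier
        return disk[key]           # miss everywhere: actual disk read

    disk = {"blk0": b"data"}
    print(read_block("blk0", {}, {}, {}, {}, disk))  # falls through to disk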

An async write will eventually be flushed to disk, possibly seconds of real time later. The ack is sent after the write is complete... which may be while the drive still holds it in cache and hasn't actually committed it to media yet.

A sync write will be written to the ZIL SLOG, if it exists, while it is being written to the disk. It will be acknowledged as soon as the ZIL finishes the write. If the SLOG does not exist, the ack comes when the disk reports the write complete.
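
A toy model of the two ack paths (illustrative only; the latencies are made-up placeholders):

    import time

    def async_write(buf, data):
        buf.append(data)   # queued in memory, flushed to disk later
        return "ack"       # ack can precede the physical write

    def sync_write(data, slog=None, disk_s=0.010, slog_s=0.001):
        if slog is not None:
            slog.append(data)      # logged to the ZIL SLOG
            time.sleep(slog_s)     # ack as soon as the SLOG write lands
        else:
            time.sleep(disk_s)     # no SLOG: wait for the disk to confirm
        return "ack"

    print(async_write([], b"x"))       # immediate ack
    print(sync_write(b"x", slog=[]))   # fast ack via SLOG
    print(sync_write(b"x"))            # slower ack, waits on the disk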