108 points by Krontab | 22 comments
1. xyse53 ◴[] No.46276223[source]
I've noticed there aren't a lot of reasonable home/SMB M.2 NVMe NAS options for motherboards and enclosures.

SATA SSDs still seem like the way you have to go for a 5 to 8 drive system (boot disk + a 4+ drive RAID 6).

replies(5): >>46276416 #>>46276486 #>>46276576 #>>46277298 #>>46279401 #
2. pzmarzly ◴[] No.46276416[source]
When it comes to ready-made home/SMB-grade NASes, plenty of options have popped up in the last year or two: the Terramaster F8, the Flashstor 6 or 12, and the BeeLink ME mini N150 (6x NVMe). It's just QNAP and Synology who don't seem interested.
replies(1): >>46282665 #
3. poly2it ◴[] No.46276486[source]
How well do PCIe to M.2 adapters work for a custom NAS? Slot-wise you should be able to get 16 M.2 devices per motherboard with, for example, a Supermicro consumer board.
replies(3): >>46276673 #>>46276738 #>>46276797 #
4. rpcope1 ◴[] No.46276576[source]
It seems rare to find M.2 drives with the sort of things you'd want in a NAS (PLP, reasonably high DWPD, good controllers, etc.), and you've also got to contend with managing heat in a way I had never seen with 2.5" or 3.5" drives. I would imagine the sort of people doing NVMe for NAS/SAN/servers are mostly using U.2 or U.3 (I know I do).
replies(2): >>46276750 #>>46278721 #
5. wtallis ◴[] No.46276673[source]
Can you point to a specific motherboard? 16 separate PCIe links of any width sounds rather high for a consumer platform.
replies(1): >>46276902 #
6. toast0 ◴[] No.46276738[source]
The difficulty with PCIe to M.2 adapters is that you usually can't bifurcate below x4, and active PCIe switches got very expensive after PCIe 3.0.

Used multiport SATA HBA cards are inexpensive on eBay. Multiport NVMe cards are either passive (relying on bifurcation, giving you 4x x4 from an x16 slot) or active and very expensive.

I don't see how you get to 16 M.2 devices on a consumer socket without a lot of expense.

replies(2): >>46282674 #>>46299840 #
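A rough sketch of that lane math in Python, assuming a typical (made-up) consumer slot layout rather than any particular board:

    # Passive adapters rely on bifurcation, which splits a slot into fixed-width
    # links (x4 at minimum on most consumer platforms), so drive count is capped
    # by lane count rather than by physical connectors. Figures are illustrative.

    def drives_via_bifurcation(slot_lanes: int, min_link_width: int = 4) -> int:
        """Max M.2 drives a passive adapter can expose from one slot."""
        return slot_lanes // min_link_width

    # Hypothetical consumer board: one x16 slot (GPU position), two x4 slots,
    # plus two onboard M.2 sockets fed by the chipset.
    slots = {"x16 (4x4x4x4)": 16, "x4 #1": 4, "x4 #2": 4}
    onboard_m2 = 2

    total = sum(drives_via_bifurcation(lanes) for lanes in slots.values()) + onboard_m2
    print(f"M.2 drives without a PCIe switch: {total}")  # -> 8, well short of 16
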
7. 8cvor6j844qw_d6 ◴[] No.46276750[source]
It's also quite difficult to find 2280 M.2 SATA SSDs. I had an old laptop that only takes a 2280 M.2 SATA SSD.

It's always one of the two: M.2 but PCIe/NVMe, or SATA but not M.2.

replies(1): >>46277254 #
8. crote ◴[] No.46276797[source]
I don't think there are any consumer boards which support this?

In practice you can put 4 drives in the x16 slot intended for a GPU, 1 drive each in any remaining PCIe slots, plus whatever is available onboard. 8 should be doable, but I doubt you can go beyond 12.

I know there are some $2000 PCIe cards with onboard switches so you can stick 8 NVMe drives on there - even with an x1 upstream connection - but at that point you're better off going for a Threadripper board.

9. poly2it ◴[] No.46276902{3}[source]
C9X299-RPGF

https://www.supermicro.com/en/products/motherboard/C9X299-RP...

replies(2): >>46277193 #>>46277252 #
10. crote ◴[] No.46277193{4}[source]
That's a workstation board, not a regular consumer board, and it is over 5 years old by now - it has even been discontinued by Supermicro.

Building a new system with that in 2025 would be a bit silly.

11. toast0 ◴[] No.46277252{4}[source]
A few generations old, and HEDT, which isn't exactly consumer, but OK. I see one for $100 on eBay, so that's not awful either.

Even that gives you one M.2 slot, and 8/8/8/16 lanes on the x16 slots, if you have the right CPU. Assuming those can all bifurcate down to x4 (which is most common), that gets you 10 M.2 slots out of the 40 lanes. That's more than you'd get on a modern desktop board, but it's not 16 either.

For home use, you're in a tricky spot: you can't get it all in one box, so horizontal scaling seems like a good avenue. But to scale horizontally you probably need high-speed networking, and if you take lanes for that, you don't have many left for storage. I also don't think there's much simple software for scaling storage out over multiple nodes; there's stuff out there, but it's not simple and it's not really targeted at small node counts. If you don't really need high speed, though, a big array of spinning disks is still approachable.
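
If you do load up the lanes like this, it's worth checking what link each NVMe controller actually negotiated; a minimal sketch, assuming Linux and the standard PCI sysfs attributes:

    # Print the negotiated vs. maximum PCIe link width for each NVMe controller.
    # A drive that silently trained at x2 or x1 gets half or a quarter of its
    # expected bandwidth, which matters when every lane is spoken for.
    from pathlib import Path

    for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
        pci_dev = ctrl / "device"  # symlink to the underlying PCI device
        try:
            width = (pci_dev / "current_link_width").read_text().strip()
            max_w = (pci_dev / "max_link_width").read_text().strip()
            speed = (pci_dev / "current_link_speed").read_text().strip()
        except FileNotFoundError:
            continue  # virtual or unusual controllers: skip
        print(f"{ctrl.name}: x{width} of x{max_w} at {speed}")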

12. barrkel ◴[] No.46277254{3}[source]
Fwiw, SATA and NVMe are mutually exclusive for a single device; SATA drives use AHCI, which wraps ATA commands in a command-list queuing mechanism over the SATA bus, while NVMe drives (M.2/U.2/add-in) talk the NVMe protocol (multiple queues) over PCIe.
replies(1): >>46277943 #
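As a concrete illustration of that split, here's a minimal sketch of how the two show up on a Linux box (assumes the standard sysfs layout and the usual kernel naming conventions, nothing specific to this thread):

    # NVMe namespaces enumerate as nvmeXnY under their own subsystem, while SATA
    # drives go through the ATA/SCSI stack and appear as sdX.
    from pathlib import Path

    for blk in sorted(Path("/sys/block").iterdir()):
        name = blk.name
        if name.startswith("nvme"):
            proto = "NVMe over PCIe"
        elif name.startswith("sd"):
            # sdX covers SATA, SAS and USB; libata exposes SATA drives with vendor "ATA".
            vendor_path = blk / "device" / "vendor"
            vendor = vendor_path.read_text().strip() if vendor_path.exists() else "?"
            proto = "SATA via AHCI/libata" if vendor == "ATA" else f"SCSI-class ({vendor})"
        else:
            continue  # skip loop, md, dm, zram, etc.
        model_path = blk / "device" / "model"
        model = model_path.read_text().strip() if model_path.exists() else ""
        print(f"{name}: {proto} {model}")
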
13. nonameiguess ◴[] No.46277298[source]
I don't know if you consider it "reasonable", but the Gigabyte Aorus TRX boards even from 6 years ago came with a free PCIe expansion card that held 8 M.2 sticks, up to 32 TB on a consumer board. It's eATX, of course, so quite a bit bigger than an appliance NAS, and the socket is for a Threadripper, more suitable for a hypervisor than a NAS, but if you're willing to blow five to ten grand and be severely overprovisioned, you can build a hell of a rig.
replies(1): >>46277985 #
14. wtallis ◴[] No.46277943{4}[source]
For a drive, yes, SATA and NVMe are mutually exclusive. The M.2 slot can provide both options. But if you have a machine with an M.2 slot that's wired only for SATA and not PCIe, your choices of drives to put in that slot have been quite limited for a long time.
replies(1): >>46278309 #
15. wtallis ◴[] No.46277985[source]
Are you sure? I've seen plenty of motherboards bundle a PCIe riser to passively bifurcate the PCIe slot to support four M.2 drives in an x16 slot or two in an x8 slot, but doing eight M.2 drives in one PCIe slot would either require a PCIe switch that would be too expensive for a free bundled card, or require PCIe bifurcation down to two lanes per link, which I don't think any workstation CPUs have ever supported. And 32TB is possible with just four M.2 SSDs.
16. verall ◴[] No.46278309{5}[source]
There were even M.2 PCIe-connected AHCI drives - neither SATA nor NVMe. The Samsung SM951 was one. You can find them on eBay but not much anywhere else.
replies(1): >>46278393 #
17. wtallis ◴[] No.46278393{6}[source]
At least the Samsung and SanDisk PCIe AHCI M.2 drives were only for PC OEMs and were not officially sold as retail products. There were gray-market resellers, but overall it was a niche and short-lived format. Especially because any system that shipped with a PCIe M.2 slot could gain NVMe capability if the OEM deigned to release an appropriate UEFI firmware update.
18. zamadatix ◴[] No.46278721[source]
I've been doing my home NASes with M.2 NVMe for years now, with 12 drives in one and 22 in another (the backup is still HDD though):

DWPD: Between the random TeamGroup drives in the main NAS and the WD Red Pro HDDs in the backup, the write limits are actually about the same, with the bonus that reads are effectively unlimited on the SSDs, so things like scheduled ZFS scrubs don't count as 100 TB of usage across the pool each time.
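
The rough endurance arithmetic behind that comparison, with generic example ratings rather than the specific drives in this build:

    # Consumer SSDs are rated in TBW over the warranty period; NAS HDDs in a
    # yearly workload where reads and writes both count. Example figures only.

    def ssd_writes_per_year(tbw: float, warranty_years: float) -> float:
        """Writable TB/year if the TBW rating is spread evenly over the warranty."""
        return tbw / warranty_years

    ssd = ssd_writes_per_year(tbw=2400, warranty_years=5)  # e.g. a 4 TB drive rated 2400 TBW
    hdd_workload = 300                                     # e.g. a NAS HDD rated 300 TB/year

    print(f"SSD: ~{ssd:.0f} TB/year of writes, reads unmetered")   # ~480
    print(f"HDD: ~{hdd_workload} TB/year of combined reads and writes")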

Heat: Actually easier to manage than the HDDs. The drives are smaller (so denser for the same wattage), but the peak wattage is lower than the idle spinning wattage of the HDDs, and there isn't a large physical buffer between the hot parts and the airflow. My normal case airflow keeps them under 60 C when benching all of the drives raw and sustained, and more like under 40 C given ZFS doesn't like to go past 8 GB/s in this setup anyway. If you select $600 top-end SSDs with high-wattage controllers that ship with heatsinks you might have more of a problem; otherwise it's maybe 100 W max for the 22 drives and easy enough to cool.

PLP: More problematic if this is part of your use case, as NVMe drives with PLP will typically lead you straight into enterprise pricing. Personally, my use case is more "on-demand large file access" with extremely low-churn data that's regularly backed up for the long term, and I'm not at a loss if I have an issue and need to roll back to yesterday's data; others who use theirs more as an active drive may have different considerations.

The biggest downsides I ran across were:

- Loading up all of the lanes on a modern consumer board works in theory but can be buggy as hell in practice: anything from the boot becoming EXTREMELY long, to sometimes just not working at all, to PCIe errors during operation. A used Epyc in a normal PC case is the way to go instead.

- It costs more, obviously

- Not using a chassis designed for massive numbers of drives with hot-swap access makes installation and troubleshooting quite the pain.

The biggest upsides (other than the obvious ones) I ran across were:

- No spinup drain on the PSU

- No need to worry about drive power-saving/idling, which pairs nicely with the whole solution being quiet enough to sit in my living room without hearing drive whine.

- I don't look like a struggling fool trying to move a full chassis around :)

19. ekropotin ◴[] No.46279401[source]
If you want to go big on capacity, which is something you usually want for a NAS, M.2 becomes super expensive.
20. tracker1 ◴[] No.46282665[source]
Probably because QNAP and Synology pricing is rent-seeking behavior built on per-drive-bay pricing models.
21. tracker1 ◴[] No.46282674{3}[source]
Not to mention, the physical x16 slot may be running in x8 mode if you're using a video card.
22. rasz ◴[] No.46299840{3}[source]
PCIe switches got very expensive after Broadcom bought PLX.