
432 points ingve | 7 comments
jauntywundrkind ◴[] No.44465837[source]
Would be nice to see what those little N100 / N150 (or big brother N305 / N350) can do with all that NVMe. Raw throughput is pretty whatever, but hypothetically, if the CPU isn't too gating, there's some interesting IOPS potential.

Really hoping we see 25/40GBase-T start to show up, so lower market segments like this can do 10Gbit. Hopefully we'll see some embedded Ryzens (or other more PCIe-willing contenders) in this space at a value-oriented price. But I'm not holding my breath.

replies(1): >>44465941 #
dwood_dev ◴[] No.44465941[source]
The problem quickly becomes PCIe lanes. The N100/N150/N305 only have 9 lanes of PCIe 3.0. 5GbE is fine on a single lane, but to go to 10GbE you need an x2 link.

Until there is something in this class with PCIe 4.0, I think we're close to maxing out the IO of these devices.
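A rough sketch of the lane math behind this (per-lane figures are the usual PCIe spec rates after encoding overhead; the 9-lane budget is from the comment above):

```python
# Approximate usable throughput per PCIe lane, in Gbps, after encoding overhead:
# PCIe 3.0 is 8 GT/s with 128b/130b encoding (~7.88 Gbps per lane);
# PCIe 4.0 doubles that (~15.75 Gbps per lane).
GBPS_PER_LANE = {3: 7.88, 4: 15.75}

def link_gbps(gen: int, lanes: int) -> float:
    """Approximate usable throughput of a PCIe link in Gbps."""
    return GBPS_PER_LANE[gen] * lanes

# A 10GbE NIC needs ~10 Gbps. One PCIe 3.0 lane (~7.9 Gbps) falls short,
# so it takes an x2 link out of the 9-lane budget -- whereas a single
# PCIe 4.0 lane would already be enough, which is the appeal of a
# PCIe 4.0 part in this class.
print(link_gbps(3, 1))  # short of 10 Gbps
print(link_gbps(3, 2))  # enough for 10GbE
print(link_gbps(4, 1))  # enough on a single lane
```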

replies(1): >>44466039 #
1. geerlingguy ◴[] No.44466039[source]
Not only the lanes: pushing more than about 6 Gbps of IO across multiple PCIe devices on the N150 bogs things down. It's only a little faster than something like a Raspberry Pi, and there are a lot of little IO bottlenecks at high speed (it's great for 2.5 Gbps) once you do anything that hits the CPU.
replies(2): >>44466165 #>>44466844 #
2. dwood_dev ◴[] No.44466165[source]
The CPU bottleneck would be resolved by the Pentium Gold 8505, but it still has the same 9 lanes of PCIe 3.0.

I only came across this CPU a few months ago. It's nearly in the same price class as an N100, but adds a full Alder Lake P-core. It's a shame it seems to be available only in six-port routers; then again, that's probably a pretty optimal application for it.

3. lostlogin ◴[] No.44466844[source]
This is what baffles me: 2.5 Gbps.

I want smaller, cooler, quieter, but isn't the key attribute of SSDs their speed? A RAID array of SSDs can surely do vastly better than 2.5 Gbps.
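For scale (the ~3.5 GB/s figure is an assumed ballpark for a typical PCIe 3.0 x4 NVMe drive's sequential read, not from the thread):

```python
# Even a single mid-range NVMe SSD dwarfs a 2.5 GbE link, before any RAID.
NVME_SEQ_READ_GBPS = 3.5 * 8   # ~3.5 GB/s sequential read -> ~28 Gbps
LINK_GBPS = 2.5                # 2.5 GbE line rate

# One drive could fill the 2.5 GbE link many times over, so the network,
# not the storage, is the bottleneck in these boxes.
print(NVME_SEQ_READ_GBPS / LINK_GBPS)
```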

replies(3): >>44467466 #>>44467492 #>>44468336 #
4. p_ing ◴[] No.44467466[source]
A single SSD can (or at least a single NVMe drive can). You have to ask whether you actually need it: what are you doing that would run at line speed often enough for the time savings to be worth it? Or it's just a toy, which is totally cool too.

Four 7200 RPM HDDs in RAID 5 (WD Red Pro, say) can saturate a 1 Gbps link at ~110 MB/s over SMB 3. But that comes with the heat and potential reliability issues of spinning disks.
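A quick sanity check on that ~110 MB/s figure (the ~10% overhead number is an assumption covering Ethernet framing plus TCP and SMB headers, not a measured value):

```python
# Why a saturated 1 Gbps link shows up as roughly 110 MB/s in SMB transfers.
LINK_GBPS = 1.0
raw_mb_per_s = LINK_GBPS * 1000 / 8        # 125 MB/s raw line rate
overhead = 0.10                            # assumed ~10% Ethernet/TCP/SMB overhead
usable_mb_per_s = raw_mb_per_s * (1 - overhead)  # ~112 MB/s, close to the observed ~110
print(usable_mb_per_s)
```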

I have seen consumer SSDs, namely Samsung 8xx EVO drives, have significant latency issues in a RAID config, where saturating the drives caused 1+ second latencies. This was on Windows Server 2019 using either a SAS controller or JBOD + Storage Spaces. Replacing them with used Intel drives resolved the issue.

replies(1): >>44468370 #
5. jrockway ◴[] No.44467492[source]
2.5 Gbps is selected for price reasons: not only is the NIC cheap, so is the rest of the networking hardware.

But yeah, if you want fast storage, just stick the SSD in your workstation, not in a mini PC hanging off your 2.5 Gbps network.

6. jauntywundrkind ◴[] No.44468336[source]
Even if the throughput isn't high, it sure is nice having the instant response time and amazing random-access performance of an SSD.

2 TB SSDs are super cheap, but most systems don't have the expandability to add a bunch of them. So I fully get the incentive here: being able to add multiple drives, even if you're not reaping additional speed.

7. lostlogin ◴[] No.44468370{3}[source]
My use is a bit in the cool-toy category. I like having VMs where the NAS holds both the VMs and the backups, and having the server connect to the NAS to access the VMs.

Probably a silly arrangement but I like it.