

108 points by Krontab | 30 comments
Neil44 ◴[] No.46276482[source]
Samsung makes fast, expensive storage, but even cheap storage can max out SATA, so there's no point in Samsung trying to compete in the dwindling SATA space.
replies(1): >>46277119 #
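Rough numbers back this up. A minimal sketch (nominal spec figures, accounting only for line-encoding overhead, so real-world numbers will be somewhat lower) comparing the SATA III ceiling with a PCIe 4.0 x4 NVMe link:

```python
# Nominal interface ceilings, accounting only for line-code overhead.
def effective_mb_s(gbit_s: float, payload_bits: int, total_bits: int) -> float:
    """Usable MB/s after line-encoding overhead (e.g. 8b/10b, 128b/130b)."""
    return gbit_s * 1e9 * payload_bits / total_bits / 8 / 1e6

sata3 = effective_mb_s(6, 8, 10)             # SATA III: 6 Gbit/s, 8b/10b
pcie4_x4 = 4 * effective_mb_s(16, 128, 130)  # PCIe 4.0 x4: 16 GT/s/lane, 128b/130b

print(f"SATA III ceiling:    ~{sata3:.0f} MB/s")     # ~600 MB/s
print(f"PCIe 4.0 x4 ceiling: ~{pcie4_x4:.0f} MB/s")  # ~7877 MB/s
```

Even a midrange NAND package can saturate the ~600 MB/s SATA ceiling, which is why the interface stopped differentiating drives.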
1. mwambua ◴[] No.46277119[source]
Does this mean that we'll start to see SATA replaced with faster interfaces in the future? Something like U.2/U.3 that's currently available to the enterprise?
replies(4): >>46277392 #>>46277882 #>>46278452 #>>46279721 #
2. barrkel ◴[] No.46277392[source]
It's more likely that third-party integrators will look after the demand for SAS/SATA SSDs, and that demand won't go away, because SAS multiplexers are cheap while NVMe/PCIe is point-to-point and expensive to build switching hardware for.

Likely we'd need a different protocol to make scaling up the number of high-speed SSDs in a single box work well.

3. zamadatix ◴[] No.46277882[source]
NVMe via m.2 remains more than fine for covering the consumer SSD use cases.
replies(1): >>46278011 #
4. zokier ◴[] No.46278011[source]
Problem is that you only get a pitiful number of M.2 slots on mainstream motherboards.
replies(6): >>46278102 #>>46278125 #>>46278488 #>>46278499 #>>46280050 #>>46282571 #
5. wtallis ◴[] No.46278102{3}[source]
Three is not pitiful. Three is plenty for mainstream use cases, which is what mainstream motherboards are designed for.
replies(2): >>46278478 #>>46279525 #
6. Night_Thastus ◴[] No.46278125{3}[source]
A lot of modern boards come with 3 or more - that's what mine has. And with modern density, that's a LOT of storage. I have two 4TB drives!

You could even get more using a PCIe NVMe expansion card, since it all runs over PCIe anyway.

7. Aurornis ◴[] No.46278452[source]
The first NVMe over PCIe consumer drive was launched a decade ago.

It's hard to even find new PC builds using SATA drives.

SATA was phased out many years ago. The primary market for SATA SSDs is upgrading old systems or maybe the absolute lowest cost system integrators at this point, but it's a dwindling market.

replies(1): >>46281544 #
8. ComputerGuru ◴[] No.46278478{4}[source]
We used to have motherboards with six or twelve SATA ports. And SATA HDDs have way more capacity than the paltry (yet insanely expensive) options available with NVMe.
replies(2): >>46278707 #>>46278729 #
9. Aurornis ◴[] No.46278488{3}[source]
Most consumer motherboards have 2-3 M.2 slots.

You can buy cheap add-in cards to use PCIe slots as M.2 slots, too.

If you need even more slots, there are add-in cards with PCIe switches which allow you to install 10+ M.2 drives into a single PCIe slot.
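For the switch-based cards, note that the uplink is shared. A back-of-envelope sketch, assuming a PCIe 4.0 x4 uplink of roughly 7877 MB/s usable (an illustrative figure, not a measured one):

```python
def per_drive_share_mb_s(uplink_mb_s: float, n_drives: int, active: int) -> float:
    """Uplink bandwidth each simultaneously active drive sees behind a switch."""
    return uplink_mb_s / max(1, min(active, n_drives))

UPLINK = 7877  # ~PCIe 4.0 x4, MB/s (illustrative)

# 10 drives behind the switch, but only 2 busy at once:
print(per_drive_share_mb_s(UPLINK, 10, 2))   # each still gets ~3938 MB/s
# all 10 hammering the uplink at once:
print(per_drive_share_mb_s(UPLINK, 10, 10))  # ~788 MB/s each
```

For capacity expansion with mostly idle drives, the oversubscription rarely matters; for sustained parallel workloads it does.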

10. razster ◴[] No.46278499{3}[source]
The MSI motherboard I use has 3, and with a PCIe expansion card installed I have 7 M.2s. There are some expansion cards with 8 M.2 slots. You can also get SATA-to-M.2 adapters, or my favorite: USB-C enclosures that hold 2 M.2 drives. Getting great speeds from that little device.
11. mgerdts ◴[] No.46278707{5}[source]
This article is talking about SATA SSDs, not HDDs. While the NVMe spec does allow for NVMe HDDs, it seems silly to waste even one PCIe lane on an HDD. SATA HDDs continue to make sense.
replies(1): >>46279959 #
12. wtallis ◴[] No.46278729{5}[source]
We used to want to connect SSDs, hard drives and optical drives, all to SATA ports. Now, mainstream PCs only need one type of internal drive. Hard drives and optical drives are solidly out of the mainstream and have been for quite a while, so it's natural that motherboards don't need as many ports.
replies(1): >>46279442 #
13. justsomehnguy ◴[] No.46279442{6}[source]
> Now, mainstream PCs only need one type of internal drive

More so, it would only need one drive. ODDs have been dead for at least 10 years, and most people never need another internal drive at all.

replies(1): >>46282594 #
14. dana321 ◴[] No.46279525{4}[source]
It's not enough if you have four SSDs with 4TB each, for instance.
replies(1): >>46279917 #
15. 0manrho ◴[] No.46279721[source]
SATA just needs to be retired. It's already been replaced; we don't need Yet Another Storage Interface. Considering that consumer I/O chipsets are already implemented such that they take a few (generally 4) upstream lanes of $CurrentGenPCIe to the CPU and bifurcate/multiplex them out (providing USB, SATA, NVMe, etc.), we should just remove the SATA cost/manufacturing overhead entirely and focus on keeping the cost of that PCIe switching/chipset down for consumers (and stop double-stacking chipsets, AMD; motherboards are pricey enough). Or even just integrate better bifurcation support on the CPUs themselves, as some already support it (typically by converting x16 on the "top"/"first" PCIe slot to x4/x4/x4/x4).

Going forward, SAS should just replace SATA wherever NVMe/PCIe is for some reason a problem (e.g. price), even on the consumer side, as it would still support existing legacy SATA devices.

Storage-related interfaces (I'm aware there's some overlap here, but the point is, there are already plenty of options and lots of nuances to deal with; let's not add to them without good reason):

- NVMe PCIe

- M.2 and all of its keys/lengths/clearances

- U.2 (SFF-8639) and U.3 (SFF-TA-1001)

- EDSFF (which is a very large family of things)

- FibreChannel

- SAS and all of its permutations

- Oculink

- MCIO

- Let's not forget USB4/Thunderbolt supporting Tunnelling of PCIe

Obligatory: https://imgs.xkcd.com/comics/standards_2x.png

replies(1): >>46280744 #
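The lane-budget argument above can be sketched with rough, platform-dependent numbers (illustrative only; actual allocations vary by CPU and board):

```python
# Illustrative lane budget for a typical current consumer CPU.
# Figures are approximate and platform-dependent, not from any one datasheet.
cpu_lanes = {
    "x16 slot (GPU, or bifurcated x4/x4/x4/x4)": 16,
    "direct M.2": 4,
    "chipset uplink (shared by USB/SATA/extra M.2)": 4,
}

def drives_if_bifurcated(slot_lanes: int, lanes_per_drive: int = 4) -> int:
    """How many x4 NVMe drives fit if a slot supports bifurcation."""
    return slot_lanes // lanes_per_drive

total = sum(cpu_lanes.values())
print(f"total usable lanes: {total}")                             # 24
print(f"x16 slot bifurcated: {drives_if_bifurcated(16)} drives")  # 4
```

This is why bifurcation support matters more than adding interfaces: the lane count is the real constraint, and every downstream port is carved out of the same small budget.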
16. zamadatix ◴[] No.46279917{5}[source]
Is it not fair to say 4x 4 TB SSDs is an example of at least a prosumer use case (the barrier there is more like ~10 drives before needing workstation/server gear)? Joe Schmoe is doing better than half of Steam gamers if he's rocking a single 2 TB SSD as his primary drive.
17. ComputerGuru ◴[] No.46279959{6}[source]
And I'm saying that assuming M.2 slots are sufficient to replace SATA is folly, because that only accounts for SSDs.

And SATA SSDs do make sense: they are significantly more cost-effective than NVMe and trivial to expand. Compare the simplicity, ease, and cost of building an array/pool of many disks comprised of either 2.5" SATA SSDs or M.2 NVMe, and get back to me when you have a solution that can scale to 8, 14, or 60 disks as easily and cheaply as the SATA option can. There are many cases where the performance of SSDs going over AHCI (or SAS) is plenty and you don't need to pay the cost of going to full-on PCIe lanes per disk.

replies(1): >>46280522 #
18. zamadatix ◴[] No.46280050{3}[source]
On top of what the others have said, any faster interface you replace SATA with will have the same problem set because it's rooted in the total bandwidth to the CPU, not the form factor of the slot.

E.g. going to the suggested U.2 still leaves you looking for PCIe lanes to be available for it.

19. wtallis ◴[] No.46280522{7}[source]
> And SATA SSDs do make sense, they are significantly more cost effective than NVMe

That doesn't seem to be what the vendors think, and they're probably in a better position to know what's selling well and how much it costs to build.

We're probably reaching the point where the up-front costs of qualifying new NAND with old SATA SSD controllers and updating the firmware to properly manage the new NAND is a cost that cannot be recouped by a year or two of sales of an updated SATA SSD.

SATA SSDs are a technological dead end that's no longer economically important for consumer storage or large scale datacenter deployments. The one remaining niche you've pointed to (low-performance storage servers) is not a large enough market to sustain anything like the product ecosystem that existed a decade ago for SATA SSDs.

20. saltcured ◴[] No.46280744[source]
I think it's becoming reasonable to think consumer storage could be a limited number of soldered NVMe and NVMe-over-M.2 slots, complemented by contemporary USB for more expansion. That USB expansion might be some kind of JBOD chassis, whether that is a pile of SATA or additional M.2 drives.

The main problem is having proper translation of device management features, e.g. SMART diagnostics or similar getting back to the host. But from a performance perspective, it seems reasonable to switch to USB once you are multiplexing drives over the same, limited IO channels from the CPU to expand capacity rather than bandwidth.
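A quick sanity check on that point, assuming a USB 3.2 Gen 2 link (10 Gbit/s, 128b/132b encoding) shared by several simultaneously active drives:

```python
def per_drive_usb_mb_s(link_gbit_s: float, encoding: float, n_active: int) -> float:
    """Rough per-drive throughput when n drives share one USB link,
    ignoring protocol overhead beyond line encoding."""
    return link_gbit_s * 1e9 * encoding / 8 / 1e6 / n_active

# USB 3.2 Gen 2, four drives busy at once:
print(per_drive_usb_mb_s(10, 128 / 132, 4))  # ~303 MB/s each
```

That's HDD-class throughput per drive, which is fine for capacity expansion but underlines that USB attachment trades bandwidth for convenience.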

Once you get out of this smallest consumer expansion scenario, I think NAS takes over as the most sensible architecture for small office/home office settings.

Other SAN variants really only make sense in datacenter architectures where you are trying to optimize for very well-defined server/storage traffic patterns.

Is there any drawback to going towards USB for multiplexed storage inside a desktop PC or NAS chassis too? It feels like the days of RAID cards are over, given the desire for host-managed, software-defined storage abstractions.

Does SAS still have some benefit here?

replies(3): >>46280962 #>>46286859 #>>46299852 #
21. wtallis ◴[] No.46280962{3}[source]
I wouldn't trust any USB-attached storage to be reliable enough for anything more than periodic incremental backups and verification scrubs. USB devices disappear from the bus too often for me to want to rely on them for online storage.
replies(1): >>46281691 #
22. Fire-Dragon-DoL ◴[] No.46281544[source]
It's for HDDs. We still use those for massive storage.
23. saltcured ◴[] No.46281691{4}[source]
OK, I see that is a potential downside. I can actually remember way back when we used to see sporadic disconnects and bus resets for IDE drives in Linux and it would recover and keep going.

I wonder what it would take to get the same behavior out of USB as for other "internal" interconnects, i.e. say this is attached storage and do retry/reconnect instead of deciding any ephemeral disconnect is a "removal event"...?

FWIW, I've actually got a 1 TB Samsung "pro" NVMe/M.2 drive in an external case, currently attached to a spare Ryzen-based Thinkpad via USB-C. I'm using it as an alternate boot drive to store and play Linux Steam games. It performs quite well. I'd say it's qualitatively like the OEM internal NVMe drive when doing disk-intensive things, but maybe that is bottlenecked by the Linux LUKS full-disk encryption?

Also, this is essentially a docked desktop setup. There's nothing jostling the USB cable to the SSD.

24. tracker1 ◴[] No.46282571{3}[source]
My desktop motherboard has 4... not sure how many you need, even if 8TB drives are pretty pricey. Though actual PCIe lanes in consumer CPUs are limited; if you bump up to Threadripper, you can use PCIe-to-M.2 adapters to add lots of drives.
25. tracker1 ◴[] No.46282594{7}[source]
Still use an ODD for ripping... that said, I'm using a USB3 Blu-ray writer and it's been fine for what I need.
replies(1): >>46302444 #
26. pdimitar ◴[] No.46286859{3}[source]
As @wtallis already said, a lot of external USB stuff is just unreliable.

Right now I am looking past my display at 4 different USB-A hubs and 3 different enclosures that I am not sure what to do with (I likely can't even sell them; they'd go for like 10-20 EUR and deliveries go for 5 EUR, so why bother; I'll likely just dump them at some point). _All_ of them were marketed as 24/7, not needing cooling, etc. _All_ of them could not last two hours of constant hammering, and it was not even a load at 100% of the bus; more like 60-70%. All began disappearing and reappearing every few minutes (presumably after the overheating subsided).

Additionally, for my future workstation at least I want everything inside. If I get an [e]ATX motherboard and the PC case for it then it would feel like a half-solution if I then have to stack a few drives or NAS-like enclosures at the side. And yeah I don't have a huge villa. Desk space can become a problem and I don't have cabinets or closets / storerooms either.

SATA SSDs fill a very valid niche to this day: quieter, less power-hungry, smaller NAS-like machines. Sure, not mainstream, I get how giants like Samsung think, but to claim they are no longer desirable tech, as many in this thread do, is a bit misinformed.

replies(1): >>46292717 #
27. saltcured ◴[] No.46292717{4}[source]
I recognize the value in some kind of internal expansion once you are talking about an ATX or even uATX board and a desktop chassis. I just wonder if the USB protocol can be hardened for this using some appropriate internal cabling. Is it an intrinsic problem with the controllers and protocol, or more related to the cheap external parts aimed at consumers?

Once you get to uATX and larger, this could potentially be via a PCIe adapter card too, right? For an SSD scenario, I think some multiplexer card full of NVMe M.2 slots makes more sense than trying to stick to an HDD array physical form factor. I think this would effectively be a PCIe switch?

I've used LSI MegaRAID cards in the past to add a bunch of ports to a PC. I combined this with a 5-in-3 disk subsystem in a desktop PC. This is where the old 3x 5.25" drive bay space could be occupied by one subsystem with 5x 3.5" HDD hot-swap trays. I even found out how to re-flash such a card to convert it from RAID to a basic SATA/SAS expander for JBOD service, since I wanted to use OS-based software RAID concepts instead.

replies(1): >>46296149 #
28. pdimitar ◴[] No.46296149{5}[source]
> I just wonder if the USB protocol can be hardened for this using some appropriate internal cabling

Honestly no idea. Should be doable but with personal computing being attacked every year, I would not hold my breath.

> Once you get to uATX and larger, this could potentially be via a PCIe adapter card too, right?

Sure, but then you have to budget your PCIe lanes. And once you get to a certain scale (a very small one in fact) then you have to consider getting a Threadripper board + CPU, and that increases the expense anywhere from 3x to 8x.

I thought about it lately and honestly it's either a Threadripper workstation with all the huge expenses that entails, or I'd probably just settle for an ITX form factor, cram it with 2-3 huge NVMe SSDs (8TB each), have a really good GPU and quiet cooling... and just expand horizontally if I ever need anything else (and make VERY sure it has at least two USB 4 / Thunderbolt ports that don't gimp the bandwidth to your SSDs or GPU so the expansion would be at 100% capacity).

Meaning that going for a classic PC does not make sense if you want an internally expandable workstation. What's the point in a consumer board + a Ryzen 9950X and a big normal PC case if I can't put more than two old-school HDDs in there? Just to have better airflow? Meh. I can put 2-3 Noctua coolers in an ITX case and it might even be quieter.

29. rasz ◴[] No.46299852{3}[source]
USB, even 3.2, doesn't support DMA bus mastering, and thus is bad for anything requiring performance.

USB4 just passes PCIe traffic through and should be fine, but at that point you're paying >$150 per USB4 hub (because mobos have two ports at most) and >$50 per M.2 converter.

30. justsomehnguy ◴[] No.46302444{8}[source]
I'm not even sure if CD/DVDs are sold anywhere anymore. Sure, there are some niche shops out there, along with flea and retro markets, but otherwise...