
SSDs have become fast, except in the cloud

(databasearchitects.blogspot.com)
589 points by greghn
siliconc0w No.39444011
Core counts plus modern NVMe actually make a great case for moving away from the cloud. Before, the argument was "your data probably fits in memory". These drives are so fast that they're close enough to memory that now it's "your data surely fits on disk". That reduces the complexity of a lot of workloads: you can just buy a beefy server and do pretty insane caching/calculation/serving with a single box, or two for redundancy.
malfist No.39444175
I keep hearing that, but it's simply not true. SSDs are fast, but they're still several orders of magnitude slower than RAM, which is in turn orders of magnitude slower than CPU cache.

A Samsung 990 Pro 2TB has a read latency of about 40 μs.

DDR4-2133 with CAS 15 has a latency of about 14 nanoseconds.

DDR4's latency is about 0.035% of one of the fastest SSDs', or to put it another way, DDR4 is roughly 2,857x faster than an SSD.

L1 cache is typically accessible in 4 clock cycles; on a 4.8 GHz CPU like the i7-10700, that puts L1 latency under 1 ns.
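The arithmetic above can be checked back-of-the-envelope style. This sketch derives the DDR4 figure from the clock rate and CAS cycles (using the assumed values from this comment: DDR4-2133, CAS 15, a ~40 μs SSD read):

```python
def cas_latency_ns(transfer_rate_mt_s: float, cas_cycles: int) -> float:
    """DDR4's I/O clock runs at half the transfer rate (2133 MT/s ->
    ~1066.5 MHz); CAS latency is counted in cycles of that clock."""
    clock_hz = transfer_rate_mt_s * 1e6 / 2
    return cas_cycles / clock_hz * 1e9

ddr4_ns = cas_latency_ns(2133, 15)   # ~14.1 ns
ssd_ns = 40_000                      # 40 us expressed in ns

print(f"DDR4 CAS latency: {ddr4_ns:.1f} ns")
print(f"SSD / DDR4 ratio: {ssd_ns / ddr4_ns:.0f}x")
```

The ratio comes out near 2,800–2,900x depending on whether you round the DDR4 latency to 14 ns first.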

jltsiren No.39448236
RAM is not as fast in practice as the specs claim, because there is a lot of overhead in accessing it. I did some latency benchmarking on my M2 Max MBP when I got it last year. As long as the working set fits in L1 cache, read latency is ~2 ns. Then it starts increasing slowly, reaching ~10 ns at 10 MiB. Then there is a rapid rise to ~100 ns at 100 MiB, followed by slow growth until ~10 GiB. Then the latency starts increasing rapidly again, reaching ~330 ns at 64 GiB.
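The benchmark described above is a classic pointer chase: make each load depend on the previous one so the hardware can't hide latency, then grow the working set past each cache level. A minimal sketch of the idea (interpreter overhead inflates the absolute numbers, so a compiled language is needed for figures like the M2 Max ones; the sizes and iteration count here are assumptions):

```python
import random
import time

def chase_ns_per_access(n_slots: int, accesses: int = 1_000_000) -> float:
    """Walk a random cyclic permutation of n_slots indices; every load
    depends on the previous result, serializing the memory accesses."""
    perm = list(range(n_slots))
    random.shuffle(perm)
    next_idx = [0] * n_slots
    # Link the shuffled slots into a single cycle.
    for a, b in zip(perm, perm[1:] + perm[:1]):
        next_idx[a] = b
    i = 0
    start = time.perf_counter()
    for _ in range(accesses):
        i = next_idx[i]
    elapsed = time.perf_counter() - start
    return elapsed / accesses * 1e9

for slots in (1 << 10, 1 << 16, 1 << 22):
    print(f"{slots:>8} slots: {chase_ns_per_access(slots):6.1f} ns/access")
```

With a large enough working set, the per-access time climbs as the chain spills out of L1, then L2, then the last-level cache, which is the curve the comment describes.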