
SSDs have become fast, except in the cloud

(databasearchitects.blogspot.com)
589 points | greghn
siliconc0w ◴[] No.39444011[source]
Core count plus modern NVMe actually make a great case for moving away from the cloud. Before, the pitch was "your data probably fits in memory"; these drives are close enough to memory that it's now "your data surely fits on disk". That reduces the complexity of a lot of workloads, so you can do pretty insane caching/calculation/serving by just buying a beefy server, with a single box or two for redundancy.
replies(3): >>39444040 #>>39444175 #>>39444225 #
malfist ◴[] No.39444175[source]
I keep hearing that, but that's simply not true. SSDs are fast, but they're several orders of magnitude slower than RAM, which is orders of magnitude slower than CPU Cache.

Samsung 990 Pro 2TB has a latency of 40 μs

DDR4-2133 with a CAS latency of 15 has a latency of about 14 nanoseconds.

DDR4 latency is 0.035% of that of one of the fastest SSDs, or to put it another way, DDR4 is roughly 2,857x faster than the SSD.

L1 cache is typically accessible in 4 clock cycles; on a 4.8 GHz CPU like the i7-10700, L1 cache latency is sub-1 ns.
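As a sanity check on the arithmetic, here is a back-of-envelope sketch using the figures quoted above (the 40 μs SSD number and the DDR4-2133 CAS 15 timing):

```python
# Latency figures quoted in the thread, reduced to nanoseconds.
ssd_ns = 40_000                    # Samsung 990 Pro read latency, ~40 us
ddr4_clock_ghz = 2133 / 2 / 1000   # DDR4-2133 does 2 transfers per clock
ddr4_ns = 15 / ddr4_clock_ghz      # CAS 15 cycles at ~1.07 GHz -> ~14 ns
l1_ns = 4 / 4.8                    # 4 cycles at 4.8 GHz -> ~0.83 ns

print(f"DDR4 CAS latency:  {ddr4_ns:.1f} ns")
print(f"SSD vs DDR4 ratio: {ssd_ns / ddr4_ns:.0f}x")
print(f"L1 latency:        {l1_ns:.2f} ns")
```

The ratio lands in the ~2,800x range, consistent with the "orders of magnitude" framing above.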

replies(5): >>39444275 #>>39444384 #>>39447096 #>>39448236 #>>39453512 #
BackBlast ◴[] No.39447096[source]
You're missing the purpose of the cache. At least for this argument, it's mostly for caching network responses.

HDD was 10 ms, which was noticeable for a cached network request that needs to go back out on the wire. It was also bottlenecked by IOPS: after 100-150 IOPS you were done. You could do a bit better with RAID, but not the 2-3 orders of magnitude you really needed to be an effective cache. So it just couldn't work as a serious cache, and the next step up was RAM. This is the operational environment in which Redis and similar memory caches evolved.

40 μs latency is fine for caching. Even the 500-600 μs latency under high load is fine for caching network requests. You can buy individual drives with >1 million read IOPS, plenty for a good cache. HDDs couldn't fit the bill for the reasons above. RAM is faster, no question, but RAM's latency advantage over the SSD isn't really helping performance here, since the network latency dominates.
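To see why the network dominates, plug in some illustrative numbers (the round-trip and lookup figures below are assumptions for the sketch, not measurements):

```python
# End-to-end latency of a cache hit that must cross the network anyway.
network_rtt_us = 500   # assumed same-datacenter round trip, order of magnitude
ram_lookup_us = 0.1    # assumed in-memory lookup cost
ssd_lookup_us = 40     # NVMe read latency quoted above

ram_total = network_rtt_us + ram_lookup_us
ssd_total = network_rtt_us + ssd_lookup_us
print(f"RAM-backed cache hit: {ram_total:.0f} us")
print(f"SSD-backed cache hit: {ssd_total:.0f} us")
print(f"End-to-end slowdown:  {ssd_total / ram_total:.2f}x")
```

With the network in the path, the raw ~2,800x latency gap between RAM and SSD collapses to single-digit percent on the response time the client actually sees.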

A 2023 Rails conference talk mentions this. They moved from a memory-based cache system to an SSD-based one: the Redis RAM-based system's latency was 0.8 ms and the SSD-based system's was 1.2 ms for a known workload, which is fine. It saves you a couple of orders of magnitude on cost, and you can do much, much larger and more aggressive caching with the extra space.
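A rough sketch of that cost gap, using assumed street prices per GB (the figures are illustrative, not from the talk):

```python
# Cost-per-gigabyte comparison; both prices are rough assumptions.
ram_usd_per_gb = 3.00    # server DDR4, order-of-magnitude assumption
nvme_usd_per_gb = 0.08   # consumer NVMe, order-of-magnitude assumption

ratio = ram_usd_per_gb / nvme_usd_per_gb
print(f"RAM costs roughly {ratio:.0f}x more per GB than NVMe")
```

At that ratio, the same budget buys tens of times more cache capacity on flash, which is the "larger and more aggressive caching" argument in dollar terms.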

Oftentimes these RAM caching servers are a network hop away anyway, or at least a loopback TCP request, which makes comparing SSD latency to RAM latency largely irrelevant.