
63 points | trelane | 1 comment
dusted ◴[] No.42166600[source]
It just dawned on me how trivially simple it would be for memory controllers to implement ECC on UDIMMs: for every N words, reserve one word for parity. You gain ECC for a small decrease in capacity. Since the memory controller is on the CPU, it can easily abstract this away.
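A toy sketch of the address remapping such a controller might do. N=8, the 64-bit word size, and the helper names are my assumptions for illustration, not any real controller's scheme; real in-band ECC implementations pick their own ratio and layout.

```python
N = 8          # data words per parity word (assumed ratio)
WORD = 8       # word size in bytes (assumed 64-bit words)

def to_physical(data_addr):
    """Map a data-space byte address to a physical DRAM byte address,
    skipping over the words reserved for parity."""
    word = data_addr // WORD
    group, offset = divmod(word, N)
    # each group of N data words occupies N+1 physical words
    phys_word = group * (N + 1) + offset
    return phys_word * WORD + data_addr % WORD

def parity_addr(data_addr):
    """Physical byte address of the parity word covering data_addr."""
    group = (data_addr // WORD) // N
    return (group * (N + 1) + N) * WORD
```

With this layout the capacity cost is 1/(N+1), about 11% for N=8, and every data access implies a second physical location holding its check word.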
replies(2): >>42166695 #>>42173924 #
kvemkon ◴[] No.42166695[source]
Indeed. Intel has recently implemented it in a low-cost CPU SoC: "in-band ECC".

https://news.ycombinator.com/item?id=41090956

But you don't only lose some capacity. Some bandwidth is also lost, and perhaps even some CPU cycles, since in-band ECC likely hasn't been implemented purely in a hard IP block.

replies(1): >>42167425 #
wtallis ◴[] No.42167425[source]
I think the bigger performance problem is that a read burst from one channel of RAM is no longer matched to the CPU cache line size when doing in-band ECC.
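A quick back-of-envelope of why this mismatch matters, using assumed DDR5-like numbers (not measurements): one burst normally delivers exactly one cache line, leaving no slack for check bits, so in-band ECC needs an extra access to a separate location.

```python
bus_bits = 32        # one DDR5 subchannel width (assumption)
burst_len = 16       # DDR5 burst length BL16 (assumption)

# bytes delivered by a single burst
line = bus_bits // 8 * burst_len   # 4 bytes/transfer * 16 transfers

# This equals a typical 64-byte CPU cache line, so without in-band ECC
# a line fill is exactly one burst. With in-band ECC the check bits
# live elsewhere in DRAM, so a fill can require a second burst for
# the parity word -- up to 2x the DRAM transactions if the parity
# isn't already cached in the controller.
```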
replies(2): >>42170293 #>>42194699 #
dusted ◴[] No.42170293[source]
This is true. However, with the readahead CPUs usually do anyway, I don't think it's even that bad. There is definitely a performance and capacity cost, but technically that capacity cost is also present in ECC memory: the extra memory is still there, it's just not printed on the label, and the stick is more expensive instead.

The CPU cache won't be mismatched, though, since the memory controller can mask this. The performance hit comes from the memory controller having to do extra reads for the parity data.

That will be a tiny mismatch, and I wonder if the performance impact isn't more or less equal to the difference we already see between buffered and unbuffered memory: roughly the same "extra work", just moved from inside the DIMM to the memory controller.