
63 points trelane | 1 comments | source
dusted ◴[] No.42166600[source]
it just dawned on me how trivially simple it would be for memory controllers to implement ECC on UDIMMs: for every N words, reserve one word for parity bits. You gain ECC for a small decrease in capacity. Since the memory controller is on the CPU, it can easily abstract this away.
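A back-of-the-envelope sketch of what that abstraction could look like (the N=8 layout and helper names are illustrative, not any shipping controller's design):

```python
# Hypothetical address remapping for in-band ECC: out of every N+1 physical
# words, N hold data and 1 holds the ECC/parity word. Layout assumed, not
# taken from any real memory controller.

N = 8  # data words per ECC word (12.5% capacity overhead)

def data_to_physical(logical_word: int) -> int:
    """Map a logical data-word address to its physical address,
    skipping over the reserved ECC words."""
    group = logical_word // N
    offset = logical_word % N
    return group * (N + 1) + offset

def ecc_word_for(logical_word: int) -> int:
    """Physical address of the ECC word covering this data word."""
    group = logical_word // N
    return group * (N + 1) + N

# Usable capacity of a 16 GiB module under this scheme:
total_words = 16 * 2**30 // 8          # 64-bit words
usable = total_words * N // (N + 1)    # ~14.2 GiB of data remains
```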
replies(2): >>42166695 #>>42173924 #
kvemkon ◴[] No.42166695[source]
Indeed. Intel has recently implemented it in a low-cost CPU SoC: "in-band ECC".

https://news.ycombinator.com/item?id=41090956

But you not only lose some capacity: some bandwidth is also lost, and perhaps even some CPU cycles, since in-band ECC has likely not been implemented purely in a hard IP block.

replies(1): >>42167425 #
wtallis ◴[] No.42167425[source]
I think the bigger performance problem is that a read burst from one channel of RAM is no longer matched to the CPU cache line size when doing in-band ECC.
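Illustrative arithmetic (DDR4-style numbers assumed): a burst is sized to fill exactly one cache line, so the ECC bytes for that line necessarily come from a separate access:

```python
# Rough numbers behind the mismatch (illustrative, not from the thread):
# a DDR4 channel delivers bus_bits * burst_len bits per burst, sized to
# fill one CPU cache line exactly.
bus_bits, burst_len, line_bytes = 64, 8, 64
burst_bytes = bus_bits * burst_len // 8   # 64 bytes: exactly one line
ecc_bytes = line_bytes // 8               # 8 ECC bytes at a 1/8 ratio
# With in-band ECC those 8 bytes live elsewhere in the DRAM, so a miss can
# cost a second burst just to fetch them -- unless a cache absorbs it.
```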
replies(2): >>42170293 #>>42194699 #
adrian_b ◴[] No.42194699[source]
The chips with in-band ECC have a separate dedicated cache for storing ECC codes, which are stored in another part of the memory chip, not inline with the corresponding cache line that stores data.

So the burst transfers have the same size as when ECC is disabled.

Without the special cache, the number of memory accesses would double, for data and for the extra ECC bits, which would not be acceptable. With the ECC cache, in many cases the reading and writing of the extra ECC bits can be avoided.
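A toy model of why the ECC cache pays off (cache size and ECC ratio are my assumptions): at a 1/8 ratio, one cached 64-byte block of ECC codes covers 8 adjacent data lines, so sequential traffic rarely pays the extra access while scattered traffic nearly doubles it:

```python
# Toy simulator: count DRAM bursts for a trace of cache-line reads, with a
# tiny LRU cache holding recently used ECC blocks. Parameters are
# illustrative, not measured from any real chip.
from collections import OrderedDict
import random

LINES_PER_ECC_BLOCK = 8  # one ECC block covers 8 data lines (1/8 ratio)

def dram_accesses(line_addrs, ecc_cache_blocks=4):
    cache = OrderedDict()
    accesses = 0
    for line in line_addrs:
        accesses += 1                      # the data burst itself
        ecc_block = line // LINES_PER_ECC_BLOCK
        if ecc_block in cache:
            cache.move_to_end(ecc_block)   # ECC hit: no extra burst
        else:
            accesses += 1                  # ECC miss: one extra burst
            cache[ecc_block] = True
            if len(cache) > ecc_cache_blocks:
                cache.popitem(last=False)  # evict least recently used
    return accesses

sequential = dram_accesses(range(64))      # 64 data + 8 ECC = 72 bursts
random.seed(0)
scattered = dram_accesses(random.sample(range(10_000), 64))  # close to 2x
```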

A few benchmarks of in-band ECC have been published. The performance loss depends on the ECC cache hit rate, so it varies a lot from program to program. In some cases the speed is lower by only a couple of percent, but for some applications the performance loss can be as high as 20% or 30%.