
163 points mariuz | 2 comments
mwkaufma ◴[] No.43618250[source]
On ABZÛ we ripped out multiplayer and lots of other cruft from AActor and UPrimitiveComponent - dropping built-in overlap events, which are a kind of anti-pattern anyway, not only saved RAM but also cut a lot of ghost reads/writes.
replies(6): >>43618358 #>>43618532 #>>43619035 #>>43619088 #>>43621863 #>>43624533 #
pwdisswordfishz ◴[] No.43621863[source]
What's a ghost read? Search engines are failing me.
replies(4): >>43621972 #>>43621981 #>>43622956 #>>43625739 #
1. Veliladon ◴[] No.43625739[source]
There are two main things. First, when you load data into a register from an address in memory, the CPU fetches 64-byte cache lines, not words or bytes. AActor, for instance, is 1016 bytes: 16 cache lines. It's freaking huge.

So let's say you're going through all the actors and updating one thing. If those actors are in an array it's easy: just a for loop, update the member variable, done. Easy, fast, should be performant, right? But each time you update one of those, the prefetcher is also bringing in extra lines (more data in the object), thinking you might need them next. So if you're only updating a single thing, or a couple of things on different cache lines, you might bring in 3-8x the data you actually need.

CPU prefetchers have something called stride detectors, which can recognize access patterns that step by a fixed amount and stop the prefetcher from grabbing additional lines, but at 16 cache lines the AActor object is way too big for the stride detector to keep up. So you stride through memory in gaps of 16 cache lines at a time and still get 2-3 extra cache lines fetched after each initial access.

Secondly, a 1016-byte object just doesn't fit cleanly. It's word-aligned, but it's not cache-line aligned, and it's sure as hell not page-aligned.

Best case scenario, you're updating two variables next to each other in memory and the prefetcher gets both on the same cache line. Medium case scenario, the prefetcher has to grab the next line every so often. You'll get the best case most often and the medium case rarely.

Bad case scenario, the prefetcher has to grab the next cache line on the NEXT PAGE. Prefetching across a page boundary only recently became a thing on CPUs, and it involves translating the virtual address of the next page to its physical address, which takes forever in data-access terms: a bunch of pointer chasing through the page tables, basically a few thousand clocks of waiting.

The absolute worst case scenario is that the prefetcher thinks you need the next cache line, it's on the next page, it does the whole rigamarole of translating the next page's virtual address, and you don't actually need it. You've done two orders of magnitude more work than reading a single variable, for literally nothing.

So yeah. The prefetcher can do some weird-ass shit when you throw weird, massive data structs at it. Slashing and burning the size down helps because the stride detector can start functioning again once the object is small enough. If it can be kept to a multiple of 64 bytes you even get page aligned again.

replies(1): >>43627531 #
2. Veliladon ◴[] No.43627531[source]
*a power-of-two multiple of 64 bytes

If it can be kept to a power-of-two multiple of 64 bytes (a size that divides the 4 KiB page evenly), you even get page aligned again.