
163 points mariuz | 3 comments
bengarney No.43617928
Really interesting analysis of where the data lives… cutting 3-4 textures would save you more memory even in the 100k actor case, though.
replies(2): >>43618098 #>>43618115 #
1. reitzensteinm No.43618098
Depending on several factors, such as how this data is accessed and how actors are laid out in memory, it may be more cache-friendly, which could yield substantial speedups.

Or it could do next to nothing, as the data is multiple cache lines long anyway.
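
A minimal C++ sketch of the idea (field names and sizes invented for illustration, not Unreal's actual AActor layout): cutting a block of per-actor data changes how many cache lines each actor spans, which is where any win would come from.

    // Hypothetical sketch: how trimming per-actor data changes the
    // cache-line footprint. Not Unreal's real AActor layout.
    #include <cstdio>

    struct FatActor {
        float pos[3], vel[3], rot[4];
        unsigned char netState[256];  // e.g. multiplayer/replication data
        int flags;
    };

    struct TrimmedActor {             // same data minus the big block
        float pos[3], vel[3], rot[4];
        int flags;
    };

    int main() {
        // With 64-byte cache lines, FatActor spans ~5 lines,
        // TrimmedActor fits in 1.
        std::printf("FatActor: %zu bytes (%zu lines)\n",
                    sizeof(FatActor), (sizeof(FatActor) + 63) / 64);
        std::printf("TrimmedActor: %zu bytes (%zu lines)\n",
                    sizeof(TrimmedActor), (sizeof(TrimmedActor) + 63) / 64);
    }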

replies(1): >>43622853 #
2. bengarney No.43622853
I would not expect much, but you'd have to measure to be sure.

If you actually have a million of something, you're better off writing a custom manager to handle the bulk of the work anyway. For instance, in a brick-building game where users might place a million bricks, maybe you want each brick to be an Actor for certain use cases, but you'd want to centralize all the collision, rendering, and update logic. (This is what I did on a project with this exact use case, and it worked nicely.)
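
A rough C++ sketch of that manager pattern (BrickManager and its fields are hypothetical names, not from the project mentioned): one system owns contiguous bulk storage and runs the shared update loop, while individual Actors would hold only a handle into it for the cases that genuinely need Actor semantics.

    // Hypothetical sketch of a centralized manager: one system owns the
    // bulk data for every brick instead of a full Actor per brick.
    #include <cstddef>
    #include <vector>

    struct BrickManager {
        // Structure-of-arrays: contiguous, cache-friendly bulk storage.
        std::vector<float> posX, posY, posZ;
        std::vector<int>   material;

        size_t Add(float x, float y, float z, int mat) {
            posX.push_back(x); posY.push_back(y); posZ.push_back(z);
            material.push_back(mat);
            return posX.size() - 1;   // handle callers can keep
        }

        // One tight loop updates every brick; collision and rendering
        // would batch over the same arrays.
        void Update(float dt) {
            for (size_t i = 0; i < posY.size(); ++i)
                posY[i] -= 9.8f * dt; // e.g. settle unsupported bricks
        }
    };

    int main() {
        BrickManager mgr;
        mgr.Add(0.f, 10.f, 0.f, /*mat=*/1);
        mgr.Update(0.016f);           // one 60 Hz tick
    }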

replies(1): >>43627935 #
3. reitzensteinm No.43627935
I wouldn't expect much either. The potential for speedups would be if there's locality in the data on either side of the multiplayer padding, or if the actors have a contiguous layout and deleting the data plays better with the CPU's stride prefetcher.
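
To illustrate the stride point, a small C++ sketch (types invented for illustration): a sequential scan advances by sizeof(T) per element, so shrinking the struct shrinks the stride and lets the hardware prefetcher stay ahead.

    // Hypothetical sketch: a sequential scan's stride is sizeof(T),
    // so a smaller struct means far fewer bytes pulled per element.
    #include <vector>

    struct Fat  { float x, y, z; char netPad[244]; }; // 256-byte stride
    struct Lean { float x, y, z; };                   //  12-byte stride

    template <class T>
    float SumX(const std::vector<T>& v) {
        float s = 0.0f;
        for (const T& e : v) s += e.x;  // stride = sizeof(T) per step
        return s;
    }

    int main() {
        std::vector<Fat>  fat(1 << 18);   // scans ~64 MB
        std::vector<Lean> lean(1 << 18);  // scans ~3 MB
        return (int)(SumX(fat) + SumX(lean));
    }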

Significant performance degradation is also possible if at some point a smart (but not wise) developer positioned the data to eliminate false sharing on either side.
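
A minimal C++ sketch of the false-sharing hazard being described (names invented, not from the engine): if two hot counters share a 64-byte line, the line ping-pongs between cores, and alignas(64) padding is the kind of fix a developer might have applied that deleting "unused" bytes would silently undo.

    // Hypothetical sketch: two threads bump adjacent counters. If both
    // land on one 64-byte line, the line bounces between cores;
    // alignas(64) keeps them apart.
    #include <atomic>
    #include <thread>

    struct Unpadded { std::atomic<long> a{0}, b{0}; };  // share a line
    struct Padded {
        alignas(64) std::atomic<long> a{0};
        alignas(64) std::atomic<long> b{0};             // own line each
    };

    template <class T>
    void hammer(T& c) {
        std::thread t1([&] { for (int i = 0; i < 10'000'000; ++i) c.a++; });
        std::thread t2([&] { for (int i = 0; i < 10'000'000; ++i) c.b++; });
        t1.join(); t2.join();
    }

    int main() {
        Unpadded u; hammer(u);  // typically measurably slower...
        Padded   p; hammer(p);  // ...than the padded version
    }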

Agreed that you shouldn't be using this heavyweight paradigm with large numbers of entities. My intention was just to add a bit of color to the idea that saving memory allocations can have implications beyond just the number of bytes you ultimately malloc.