1. bengarney No.43617928
Really interesting analysis of where the data lives… though cutting 3-4 textures would save you more memory than this, even in the 100k actor case.
replies(2): >>43618098, >>43618115
2. reitzensteinm No.43618098
Depending on a number of factors, like how this data is accessed and how actors are laid out in memory, it may be more cache friendly, which could yield substantial speedups.

Or it could do next to nothing, as the data is multiple cache lines long anyway.

replies(1): >>43622853
3. cma No.43618115
If the memory savings he got were fully read every frame, or fragmented across cache lines shared with other data that is read every frame (not likely for static world actors), they could amount to ~10% of CPU memory bandwidth per frame at 120 Hz on an LPDDR4 phone.
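As a rough back-of-envelope (every number here is an assumption for illustration, not a figure from the article):

    #include <cstdio>

    int main() {
        const double saved_per_actor = 220.0;    // bytes saved per actor (assumed)
        const double actors          = 100000.0; // actor count (assumed)
        const double hz              = 120.0;    // frames per second
        const double peak_bw         = 25.6e9;   // bytes/sec, ~LPDDR4-3200 peak (assumed)

        const double traffic = saved_per_actor * actors * hz; // bytes/sec of avoided reads
        printf("%.2f GB/s = %.1f%% of peak\n",
               traffic / 1e9, 100.0 * traffic / peak_bw);     // ~2.6 GB/s, ~10%
    }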

A big problem with actors is that they are so heavyweight you can only spawn a few per frame before causing hitches, so you have to use pools or instancing to manage things like bullets.

I think in their Robo Recall talk they found they could only spawn 10-20 projectile-style bullets per frame before running into hitches, and switched to pooling and recycling them.
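For reference, a minimal sketch of that pool-and-recycle pattern in plain C++ (the class and field names are made up for illustration; Epic's actual implementation isn't shown in the talk):

    #include <cstddef>
    #include <vector>

    struct Projectile {
        bool  active = false;
        float pos[3] = {}, vel[3] = {};
    };

    // All projectiles are allocated up front, so "spawning" a bullet is
    // flipping a flag instead of a heavyweight actor spawn.
    class ProjectilePool {
        std::vector<Projectile> slots;
    public:
        explicit ProjectilePool(std::size_t n) : slots(n) {}

        Projectile* acquire() {
            for (auto& p : slots)
                if (!p.active) { p.active = true; return &p; }
            return nullptr; // pool exhausted; caller drops or delays the shot
        }

        void release(Projectile* p) { p->active = false; } // recycle, never free
    };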

replies(2): >>43619650, >>43623175
4. teamonkey No.43619650
Pooling is pretty standard practice, though; it would be the go-to solution for any experienced gameplay programmer dealing with more than a dozen entities (annoyingly, there isn’t a standardised way of doing it in Blueprint).
replies(2): >>43619995, >>43625933
5. dijit No.43619995
To be completely fair though, blueprints themselves are oft-maligned for performance.

They're fantastic for prototyping, but once some kind of hot path emerges, most people start converting blueprints to native code as an optimisation.

In such a scenario, adding pooling becomes a trivial part of the effort.
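That conversion often amounts to moving the hot loop behind a single Blueprint-callable node, so designers keep their graph while the inner work runs natively. A sketch, with hypothetical class and function names (UCLASS/UFUNCTION are the real Unreal macros):

    // ProjectileManager.h (hypothetical file and class)
    #include "CoreMinimal.h"
    #include "GameFramework/Actor.h"
    #include "ProjectileManager.generated.h"

    UCLASS()
    class AProjectileManager : public AActor {
        GENERATED_BODY()
    public:
        // Formerly a Blueprint graph run per tick; now Blueprint calls
        // this single node and the loop body runs in C++.
        UFUNCTION(BlueprintCallable, Category = "Projectiles")
        void TickProjectiles(float DeltaSeconds);
    };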

6. bengarney No.43622853
I would not expect much, but you'd have to measure to be sure.

If you actually have a million of something, you're better off writing a custom manager to handle the bulk of the work anyway. For instance, if you're doing a brick-building game where users might place a million bricks: maybe you want each brick to be an Actor for certain use cases, but you'd want to centralize all the collision, rendering, and update logic. (This is what I did on a project with this exact use case and it worked nicely.)
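A sketch of that centralization in Unreal terms, assuming UInstancedStaticMeshComponent handles the rendering side (the manager class is hypothetical; the component and its AddInstance call are real engine API):

    // BrickManager.h (hypothetical): one actor owns every brick, so a
    // million bricks cost one component plus a flat array rather than
    // a million full AActors.
    #include "CoreMinimal.h"
    #include "GameFramework/Actor.h"
    #include "Components/InstancedStaticMeshComponent.h"
    #include "BrickManager.generated.h"

    UCLASS()
    class ABrickManager : public AActor {
        GENERATED_BODY()
    public:
        // One instanced mesh component draws every brick in a few calls.
        UPROPERTY(VisibleAnywhere)
        UInstancedStaticMeshComponent* BrickMesh;

        // Flat per-brick data; collision/update code iterates this
        // contiguous array instead of dispatching a virtual Tick per actor.
        TArray<FTransform> BrickTransforms;

        int32 AddBrick(const FTransform& T) {
            BrickTransforms.Add(T);
            return BrickMesh->AddInstance(T); // returns the instance index
        }
    };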

replies(1): >>43627935
7. Pxtl No.43623175
I've never played with UE, so I'm kinda shocked to learn that there isn't built-in pooling for objects with this kind of creation cost.
8. cma No.43625933
Standard practice, but it caught Epic by surprise. You wouldn't think it would be needed at such small numbers, or on 3+ GHz machines.
9. reitzensteinm No.43627935
I wouldn't expect much either. The potential for speedups is there if there's locality in the data on either side of the multiplayer padding, or if the actors have a contiguous layout and removing the data plays better with the CPU's stride prefetcher.

Significant performance degradation is also possible if, at some point, a smart (but not wise) developer positioned the data to eliminate false sharing on either side.
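To illustrate the false-sharing point in plain C++ (a toy example, nothing engine-specific): with the alignas padding below, each counter sits on its own 64-byte cache line; pack them together to "save memory" and the two cores ping-pong a single line between them.

    #include <atomic>
    #include <thread>

    struct Counters {
        alignas(64) std::atomic<long> a{0}; // own cache line
        alignas(64) std::atomic<long> b{0}; // own cache line
    };

    int main() {
        Counters c;
        std::thread t1([&] { for (int i = 0; i < 10000000; ++i) c.a++; });
        std::thread t2([&] { for (int i = 0; i < 10000000; ++i) c.b++; });
        t1.join(); t2.join();
    }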

Agreed that you shouldn't be using this heavyweight paradigm with large numbers of entities. My intention was just to add a bit of color to the idea that saving memory allocations can have implications beyond just the number of bytes you ultimately malloc.