Or it could do next to nothing, as the data is multiple cache lines long anyway.
A big problem with them is that they're so heavyweight you can only spawn a few per frame before causing hitches, so you end up needing pools or instancing to manage things like bullets.
I think in their Robo Recall talk they found they could only spawn 10-20 projectile-style bullets per frame before running into hitches, and switched to pooling and recycling them.
They're fantastic for prototyping, but once some kind of hot path emerges, most people start converting Blueprints to code as an optimisation.
At that point, adding pooling becomes a trivial part of the effort.
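For anyone unfamiliar with the pattern, here's a minimal sketch of what such a pool looks like. This is plain C++ rather than actual Unreal code, and Projectile, Acquire, and Release are illustrative names, not engine API:

    #include <cstddef>
    #include <vector>

    // Hypothetical projectile type; in Unreal this would be an AActor subclass.
    struct Projectile {
        bool active = false;
        void Activate()   { active = true;  /* reset position, velocity, timer... */ }
        void Deactivate() { active = false; /* hide, disable collision and tick */ }
    };

    // Pay the allocation cost once up front, then recycle instead of
    // spawning and destroying an actor per shot.
    class ProjectilePool {
    public:
        explicit ProjectilePool(std::size_t capacity) : pool_(capacity) {}

        Projectile* Acquire() {
            for (Projectile& p : pool_)   // linear scan is fine at typical pool sizes
                if (!p.active) { p.Activate(); return &p; }
            return nullptr;               // exhausted: caller drops the shot or grows the pool
        }

        void Release(Projectile* p) { p->Deactivate(); }

    private:
        std::vector<Projectile> pool_;
    };

In Unreal specifically you'd typically pre-spawn the actors at load time and toggle their state on reuse (SetActorHiddenInGame, SetActorEnableCollision, SetActorTickEnabled) rather than calling SpawnActor and Destroy per shot.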
If you actually have a million of something, you're better off writing a custom manager to handle the bulk of the work anyway. For instance, in a brick-building game where users might place a million bricks, maybe you want each brick to be an Actor for certain use cases, but you'd want to centralize all the collision, rendering, and update logic. (This is what I did on a project with this exact use case and it worked nicely.)
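To make "centralize the logic" concrete, here's an illustrative sketch of the shape of such a manager, assuming a struct-of-arrays layout (the names are hypothetical, not from the actual project):

    #include <cstddef>
    #include <vector>

    // One manager owns the bulk data for every brick, laid out contiguously,
    // instead of a million Actors each carrying their own transform and tick.
    struct BrickManager {
        // Struct-of-arrays: each property is a tight, cache-friendly array.
        std::vector<float> posX, posY, posZ;
        std::vector<int>   material;

        std::size_t AddBrick(float x, float y, float z, int mat) {
            posX.push_back(x); posY.push_back(y); posZ.push_back(z);
            material.push_back(mat);
            return posX.size() - 1;  // the index doubles as a lightweight handle
        }

        // One tight linear pass replaces a million individual Actor ticks.
        void Update(float dt) {
            for (std::size_t i = 0; i < posZ.size(); ++i)
                if (posZ[i] > 0.0f) posZ[i] -= dt;  // toy "settle" step standing in for real work
        }
    };

On the rendering side in Unreal, a manager like this usually feeds something like UInstancedStaticMeshComponent so a million bricks become a handful of draw calls.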
Significant performance degradation is also possible if, at some point, a smart (but not wise) developer had deliberately positioned the data to eliminate false sharing on either side.
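For readers who haven't hit this: false sharing is when logically independent data touched by different threads lands on the same cache line, so every write by one core invalidates the other core's copy of that line. The usual fix is alignment/padding, which a later "save some bytes" refactor can silently undo. A minimal sketch, assuming a 64-byte line size (std::hardware_destructive_interference_size is the portable C++17 spelling):

    #include <atomic>
    #include <thread>

    // Packed tightly, these two counters would share one cache line and every
    // increment would ping-pong that line between cores. alignas(64) puts each
    // on its own (assumed 64-byte) line; sizeof is padded up to match.
    struct alignas(64) PaddedCounter {
        std::atomic<long> value{0};
    };

    int main() {
        PaddedCounter counters[2];

        auto work = [&counters](int i) {
            for (long n = 0; n < 10000000; ++n)
                counters[i].value.fetch_add(1, std::memory_order_relaxed);
        };

        std::thread a(work, 0), b(work, 1);
        a.join();
        b.join();
        // A later "memory saving" pass that strips the alignment to pack the
        // counters back together reintroduces the contention, costing far more
        // in wall-clock time than the 64 bytes it recovers.
        return 0;
    }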
Agreed that you shouldn't be using this heavyweight paradigm with large numbers of entities. My intention was just to add a bit of color to the idea that saving memory allocations can have implications beyond just the number of bytes you ultimately malloc.