
95 points ingve | 2 comments
mkw5053 ◴[] No.44567439[source]
I used to work closely with the Android team at Unity, and in my experience, shifting large native codebases to a new page size often uncovers subtle runtime assumptions beyond just replacing hardcoded constants like PAGE_SIZE. I'm optimistic Google's tooling will help a lot, but I'm curious how effectively it catches the more nuanced compatibility issues, like custom allocators or memory pooling tuned for 4K boundaries.
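To make the "hardcoded constants" point concrete, here's a toy C sketch (hypothetical, not Unity's actual code; ASSUMED_PAGE_SIZE is a made-up name) of the kind of baked-in assumption that static scanning can flag but that only shows up at runtime on a 16K-page device:

    /* Hypothetical sketch: a pool "tuned for 4K boundaries" keeps working on
     * 16K-page devices only if the page size is queried at runtime instead of
     * baked in as a constant. */
    #include <stdio.h>
    #include <unistd.h>

    #define ASSUMED_PAGE_SIZE 4096   /* the kind of constant that hides in old code */

    int main(void) {
        long actual = sysconf(_SC_PAGESIZE);   /* 16384 on a 16K-page Android device */
        printf("assumed %d, actual %ld\n", ASSUMED_PAGE_SIZE, actual);
        /* Any size/alignment math done with ASSUMED_PAGE_SIZE (pool slabs,
         * guard-page offsets, mmap lengths) is silently wrong when these differ. */
        return actual == ASSUMED_PAGE_SIZE ? 0 : 1;
    }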
replies(2): >>44568557 #>>44573383 #
HPsquared ◴[] No.44568557[source]
Could they find those by setting the page size to some absurdly large value like 1MB?
replies(2): >>44568979 #>>44571706 #
majke ◴[] No.44568979[source]
A lot of software won't work if you do that. Many JITs and memory allocators have opinions about page size. Also, tagged pointers are very common.
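For example (a minimal sketch, not from any particular JIT; make_executable is a made-up helper): syscalls like mprotect require a page-aligned address, so code that aligns with a hardcoded 4K mask works on 4K pages but fails outright once pages are bigger:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Hypothetical helper that aligns with a hardcoded 4K mask instead of the
     * real page size. mprotect() requires 'addr' to be page-aligned, so this
     * returns EINVAL whenever the true page size is larger than 4K. */
    static int make_executable(void *code, size_t len) {
        uintptr_t start = (uintptr_t)code & ~(uintptr_t)0xFFF;  /* assumes 4K pages */
        return mprotect((void *)start, len, PROT_READ | PROT_EXEC);
    }

    int main(void) {
        long page = sysconf(_SC_PAGESIZE);
        char *buf = mmap(NULL, 2 * page, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) return 1;

        /* Pretend a JIT emitted code somewhere inside the mapping. */
        if (make_executable(buf + page / 2, 64) != 0)
            printf("mprotect failed: %s (page size = %ld)\n", strerror(errno), page);
        else
            printf("worked, page size = %ld\n", page);

        munmap(buf, 2 * page);
        return 0;
    }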
replies(3): >>44569427 #>>44571507 #>>44572184 #
vient ◴[] No.44572184{3}[source]
Memory page size should be transparent to tagged pointers (to any pointers, really); I don't see how they can be affected. You have an object at address 0xAB0BA: does the size of the underlying page matter?
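For anyone following along, a minimal sketch of low-bit tagging (tag_ptr/untag_ptr are made-up names): the spare bits come from the allocation's alignment, not from how big the underlying pages are, which is why the page size shouldn't matter here.

    #include <assert.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* A 16-byte-aligned allocation leaves the low 4 bits of the address free
     * to carry a tag. The trick relies on object alignment, not page size. */
    #define TAG_MASK ((uintptr_t)0xF)

    static inline void *tag_ptr(void *p, unsigned tag) {
        return (void *)((uintptr_t)p | (tag & TAG_MASK));
    }
    static inline void *untag_ptr(void *p) {
        return (void *)((uintptr_t)p & ~TAG_MASK);
    }
    static inline unsigned ptr_tag(void *p) {
        return (unsigned)((uintptr_t)p & TAG_MASK);
    }

    int main(void) {
        void *obj = aligned_alloc(16, 64);   /* alignment provides the spare bits */
        void *tagged = tag_ptr(obj, 0x3);
        assert(untag_ptr(tagged) == obj && ptr_tag(tagged) == 0x3);
        free(obj);
        return 0;
    }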
replies(1): >>44576914 #
danudey ◴[] No.44576914{4}[source]
It can be an issue of behavior; for example, Redis recommended disabling transparent huge page support in Linux because of (among other things) copy-on-write page behavior, and still does if you're going to persist data to disk. The failure mode goes roughly like this (a sketch of the per-mapping opt-out follows the list):

1. You have a redis instance with e.g. 1GB of mapped memory in one 1GB huge page

2. Redis forks a copy of itself when it tries to persist data to disk so it can avoid having to lock the entire dataset for writes

3. The new Redis process does anything to modify any of the data anywhere in that 1GB

4. The OS now has to allocate a new 1GB page and copy the entire data set over

5. Oops, we're under memory pressure! Better page out 1GB of data to the paging file, or flush 1GB of data from the filesystem cache, so that I can allocate this 1GB page for the next 200ms.
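The mitigation Redis's docs point at is system-wide (echo never > /sys/kernel/mm/transparent_hugepage/enabled), but on Linux a process can also opt individual mappings out. A minimal sketch of that idea, not Redis's actual implementation:

    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void) {
        size_t len = 1UL << 30;   /* 1GB region, like the dataset in the example */
        void *data = mmap(NULL, len, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (data == MAP_FAILED) { perror("mmap"); return 1; }

        /* Ask the kernel not to back this range with transparent huge pages,
         * so CoW after fork() happens at normal page granularity. */
        if (madvise(data, len, MADV_NOHUGEPAGE) != 0)
            perror("madvise(MADV_NOHUGEPAGE)");

        /* ... fork() for the persistence snapshot; writes in the parent now
         *     copy one small page at a time instead of a whole huge page ... */
        munmap(data, len);
        return 0;
    }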

You can imagine how memory allocators that try to be intelligent about what they allocate, and how much, in order to optimize performance might care: when a custom allocator tries to allocate many small pages and keep them in a pool so it can reuse them without having to request new pages from the OS, getting 100x 2MB pages instead of 100x 4KB pages is a colossal waste of memory and (potentially) performance.

It's not necessarily that the allocators will break or behave in weird, incorrect ways (though they may), but often that they will say "I can't work under these conditions!" (or will work, just sub-optimally).
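A tiny sketch of that last failure mode (a hypothetical allocator, not any particular library; POOLED_PAGES and TUNED_FOR are made-up knobs): a pool tuned for small pages can check the real page size at init and warn or degrade, rather than silently pin hundreds of megabytes.

    #include <stdio.h>
    #include <unistd.h>

    #define POOLED_PAGES 100
    #define TUNED_FOR    4096

    int main(void) {
        long page = sysconf(_SC_PAGESIZE);
        /* With 2MB pages, "keep 100 pages pooled" pins ~200MB instead of ~400KB. */
        printf("pool footprint: %ld bytes (policy was tuned for %d bytes)\n",
               (long)POOLED_PAGES * page, POOLED_PAGES * TUNED_FOR);
        if (page > TUNED_FOR) {
            fprintf(stderr, "warning: %ld-byte pages; pooling policy will waste memory\n", page);
            /* e.g. shrink the pool, sub-divide pages, or refuse to start */
        }
        return 0;
    }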

replies(1): >>44586916 #
vient ◴[] No.44586916[source]
True, but that has nothing to do with tagged pointers.