
95 points by ingve | 1 comment
nephanth No.44568300
From my noobish standpoint, it feels like most code shouldn't care what the page size is? Why does it need to be recompiled?

What typically tends to break when changing it?

1. kevingadd No.44568475
Off the top of my head:

If you rely on being able to do things like mark a range of memory as read-only or executable, you now have to care about page sizes. If your code is still assuming 4KB pages, you may try to change the protection of a subset of a 16KB page, and it will either fail to do what you want or change way too much. In both cases weird failures result.
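
Here's a minimal sketch of that first point (my example, not the parent's code): a JIT-style buffer flipped to read+execute with mprotect(), where the length is rounded up to the page size queried at runtime instead of a hard-coded 4096.

    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        /* Query the real page size instead of assuming 4096; it is 16384
           on e.g. Apple Silicon and some ARM64 Linux kernels. */
        long page = sysconf(_SC_PAGESIZE);

        /* mmap returns page-aligned memory, but the length passed to
           mprotect still has to cover whole pages, so round it up. */
        size_t want = 1000;   /* bytes of generated code we "need" */
        size_t len  = (want + (size_t)page - 1) & ~((size_t)page - 1);

        void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) { perror("mmap"); return 1; }

        /* ... emit machine code into buf here ... */

        /* Flip the region to read+execute. If this had hard-coded 4096:
           an address that is only 4KB-aligned (not 16KB-aligned) makes
           mprotect fail with EINVAL on a 16KB-page kernel, and protecting
           a 4KB sub-range silently changes the whole 16KB page it sits in
           -- "fail to do what you want or change way too much". */
        if (mprotect(buf, len, PROT_READ | PROT_EXEC) != 0) {
            perror("mprotect");
            return 1;
        }

        printf("page size %ld, protected %zu bytes\n", page, len);
        return 0;
    }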

It can also have performance consequences. For example, if you were making a lot of 3.5KB allocations using mmap, the waste from rounding each one up to a 4KB page might not have been too bad. But now each of those 3.5KB allocations eats a whole 16KB page, so your app wastes a lot of memory. Ideally most applications aren't using mmap directly for this sort of thing, though. I could imagine it making life harder for the authors of JIT compilers.
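
Rough back-of-the-envelope sketch of that waste (my numbers, assuming each ~3.5KB allocation gets its own anonymous mmap):

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        size_t alloc = 3584;                    /* a ~3.5KB request */
        long   page  = sysconf(_SC_PAGESIZE);

        /* mmap lengths are rounded up to whole pages, so every allocation
           reserves at least one page. */
        size_t reserved = ((alloc + (size_t)page - 1) / (size_t)page) * (size_t)page;
        size_t waste    = reserved - alloc;

        /* 4KB pages:  reserved 4096,  waste 512   (~12%)
           16KB pages: reserved 16384, waste 12800 (~78%) */
        printf("page=%ld reserved=%zu wasted=%zu (%.0f%%)\n",
               page, reserved, waste, 100.0 * waste / reserved);
        return 0;
    }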

Some algorithms also take advantage of the page size to do addressing tricks. For example, if you know your page size is 4KB, the addresses 3 and 4091 are known to have the same protection flags (R/W/X) and to be the same kind of memory (mmap'd file on disk, shared memory segment, mapped memory from a GPU, etc.). That lets any table tracking this kind of information work at 4KB granularity, which keeps the table much smaller. So that sort of trick needs to know the page size too.
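
And a toy sketch of that addressing trick (the table layout and names here are made up for illustration): per-page metadata indexed by shifting away the low bits of an address, where the shift has to come from the real page size rather than an assumed 12 bits.

    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Hypothetical per-page metadata: protection bits plus what kind of
       mapping backs the page. One entry covers a whole page, so the table
       shrinks as the page size grows. */
    struct page_info {
        uint8_t prot;   /* R/W/X flags */
        uint8_t kind;   /* file-backed, shared memory, GPU, ... */
    };

    /* Covers 4MB with 4KB pages, 16MB with 16KB pages. */
    static struct page_info table[1024];

    /* Derive log2(page size) at runtime: 12 for 4KB pages, 14 for 16KB. */
    static unsigned page_shift(void) {
        long page = sysconf(_SC_PAGESIZE);
        unsigned shift = 0;
        while ((1L << shift) < page) shift++;
        return shift;
    }

    int main(void) {
        unsigned shift = page_shift();

        /* With 4KB pages, offsets 3 and 4091 fall in page 0 and share one
           entry. The granularity has to match the real page size: too
           coarse and distinct pages wrongly share an entry, too fine and
           the table is 4x larger than it needs to be. */
        uintptr_t a = 3, b = 4091;
        printf("offset %ju -> entry %ju, offset %ju -> entry %ju\n",
               (uintmax_t)a, (uintmax_t)(a >> shift),
               (uintmax_t)b, (uintmax_t)(b >> shift));
        return 0;
    }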