I wonder where this requirement comes from ...
(as a starting point, 4k is a "page size for ants" in 2025; 4MB might be too much, however)
But the bigger the page, the fewer TLB entries you need, and the fewer entries in the OS data structures that manage memory, etc.
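For reference, you can query the base page size your system uses. A minimal sketch via the POSIX sysconf call (prints 4096 on typical x86-64 Linux, 16384 on Apple Silicon macOS):

```c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* _SC_PAGESIZE is POSIX: the OS's base (minimum) page size */
    long page = sysconf(_SC_PAGESIZE);
    printf("base page size: %ld bytes\n", page);
    return 0;
}
```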
The reason to want pages of exactly 4k is that software is often tuned for that size, and may even require it because it wasn't written in a sufficiently hardware-agnostic way (similar to why running lots of software on big-endian systems can be hard).
The reasons to want bigger pages are:
- there is more OS overhead tracking tiny pages
- as well as caches for memory contents, CPUs have caches (TLBs) for the mapping between virtual memory and physical memory, and this mapping is at page-size granularity. These caches are very small (they have to be extremely fast), so bigger pages mean memory accesses are more likely to hit a cached mapping, which means faster memory accesses.
- L1 CPU caches are typically indexed by the address bits that fall within the minimum page size (virtually indexed, physically tagged), so the maximum aliasing-free cache size is page-size * associativity. I think it can be harder to increase the latter than the former, so bigger pages could allow for bigger caches, which can make some software perform better (rough numbers in the sketch below).
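To make the last two points concrete, here's a back-of-the-envelope sketch. The TLB size and associativity are assumptions (a 64-entry L1 dTLB and an 8-way L1 cache are typical, but they vary by chip):

```c
#include <stdio.h>

int main(void) {
    long tlb_entries = 64;  /* assumed L1 dTLB entry count    */
    long assoc       = 8;   /* assumed L1 cache associativity */
    long page_sizes[] = {4096, 16384, 2 * 1024 * 1024};

    for (int i = 0; i < 3; i++) {
        long p = page_sizes[i];
        /* TLB reach: memory addressable without a TLB miss */
        printf("page %7ld B: TLB reach %6ld KB, ",
               p, tlb_entries * p / 1024);
        /* VIPT limit: largest aliasing-free cache = page size * ways */
        printf("max VIPT cache %5ld KB\n", assoc * p / 1024);
    }
    return 0;
}
```

With 4k pages that gives 256KB of TLB reach and a 32KB cache bound (matching common x86 L1 sizes); at 16k the bound becomes 128KB, which I believe lines up with the unusually large L1 caches on Apple's chips.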
The things you see in practice are:
- x86 supports 2MB and 1GB pages, as well as 4KB pages. Linux can either directly give you pages in these larger sizes (a fixed number are reserved by the OS, e.g. at startup) or there is a feature called 'transparent hugepages' where sufficiently aligned, contiguous smaller pages can be merged. This mostly helps with the first two problems (see the sketch after this list)
- I think the Apple M-series chips have a 16k minimum page size, which might help with the third problem, but I don't really know much about them
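For the Linux side, a minimal sketch of both mechanisms, assuming a kernel with hugetlbfs pages reserved (e.g. `echo 64 > /proc/sys/vm/nr_hugepages`) and THP enabled; error handling kept to a minimum:

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

#define LEN (2 * 1024 * 1024)  /* one 2MB huge page */

int main(void) {
    /* 1. Explicit huge pages: fails unless pages were reserved up front. */
    void *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (p == MAP_FAILED)
        perror("mmap(MAP_HUGETLB)");
    else
        printf("explicit huge page at %p\n", p);

    /* 2. Transparent huge pages: hint that this region is a good
       candidate for the kernel to back with (or merge into) huge pages. */
    void *q = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (q != MAP_FAILED && madvise(q, LEN, MADV_HUGEPAGE) == 0)
        printf("THP hint accepted for region at %p\n", q);
    return 0;
}
```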
If I'm mistaken about some low level detail I'd be interested to learn more.