
Bought myself an Ampere Altra system

(marcin.juszkiewicz.com.pl)
204 points | pabs3 | 10 comments
amelius ◴[] No.44421186[source]
> And the latest one, an Apple MacBook Pro, is nice and fast but has some limits — does not support 64k page size. Which I need for my work.

I wonder where this requirement comes from ...

replies(3): >>44421250 #>>44421494 #>>44421603 #
1. ot ◴[] No.44421250[source]
I would guess to develop and test software that will ultimately run on a system with 64k page size.
replies(1): >>44421261 #
2. amelius ◴[] No.44421261[source]
Is there a fundamental advantage over other page sizes, other than the convenience of 64k == 2^16?
replies(4): >>44421331 #>>44421363 #>>44421743 #>>44435389 #
3. ch_123 ◴[] No.44421331[source]
64k is the largest base page size (translation granule) that the ARM architecture supports. The larger page size benefits applications that allocate large amounts of memory.
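
A quick way to check what page size a given machine is actually running: a minimal sketch using only POSIX sysconf (nothing ARM-specific assumed):

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* Ask the kernel what page size this process sees:
           4k on most systems, 16k on Apple silicon, 64k on some aarch64 kernels. */
        long page = sysconf(_SC_PAGESIZE);
        printf("page size: %ld bytes\n", page);
        return 0;
    }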
4. raverbashing ◴[] No.44421363[source]
Yes there are

(as a starting point 4k is a "page size for ants" in 2025 - 4MB might be too much however)

But the bigger the page, the fewer TLB entries you need, and the fewer entries in the OS data structures that manage memory, etc
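
A rough back-of-the-envelope sketch of that TLB effect; the 2048-entry TLB is an assumed, typical size, not a measured one:

    #include <stdio.h>

    int main(void) {
        /* How much memory a hypothetical 2048-entry TLB can cover
           ("TLB reach") at different page sizes. */
        const long long entries = 2048;
        const long long sizes[] = {4LL << 10, 64LL << 10, 2LL << 20};
        for (int i = 0; i < 3; i++)
            printf("%4lld KiB pages -> reach %lld MiB\n",
                   sizes[i] >> 10, entries * sizes[i] >> 20);
        return 0;
    }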

replies(1): >>44422104 #
5. dan-robertson ◴[] No.44421743[source]
The reason to want small pages is that the page is often the smallest unit the operating system can work with, so bigger pages can be less efficient: you need more RAM for the same number of memory-mapped files, tricks like guard pages or mapping the same memory twice for a ring buffer have a bigger minimum size, etc.
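
For the ring-buffer trick, a minimal Linux-only sketch (memfd_create, error handling mostly skipped); the point is that the ring cannot be smaller than one page, so a 64k page means a 64k minimum ring:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        size_t size = (size_t)sysconf(_SC_PAGESIZE);  /* smallest possible ring: one page */

        /* Anonymous in-memory file that backs the ring. */
        int fd = memfd_create("ring", 0);
        ftruncate(fd, size);

        /* Reserve 2*size of address space, then map the same file twice,
           back to back, so writes that run off the end wrap around. */
        char *base = mmap(NULL, 2 * size, PROT_NONE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        mmap(base,        size, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED, fd, 0);
        mmap(base + size, size, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED, fd, 0);

        /* Bytes written past the end of the first mapping show up at offset 0. */
        memcpy(base + size - 3, "wrap", 5);
        printf("start of buffer now holds: %s\n", base);  /* prints "p" */
        return 0;
    }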

The reason to want pages of exactly 4k is that software is often tuned for this and may even require it, from not being written in a sufficiently hardware-agnostic way (similar to why running lots of software on big-endian systems can be hard).

The reasons to want bigger pages are:

- there is more OS overhead tracking tiny pages

- as well as caches for memory contents, CPUs have caches (TLBs) for the mapping between virtual and physical memory, and this mapping is at page-size granularity. These caches are very small (as they have to be extremely fast), so bigger pages mean memory accesses are more likely to hit a cached mapping, which means faster memory accesses.

- CPU caches are typically indexed by the address bits that fall within the minimum page size, so the maximum size of a cache is page-size * associativity (e.g. 4k pages and 8-way associativity cap an L1 cache at 32k). I think it can be harder to increase the associativity than the page size, so bigger pages could allow for bigger caches, which can make some software perform better.

The things you see in practice are:

- x86 supports 2MB and 1GB pages, as well as 4KB pages. Linux can either give you pages in these larger sizes directly (a fixed pool is reserved by the OS at startup) or there is a feature called ‘transparent hugepages’ where sufficiently aligned contiguous smaller pages can be merged. This mostly helps with the first two problems (a small sketch of both approaches follows after this list)

- I think the Apple M-series chips have a 16k minimum page size, which might help with the third problem but I don’t really know about them
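
A sketch of both Linux approaches from that first bullet (Linux-only; the 64 MiB size and the nr_hugepages reservation are just illustrative):

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    #define SZ (64UL << 20)   /* 64 MiB */

    int main(void) {
        /* Explicit hugepages: needs pages reserved up front by the admin,
           e.g. echo 64 > /proc/sys/vm/nr_hugepages, otherwise this mmap fails. */
        void *a = mmap(NULL, SZ, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (a == MAP_FAILED) perror("MAP_HUGETLB");

        /* Transparent hugepages: an ordinary mapping plus a hint; the kernel
           backs suitably aligned ranges with 2MB pages when it can. */
        void *b = mmap(NULL, SZ, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        madvise(b, SZ, MADV_HUGEPAGE);
        memset(b, 0, SZ);   /* touching the memory is what actually populates it */

        printf("hugetlb mapping: %s, THP hint applied\n",
               a == MAP_FAILED ? "unavailable" : "ok");
        return 0;
    }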

replies(1): >>44425440 #
6. fc417fc802 ◴[] No.44422104{3}[source]
4K seems appropriate for embedded applications. Meanwhile 4M seems like it would be plenty small for my desktop: nearly every process is currently using more than that, and even the lightest is still coming in at a bit over 1M.
replies(1): >>44425462 #
7. p_ing ◴[] No.44425440{3}[source]
I believe this is true for x86 as a whole, but on NT any large page must be mapped with a single protection applied to the entire page, so if the page contains read-only code and read-write data, the entire page must be marked read-write.
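
A minimal Win32 sketch of that constraint (it assumes the process already holds SeLockMemoryPrivilege, which large pages require); note the single PAGE_READWRITE protection covering the whole region:

    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        /* Large pages on NT: the whole region gets one protection value,
           and allocation fails unless SeLockMemoryPrivilege is held. */
        SIZE_T large = GetLargePageMinimum();   /* typically 2 MiB on x64 */
        void *p = VirtualAlloc(NULL, large,
                               MEM_RESERVE | MEM_COMMIT | MEM_LARGE_PAGES,
                               PAGE_READWRITE);
        printf("large page minimum: %zu bytes, allocation %s\n",
               (size_t)large, p ? "ok" : "failed (privilege missing?)");
        return 0;
    }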
8. p_ing ◴[] No.44425462{4}[source]
1M is a huge waste of memory.

Imagine writing out a one-sentence note in Notepad and the resulting file being 1M on disk.

replies(1): >>44428367 #
9. fc417fc802 ◴[] No.44428367{5}[source]
Yet when I look at the running processes on my desktop, something like 90% of them have more than 16M resident. So it doesn't appear that even an 8M page size would waste much memory on a modern desktop during typical usage.

If I'm mistaken about some low level detail I'd be interested to learn more.

10. nearyd ◴[] No.44435389[source]
Yes! Data workloads fare considerably better with larger pages: less TLB pressure and a higher cache hit rate. I wrote a tutorial about this and how to figure out whether it will be a good trade-off for your use-case: https://amperecomputing.com/tuning-guides/understanding-memo...