1. You have a Redis instance with, e.g., 1GB of mapped memory backed by a single 1GB huge page
2. Redis forks a copy of itself when it persists data to disk, so that it can avoid locking the entire dataset for writes; the child's pages are shared with the parent copy-on-write
3. Either process then modifies data somewhere in that 1GB (in practice it's the original Redis process, which keeps serving client writes while the child writes out the snapshot)
4. Because copy-on-write works at whole-page granularity, the OS now has to allocate a new 1GB page and copy the entire 1GB of data over (sketched in code after this list)
5. Oops, we're under memory pressure! Better page out 1GB of data to the paging file, or flush 1GB of data from the filesystem cache, so that I can allocate this 1GB page for the next 200ms.
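Here's a minimal C sketch of that fork/copy-on-write interaction. The sizes and the single anonymous mapping are illustrative stand-ins, not how Redis actually lays out its heap, but the mechanism is the same: the child only reads the shared region while the parent keeps writing to it, and the first write to each still-shared page forces the kernel to duplicate that page: a 4kB copy normally, the whole huge page if the region is huge-page-backed.

```c
/* Illustrative fork + copy-on-write sketch (Linux/glibc). The 64MB region
 * stands in for the 1GB dataset; a real Redis heap is allocator-managed,
 * not one flat mmap. */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#define REGION_SIZE (64UL * 1024 * 1024)

int main(void) {
    /* Anonymous mapping standing in for the in-memory dataset. */
    char *data = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    /* Touch every byte so the region is resident before the fork. */
    memset(data, 'x', REGION_SIZE);

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {
        /* Child: read-only pass over the data, like the child Redis forks to
         * write the snapshot. It shares the parent's physical pages and by
         * itself costs (almost) no extra RAM. */
        unsigned long sum = 0;
        for (size_t i = 0; i < REGION_SIZE; i += 4096) sum += data[i];
        _exit(sum == 0); /* use sum so the read loop isn't optimized away */
    }

    /* Parent: keeps "serving writes". The first write to each still-shared
     * page makes the kernel copy that page before the write lands. With 4kB
     * pages that's a 4kB copy per touched page; if the region were backed by
     * 2MB or 1GB huge pages, a single one-byte write would copy the whole
     * huge page. */
    for (size_t i = 0; i < REGION_SIZE; i += 4096) data[i] = 'y';

    waitpid(pid, NULL, 0);
    munmap(data, REGION_SIZE);
    return 0;
}
```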
You can imagine how memory allocators that try to be intelligent about what they're allocating, and how much, in order to optimize performance might care. When a custom allocator is trying to allocate many small pages and keep them in a pool so it can re-use them without having to request new pages from the OS, getting 100x 2MB pages instead of 100x 4kB pages is a colossal waste of memory and (potentially) of performance.
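As a rough sketch of the kind of opt-out a pooling allocator reaches for on Linux (the pool size and layout here are made up): carve the pool out of an anonymous mapping and ask the kernel, via madvise(MADV_NOHUGEPAGE), not to back that range with transparent huge pages, so the pool really is made of 4kB pages the allocator can hand out and recycle at the granularity it planned for.

```c
/* Sketch of a page-pool allocator opting its pool out of transparent huge
 * pages (Linux-specific hint; guarded so it compiles where unsupported). */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

#define PAGE_SIZE  4096UL
#define POOL_PAGES 4096          /* 16MB pool carved into 4kB pages */

int main(void) {
    size_t len = POOL_PAGES * PAGE_SIZE;

    void *pool = mmap(NULL, len, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (pool == MAP_FAILED) { perror("mmap"); return 1; }

#ifdef MADV_NOHUGEPAGE
    /* Without this hint, khugepaged may later collapse parts of the pool
     * into 2MB huge pages, so memory the allocator meant to manage in 4kB
     * units ends up pinned in 2MB chunks instead. */
    if (madvise(pool, len, MADV_NOHUGEPAGE) != 0)
        perror("madvise(MADV_NOHUGEPAGE)");
#endif

    /* ... hand out 4kB pages from `pool`, recycle them on free ... */

    munmap(pool, len);
    return 0;
}
```

Real allocators expose switches along these lines (jemalloc's thp option, for instance); the point is just that the allocator, not the kernel, wants to decide the page granularity it manages.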
It's not necessarily that the allocators will break or behave in weird, incorrect ways (they may); more often it's that the allocator will say "I can't work under these conditions!" (or will keep working, but sub-optimally).