Respectfully, the title feels a little clickbaity to me. Both methods are still ultimately reading out of memory; they're just using different I/O methods.
Seeing if the cached file data can be accessed quickly is the point of the experiment. I can't get mmap() to open a file with huge pages.
void* buffer = mmap(NULL, size_bytes, PROT_READ, (MAP_HUGETLB | MAP_HUGE_1GB), fd, 0); doesn't work.
You can see my code here: https://github.com/bitflux-ai/blog_notes. Any ideas?
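Roughly, as a standalone sketch (the path is a placeholder, and I've added MAP_PRIVATE since mmap() wants one of MAP_SHARED/MAP_PRIVATE, plus rounded the length up since MAP_HUGETLB lengths have to be huge-page multiples):

    #define _GNU_SOURCE          /* for MAP_HUGETLB */
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* MAP_HUGE_1GB may be missing from older libc headers; these are the kernel ABI values. */
    #ifndef MAP_HUGE_SHIFT
    #define MAP_HUGE_SHIFT 26
    #endif
    #ifndef MAP_HUGE_1GB
    #define MAP_HUGE_1GB (30U << MAP_HUGE_SHIFT)
    #endif

    int main(int argc, char **argv) {
        const char *path = argc > 1 ? argv[1] : "testfile";   /* placeholder path */

        int fd = open(path, O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) != 0) { perror("fstat"); return 1; }

        /* MAP_HUGETLB lengths must be multiples of the huge page size, so round up to 1 GiB. */
        size_t huge = 1UL << 30;
        size_t len = ((size_t)st.st_size + huge - 1) & ~(huge - 1);

        /* One of MAP_SHARED/MAP_PRIVATE is mandatory; the inline snippet above omitted it. */
        void *buffer = mmap(NULL, len, PROT_READ,
                            MAP_PRIVATE | MAP_HUGETLB | MAP_HUGE_1GB, fd, 0);
        if (buffer == MAP_FAILED) {
            fprintf(stderr, "mmap(MAP_HUGETLB) failed: %s\n", strerror(errno));
            close(fd);
            return 1;
        }

        printf("mapped %zu bytes at %p\n", len, buffer);
        munmap(buffer, len);
        close(fd);
        return 0;
    }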
Do you have kernel documentation that says that hugetlb doesn't work for files? I don't see that stated anywhere.
Based on this SO discussion [1], it is possibly a limitation with popular filesystems like ext4?
If anyone knows more about this, I'd love to know what exactly are the requirements for using hugepages this way.
[1] https://stackoverflow.com/questions/44060678/huge-pages-for-...
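From what I can piece together there, the well-trodden path for file-backed huge pages seems to be a file that lives on a hugetlbfs mount rather than on ext4. A sketch of that, untested, with the mount point and page counts made up:

    /* My rough reading of Documentation/admin-guide/mm/hugetlbpage.rst:
     * the file has to live on a hugetlbfs mount, e.g. (as root):
     *   echo 64 > /proc/sys/vm/nr_hugepages     # reserve 2 MiB huge pages
     *   mount -t hugetlbfs none /mnt/huge       # hypothetical mount point
     * Files created under that mount are huge-page backed by design.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        const size_t len = 2UL * 1024 * 1024;    /* one 2 MiB huge page */

        int fd = open("/mnt/huge/testfile", O_CREAT | O_RDWR, 0600);
        if (fd < 0) { perror("open"); return 1; }

        /* No MAP_HUGETLB needed for a hugetlbfs file; write() generally isn't
         * supported on hugetlbfs, so the file is populated through the mapping. */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        memset(p, 0xab, len);                    /* touch it to fault the huge page in */
        printf("mapped and touched %zu bytes at %p\n", len, p);

        munmap(p, len);
        close(fd);
        return 0;
    }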
Honestly, I never knew any of this; I thought huge pages just worked for all of mmap.
You can maybe reduce the number of page faults, but you can do that by walking the mapped address space once before the actual benchmark too.
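Something like this rough sketch touches one byte per page so the faults land before the timed section:

    #include <stddef.h>
    #include <unistd.h>

    /* Touch one byte per page so every page of the mapping is faulted in
     * before the timed loop starts. The volatile pointer keeps the compiler
     * from optimizing the reads away. */
    static void prefault(const void *buf, size_t len) {
        const size_t page = (size_t)sysconf(_SC_PAGESIZE);
        const volatile unsigned char *p = (const volatile unsigned char *)buf;
        unsigned char sink = 0;
        for (size_t off = 0; off < len; off += page)
            sink ^= p[off];
        (void)sink;
    }

MAP_POPULATE (or madvise with MADV_WILLNEED) should get you much the same effect without the manual walk.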
See a quick example I whipped up here: https://github.com/inetknght/mmap-hugetlb