Respectfully, the title feels a little clickbaity to me. Both methods are still ultimately reading out of memory; they're just using different I/O methods.
Seeing if the cached file data can be accessed quickly is the point of the experiment. I can't get mmap() to open a file with huge pages.
void* buffer = mmap(NULL, size_bytes, PROT_READ, (MAP_HUGETLB | MAP_HUGE_1GB), fd, 0); doesn't work.
You can see my code here: https://github.com/bitflux-ai/blog_notes. Any ideas?
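For anyone who wants to reproduce it, here's a minimal compilable sketch of that call (my own, not the repo's code; the path and size are placeholders). Two separate issues apply: mmap() requires exactly one of MAP_PRIVATE or MAP_SHARED in the flags, so the call as quoted fails with EINVAL before huge pages are even considered; and even with MAP_PRIVATE added, file-backed MAP_HUGETLB is only supported for files that live on hugetlbfs, so an ext4/xfs fd is still rejected:

    /* Sketch, not the repo's code; placeholder path and size. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>

    /* Fallbacks for older glibc headers that lack these macros. */
    #ifndef MAP_HUGE_SHIFT
    #define MAP_HUGE_SHIFT 26
    #endif
    #ifndef MAP_HUGE_1GB
    #define MAP_HUGE_1GB (30 << MAP_HUGE_SHIFT)
    #endif

    int main(void)
    {
        size_t size_bytes = 1UL << 30;            /* placeholder: one 1 GB page */
        int fd = open("/tmp/testfile", O_RDONLY); /* placeholder path */
        if (fd < 0) { perror("open"); return 1; }

        /* MAP_PRIVATE (or MAP_SHARED) is mandatory; MAP_HUGE_1GB only
         * selects the page size if MAP_HUGETLB is accepted at all. */
        void *buffer = mmap(NULL, size_bytes, PROT_READ,
                            MAP_PRIVATE | MAP_HUGETLB | MAP_HUGE_1GB, fd, 0);
        if (buffer == MAP_FAILED) {
            /* On ext4 this prints EINVAL; a hugetlbfs-backed fd can work. */
            fprintf(stderr, "mmap: %s\n", strerror(errno));
            close(fd);
            return 1;
        }

        munmap(buffer, size_bytes);
        close(fd);
        return 0;
    }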
False. I've successfully used it to memory-map networked files.
Based on this SO discussion [1], it may be a limitation of popular filesystems like ext4?
If anyone knows more about this, I'd love to know exactly what the requirements are for using huge pages this way.
[1] https://stackoverflow.com/questions/44060678/huge-pages-for-...
See a quick example I whipped up here: https://github.com/inetknght/mmap-hugetlb
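For contrast with the file-backed case above, anonymous MAP_HUGETLB mappings do work without hugetlbfs. A minimal sketch (my own, not necessarily what that repo does), assuming huge pages have already been reserved, e.g. via /proc/sys/vm/nr_hugepages:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 2UL * 1024 * 1024; /* one 2 MB huge page */

        /* No fd needed: anonymous MAP_HUGETLB draws from the kernel's
         * reserved pool (e.g. echo 64 > /proc/sys/vm/nr_hugepages). */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (p == MAP_FAILED) {
            /* Fails with ENOMEM if no huge pages are reserved. */
            fprintf(stderr, "mmap: %s\n", strerror(errno));
            return 1;
        }

        memset(p, 0, len); /* touch it so the huge page actually faults in */
        munmap(p, len);
        return 0;
    }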