

804 points jryio | 149 comments
speedgoose ◴[] No.45661785[source]
Looking at the htop screenshot, I notice the lack of swap. You may want to enable earlyoom, so your whole server doesn't go down when a service goes bananas. The Linux Kernel OOM killer is often a bit too late to trigger.

You can also enable zram to compress RAM, so you can over-provision like the pros. A lot of long-running software leaks memory that compresses pretty well.

Here is how I do it on my Hetzner bare-metal servers using Ansible: https://gist.github.com/fungiboletus/794a265cc186e79cd5eb2fe... It also works on VMs.
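For reference, a minimal manual zram setup looks roughly like this (a sketch; the linked role automates something equivalent, and the device name, algorithm, and sizes here are illustrative):

```shell
# Create a compressed swap device backed by RAM
sudo modprobe zram
echo zstd | sudo tee /sys/block/zram0/comp_algorithm
echo 4G   | sudo tee /sys/block/zram0/disksize
sudo mkswap /dev/zram0
sudo swapon --priority 100 /dev/zram0   # prefer zram over any disk swap

# earlyoom kills the biggest offender before the kernel OOM killer stalls the box
sudo apt install earlyoom && sudo systemctl enable --now earlyoom
```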

replies(15): >>45661833 #>>45662183 #>>45662569 #>>45662628 #>>45662841 #>>45662895 #>>45663091 #>>45664508 #>>45665044 #>>45665086 #>>45665226 #>>45666389 #>>45666833 #>>45673327 #>>45677907 #
1. levkk ◴[] No.45662183[source]
Yeah, no way. As soon as you hit swap, _most_ apps are going to have a bad, bad time. This is well known, so much so that all EC2 instances in AWS disable it by default. Sure, they want to sell you more RAM, but it's also just true that swap doesn't work for today's expectations.

Maybe back in the 90s, it was okay to wait 2-3 seconds for a button click, but today we just assume the thing is dead and reboot.

replies(16): >>45662314 #>>45662349 #>>45662398 #>>45662411 #>>45662419 #>>45662472 #>>45662588 #>>45663055 #>>45663460 #>>45664054 #>>45664170 #>>45664389 #>>45664461 #>>45666199 #>>45667250 #>>45668533 #
2. gchamonlive ◴[] No.45662314[source]
How programs use RAM has also changed since the 90s. Back then they were written targeting machines that they knew would have a hard time fitting all their data in memory, so hitting swap wouldn't hurt perceived performance too drastically, since many operations were already optimized to balance data load between memory and disk.

Nowadays when a program hits swap it's not going to fall back to a different memory usage profile that prioritises disk access. It's going to use swap as if it were actual RAM, so you get to see the program choking the entire system.

replies(2): >>45662410 #>>45662768 #
3. henryfjordan ◴[] No.45662349[source]
Does HDD vs SSD matter at all these days? I can think of certain caching use-cases where swapping to an SSD might make sense, if the access patterns were "bursty" to certain keys in the cache
replies(1): >>45662393 #
4. winrid ◴[] No.45662393[source]
It's still extremely slow and can cause very unpredictable performance. I have swap setup with swappiness=1 on some boxes, but I wouldn't generally recommend it.
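For context, the knob mentioned is a sysctl; a sketch of setting it (the file name under /etc/sysctl.d is illustrative):

```shell
# swappiness=1: only swap under severe memory pressure
# (default is 60; range is 0-100, or 0-200 on kernels >= 5.8)
sudo sysctl vm.swappiness=1
echo 'vm.swappiness = 1' | sudo tee /etc/sysctl.d/99-swappiness.conf  # persist across reboots
```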
replies(1): >>45663399 #
5. LaurensBER ◴[] No.45662398[source]
The beauty of ZRAM is that on any modern-ish CPU it's surprisingly fast. We're talking 2-3 ms instead of 2-3 seconds ;)

I regularly use it on my Snapdragon 870 tablet (not exactly a top of the line CPU) to prevent OOM crashes (it's running an ancient kernel and the Android OOM killer basically crashes the whole thing) when running a load of tabs in Brave and a Linux environment (through Termux) at the same time.

ZRAM won't save you if you do actually need to store and actively use more than the physical memory but if 60% of your physical memory is not actively used (think background tabs or servers that are running but not taking requests) it absolutely does wonders!

On most (web) app servers I happily leave it enabled to handle temporary spikes, memory leaks or applications that load a whole bunch of resources that they never ever use.

I'm also running it on my Kubernetes cluster. It allows me to set reasonably strict memory limits while still having the certainty that Pods can handle (short) spikes above my limit.

replies(1): >>45666538 #
6. winrid ◴[] No.45662410[source]
Exactly. Nowadays, most web services are run in a GC'ed runtime. That VM will walk pointers all over the place and reach into swap all the time.
replies(1): >>45662595 #
7. bayindirh ◴[] No.45662411[source]
This is a wrong belief because a) SSDs make swap almost invisible, so you can have that escape ramp if something goes wrong, and b) swap space is no longer just an escape ramp that RAM overflows into.

In the age of microservices and cattle servers, reboot/reinstall might be cheap, but in the long run it is not. A long-running server, even if it's cattle, is always a better solution, because esp. with some excess RAM the server "warms up" with all hot data cached and will be a low-latency unit in your fleet, given you pay the required attention to your software development and service configuration.

Secondly, Kernel swaps out unused pages to SWAP, relieving pressure from RAM. So, SWAP is often used even if you fill 1% of your RAM. This allows for more hot data to be cached, allowing better resource utilization and performance in the long run.

So, eff it, we ball is never a good system administration strategy. Even if everything is ephemeral and can be rebooted in three seconds.

Sure, some things like Kubernetes force "no SWAP, period" policies because it kills pods when pressure exceeds some value, but for more traditional setups, it's still valuable.

replies(8): >>45662537 #>>45662599 #>>45662646 #>>45662687 #>>45663237 #>>45663354 #>>45664553 #>>45664705 #
8. 01HNNWZ0MV43FF ◴[] No.45662419[source]
It's not just 3 seconds for a button click, every time I've run out of RAM on a Linux system, everything locks up and it thrashes. It feels like 100x slowdown. I've had better experiences when my CPU was underclocked to 20% speed. I enable swap and install earlyoom. Let processes die, as long as I can move the mouse and operate a terminal.
replies(2): >>45662523 #>>45662677 #
9. KaiserPro ◴[] No.45662472[source]
Yeah nah, that's just memory exhaustion.

Swap helps you use RAM more efficiently: you keep the hot stuff in RAM and let the rest fester in swap on disk.

Sure, if you overwhelm it, then you're gonna have a bad day, but that's the same without swap.

Seriously, swap is good, don't believe the noise.

replies(2): >>45662602 #>>45662672 #
10. C7E69B041F ◴[] No.45662523[source]
This. I'm used to restarting my Plasma 2 times a day because PHPStorm just leaks memory and it eventually crashes and requires a hard reboot.
11. gchamonlive ◴[] No.45662537[source]
> SSDs make swap almost invisible

They don't. SSDs came a long way, but so did memory dies and buses, and with that the way programs work also changed, as more often than not they are now able to fit their stacks and heaps in memory.

I have had a problem with shellcheck that for some reason eats up all my RAM when I open (I believe) my .zshrc, and trust me, it's not invisible. The system crawls to a halt.

replies(3): >>45662623 #>>45662783 #>>45663004 #
12. zymhan ◴[] No.45662588[source]
Where on earth did you get this misconception?
replies(1): >>45662616 #
13. cogman10 ◴[] No.45662595{3}[source]
Depends entirely on the runtime.

If your GC is a moving collector, then absolutely this is something to watch out for.

There are, however, a number of runtimes that will leave memory in place. They are effectively just calling `malloc` for the objects and `free` when the GC algorithm detects an object is dead.

Go, the CLR, Ruby, Python, Swift, and I think node(?) all fit in this category. The JVM has a moving collector.

replies(4): >>45662942 #>>45663386 #>>45664264 #>>45665210 #
14. commandersaki ◴[] No.45662599[source]
> This is a wrong belief

This is not about belief, but lived experience. Setting up swap to me is a choice between an unresponsive system (with swap) and a responsive system with a few OOM kills, or a downed system (without).

replies(1): >>45662637 #
15. gchamonlive ◴[] No.45662602[source]
It's good, and Aws shouldn't disable it by default, but it won't save the system from OOM.
replies(1): >>45663011 #
16. commandersaki ◴[] No.45662616[source]
Lived experience? With swap the system stays up but is unresponsive; without it, it's either responsive thanks to an OOM kill, or completely down.
replies(1): >>45662754 #
17. bayindirh ◴[] No.45662623{3}[source]
It depends on the SSD, I may say.

If we're talking about SATA SSDs, which top out at 600MB/s, then yes, an aggressive application can make itself known. However, if you have a modern NVMe, esp. a 4x4 one like the Samsung 9x0 series, or if you're using a Mac, I bet you'll notice the problem much later, if ever. Remember the SSD wear problem on M1 Macs? People never noticed that the system used SWAP that heavily and trashed the SSD on board.

Then, if you're using a server with a couple of SAS or NVMe SSDs, you'll not notice the problem again, esp. if these are backed by RAID (even md counts).

replies(1): >>45662992 #
18. bayindirh ◴[] No.45662637{3}[source]
> This is not about belief, but lived experience.

I mean, I manage some servers, and this is my experience.

> Setting up swap to me is a choice between a unresponsive system (with swap) or a responsive system with a few oom kills or downed system.

Sorry, but are you sure that you budgeted your system requirements correctly? A Linux system should neither fill SWAP nor trigger the OOM killer regularly.

replies(2): >>45663353 #>>45663947 #
19. adastra22 ◴[] No.45662646[source]
What pressure? If your ram is underutilized, what pressure are you talking about?

If the slowest drive on the machine is the SSD, how does caching to swap help?

replies(2): >>45662707 #>>45662734 #
20. adastra22 ◴[] No.45662672[source]
I don’t understand. If you provision the system with enough RAM, then you have room for every page in RAM, hot or not.
replies(1): >>45663000 #
21. zozbot234 ◴[] No.45662677[source]
> It feels like 100x slowdown.

Yup, this is a thing. It happens because file-backed program text and read-only data eventually get evicted from RAM (to make room for process memory) so every access to code and/or data beyond the current 4K page can potentially involve a swap-in from disk. It would be nice if we had ways of setting up the system so that pages of code or data that are truly critical for real-time responsiveness (including parts of the UI) could not get evicted from RAM at all (except perhaps to make room for the OOM reaper itself to do its job) - but this is quite hard to do in practice.
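For what it's worth, the building block exists: mlock(2) pins pages so they can't be evicted (subject to RLIMIT_MEMLOCK); the hard part, as described, is applying it to everything that responsiveness depends on. A minimal sketch via ctypes (assumes Linux):

```python
import ctypes
import mmap

# Load the C library to reach mlock(2)/munlock(2)
libc = ctypes.CDLL(None, use_errno=True)

size = mmap.PAGESIZE
buf = mmap.mmap(-1, size)          # one anonymous page
buf[0:1] = b"x"                    # touch it so it's resident
addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))

# Pin the page: the kernel may not swap it out while locked
if libc.mlock(ctypes.c_void_p(addr), ctypes.c_size_t(size)) != 0:
    raise OSError(ctypes.get_errno(), "mlock failed")
print("page locked in RAM")
libc.munlock(ctypes.c_void_p(addr), ctypes.c_size_t(size))
```

Real programs would use mlockall(MCL_CURRENT | MCL_FUTURE) for the whole process, but that eats RAM exactly the way the tradeoff above describes.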

22. vasco ◴[] No.45662687[source]
In EC2 using any kind of swapping is just wrong, the comment you replied to already made all the points that can be made though.
replies(1): >>45662758 #
23. bayindirh ◴[] No.45662707{3}[source]
A long running Linux system uses 100% of its RAM. Every byte unused for applications will be used as a disk cache, given you read more data than your total RAM amount.

This cache is evictable, but it'll be there eventually.

In the old days Linux wouldn't touch unused pages in RAM if there was no memory pressure, but now it swaps out pages that go unused for a long time. This allows more cache space in RAM.

> how does caching to swap help?

I think I failed to convey what I tried to say. Let me retry:

Kernel doesn't cache to SSD. It swaps out unused (not accessed) but unevictable pages to SWAP, assuming that these pages will stay stale for a very long time, allowing more RAM to be used as cache.

When I look at my desktop system, in 12 days, Kernel moved 2592MB of my RAM to SWAP despite having ~20GB of free space. ~15GB of this free space is used as disk cache.

So, to have 2.5GB more disk cache, Kernel moved 2592 MB of non-accessed pages to SWAP.
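You can see this split on any Linux box yourself; a sketch reading /proc/meminfo (field names are standard, values are in kB):

```python
def meminfo():
    """Parse /proc/meminfo into {field: value_in_kB}."""
    out = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":")
            out[key] = int(value.split()[0])
    return out

m = meminfo()
print(f"total RAM : {m['MemTotal'] // 1024} MB")
print(f"page cache: {m['Cached'] // 1024} MB")
print(f"swap used : {(m['SwapTotal'] - m['SwapFree']) // 1024} MB")
```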

replies(3): >>45662776 #>>45663196 #>>45667848 #
24. adgjlsfhk1 ◴[] No.45662734{3}[source]
The OS uses almost all the ram in your system (it just doesn't tell you because then users complain that their OS is too ram heavy). The primary thing it uses it for is caching as much of your storage system as possible. (e.g. all of the filesystem metadata and most of the files anyone on the system has touched recently). As such, if you have RAM that hasn't been touched recently, the OS can page it out and make the rest of the system faster.
replies(1): >>45663231 #
25. GuinansEyebrows ◴[] No.45662754{3}[source]
in either case, what do you do? if you can't reach a box and it's otherwise safe to do so, you just reboot it. so is it just a matter of which situation occurs more often?
replies(1): >>45664321 #
26. bayindirh ◴[] No.45662758{3}[source]
From my understanding, the comment I'm replying to uses EC2 example to portray that swapping is wrong in any and all circumstances, and I just replied with my experience with my system administrator hat.

I'm not an AWS guy. I can see and touch the servers I manage, and in my experience, SWAP works, and works well.

replies(1): >>45662999 #
27. zoeysmithe ◴[] No.45662768[source]
This is really interesting and I've never really heard about this. What is going on with the kernel team then? Are they just going to keep swap as-is for backwards compatibility while everyone else just disables it? Or is this advice just for high performance clusters?
replies(2): >>45662932 #>>45662966 #
28. wallstop ◴[] No.45662776{4}[source]
Edit:

    wallstop@fridge:~$ free -m
                   total        used        free      shared  buff/cache   available
    Mem:           15838        9627        3939          26        2637        6210
    Swap:           4095           0        4095


    wallstop@fridge:~$ uptime

    00:43:54 up 37 days, 23:24,  1 user,  load average: 0.00, 0.00, 0.00
replies(1): >>45662870 #
29. justsomehnguy ◴[] No.45662783{3}[source]
What do you prefer:

( ) a 1% chance the system would crawl to a halt but would work

( ) a 1% chance the kernel would die and nothing would work

replies(6): >>45662983 #>>45663003 #>>45663220 #>>45663425 #>>45667758 #>>45668771 #
30. bayindirh ◴[] No.45662870{5}[source]
The command you want to use is "free -m".

This is from another system I have at hand:

                   total        used        free      shared  buff/cache   available
    Mem:           31881        1423        1042          10       29884       30457
    Swap:            976           2         974
2MB of SWAP used, 1423 MB RAM used, 29GB cache, 1042 MB Free. Total RAM 32 GB.
replies(3): >>45663312 #>>45663669 #>>45667833 #
31. kccqzy ◴[] No.45662932{3}[source]
No. I use swap for my home machines. Most people should leave swap enabled. In fact I recommend the setup outlined in the kernel docs for tmpfs: https://docs.kernel.org/filesystems/tmpfs.html which is to have a big swap and use tmpfs for /tmp and /var/tmp.
32. zozbot234 ◴[] No.45662942{4}[source]
Every garbage collector has to constantly sift through the entire reference graph of the running program to figure out what objects have become garbage. Generational GC's can trace through the oldest generations less often, but that's about it.

Tracing garbage collectors solve a single problem really really well - managing a complex, possibly cyclical reference graph, which is in fact inherent to some problems where GC is thus irreplaceable - and are just about terrible wrt. any other system-level or performance-related factor of evaluation.

replies(2): >>45663131 #>>45663383 #
33. gchamonlive ◴[] No.45662966{3}[source]
As someone else said, swap is important not only in the case the system exhaust main memory, but it's used to efficiently use system memory before that (caching, offload page blocks to swap that aren't frequently used etc...)
34. andai ◴[] No.45662983{4}[source]
Can someone explain this to me? Doesn't swap just delay the fundamental issue? Or is there a qualitative difference?
replies(4): >>45663275 #>>45663409 #>>45663992 #>>45664646 #
35. gchamonlive ◴[] No.45662992{4}[source]
Now that you say it, I have a new Lenovo Yoga with that SoC RAM in a crazy parallel-channel config (16GB spread across 8 dies of 2GB). It's noticeably faster than my Acer Nitro with dual-channel 16GB DDR5. I'll check that, but I'd say it's not what the average home user (and even server, I'd risk saying) would have.
36. matt-p ◴[] No.45662999{4}[source]
Just for context, EC2 typically uses network storage that, for obvious reasons, often has fairly rubbish latency and performance characteristics. Swap works fine if you have local storage, though obviously it burns through your SSD/NVMe drive faster and can have other side effects on its performance (usually not particularly noticeable).
replies(1): >>45667245 #
37. akvadrako ◴[] No.45663000{3}[source]
Only if you have more RAM than disk space, which is wasteful for many applications.
replies(1): >>45663147 #
38. gchamonlive ◴[] No.45663003{4}[source]
I think I've not made myself as clear as I could. Swap is important for efficient system performance way before you hit OOM on main memory. It's not, however, going to save system responsiveness in case of OOM. This is what I mean.
39. xienze ◴[] No.45663004{3}[source]
> it's not invisible. The system crawls to a halt.

I’m gonna guess you’re not old enough to remember computers with memory measured in MB and IDE hard disks? Swapping was absolutely brutal back then. I agree with the other poster, swap hitting an SSD is barely noticeable in comparison.

replies(1): >>45667962 #
40. matt-p ◴[] No.45663011{3}[source]
I bet there's a big "burns through our SSDs faster" spreadsheet column or similar that caused it to be disabled.
replies(1): >>45663100 #
41. akerl_ ◴[] No.45663055[source]
Is it possible you misread the comment you're replying to? They aren't recommending adding swap, they're recommending adjusting the memory tunables to make the OOM killer a bit more aggressive so that it starts killing things before the whole server goes to hell.
42. gchamonlive ◴[] No.45663100{4}[source]
Maybe. Or maybe it's an arbitrary decision.

Many won't enable swap. For some, swap wouldn't help anyway, but for others it could help soak up spikes. The latter in some cases will upgrade to a larger instance without even evaluating if swap could help, generating AWS more money.

Either way it's far-fetched to derive intention from the fact.

43. cogman10 ◴[] No.45663131{5}[source]
> Every garbage collector has to constantly sift through the entire reference graph of the running program to figure out what objects have become garbage.

There's a lot of "it depends" here.

For example, an RC garbage collector (like Swift and Python?) doesn't ever trace through the graph.

The reason I brought up moving collectors is that by their nature they take up a lot more heap space, at least 2x what they need. The advantage of the non-moving collectors is they are much more prompt at returning memory to the OS. The JVM in particular has issues here because it has pretty chunky objects.

replies(1): >>45664560 #
44. adastra22 ◴[] No.45663147{4}[source]
Running out of memory kills performance. It is better to kill the VM and restart it so that any active VM remains low latency.

That is my interpretation of what people are saying upthread, at least. To which posters such as yourself are saying “you still need swap.” Why?

replies(2): >>45663366 #>>45666448 #
45. adastra22 ◴[] No.45663196{4}[source]
Yes, and if I am writing an API service, for example, I don’t want to suddenly add latency because I hit pages that have been swapped out. I want guarantees about my API call latency variance, at least when the server isn’t overloaded.

I DON’T WANT THE KERNEL PRIORITIZING CACHE OVER NRU PAGES.

The easiest way to do this is to disable swap.

replies(6): >>45663291 #>>45663295 #>>45664809 #>>45665015 #>>45667197 #>>45667278 #
46. ◴[] No.45663220{4}[source]
47. adastra22 ◴[] No.45663231{4}[source]
At the cost of tanking performance for the less frequently used code path. Sometimes it is more important to optimize in ways that minimize worst case performance rather than a marginal improvement to typical work loads. This is often the case for distributed systems, e.g. SaaS backends.
replies(1): >>45666977 #
48. eru ◴[] No.45663237[source]
How long is long running? You should be getting the warm caches after at most a few hours.

> Secondly, Kernel swaps out unused pages to SWAP, relieving pressure from RAM. So, SWAP is often used even if you fill 1% of your RAM. This allows for more hot data to be cached, allowing better resource utilization and performance in the long run.

Yes, and you can observe that even in your desktop at home (if you are running something like Linux).

> So, eff it, we ball is never a good system administration strategy. Even if everything is ephemeral and can be rebooted in three seconds.

I wouldn't be so quick. Google ran their servers without swap for ages. (I don't know if they still do it.) They decided that taking the slight inefficiency in memory usage, because they have to keep the 'leaked' pages around in actual RAM, is worth it to get predictability in performance.

For what it's worth, I add generous swap to all my personal machines, mostly so that the kernel can offload cold / leaked pages and keep more disk content cached in RAM. (As a secondary reason: I also like to have a generous amount of /tmp space that's backed by swap, if necessary.)

With swap files, instead of swap partitions, it's fairly easy to shrink and grow your swap space, depending on what your needs for free space on your disk are.
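Growing swap with a file really is just a few commands (a sketch; the path and size are illustrative):

```shell
# Create, format and enable a 4G swap file
sudo fallocate -l 4G /swapfile   # on btrfs or older kernels, use dd instead of fallocate
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Shrink later: just disable and delete it
sudo swapoff /swapfile && sudo rm /swapfile
```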

replies(1): >>45667275 #
49. eru ◴[] No.45663275{5}[source]
Swap delays the 'fundamental issue', if you have a leak that keeps growing.

If your problem doesn't keep growing, and you just have more data that programs want to keep in memory than you have RAM, but the actual working set of what's accessed frequently still fits in RAM, then swap perfectly solves this.

Think lots of programs open in the background, or lots of open tabs in your browser, but you only ever rapidly switch between at most a handful at a time. Or you are starting a memory hungry game and you don't want to be bothered with closing all the existing memory hungry programs that idle in the background while you play.

50. eru ◴[] No.45663291{5}[source]
You better not write your API in Python, or any language/library that uses amortised algorithms in the standard (like Rust and C++ do). And let's not mention garbage collection.
replies(1): >>45669082 #
51. sethherr ◴[] No.45663295{5}[source]
I’m asking because I genuinely don’t know - what are “pages” here?
replies(1): >>45663328 #
52. eru ◴[] No.45663312{6}[source]
If you are interested in human consumption, there's "free --human" which decided on useful units by itself. The "--human" switch is also available for "du --human" or "df --human" or "ls -l --human". It's often abbreviated as "-h", but not always, since that also often stands for "--help".
replies(1): >>45667223 #
53. adastra22 ◴[] No.45663328{6}[source]
That’s a fair question. A page is the smallest allocatable unit of RAM, from the OS/kernel perspective. The size is set by the CPU, traditionally 4kB, though larger sizes (16kB on some CPUs, or 2MB+ huge pages) are also in use.

When you call malloc(), the allocator requests big chunks of memory from the OS in units of pages, then divides them up into the smaller, variable-length chunks that satisfy each malloc() request.

You may have heard of “heap” memory vs “stack” memory. The stack of course is the execution/call stack, and heap is called that because the “heap allocator” is the algorithm originally used for keeping track of unused chunks of these pages.

(This is beginner CS stuff so sorry if it came off as patronizing—I assume you’re either not a coder or self-taught, which is fine.)
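To make that concrete, a small sketch (Python, using mmap as a stand-in for what an allocator does underneath):

```python
import mmap

# The page size is fixed by the CPU/OS; 4096 bytes on typical x86-64 Linux.
page = mmap.PAGESIZE
print("page size:", page)
assert page & (page - 1) == 0      # always a power of two

# The smallest thing a process can actually get from the OS is one page,
# even if it only needs a few bytes -- malloc() subdivides from there.
buf = mmap.mmap(-1, page)          # one anonymous page of memory
buf[:5] = b"hello"                 # touching it faults the page in
print(bytes(buf[:5]))
buf.close()
```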

54. kryptiskt ◴[] No.45663354[source]
My work Ubuntu laptop has 40GB of RAM and a very fast NVMe SSD; if it gets under memory pressure it slows to a crawl and is for all practical purposes frozen, swapping wildly, for 15-20 minutes.

So no, my experience with swap isn't that it's invisible with SSD.

replies(4): >>45664006 #>>45664550 #>>45664888 #>>45664991 #
55. eru ◴[] No.45663353{4}[source]
Swap also works really well for desktop workloads. (I guess that's why Apple uses it so heavily on their Macbooks etc.)

With a good amount of swap, you don't have to worry about closing programs. As long as your 'working set' stays smaller than your RAM, your computer stays fast and responsive, regardless of what's open and idling in the background.

replies(1): >>45667191 #
56. eru ◴[] No.45663366{5}[source]
RAM costs money, disk space costs less money.

It's a bit wasteful to provision your computers so that all the cold data lives in expensive RAM.

replies(2): >>45663396 #>>45663411 #
57. eru ◴[] No.45663383{5}[source]
Modern garbage collectors have come a long way.

Even not so modern ones: have you heard of generational garbage collection?

But even in eg Python they introduced 'immortal objects' which the GC knows not to bother with.

replies(1): >>45665528 #
58. eru ◴[] No.45663386{4}[source]
A moving GC should be better at this, because it can compact your memory.
replies(1): >>45663579 #
59. adastra22 ◴[] No.45663396{6}[source]
When building distributed systems, service degradation means you’ll have to provision more systems. Cheaper to provision fewer systems with more RAM.
replies(1): >>45663573 #
60. eru ◴[] No.45663399{3}[source]
HDDs are much, much slower than SSD.

If swapping to SSD is 'extremely slow', what's your term for swapping to HDD?

replies(1): >>45665041 #
61. justsomehnguy ◴[] No.45663409{5}[source]
https://news.ycombinator.com/item?id=45007821

> Doesn't swap just delay the fundamental issue?

The fundamental issue here is that the Linux fanboys literally think that killing a working process (and most of the time the most important process[0]) is a good solution for not solving the fundamental problem of memory allocation in the Linux kernel.

Availability of swap allows you to avoid malloc failure in the rare case your processes request more memory than is physically (or 'physically', heh) present in the system. But in the minds of so-called Linux administrators, if even one byte of swap were used, the system would immediately crawl to a stop and never recover. Why it always should be the worst and most idiotic scenario, instead of the sane 'needed 100MB more, got it (while some shit in memory that hadn't been accessed since boot was swapped out), did what it needed to do and freed that 100MB', is never explained by them.

[0] imagine a dedicated machine for *SQL server - which process would have the most memory usage on that system?

replies(2): >>45663792 #>>45667787 #
62. fluoridation ◴[] No.45663411{6}[source]
>It's a bit wasteful to provision your computers so that all the cold data lives in expensive RAM.

But that's a job applications are already doing. They put data that's being actively worked on in RAM and leave all the rest in storage. Why would you need swap once you can already fit the entire working set in RAM?

replies(3): >>45663566 #>>45663646 #>>45665729 #
63. eru ◴[] No.45663425{4}[source]
The trade-off depends on how your system is set up.

Eg Google used to (and perhaps still does?) run their servers without swap, because they had built fault tolerance in their fleet anyway, so were happier to deal with the occasional crash than with the occasional slowdown.

For your desktop at home, you'd probably rather deal with a slowdown that gives you a chance to close a few programs than just crashing your system. After all, if you are standing physically in front of your computer, you can always just manually hit the reset button, if the slowdown is too agonising.

replies(1): >>45663751 #
64. the8472 ◴[] No.45663460[source]
YMMV. Garbage-collected/pointer-chasing languages suffer more from swapping because they touch more of the heap all the time. AWS suffers more from swap because EBS is ridiculously slow and even their instance-attached NVMe is capped compared to physical NVMe sticks.
65. eru ◴[] No.45663566{7}[source]
Sure, some applications are written to manually do a job that your kernel can already do for you.

In that case, and if you are only running these applications, the need for swap is much less.

replies(1): >>45663689 #
66. eru ◴[] No.45663573{7}[source]
It depends on what you are doing, and how your system behaves.

If you size your RAM and swap right, you get no service degradation, but still get away with using less RAM.

But when I was at Google (about a decade ago), they followed exactly the philosophy you were outlining and disabled swap.

67. cogman10 ◴[] No.45663579{5}[source]
A moving collector has to move to somewhere and, generally by its nature, it's constantly moving data all across the heap. That's what makes it end up touching a lot more memory while also requiring more memory. On minor collections it'll move memory between 2 different locations, and on major collections it'll end up moving the entire old gen.

It's that "touching" of all the pages controlled by the GC that ultimately wrecks swap performance. But also the fact that moving collectors like to hold onto memory, as downsizing is pretty hard to do efficiently.

Non-moving collectors generally end up using C allocators, which are fairly good at avoiding fragmentation. Not perfect and not as fast as a moving collector, but also fast enough for most use cases.

Java's G1 collector would be the worst example of this. It's constantly moving blocks of memory all over the place.

replies(1): >>45664965 #
68. vlovich123 ◴[] No.45663646{7}[source]
Because then you have more active working memory as infrequently used pages are moved to compressed swap and can be used for more page cache or just normal resident memory.

Swap ram by itself would be stupid but no one doing this isn’t also turning on compression.

replies(2): >>45666343 #>>45671937 #
69. wallstop ◴[] No.45663669{6}[source]
Thanks! My other problem was formatting. Just wanted to share that I see 0 swap usage and nowhere near 100% memory usage as a counterpoint.
70. fluoridation ◴[] No.45663689{8}[source]
You mean to tell me most applications you've ever used read the entire file system, loading every file into memory, and rely on the OS to move the unused stuff to swap?
replies(1): >>45664970 #
71. macintux ◴[] No.45663751{5}[source]
That’s very common to distributed systems: much better to have a failed node than a slow node. Slow nodes are often contagious.
72. ssl-3 ◴[] No.45663792{6}[source]
Indeed.

Also: When those processes that haven't been active since boot (and which may never be active again) are swapped out, more system RAM can become available for disk caching to help performance of things that are actively being used.

And that's... that's actually putting RAM to good use, instead of letting it sit idle. That's good.

(As many are always quick to point out: Swap can't fix a perpetual memory leak. But I don't think I've ever seen anyone claim that it could.)

replies(1): >>45664123 #
73. commandersaki ◴[] No.45663947{4}[source]
It doesn’t happen often, and I have a multi-user system with unpredictable workloads. It’s also not about swap filling up, but about giving the pretense the system is operable in a memory-exhausted state, which means the OOM killer doesn’t run, but the system is unresponsive and never recovers.

Without swap oom killer runs and things become responsive.

74. danielheath ◴[] No.45663992{5}[source]
I run a chat server on a small instance; when someone uploads a large image to the chat, the 'thumbnail the image' process would cause the OOM-killer to take out random other processes.

Adding a couple of gb of swap means the image resizing is _slow_, but completes without causing issues.

75. interroboink ◴[] No.45664006{3}[source]
I don't know your exact situation, but be sure you're not mixing up "thrashing" with "using swap". Obviously, thrashing implies swap usage, but not the other way around.
replies(1): >>45664709 #
76. elwebmaster ◴[] No.45664054[source]
What an ignorant and clueless comment. Guess what? Today's disks are NVMe drives which are orders of magnitude faster than the 5400rpm HDDs of the 90s. Today's swap is 90s RAM.
77. qotgalaxy ◴[] No.45664123{7}[source]
What if I care more about the performance of things that aren't being used right now than the things that are? I'm sick of switching to my DAW and having to listen to my drive thrash when I try to play a (say) sampler I had loaded.
replies(2): >>45664906 #>>45664947 #
78. slyall ◴[] No.45664170[source]
My 2 cents is that in a lot of cases swap is being used for unimportant stuff, leaving more RAM for your app. Do a "ps aux" and look at all the RAM used by weird stuff. Good news is those things will be swapped out.

Example on my personal VPS

   $ free -m
                  total        used        free      shared  buff/cache   available
   Mem:            3923        1225         328         217        2369        2185
   Swap:           1535        1335         200
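To see which processes actually own that swapped-out memory, one option (a Linux-specific sketch; VmSwap is reported in kB) is to read each process's status file:

```shell
# List the top swap users by reading VmSwap from /proc/<pid>/status.
# Values are in kB; run as root to see processes owned by other users.
for f in /proc/[0-9]*/status; do
  awk '/^Name:/ {name=$2} /^VmSwap:/ {print $2, name}' "$f" 2>/dev/null
done | sort -rn | head
```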
79. manwe150 ◴[] No.45664264{4}[source]
MemBalancer is a relatively recent paper arguing that having swap allows maximum performance by absorbing small excesses, which avoids needing to over-provision RAM. The kind of GC does not matter, since data spends very little time in that state; on the flip side, most of the time the application has access to twice as much memory to use
80. commandersaki ◴[] No.45664321{4}[source]
The thing is you can survive memory exhaustion if the oom killer can do its job, which it can't many times when there's swap. I guess the topmost response to this thread talks about an earlyoom tool that might alleviate this, but I've never used it, and I don't find swap helpful anyway so there's no need for me to go down this route.
81. ◴[] No.45664389[source]
82. Dylan16807 ◴[] No.45664461[source]
"as soon as you hit swap" is a bad way of looking at things. Looking around at some servers I run, most of them have .5-2GB of swap used despite a bunch of gigabytes of free memory. That data is never or almost never going to be touched, and keeping it in memory would be a waste. On a smaller server that can be a significant waste.

Swap is good to have. The value is limited but real.

Also not having swap doesn't prevent thrashing, it just means that as memory gets completely full you start dropping and re-reading executable code over and over. The solution is the same in both cases, kill programs before performance falls off a cliff. But swap gives you more room before you reach the cliff.

83. webstrand ◴[] No.45664550{3}[source]
I've experimented with no-swap and find the same thing happens. I think the issue is that linux can also evict executable pages (since it can just reload them from disk).

I've had good experience with linux's multi-generation LRU feature, specifically the /sys/kernel/mm/lru_gen/min_ttl_ms feature that triggers OOM-killer when the "working set of the last N ms doesn't fit in memory".

replies(1): >>45668497 #
84. hhh ◴[] No.45664553[source]
Kubernetes supports swap now.

I still don’t use it though.

replies(1): >>45667173 #
85. Dylan16807 ◴[] No.45664560{6}[source]
> The reason I brought up moving collectors is by their nature, they take up a lot more heap space, at least 2x what they need.

If the implementer cares about memory use it won't. There are ways to compact objects that are a lot less memory-intensive than copying the whole graph from A to B and then deleting A.

86. charcircuit ◴[] No.45664646{5}[source]
The problem is freezing the system for hours or more to delay the issue is not worth it. I'd rather a program get killed immediately than having my system locked up for hours before a program gets killed.
87. db48x ◴[] No.45664705[source]
This is not really true of most SSDs. When Linux is really thrashing the swap it’ll be essentially unusable unless the disk is _really_ fast. Fast enough SSDs are available though. Note that when it’s really thrashing the swap the workload is 100% random 4KB reads and writes in equal quantities. Many SSDs have high read speeds and high write speeds but have much worse performance under mixed workloads.

I once used an Intel Optane drive as swap for a job that needed hundreds of gigabytes of ram (in a computer that maxed out at 64 gigs). The latency was so low that even while the task was running the machine was almost perfectly usable; in fact I could almost watch videos without dropping frames at the same time.

replies(3): >>45665615 #>>45668643 #>>45668647 #
88. db48x ◴[] No.45664709{4}[source]
If it’s frozen, or if the mouse suddenly takes seconds to respond to every movement, then it’s not just using some swap. It’s thrashing for sure.
replies(1): >>45667916 #
89. gnosek ◴[] No.45664809{5}[source]
Or you can set the vm.swappiness sysctl to 0.
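For reference, a sketch of how that might look (the persistent-file path is one common convention, not the only one):

```shell
# vm.swappiness controls how eagerly the kernel swaps anonymous pages
# (default 60). 0 makes it maximally reluctant but does NOT disable swap.
cat /proc/sys/vm/swappiness                  # show the current value
sudo sysctl vm.swappiness=0                  # apply until next reboot
echo 'vm.swappiness = 0' | \
  sudo tee /etc/sysctl.d/99-swappiness.conf  # persist across reboots
```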
90. omgwtfbyobbq ◴[] No.45664888{3}[source]
It's seldom invisible, but in my experience how visible it is depends on the size/modularity/performance/etc of what's being swapped and the underlying hardware.

On my 8gb M1 Mac, I can have a ton of tabs open and it'll swap with minimal slowdown. On the other hand, running a 4k external display and a small (4gb) llm is at best horrible and will sometimes require a hard reset.

I've seen similar with different combinations of software/hardware.

91. db48x ◴[] No.45664906{8}[source]
Sounds like you just need more memory.
92. ssl-3 ◴[] No.45664947{8}[source]
Just set swappiness to [say] 5, 2, 1, or even 0, and move on with your project with a system that is more reluctant to go into swap.

And maybe plan on getting more RAM.

(It's your system. You're allowed to tune it to fit your usage.)

93. eru ◴[] No.45664965{6}[source]
> It's that "touching" of all the pages controlled by the GC that ultimately wrecks swap performance. But also the fact that moving collector like to hold onto memory as downsizing is pretty hard to do efficiently.

The memory that's now not in use, but still held onto, can be swapped out.

94. eru ◴[] No.45664970{9}[source]
No? What makes you think so?
replies(1): >>45665034 #
95. baq ◴[] No.45664991{3}[source]
Linux being absolute dogshit if it’s under any sort of memory pressure is the reason, not swap or no swap. Modern systems would be much better off tweaking dirty bytes/ratios, but fundamentally the kernel needs to be dragged into the XXI century sometime.
replies(1): >>45668522 #
96. baq ◴[] No.45665015{5}[source]
If you’re writing services in anything higher level than C, you’re leaking something somewhere that you probably have no idea exists and the runtime won’t ever touch again.
97. fluoridation ◴[] No.45665034{10}[source]
Then what do you mean, some applications organize hot and cold data in RAM and storage respectively? Just about every application does it.
replies(1): >>45666351 #
98. baq ◴[] No.45665041{4}[source]
‘Hard reboot’ (not OP)
99. masklinn ◴[] No.45665210{4}[source]
Python’s not a mover but the cycle breaker will walk through every object in the VM.

Also since the refcounts are inline, adding a reference to a cold object will update that object. IIRC Swift has the latter issue as well (unless the heap object’s RC was moved to the side table).

100. winrid ◴[] No.45665528{6}[source]
It doesn't matter. The GC does not know what heap allocations are in memory vs swap, and since you don't write applications thinking about that, running a VM with a moving GC on swap is a bad idea.
replies(1): >>45666328 #
101. fulafel ◴[] No.45665615{3}[source]
> Note that when it’s really thrashing the swap the workload is 100% random 4KB reads and writes in equal quantities.

The free memory won't go below a configurable percentage, and the contiguous I/O algorithms of the swap code and I/O stack can still do their work.

replies(1): >>45668135 #
102. akvadrako ◴[] No.45665729{7}[source]
This subthread is about a poster's claim above that every page would be in RAM if you have enough, "hot or not", not just the working set.
103. goodpoint ◴[] No.45666199[source]
No, swap is absolutely fine if used correctly.
104. eru ◴[] No.45666328{7}[source]
A moving GC can make sure to separate hot and cold data, and then rely on the kernel to keep hot data in RAM.
replies(1): >>45674983 #
105. eru ◴[] No.45666343{8}[source]
> Swap ram by itself would be stupid but no one doing this isn’t also turning on compression.

I'm not sure what you mean here? Swapping out infrequently accesses pages to disk to make space for more disk cache makes sense with our without compression.

replies(1): >>45670340 #
106. eru ◴[] No.45666351{11}[source]
A silly but realistic example: lots of applications leak a bit of memory here and there.

Almost by definition, that leaked memory is never accessed again, so it's very cold. But the applications don't put this on disk by themselves. (If the app's developers knew about which specific bit is leaking, they'd rather fix the leak than write it to disk.)

replies(1): >>45667356 #
107. KaiserPro ◴[] No.45666448{5}[source]
> Running out of memory kills performance. It is better to kill the VM and restart it so that any active VM remains low latency.

Right, you seem to be not understanding what I'm getting at.

Memory exhaustion is bad, regardless of swap or not.

Swap gets you a better performing machine because you can swap out shit to disk and use that ram for vfs cache.

the whole "low latency" and "I want my VM to die quicker" is tacitly saying that you haven't right sized your instances, your programme is shit, and you don't have decent monitoring.

Like if you're hovering on 90% ram used, then your machine is too small, unless you have decent bounds/cgroups to enforce memory limits.

108. geokon ◴[] No.45666538[source]
My understanding was that if you're doing random access - ZRAM has near-zero overhead. While data is being fetched from RAM, you have enough cycles to decompress blocks.

Would love to be corrected if I'm wrong
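For anyone wanting to try it, a minimal zram-swap setup sketch (assumes root, a kernel with the zram module, and zstd support; the 4G size is an arbitrary example):

```shell
# Create a compressed swap device in RAM and give it higher priority
# than any disk-backed swap, so it gets used first.
modprobe zram num_devices=1
echo zstd > /sys/block/zram0/comp_algorithm   # compression algorithm
echo 4G   > /sys/block/zram0/disksize         # uncompressed capacity
mkswap /dev/zram0
swapon -p 100 /dev/zram0                      # priority 100 beats disk swap
```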

109. bayindirh ◴[] No.45666977{5}[source]
You can request things from the kernel, like pinning cores or telling it not to swap your pages out (see mlockall() / madvise()).

The easiest way affecting everything running on the system might not be the best or even the correct way to do things.

There's always more than one way to solve a problem.

Reading the Full Manual (TM) is important.

110. bayindirh ◴[] No.45667173{3}[source]
Good to know. Thanks!
111. bayindirh ◴[] No.45667191{5}[source]
Yes, this is my experience, too. However, I still tend to observe my memory usage even if I have plenty of free RAM.

Old habits die hard, but I'm not complaining about this one. :)

112. bayindirh ◴[] No.45667197{5}[source]
> I DON’T WANT THE KERNEL PRIORITIZING CACHE OVER NRU PAGES.

Then tell the Kernel about it. Don't remove a feature which might benefit other things running on your system.

113. bayindirh ◴[] No.45667223{7}[source]
Thanks, I generally use free -m since my brain can unconsciously parse it after all these years. ls -lh is one of my learned commands though. I type it in automatically when analyzing things.

ls -lrt, ls -lSh and ls -lShr are also very common in my daily use, depending on what I'm doing.

114. bayindirh ◴[] No.45667245{5}[source]
Thanks, I'll keep that in mind if I start to use EC2 for workloads.

However, from my experience, normal (eviction based) usage of SWAP doesn't impact the life of an SSD in a measurable manner. My 256GB system SSD (of my desktop system) shows 78% life remaining after 4 years of power on hours, which also served as /home for at least half of its life.

replies(1): >>45674697 #
115. Hendrikto ◴[] No.45667250[source]
> This is well known

But also false. Swap is there so anonymous pages can be evicted. Not as a “slow overflow for RAM”, as a lot of people still believe.

By disabling swap you can actually *increase* thrashing, because the kernel is more limited in what it can do with the virtual memory.

116. bayindirh ◴[] No.45667275{3}[source]
> Yes, and you can observe that even in your desktop...

Yup, that part of my comment was the culmination of using Linux desktops for the last two decades. :)

> I wouldn't be so quick. Google ran their servers without swap for ages.

If you're designing this from get go and planning accordingly, it doesn't fit into my definition of eff it, we ball, but let's try this and see whether we can make it work.

> With swap files, instead of swap partitions,...

I'm a graybeard. I eyeball a swap partition size while installing the OS, and just let it be. Being mindful and having good amount of RAM means that SWAP acts as a eviction area for OS first, and as an escape ramp second, in very rare cases.

--

Sent from my desktop.

117. dwattttt ◴[] No.45667278{5}[source]
If you're getting this far into the details of your memory usage, shouldn't you use mlock to actually lock in the parts of memory you need to stay there? Then you get to have three tiers of priority: pages you never want swapped, cache, then pages that haven't been used recently.
replies(1): >>45669131 #
118. fluoridation ◴[] No.45667356{12}[source]
That's just recognizing that there's a spectrum of hotness to data. But the question remains: if all the data that the application wants to keep in memory does fit in memory, why do you need swap?
119. ta1243 ◴[] No.45667758{4}[source]
The second by a long shot.

Detecting things are down is far easier than detecting things are slow.

I'd rather that oom started killing things though than a kernel panic or a slow system. Ideally the thing that is leaking, but if not the process using the most memory (and yes I know that "using" is tricky)

120. ta1243 ◴[] No.45667787{6}[source]
If I've got 128G of ram and need 100M more to get it, something is wrong.

What if I've got 64G of ram and 64G of swap and need the same amount of memory?

replies(1): >>45674360 #
121. ta1243 ◴[] No.45667833{6}[source]
So that 2M of used swap is completely irrelevant. Same on my laptop

               total        used        free      shared  buff/cache   available
    Mem:           31989       11350        4474        2459       16164       19708
    Swap:           6047          20        6027
My syslog server on the other hand (which does a ton of stuff on disk) does use swap

    Mem:            1919         333          75           0        1511        1403
    Swap:           2047         803        1244
With uptime of 235 days.

If I were to increase this to 8G of ram instead of 2G, but for arguments sake had to have no swap as the tradeoff, would that be better or worse. Swap fans say worse.

replies(1): >>45667951 #
122. ta1243 ◴[] No.45667848{4}[source]
> A long running Linux system uses 100% of its RAM.

How about this server:

             total       used       free     shared    buffers     cached
  Mem:          8106       7646        459          0        149       6815
  -/+ buffers/cache:        681       7424
  Swap:         6228         25       6202
Uptime of 2,105 days - nearly 6 years.

How long does the server have to run to reach 100% of ram?

replies(1): >>45667890 #
123. bayindirh ◴[] No.45667890{5}[source]
You already maxed it out from the kernel's PoV: 8GB of RAM, where 6.8GB is cache, ~700MB is resident, and 459MB is free because, I assume, the kernel wants to keep some space around for fast allocations.

25MB of swap use seems normal for a server which doesn't juggle many tasks, but works on one.

replies(1): >>45672180 #
124. pdimitar ◴[] No.45667916{5}[source]
I get it that the distinction is real but nobody using the machine cares at this point. It must not happen and if disabling swap removes it then people will disable swap.
125. bayindirh ◴[] No.45667951{7}[source]
> So that 2M of used swap is completely irrelevant.

As I noted somewhere, my other system has 2,5GB of SWAP allocated over 13 days. That system is a desktop system and juggles tons of things everyday.

I have another server with tons of RAM, and the Kernel decided not to evict anything to SWAP (yet).

> If I were to increase this to 8G of ram instead of 2G, but for arguments sake had to have no swap as the tradeoff, would that be better or worse. Swap fans say worse.

I'm not a SWAP fan, but I support its use. On the other hand I won't say it'd be worse, but it'd be overkill for that server. Maybe I can try 4, but that doesn't seem to be necessary if these numbers are stable over time.

126. pdimitar ◴[] No.45667962{4}[source]
I am not sure exactly what your point is. Is it "hey, it can be much worse"? If so, not a very interesting argument if your machine crawls to a halt.
127. db48x ◴[] No.45668135{4}[source]
That may be the intention, but you shouldn’t rely on it. In practice the average IO size is, or at least was, almost always 4KB.

Here’s a screenshot from atop while the task was running: <https://db48x.net/temp/Screenshot%20from%202019-11-19%2023-4...>. Note the number of page faults, the swin and swout (swap in and swap out) numbers, and the disk activity on nvme0n1. Swap in is 150k, and the number of disk reads was 116k with an average size of 6KB. Swap out was 150k with 150k disk writes of 4KB. It’s also reading from sdh at a fair clip (though not as fast as I wanted!)

<https://db48x.net/temp/Screenshot%20from%202019-12-09%2011-4...> is interesting because it actually shows 24KB average write size. But notice that swout is 47k but there were actually 57k writes. That’s because the program I was testing had to write data out to disk to be useful, and I had it going to a different partition on the same nvme disk. Notice the high queue depth; this was a very large serial write. The swap activity was still all 4KB random IO.

replies(1): >>45678387 #
128. ValdikSS ◴[] No.45668497{4}[source]

    Enables Multi-Gen LRU (improved page reclaim and caching policy).
    Prevents thrashing, improves loading speeds under low ram conditions.
    Requires kernel 6.1+.
    Has dramatic effect especially on slower HDDs.
    For slower HDDs, consider 1000 instead of 300 for min_ttl_ms.

    sudo tee /etc/tmpfiles.d/mglru.conf <<EOF
    w-      /sys/kernel/mm/lru_gen/enabled          -       -       -       -       y
    w-      /sys/kernel/mm/lru_gen/min_ttl_ms       -       -       -       -       300
    EOF
129. ValdikSS ◴[] No.45668522{4}[source]
It's kind of solved since kernel 6.1 with MGLRU, see above.

Dirty buffer should also be tuned (limited), absolutely. Default is 20% of RAM, (with 5 second writeback and 30 second expire intervals), which is COMPLETELY insane. I limit it to 64 MB max usually, with 1 second writeback and 3 second expire intervals.
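A sketch of those limits as a sysctl drop-in (the 32 MB background threshold is my assumption; the 64 MB cap and 1 s / 3 s intervals are the values mentioned above):

```shell
cat <<'EOF' | sudo tee /etc/sysctl.d/99-dirty.conf
# Start background writeback at 32 MB of dirty pages; block writers at 64 MB.
vm.dirty_background_bytes = 33554432
vm.dirty_bytes = 67108864
# Wake the flusher every 1 s; consider dirty pages expired after 3 s.
vm.dirty_writeback_centisecs = 100
vm.dirty_expire_centisecs = 300
EOF
sudo sysctl --system   # apply now
```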

130. AlexandrB ◴[] No.45668533[source]
> Maybe back in the 90s, it was okay to wait 2-3 seconds for a button click, but today we just assume the thing is dead and reboot.

My experience is the exact opposite. If anything 2-3 second button clicks are more common than ever today since everything has to make a roundtrip to a server somewhere whereas in the 90s 2-3s button click meant your computer was about to BSOD.

Edit: Apple recently brought "2-3s to open tab" technology to Safari[1].

[1] https://old.reddit.com/r/MacOS/comments/1nm534e/sluggish_saf...

131. ◴[] No.45668643{3}[source]
132. ValdikSS ◴[] No.45668647{3}[source]
It's fixed since Kernel 6.1 + MGLRU, see above, or read this: https://notes.valdikss.org.ru/linux-for-old-pc-from-2007/en/...
replies(1): >>45670562 #
133. pdimitar ◴[] No.45668771{4}[source]
I don't count crawling to a halt as a working machine. Plus it depends. Back in the day I had computers that got blocked for 30-ish seconds which was annoying but gave you the window of opportunity to go kill the offending program. But then you had some that we left, out of curiosity, to work throughout the entire workday and they never recovered.

So most of the time I'd prefer option 3: the OOM killer to reap a few offending programs and let me handle restarting them.

134. pdimitar ◴[] No.45669082{6}[source]
Huh? Could you please clarify wrt to Rust and C++? Can't they use another allocator if needed? Or that's not the problem?
135. pdimitar ◴[] No.45669131{6}[source]
Can mlock be instructed to f.ex. "never swap pages from this pid"?
replies(1): >>45669173 #
136. bayindirh ◴[] No.45669173{7}[source]
The application requests this itself from the Kernel. See https://man7.org/linux/man-pages/man2/mlock.2.html
replies(1): >>45674932 #
137. vlovich123 ◴[] No.45670340{9}[source]
Swapping out to RAM without compression is stupid - then you’re just shuffling pages around in memory. Compression is key so that you free up space. Swap to disk is separate.
138. webstrand ◴[] No.45670562{4}[source]
Do you know how the le9 patch compares to mg_lru? The latter applies to all memory, not just files as far as I can tell. The former might still be useful in preventing eager OOM while still keeping executable file-backed pages in memory?
replies(1): >>45675774 #
139. fluoridation ◴[] No.45671937{8}[source]
>Because then you have more active working memory as infrequently used pages are moved to compressed swap and can be used for more page cache or just normal resident memory.

Uhh... A VMM that swaps out to disk an allocated page to make room for more disk cache would be braindead. The process has allocated that memory to use it. The kernel doesn't have enough information to deem disk cache a higher priority. The only thing that should cause it to be swapped out is either another process or the kernel requesting memory.

replies(1): >>45677588 #
140. ta1243 ◴[] No.45672180{6}[source]
So not 100% of ram, less than 95%
141. justsomehnguy ◴[] No.45674360{7}[source]
"Why it always should be the worst and the most idiotic scenario "

And no, if you need 100MB more then it's literally not important how much RAM do you have. You just needed 100MB more this time.

142. vasco ◴[] No.45674697{6}[source]
You don't care about life of any hardware in the cloud, that doesn't really matter either unless you work for the cloud provider in their datacenter teams.
replies(1): >>45679358 #
143. dwattttt ◴[] No.45674932{8}[source]
From the link, mlockall with MCL_CURRENT | MCL_FUTURE

> Lock all pages which are currently mapped into the address space of the process.

> Lock all pages which will become mapped into the address space of the process in the future.

144. winrid ◴[] No.45674983{8}[source]
Yeah, but in practice I'm not sure that really works well with any GCs today? I've tried this with modern JVM and Node VMs; it always ended up with random multi-second lockups. Not worth the time.
145. ValdikSS ◴[] No.45675774{5}[source]
le9 is a 'simple' method to keep a fixed amount of the page cache. It works exceptionally well for what it is, but it requires manual tuning of the amount of cache in MB.

MGLRU is basically a smarter version of the already existing eviction algorithm, which evicts (or keeps) both page cache and anon pages, and combined with min_ttl_ms it tries to keep the current active page cache for a specified amount of time. It still takes into account swappiness and does not operate on a fixed amount of page cache, unlike le9.

Both are effective in thrashing prevention, and both are different. MGLRU, especially with higher min_ttl_ms, could trigger the OOM killer more frequently than you'd like. I find le9 more effective for desktop use on old low-end machines, but that's only because it just keeps the (larger amounts of) page cache. It's not very preferable for embedded systems, for example.

146. vlovich123 ◴[] No.45677588{9}[source]
> A VMM that swaps out to disk an allocated page to make room for more disk cache would be braindead

Claiming any decision is “brain dead” in something as heuristic heavy and impossible to compute optimally as resident memory pages is quite the statement to make; this is a form of the knapsack problem (NP-complete at least) with the added benefit of time where the items are needed in some specific indeterminate order in the future and there’s a whole bunch of different workloads and workload permutations that alter this.

To drive this point home in case you disagree, what’s dumber? Swapping out to disk an allocated page (from the kernel’s perspective) that’s just sitting in the free list of the userspace allocator for that process or a page of some frequently accessed page of data?

Now, I agree that VMMs may not do this because it’s difficult to come up with these kinds of scenarios that don’t penalize the general case, more importantly than performance this has to be a mechanism that is explainable to others and understandable for them. But claiming it’s a braindead option to even consider is IMHO a bridge too far.

147. fulafel ◴[] No.45678387{5}[source]
That's surprising. Do you know what your application memory access pattern is like, is it really this random and the single page io is working along its grain, or is the page clustering, io readahead etc just MIA?
replies(1): >>45690608 #
148. bayindirh ◴[] No.45679358{7}[source]
Yes, but I care about hardware life on my own personal systems and infrastructure I manage, so... :)
149. db48x ◴[] No.45690608{6}[source]
I didn’t delve very deep into it, but the program was written in Go. At this point in the lifecycle of the program we had optimized it quite a bit, removing all the inefficiencies that we could. It was now spending around two thirds of its cpu cycles on garbage collection. It had this ridiculously large heap that was still growing, but hardly any of it was actually garbage.

I rewrote a slice of the program in Rust with quite promising results, but by that time there wasn’t really any demand left. You see, one of the many uses of Reposurgeon <http://www.catb.org/esr/reposurgeon/> is to convert SVN repositories into Git repositories. These performance results were taken while reposurgeon was running on a dump of the GCC source code repository. At the time this was the single largest open source SVN repository left in the world with 287k commits. Now that it’s been converted to a Git repository it’s unlikely that future Reposurgeon users will have the same problem.

Also, someone pointed out that MG-LRU <https://docs.kernel.org/admin-guide/mm/multigen_lru.html> might help by increasing the block size of the reads and writes. It was introduced a year or more after I took these screenshots, so I can’t easily verify that.