804 points jryio | 23 comments
speedgoose ◴[] No.45661785[source]
Looking at the htop screenshot, I notice the lack of swap. You may want to enable earlyoom, so your whole server doesn't go down when a service goes bananas. The Linux Kernel OOM killer is often a bit too late to trigger.

You can also enable zram to compress RAM, so you can over-provision like the pros do. A lot of long-running software leaks memory that compresses pretty well.

Here is how I do it on my Hetzner bare-metal servers using Ansible: https://gist.github.com/fungiboletus/794a265cc186e79cd5eb2fe... It also works on VMs.
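[For those not using Ansible, a roughly equivalent manual setup on a systemd distro might look like the following. This is a sketch under the assumption that the `zram-generator` and `earlyoom` packages are available; it is not the contents of the linked gist.]

```
# /etc/systemd/zram-generator.conf -- compressed swap device backed by RAM
[zram0]
zram-size = ram / 2
compression-algorithm = zstd
```

[Then `systemctl daemon-reload && systemctl start systemd-zram-setup@zram0.service` to bring up the zram swap, and `systemctl enable --now earlyoom` so the userspace OOM killer runs before the kernel's kicks in.]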

replies(15): >>45661833 #>>45662183 #>>45662569 #>>45662628 #>>45662841 #>>45662895 #>>45663091 #>>45664508 #>>45665044 #>>45665086 #>>45665226 #>>45666389 #>>45666833 #>>45673327 #>>45677907 #
levkk ◴[] No.45662183[source]
Yeah, no way. As soon as you hit swap, _most_ apps are going to have a bad, bad time. This is well known, so much so that all EC2 instances in AWS disable it by default. Sure, they want to sell you more RAM, but it's also just true that swap doesn't work for today's expectations.

Maybe back in the 90s, it was okay to wait 2-3 seconds for a button click, but today we just assume the thing is dead and reboot.

replies(16): >>45662314 #>>45662349 #>>45662398 #>>45662411 #>>45662419 #>>45662472 #>>45662588 #>>45663055 #>>45663460 #>>45664054 #>>45664170 #>>45664389 #>>45664461 #>>45666199 #>>45667250 #>>45668533 #
bayindirh ◴[] No.45662411[source]
This is a wrong belief, because a) SSDs make swap almost invisible, so you can keep that escape ramp in case something goes wrong, and b) swap is no longer just the space that RAM overflows into.

In the age of microservices and cattle servers, reboot/reinstall might look cheap, but in the long run it is not. A long-running server, cattle or not, is always a better solution: especially with some excess RAM, the server "warms up" with all the hot data cached and becomes a low-latency unit in your fleet, given that you pay the required attention to your software development and service configuration.

Secondly, the kernel swaps out unused pages to swap space, relieving pressure on RAM. So swap is often used even if you fill only 1% of your RAM. This allows more hot data to be cached, giving better resource utilization and performance in the long run.

So "eff it, we ball" is never a good system administration strategy, even if everything is ephemeral and can be rebooted in three seconds.

Sure, some things like Kubernetes force "no swap, period" policies, because they kill pods when pressure exceeds some threshold, but for more traditional setups swap is still valuable.

replies(8): >>45662537 #>>45662599 #>>45662646 #>>45662687 #>>45663237 #>>45663354 #>>45664553 #>>45664705 #
adastra22 ◴[] No.45662646[source]
What pressure? If your ram is underutilized, what pressure are you talking about?

If the slowest drive on the machine is the SSD, how does caching to swap help?

replies(2): >>45662707 #>>45662734 #
1. bayindirh ◴[] No.45662707[source]
A long-running Linux system uses 100% of its RAM. Every byte not used by applications will be used as disk cache, provided you read more data than your total RAM amount.

This cache is evictable, but it will be there eventually.

In the old days, Linux didn't touch unused pages in RAM as long as there was no memory pressure, but now it swaps out pages that haven't been used for a long time. This allows more cache space in RAM.

> how does caching to swap help?

I think I failed to convey what I tried to say. Let me retry:

The kernel doesn't cache to the SSD. It swaps out unused (not recently accessed) but unevictable pages, assuming that these pages will stay stale for a very long time, which frees more RAM to be used as cache.

When I look at my desktop system, I see that in 12 days the kernel has moved 2592 MB of my RAM to swap despite there being ~20 GB of free space; ~15 GB of that free space is used as disk cache.

So, to gain 2.5 GB more disk cache, the kernel moved 2592 MB of non-accessed pages to swap.

replies(3): >>45662776 #>>45663196 #>>45667848 #
2. wallstop ◴[] No.45662776[source]
Edit:

    wallstop@fridge:~$ free -m
                   total        used        free      shared  buff/cache   available
    Mem:           15838        9627        3939          26        2637        6210
    Swap:           4095           0        4095


    wallstop@fridge:~$ uptime

    00:43:54 up 37 days, 23:24,  1 user,  load average: 0.00, 0.00, 0.00
replies(1): >>45662870 #
3. bayindirh ◴[] No.45662870[source]
The command you want to use is "free -m".

This is from another system I have close:

                   total        used        free      shared  buff/cache   available
    Mem:           31881        1423        1042          10       29884       30457
    Swap:            976           2         974
2 MB of swap used, 1423 MB of RAM used, ~29 GB of cache, 1042 MB free; 32 GB of RAM in total.
replies(3): >>45663312 #>>45663669 #>>45667833 #
4. adastra22 ◴[] No.45663196[source]
Yes, and if I am writing an API service, for example, I don’t want to suddenly add latency because I hit pages that have been swapped out. I want guarantees about my API call latency variance, at least when the server isn’t overloaded.

I DON’T WANT THE KERNEL PRIORITIZING CACHE OVER NRU PAGES.

The easiest way to do this is to disable swap.

replies(6): >>45663291 #>>45663295 #>>45664809 #>>45665015 #>>45667197 #>>45667278 #
5. eru ◴[] No.45663291[source]
You'd better not write your API in Python, or in any language/library that uses amortised algorithms in its standard library (as Rust and C++ do). And let's not mention garbage collection.
replies(1): >>45669082 #
6. sethherr ◴[] No.45663295[source]
I’m asking because I genuinely don’t know - what are “pages” here?
replies(1): >>45663328 #
7. eru ◴[] No.45663312{3}[source]
If you're interested in human consumption, there's "free --human", which decides on useful units by itself. The "--human" switch is also available as "du --human", "df --human", and "ls -l --human". It's often abbreviated as "-h", but not always, since that also often stands for "--help".
replies(1): >>45667223 #
8. adastra22 ◴[] No.45663328{3}[source]
That’s a fair question. A page is the smallest allocatable unit of RAM from the OS/kernel perspective. The size is set by the CPU, traditionally 4 kB, though larger sizes (such as 16 kB on some ARM systems, or 2 MB/1 GB huge pages) are also in use.

When you call malloc(), the allocator requests big chunks of memory from the OS in units of pages. It then divides them up into smaller, variable-length chunks to satisfy each malloc() request.

You may have heard of “heap” memory vs “stack” memory. The stack of course is the execution/call stack, and the heap is called that because the “heap allocator” is the algorithm originally used for keeping track of unused chunks of these pages.

(This is beginner CS stuff so sorry if it came off as patronizing—I assume you’re either not a coder or self-taught, which is fine.)

9. wallstop ◴[] No.45663669{3}[source]
Thanks! My other problem was formatting. Just wanted to share that I see 0 swap usage and nowhere near 100% memory usage as a counterpoint.
10. gnosek ◴[] No.45664809[source]
Or you can set the vm.swappiness sysctl to 0.
11. baq ◴[] No.45665015[source]
If you’re writing services in anything higher level than C you’re leaking something somewhere that you probably have no idea exists and the runtime won’t ever touch again.
12. bayindirh ◴[] No.45667197[source]
> I DON’T WANT THE KERNEL PRIORITIZING CACHE OVER NRU PAGES.

Then tell the kernel about it (via mlock(2), for example). Don't remove a feature which might benefit other things running on your system.

13. bayindirh ◴[] No.45667223{4}[source]
Thanks, I generally use free -m since my brain can unconsciously parse it after all these years. ls -lh is one of my learned commands though. I type it in automatically when analyzing things.

ls -lrt, ls -lSh and ls -lShr are also very common in my daily use, depending on what I'm doing.

14. dwattttt ◴[] No.45667278[source]
If you're getting this far into the details of your memory usage, shouldn't you use mlock to actually lock in the parts of memory you need to stay there? Then you get to have three tiers of priority: pages you never want swapped, cache, then pages that haven't been used recently.
replies(1): >>45669131 #
15. ta1243 ◴[] No.45667833{3}[source]
So that 2M of used swap is completely irrelevant. Same on my laptop

               total        used        free      shared  buff/cache   available
    Mem:           31989       11350        4474        2459       16164       19708
    Swap:           6047          20        6027
My syslog server on the other hand (which does a ton of stuff on disk) does use swap

    Mem:            1919         333          75           0        1511        1403
    Swap:           2047         803        1244
With uptime of 235 days.

If I were to increase this to 8G of RAM instead of 2G, but for argument's sake had to have no swap as the tradeoff, would that be better or worse? Swap fans say worse.

replies(1): >>45667951 #
16. ta1243 ◴[] No.45667848[source]
> A long running Linux system uses 100% of its RAM.

How about this server:

             total       used       free     shared    buffers     cached
  Mem:          8106       7646        459          0        149       6815
  -/+ buffers/cache:        681       7424
  Swap:         6228         25       6202
Uptime of 2,105 days - nearly 6 years.

How long does the server have to run to reach 100% of ram?

replies(1): >>45667890 #
17. bayindirh ◴[] No.45667890[source]
You already maxed it from the kernel's PoV: 8 GB of RAM, where 6.8 GB is cache, ~700 MB is resident, and 459 MB is free, I assume because the kernel wants to keep some free space around to allocate from quickly.

25 MB of swap use seems normal for a server which doesn't juggle many tasks but works steadily on one.

replies(1): >>45672180 #
18. bayindirh ◴[] No.45667951{4}[source]
> So that 2M of used swap is completely irrelevant.

As I noted elsewhere, my other system has 2.5 GB of swap allocated over 13 days. That system is a desktop and juggles tons of things every day.

I have another server with tons of RAM, and the kernel has decided not to evict anything to swap (yet).

> If I were to increase this to 8G of ram instead of 2G, but for arguments sake had to have no swap as the tradeoff, would that be better or worse. Swap fans say worse.

I'm not a swap fan, but I support its use. I won't say it'd be worse, but it would be overkill for that server. Maybe I'd try 4 GB, but that doesn't seem necessary if these numbers are stable over time.

19. pdimitar ◴[] No.45669082{3}[source]
Huh? Could you please clarify wrt Rust and C++? Can't they use another allocator if needed? Or is that not the problem?
20. pdimitar ◴[] No.45669131{3}[source]
Can mlock be instructed to f.ex. "never swap pages from this pid"?
replies(1): >>45669173 #
21. bayindirh ◴[] No.45669173{4}[source]
The application requests this itself from the Kernel. See https://man7.org/linux/man-pages/man2/mlock.2.html
replies(1): >>45674932 #
22. ta1243 ◴[] No.45672180{3}[source]
So not 100% of RAM; less than 95%.
23. dwattttt ◴[] No.45674932{5}[source]
From the link, mlockall with MCL_CURRENT | MCL_FUTURE

> Lock all pages which are currently mapped into the address space of the process.

> Lock all pages which will become mapped into the address space of the process in the future.