
804 points jryio | 10 comments
speedgoose No.45661785
Looking at the htop screenshot, I notice the lack of swap. You may want to enable earlyoom so your whole server doesn't go down when a service goes bananas. The Linux kernel OOM killer often triggers a bit too late.
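
A minimal sketch of the earlyoom part, assuming a Debian/Ubuntu-style host; package and unit names may differ on other distros:

    # install and enable the earlyoom daemon
    sudo apt install earlyoom
    sudo systemctl enable --now earlyoom
    # confirm it is running and watching memory/swap levels
    systemctl status earlyoom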

You can also enable zram to compress RAM, so you can over-provision like the pros. A lot of long-running software leaks memory that compresses pretty well.
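
And a sketch of the zram part using systemd's zram-generator, as one possible approach (this is not the linked Ansible playbook; the package name and defaults vary by distro):

    # half of RAM as a zstd-compressed swap device, declared in zram-generator.conf
    sudo tee /etc/systemd/zram-generator.conf <<'EOF'
    [zram0]
    zram-size = ram / 2
    compression-algorithm = zstd
    EOF
    # generators run on daemon-reload; then start the generated unit
    sudo systemctl daemon-reload
    sudo systemctl start systemd-zram-setup@zram0.service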

Here is how I do it on my Hetzner bare-metal servers using Ansible: https://gist.github.com/fungiboletus/794a265cc186e79cd5eb2fe... It also works on VMs.

1. cactusplant7374 No.45661833
What's the performance hit from compressing ram?
2. speedgoose No.45661888
I haven’t measured it scientifically, but you don’t compress the whole RAM. It is more about reserving a part of the RAM to use as very fast swap.

For an algorithm that actively uses the whole memory, that’s a terrible idea.
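
Concretely, once such a zram device is set up as swap it just shows up as another, RAM-backed, swap device; a couple of standard commands to see it (assuming the device is zram0):

    swapon --show   # lists the zram device alongside any disk swap, with its priority
    zramctl         # shows compressed vs. uncompressed data size for the device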

3. YouAreWRONGtoo No.45661893
It's sometimes not a hit at all, because CPUs have caches and memory bandwidth, rather than compute, is often the limiting factor.
4. sokoloff No.45661932
> It is more about reserving a part of the ram to have very fast swap.

I understand all of those words, but none of the meaning. Why would I reserve RAM in order to put fast swap on it?

5. vlovich123 No.45662030
Swapping to disk goes through a relatively small pipe (usually something like 10x less bandwidth than RAM). So instead of paying the cost of paging out to disk immediately, you compress pages and store them in a dedicated RAM region that serves as compressed swap.

This has a number of benefits. In practice more “active” space is freed up, because it is the unused pages that get compressed, and they are often very compressible. Often that is application memory that is still reserved within the application's address space but sits in the allocator's free lists, especially if the allocator zeroes those pages in the background; but even active application memory compresses well (e.g. with a browser, a lot of the memory is probably duplicated many times across processes). So for a usually invisible cost you free up more system RAM. Additionally, the overhead of this swap is typically not much more than a memcpy even with compression, which means you get deduplication essentially for free, and if a page was compressed erroneously (the data is still needed) paging it back in is relatively cheap.

It also plays really well with disk swap, since the least frequently used pages of that compressed swap can be flushed to disk, leaving more space in the compressed RAM region for additional pages. And since you’re flushing/retrieving compressed pages to/from disk, you reduce writes to the SSD (longevity) and reduce read/write volume (less overhead than naive direct swap to disk).

Basically, if you think of it as tiered memory, you’ve got registers, L1 cache, L2 cache, L3 cache, normal RAM, compressed swap RAM, and disk swap: it’s an extra intermediate tier that makes the system more efficient.
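
A rough way to watch that intermediate tier on a live system, assuming a zram0 device (sysfs paths as documented for the zram block driver):

    # how much data the compressed tier holds vs. how much RAM it actually costs
    zramctl
    cat /sys/block/zram0/mm_stat    # orig_data_size, compr_data_size, mem_used_total, ...
    # how eagerly the kernel pushes pages towards swap in the first place
    cat /proc/sys/vm/swappiness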

6. waynesonfire No.45662040
> zram, formerly called compcache, is a Linux kernel module for creating a compressed block device in RAM, i.e. a RAM disk with on-the-fly disk compression. The block device created with zram can then be used for swap or as a general-purpose RAM disk

To clarify OP's representation of the tool: it compresses swap space, not resident RAM. Outside of niche use cases, compressing swap has little overall utility.

7. aidenn0 No.45662060
Depends on the algorithm (and how much CPU is in use); if you have a spare CPU, the faster algorithms can more-or-less keep up with your memory bandwidth, making the overhead negligible.

And of course the overhead is zero when you don't page out to swap at all.
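
For reference, the available compressors can be read straight from sysfs; the algorithm has to be picked while the zram device is uninitialized (or after a reset):

    # list supported compressors; the one in [brackets] is currently active
    cat /sys/block/zram0/comp_algorithm
    # trade density for speed (or vice versa) before setting disksize
    echo lz4 | sudo tee /sys/block/zram0/comp_algorithm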

8. coppsilgold No.45663605
Incorrect, with zram you swap ram to compressed ram.

It has the benefit of absorbing memory leaks (which for whatever reason compress really well) and compressing stale memory pages.

Under actual memory pressure performance will degrade. But in many circumstances where your powerful CPU is not fully utilized you can 2x or even 3x your effective RAM (you can opt for zstd compression). zram also enables you to make the trade-off of picking a more powerful CPU for the express purpose of multiplying your RAM if the workload is compatible with the idea.

PS: On laptops/workstations, zram will not interfere with an SSD swap partition if you need it for hibernation. Though it will almost never be used for anything else if you configure your zram to be 2x your system memory.
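
A minimal sketch of that kind of setup using the raw zram sysfs interface, sizing the device at 2x physical RAM and giving it a higher swap priority than any disk swap (paths per the kernel's zram documentation; adjust sizes to taste):

    sudo modprobe zram
    echo zstd | sudo tee /sys/block/zram0/comp_algorithm
    # MemTotal is reported in kB, so doubling it and appending K gives 2x RAM
    awk '/MemTotal/ {print $2 * 2 "K"}' /proc/meminfo | sudo tee /sys/block/zram0/disksize
    sudo mkswap /dev/zram0
    sudo swapon --priority 100 /dev/zram0   # disk/hibernation swap keeps its lower default priority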

9. LargoLasskhyfv No.45664998
>...but you don’t compress the whole ram.

I do: https://postimg.cc/G8Gcp3zb (casualmeasurement.png)

10. masklinn No.45666552
> Incorrect, with zram you swap ram to compressed ram.

That reads like what they said? You reserve part of the RAM as a swap device, and memory is swapped from resident RAM to the swap ramdisk, as long as there’s space there. And AFAIK Linux will not move pages between swap devices, because it doesn’t understand them beyond priority.

Zswap actually seems strictly better in many cases (especially interactive computers / dev machines) as it can more flexibly grow / shrink, and can move pages between the compressed RAM cache and the disk swap.
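
For completeness, a hedged sketch of the zswap variant; it needs an existing disk swap device behind it, since it is a compressed cache in front of real swap rather than a standalone swap device:

    # runtime knobs exposed by the zswap module
    echo 1    | sudo tee /sys/module/zswap/parameters/enabled
    echo zstd | sudo tee /sys/module/zswap/parameters/compressor
    echo 20   | sudo tee /sys/module/zswap/parameters/max_pool_percent   # cap the pool at 20% of RAM
    # to persist, add e.g. zswap.enabled=1 zswap.compressor=zstd to the kernel command line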