>While this seems an easy relationship to satisfy, more-complex models based on real-world performance parameters and access patterns indicate that it is, in reality, very difficult to obtain a performance boost with this method.
Now that improvements in CPU speed have outpaced improvements in storage speed, it's easier to benefit from compressed swap in RAM. Linux includes "zswap"[0], which in my experience works well for postponing the severe performance degradation you otherwise get under heavy memory pressure in general desktop use. If you think you want to set "vm.swappiness=0" (which I ran for several years), you probably want to enable zswap instead. If you're running Debian, you can set it up by installing "zram-tools" (which can be configured via /etc/default/zramswap if you're not happy with the defaults).
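For reference, a minimal /etc/default/zramswap might look like this -- the variable names are the ones the zram-tools package uses, but check the file it ships, since they can change between versions, and these values are illustrative rather than defaults:

```shell
# /etc/default/zramswap -- example values, not the package defaults

# Compression algorithm: zstd compresses better, lz4 is faster
ALGO=zstd

# Size the zram device as a percentage of total RAM
PERCENT=25

# Give zram swap higher priority than any disk-backed swap
PRIORITY=100
```

After editing, restarting the zramswap service applies the change, and `swapon --show` should then list a /dev/zram0 entry.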
zswap and zram are actually different systems. zram is a compressed ramdisk that acts like a block device: it can host any filesystem, or be formatted and used directly as swap space.
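To illustrate the difference, here's a rough sketch of setting up a zram device as swap by hand (needs root; the 2G size is arbitrary, and the sysfs attribute names follow the kernel's zram documentation):

```shell
# Load the zram module with one device
modprobe zram num_devices=1

# Set the (uncompressed) capacity of the compressed block device
echo 2G > /sys/block/zram0/disksize

# Format it as swap and enable it at high priority,
# so it's preferred over any disk-backed swap
mkswap /dev/zram0
swapon -p 100 /dev/zram0

# It now shows up as an ordinary swap device
swapon --show
```

The point being: from the kernel's perspective this is just another swap device that happens to live in (compressed) RAM, whereas zswap sits in front of the existing swap path.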
zswap, on the other hand, is a region of memory that tries to store pages compressed instead of swapping them out -- basically a last resort before swapping. zswapped pages are not counted as swap, so you have to check sysfs/debugfs to see how much it's actually being used. Many default configurations are pretty aggressive, allowing zswap to use up to 20% of physical memory before finally resorting to the system's configured swap space.
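Since zswapped pages don't show up in the normal swap accounting, inspecting it looks something like this (paths per the kernel's zswap documentation; reading debugfs typically requires root):

```shell
# Tunables: enabled, compressor, max_pool_percent, ...
grep . /sys/module/zswap/parameters/*

# Runtime stats: pool size, stored pages, rejects, ...
# pool_total_size is in bytes, stored_pages in 4 KiB pages,
# so dividing stored_pages*4096 by pool_total_size gives a
# rough compression ratio.
grep . /sys/kernel/debug/zswap/*
```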
Storage speed has really outpaced CPU speed over the decade zswap has been out. It still has its place, but if you have modern flash storage, enabling zswap may well cost you throughput and increase power usage (particularly with the default algorithms).
Latency for pulling single pages has never really been limited by CPU performance; the CPU has always been faster there, and that's unlikely to ever change, for physics reasons.
As someone running older hardware with a spinning disk, I found the comment useful and something I might consider looking into.
At work I think I have a 2012-era SSD that, while speedy, is probably pretty far behind modern drives. When things start thrashing in swap it's a bad time all around.
Right, compressed RAM has been used heavily on Android, where the secondary storage has been terribly slow (eMMC, anyone?).
That's probably shifted as everyone moves to PCIe-based storage, particularly on the low- to mid-range phones that don't have super-high-performance cores.
Windows also compresses pages to alleviate memory pressure before resorting to swap nowadays. Probably the same reason as with Android, as on lower-end hardware, especially with hard drives, compressing/decompressing isn't really noticeable, while hitting the disk is.
macOS and iOS do this too. iOS doesn't even use any traditional swap and gets away with it by putting the onus on developers to make sure their apps never go OOM when in the foreground and to deal seamlessly with apps being OOM killed when in the background.
It is, though. Under memory pressure the kernel will drop any page that's backed by a file, then page it back in as needed. "Swap" just extends that to anonymous memory by giving _all_ pages a backing store.
[0] https://en.wikipedia.org/wiki/Zswap