I do admit the "reliance on swap" thing is speculation on my part :)
My experience is that I can still tell when the OS is unhappy when I demand more RAM than it can give. macOS stays relatively responsive in that range, which I just attributed to super-fast swapping. (I'd assume memory compression helps too, but I usually run into this trouble when working with large amounts of poorly compressible data.)
In either case, I know it's frustrating when someone is confidently wrong but you can't properly correct them, so you have my apologies.
I suggest you go and look at HOW it is done in Apple Silicon Macs, and then think long and hard about why this might make a huge difference. Maybe the Asahi Linux guys can explain it to you ;)
I understand that it can make a difference to performance (which is already baked into the benchmarks we look at), but I don't see how it can make a difference to compression ratios; if anything, in similar implementations (e.g. console APUs) it tends to lead to worse ratios.
If there's any publicly available data to the contrary, I'd love to read it. Anecdotally, I haven't seen a significant difference between zswap on Linux and macOS memory compression in terms of compression ratios, and on the workloads I've tested, zswap tends to be faster than no memory compression on x86 for many-core machines.
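For reference, here's a minimal sketch of how one might estimate zswap's effective compression ratio on Linux from the kernel's debugfs counters. This assumes zswap is enabled, debugfs is mounted at /sys/kernel/debug, and you have root; the exact counters can vary by kernel version, so treat it as illustrative rather than definitive.

```python
# Rough sketch: estimate zswap's effective compression ratio from debugfs
# counters (assumes zswap enabled, debugfs mounted, root privileges;
# counter names/paths may differ across kernel versions).
from pathlib import Path

ZSWAP_DEBUG = Path("/sys/kernel/debug/zswap")
PAGE_SIZE = 4096  # typical x86-64 page size; adjust for other architectures

def read_counter(name: str) -> int:
    return int((ZSWAP_DEBUG / name).read_text())

stored_pages = read_counter("stored_pages")    # pages currently held compressed
pool_bytes = read_counter("pool_total_size")   # bytes of pool memory backing them

if pool_bytes:
    ratio = (stored_pages * PAGE_SIZE) / pool_bytes
    print(f"{stored_pages} pages stored in {pool_bytes} bytes "
          f"(~{ratio:.2f}x effective compression)")
else:
    print("zswap pool is empty; nothing to measure yet")
```

Comparing that number against Activity Monitor's "Memory Used" vs "Compressed" figures on macOS is about as close to an apples-to-apples check as I've been able to do, since Apple doesn't expose the ratio directly.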
You are entirely (100%) wrong, but, sadly, NDA...