When I was setting up dm-crypt for my Linux dev box at work I opted to use encrypted swap, since it really made sense: why keep on-disk data encrypted if swapped-out memory sits in plain sight?
It turned out to be a somewhat bad decision, because when the system did run out of memory (thank you, Firefox and Skype) and began swapping, the whole box just froze: the kernel was too busy encrypting more and more pages to even get around to running the OOM killer.
Maybe systems default to unencrypted swap precisely to avoid triggering bugs like that.
I don't doubt your story, but modern CPUs can perform AES-GCM at gigabytes per second per core (5 GB/s on my laptop). Disk encryption uses a different block cipher mode, but it shouldn't be much slower. So, if implemented correctly, encrypted swap should not slow you down compared to unencrypted swap. If encrypted swap is much slower than unencrypted swap in your setup, that's a bug in the kernel crypto and it should be fixed.
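For reference, here's a quick single-core sanity check using Python's cryptography package (just an illustration; dm-crypt uses the kernel's own crypto, not this library, and the 64 MiB buffer size is arbitrary):

    # Rough single-core AES-256-GCM throughput measurement.
    import os, time
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    aead = AESGCM(AESGCM.generate_key(bit_length=256))
    nonce = os.urandom(12)
    buf = os.urandom(64 * 1024 * 1024)  # 64 MiB of random data

    start = time.perf_counter()
    aead.encrypt(nonce, buf, None)
    elapsed = time.perf_counter() - start
    print(f"AES-256-GCM: {len(buf) / elapsed / 1e9:.2f} GB/s on one core")

On a CPU with AES-NI this should land in the gigabytes-per-second range; without the instructions it will be far lower.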
Even if a modern CPU can do that, the question is whether a stock distro kernel will. I could see default kernels not being compiled with the extensions that allow that speed, for compatibility reasons. Then again, I remember when distros often shipped separate i386, i586, and i686 kernels, and when Red Hat switched away from i386 as the default kernel (which might be a lot more recent than you think, if you weren't thinking of all the cheap i386 devices it was still running on in the early 2000s). Then, even if the stock kernel does support it through a kernel module, the question is whether that module is actually loaded. Finally, looking at an Intel/AMD AES instruction compatibility list[1], even a couple of years ago not all processors supported AES instructions (notably, it appears Intel used it to differentiate the i3 line from the more powerful ones).
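If you want to check the last two points on a given box, something along these lines works on Linux (a sketch only; note that a driver built into the kernel rather than loaded as a module won't show up in /proc/modules):

    # Does the CPU advertise AES-NI, and is the aesni_intel module loaded?
    def cpu_has_aes():
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return "aes" in line.split()
        return False

    def aesni_module_loaded():
        with open("/proc/modules") as f:
            return any(line.startswith("aesni_intel ") for line in f)

    print("CPU AES-NI flag:", cpu_has_aes())
    print("aesni_intel module loaded:", aesni_module_loaded())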
I'm uncertain whether it's safe to use AESENC and friends in kernel mode on FreeBSD. A number of kernels can't safely use e.g. FPU registers in-kernel, and AES-NI operates exclusively on SSE state.
In my experience, swap usually doesn't actually help. The system slows down so much it's unusable, and you wish the OOM killer would kill something so you could recover some processes/state, but it takes way too long to do so: many minutes while the system is effectively frozen and you are helpless. (Some people think the OOM killer kicks in too soon. They're crazy ;)
I also ran into this when setting memory limits on Docker containers. It basically made them freeze for entire minutes when they hit the limit instead of killing them. So instead I wrote my own script to check memory usage and signal them to exit gracefully before memory runs out (and SIGKILL them after a few more seconds).
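Something in the spirit of that watchdog (not the original script; the threshold and grace period are made-up values, and it just shells out to the stock docker CLI):

    # SIGTERM containers that get close to their memory limit, then SIGKILL
    # them a few seconds later if they are still running.
    import subprocess, time

    THRESHOLD = 90.0   # percent of the container's memory limit
    GRACE = 10         # seconds between SIGTERM and SIGKILL

    def memory_usage():
        out = subprocess.check_output(
            ["docker", "stats", "--no-stream", "--format", "{{.Name}} {{.MemPerc}}"],
            text=True)
        for line in out.splitlines():
            name, perc = line.split()
            yield name, float(perc.rstrip("%"))

    while True:
        for name, perc in memory_usage():
            if perc >= THRESHOLD:
                subprocess.run(["docker", "kill", "--signal=SIGTERM", name])
                time.sleep(GRACE)
                # Harmless no-op error if the container already exited cleanly.
                subprocess.run(["docker", "kill", "--signal=SIGKILL", name],
                               stderr=subprocess.DEVNULL)
        time.sleep(30)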
Swap wins when you have different large processes open that you aren't actually using simultaneously. On my laptop with 2GB of RAM, it's why I don't have to close my browser before opening something else.
Most new drives can do encryption in hardware with zero performance penalty, using either the ATA password or Opal for NVMe. Doing software/CPU encryption on swap, or any other partition for that matter, is simply wasteful and often slower.
In my experience, Linux locking up for minutes at a time while in a swapping situation happens whether the disk is encrypted or not, and is a function of a crappy swapping algorithm.