The SSH-agent forwarding model is the problem, I think; every couple of years someone finds another semi-common scenario where it becomes vulnerable.
I have, since 2010, practiced "jumps" only: if I need to connect to a non-routable host through a routable one (a "bastion host" or "jump host"), the intermediate host is used purely as a transport, either by -L/-R forwarding of the final destination's SSH port, by netcat, or (for a few years now) by SSH's built-in ProxyJump or ProxyCommand. Compared to agent forwarding this is slightly less efficient in CPU and traffic, but not by much, and unlike agent forwarding it is as safe as connecting to the final destination directly.
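For anyone who hasn't set this up: a minimal sketch of the jump arrangement in ~/.ssh/config (host names and user are placeholders, not from the thread):

```shell
# ~/.ssh/config -- host names here are hypothetical
Host bastion
    HostName bastion.example.com
    User alice

# Reach the internal host through the bastion. No agent forwarding is
# needed: the end-to-end SSH session (and its key authentication)
# terminates on the internal host, with the bastion acting only as a
# transport for the encrypted stream.
Host internal
    HostName internal-box
    User alice
    ProxyJump bastion

# Older OpenSSH (before 7.3, which added ProxyJump) can do the same with
# ProxyCommand and netcat:
#     ProxyCommand ssh bastion nc %h %p
```

With that in place, `ssh internal` tunnels through the bastion, which never sees your private key or your agent socket.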
That works for bastions, but not for other use cases. One common one is pushing & pulling from git on a remote dev box. Or if you want to SCP something between remote machines without having to pull it down locally. Agent forwarding is the best way to do this (outside of possibly generating an ephemeral key pair, copying public keys around for a single use, then destroying them... but ain’t nobody gonna do that).
In general I'd say SSHing from a remote box to another remote box is a non-issue since you can always use some sort of tunneling/bastioning to make that work.
It's when you want to use some other tool that tunnels over SSH -- scp, git, sftp, rsync, etc. -- that you run into trouble.
Consider a remote development instance. So I'm an engineer, and instead of developing on my local machine I do development on an EC2 instance in AWS (there are a variety of reasons this may be preferable or even required). If I'm pushing & pulling code to/from GitHub from this development instance, I'm gonna need either a private key on that development instance or agent forwarding back to my local machine.
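To make that concrete, a sketch of the agent-forwarding variant of this workflow (the host name is a made-up placeholder; the narrow Host block is my own caution, not something the thread prescribes):

```shell
# On the local machine: forward the agent only to the trusted dev box,
# never globally. In ~/.ssh/config:
#     Host devbox
#         HostName ec2-dev.example.internal   # hypothetical
#         ForwardAgent yes

ssh devbox

# Now, on the dev box, git authenticates to GitHub via the forwarded
# agent socket -- the signature is produced by the agent back on the
# laptop, and no private key ever lands on the EC2 instance:
git clone git@github.com:example/repo.git
git push origin main
```

Scoping ForwardAgent to a single Host block (rather than `Host *`) limits the exposure to the one box you've decided to trust.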
Now suppose I want to copy a large database snapshot between two instances on EC2. There are obvious benefits to being able to transfer directly between these two instances, on the same private network, versus bringing the data to your local (where you might not even have sufficient disk space) and pushing it back up.
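A sketch of both routes for that snapshot copy (instance names are hypothetical):

```shell
# Direct route, using agent forwarding: log in to the first instance
# with -A, then copy straight to the second over the private network.
# The forwarded agent authenticates the second hop, so no private key
# lives on either instance:
ssh -A admin@instance-a.internal
scp /data/snapshot.sql.gz admin@instance-b.internal:/data/

# Alternative without forwarding: scp -3, run from the laptop, relays
# the data through the local machine -- no trust between the two
# instances is needed, but the bytes traverse your own connection twice
# (exactly the cost the comment above is trying to avoid):
#     scp -3 admin@instance-a.internal:/data/snapshot.sql.gz \
#            admin@instance-b.internal:/data/
```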
Not sure what other use cases exist, but these two alone I think justify the existence of agent forwarding.
SSH agent forwarding does have subtle security issues. But so does putting a private key on a remote server. If you're the only one who has access to the remote server, neither option is problematic. If someone else has root, and you don't trust that person, then both are problematic (and which is worse is debatable).
Thanks. When faced with similar issues, I have always set up an ad-hoc channel (e.g. created a local key, ssh-copy-id'd it, used it, and removed both the keyfile and the authorized_keys line afterwards). It's more work, but grants exactly the minimum required trust and no more. I've never even considered using agent forwarding for this case.
That said, I don't have to do that often; if I did, I'd probably look for a simpler way (or ignore my paranoia and use ssh agent forwarding...)
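The ad-hoc channel described above could look something like this (paths, host names, and the key comment are placeholders I've invented for the sketch):

```shell
# 1. Create a throwaway key, tagged so it's easy to find and revoke:
ssh-keygen -t ed25519 -N '' -f ~/.ssh/ephemeral_key -C 'one-shot-transfer'

# 2. Authorize it on the target for this single task:
ssh-copy-id -i ~/.ssh/ephemeral_key.pub user@target.example.internal

# 3. Do the work:
scp -i ~/.ssh/ephemeral_key snapshot.tar.gz user@target.example.internal:/data/

# 4. Revoke: remove the authorized_keys line on the target (matching on
#    the comment tag), then delete the key locally:
ssh -i ~/.ssh/ephemeral_key user@target.example.internal \
    "sed -i '/one-shot-transfer/d' ~/.ssh/authorized_keys"
rm ~/.ssh/ephemeral_key ~/.ssh/ephemeral_key.pub
```

Nothing long-lived is created: once step 4 runs, neither side holds any credential from the exchange.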