Googlebot has been able to index SPAs since 2019, when it switched to an evergreen headless Chromium. It allows a few seconds for things to render after each interaction.
The caveat: server-generated HTML is indexed immediately, while pages that need client-side rendering go into a render queue that can take Google a while to get to (days?).
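If you want to check which bucket a page falls into, here's a quick sketch of what that first indexing wave sees, just the raw HTML with no JavaScript executed (assumes Node 18+ for built-in fetch; the URL and marker string are placeholders):

```ts
// Roughly what the first indexing wave gets: raw HTML over HTTP,
// no JavaScript executed.
const url = "https://example.com/some-article";          // placeholder
const marker = "text that only appears in the content";  // placeholder

const html = await (await fetch(url)).text();
console.log(
  html.includes(marker)
    ? "content is in the server HTML: indexed immediately"
    : "content needs client-side rendering: joins the render queue"
);
```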
That's why you write down your use case for every project. Have a news site which needs to be indexed by Google immediately? SSR.
Have some Jira or whatever? CSR.
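To make the split concrete, a minimal sketch with Express (the routes and loadArticle are made up for illustration):

```ts
import express from "express";

// Hypothetical data access, standing in for your CMS or database.
async function loadArticle(slug: string): Promise<{ title: string; body: string }> {
  return { title: slug, body: "..." };
}

const app = express();

// SSR: the news article arrives as finished HTML, so a crawler can
// index it on the first pass, no render queue involved.
app.get("/news/:slug", async (req, res) => {
  const article = await loadArticle(req.params.slug);
  res.send(`<html><body><h1>${article.title}</h1><p>${article.body}</p></body></html>`);
});

// CSR: the Jira-style app ships an empty shell plus a bundle; the UI
// only exists after JavaScript runs. Fine behind a login wall.
app.get("/app", (_req, res) => {
  res.send(`<html><body><div id="root"></div><script src="/bundle.js"></script></body></html>`);
});

app.listen(3000);
```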
Most CSR applications are behind a login wall anyway. Think of the core applications of services like WhatsApp, Discord, Gmail, Dropbox, Google Docs, etc.
Bottom line on whether SSR is really “the future”: it depends.
Hence you don't build documents with SPAs; they are meant for applications. And usually you don't care about indexing the inside of an application, only the landing pages and such, which are documents (and should not be part of the SPA).
A blog built as a SPA? Sucks. A blog built as a collection of documents? Awesome.
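The "collection of documents" version can be as dumb as a build script that writes HTML files. A sketch (the folder layout and file format are assumptions):

```ts
import { mkdir, readdir, readFile, writeFile } from "node:fs/promises";

// Build step: every post becomes a plain HTML document. Crawlers get
// finished pages; no JavaScript, no render queue, nothing to wait for.
await mkdir("dist", { recursive: true });
for (const file of await readdir("posts")) {
  const body = await readFile(`posts/${file}`, "utf8");
  const html = `<html><body><article>${body}</article></body></html>`;
  await writeFile(`dist/${file.replace(/\.\w+$/, ".html")}`, html);
}
```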
I would have thought they could spin up headless Chrome instances to simply pull down, render, and then index websites. Apparently that's too resource-intensive for them? I'm sure the idea has come up (there's no way I thought of this and they didn't).
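That pipeline is easy to sketch; what it costs at crawl scale is the catch. A minimal Puppeteer version (the URL is a placeholder):

```ts
import puppeteer from "puppeteer";

// What rendering a single SPA page costs: a full browser process,
// loading and executing all the page's JavaScript, waiting for the
// network to go quiet, then reading the resulting DOM. Multiply by
// billions of pages and the "too resource-intensive" theory makes sense.
const browser = await puppeteer.launch();
const page = await browser.newPage();
await page.goto("https://example.com/spa-route", { waitUntil: "networkidle0" });
const text = await page.evaluate(() => document.body.innerText);
await browser.close();
console.log(text); // the content that would actually get indexed
```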
You'd think, right? There must be other reasons then... how does Google benefit from not building better SPA-crawling infrastructure? It's certainly gotten _better_ over the last few years, but it still seems lacking.
It's funny that Google struggles to index SPAs, given its ties to Angular (2,500 apps in use in-house). It wouldn't be that hard to build something that could.