If you don’t need edge compute (which you do if your customers are dispersed geographically), then what you say is true.
But if you do, no amount of Kubernetes on the old-school cloud providers is going to get you there. You will run into exactly the hard problems fly solves.
Read replicas and caches at the edge are pretty standard (whether for the main DB or a cache layer like Redis).
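To make that concrete, here’s a rough sketch of routing reads to a region-local Postgres read replica while writes still go to the primary. The env vars and the users table are made up for illustration, and it assumes node-postgres:

    // Reads go to a nearby replica; writes go to the far-away primary.
    import { Pool } from "pg";

    const primary = new Pool({ connectionString: process.env.PRIMARY_DATABASE_URL });
    const replica = new Pool({ connectionString: process.env.REGIONAL_REPLICA_URL });

    // Read path: served from the replica in the user's region, so the
    // cross-region round trip to the primary is avoided.
    export async function getUser(id: number) {
      const { rows } = await replica.query("SELECT id, name FROM users WHERE id = $1", [id]);
      return rows[0];
    }

    // Write path: still a round trip to the primary, and replication lag
    // means an immediate read from the replica may be briefly stale.
    export async function renameUser(id: number, name: string) {
      await primary.query("UPDATE users SET name = $1 WHERE id = $2", [name, id]);
    }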
I think the killer app on fly is actually a geo-aware SQL DB such as CockroachDB. That, as a managed offering, puts fly above and beyond anything we’ve had before.
Caching at the edge could be done by an API gateway or nginx.
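For example, a tiny gateway-style response cache (Express-flavoured TypeScript; the route, TTL, and origin URL are invented for the sake of the sketch):

    import express from "express";

    const app = express();
    const cache = new Map<string, { body: unknown; expires: number }>();
    const TTL_MS = 30_000; // cache responses for 30 seconds

    // Serve cached GET responses from the edge region; fall back to the origin on a miss.
    app.get("/api/products", async (req, res) => {
      const hit = cache.get(req.originalUrl);
      if (hit && hit.expires > Date.now()) {
        res.json(hit.body);
        return;
      }
      const resp = await fetch("https://origin.internal/api/products"); // assumed origin
      const body = await resp.json();
      cache.set(req.originalUrl, { body, expires: Date.now() + TTL_MS });
      res.json(body);
    });

    app.listen(8080);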
A geo-aware SQL DB sounds like a lot of added complexity. What is the latency trade-off in practice? A 100 ms ping time is probably small compared to query execution time, especially if your backend returns everything the frontend needs in one response.
I understand someone at the scale of Amazon wanting to shave ms off page loads. But most web apps?
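Rough numbers, purely illustrative: the ping only stays small if the page needs a single round trip to the far-away database.

    // Back-of-envelope latency; all figures are assumptions, not measurements.
    const crossRegionRttMs = 100; // app region -> distant database
    const queryExecMs = 20;       // execution time per query

    const pageLatency = (sequentialQueries: number) =>
      sequentialQueries * (crossRegionRttMs + queryExecMs);

    console.log(pageLatency(1)); // 120 ms: one round trip, the ping is a modest fixed cost
    console.log(pageLatency(5)); // 600 ms: chained queries compound the same 100 ms ping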