If you're a fan of elegant prose, I can't recommend The Epicurean Dealmaker[1] enough. Sadly, the blog hasn't been updated in eight-ish years now, but I suspect most of the information is probably still reasonably accurate.
Ironically, the fastest path to getting everyone connected was to have them talk to cloud data centers/neo-mainframes.
So it justifies the mainframe, but if you look at why we all went mainframe in the end, it was, ironically, to connect many computers together.
There's also the overwhelming issue of power and control. Moving computing back to the mainframe allows total control over computing by the service provider. This is a good way to make money, but is it good for the world? And what would the world look like if we had reliable, fast, trustworthy interconnected systems, instead of cloud mainframes/data keeps?
I'm forgetting which books, but some of the books about early computing talked about protests against computerization and against mass data ingestion (probably, among others, What the Dormouse Said?). For a while the personal computer was a friendlier, less scary mass roll-out of computing, but this cloud era has not seen many viable alternatives for staying connected while keeping computing personal. remoteStorage was an early entrant, Tim Berners-Lee's Solid seems like a very reasonable take, and there's going full p2p with Dat/Hyper and that world: none of these has the inertia where others can follow suit. The problem is much harder than it looks, but I think it's more path dependence and perverse incentives than anything fundamental: breaking out would, I suspect, prove quite workable and good once validated, but there's gross inaction on finding the moral, open, protocols-and-standards-based ecosystem alternatives for connecting ourselves together as we might.
Hosting from home seems absurdly viable for many. I have a systemd timer that keeps a UPnP-IGD NAT hole punched so I can ssh in, and that has had very good uptime. My fiber to the home would stand up to quite a lot of use.
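For the curious, here's a minimal sketch of the kind of refresh script such a timer could invoke. It assumes miniupnpc's `upnpc` CLI is installed and the router speaks UPnP-IGD; the port and mapping description are placeholder examples, not my actual setup.

```python
#!/usr/bin/env python3
"""Refresh a UPnP-IGD port mapping so SSH stays reachable from outside.

Minimal sketch: assumes miniupnpc's `upnpc` CLI is on the box and the
router speaks UPnP-IGD. Run it from a systemd timer so the mapping
never expires.
"""
import subprocess
import sys

SSH_PORT = "22"  # placeholder; whatever port you expose

def refresh_mapping() -> bool:
    # `upnpc -r <port> <proto>` asks the gateway to forward that external
    # port to this host; re-running it periodically keeps the lease alive.
    result = subprocess.run(
        ["upnpc", "-e", "home-ssh", "-r", SSH_PORT, "TCP"],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        print(result.stderr, file=sys.stderr)
    return result.returncode == 0

if __name__ == "__main__":
    sys.exit(0 if refresh_mapping() else 1)
```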
Past that, a VPS can be had so cheaply. If we have good software, the computing footprint ought to be tiny.
One real challenge is scale-out. Ideally, for p2p to really work, some multi-tenant systems seem required, so we can effectively co-host. I loved the Sandstorm model, though I didn't actually use it, and I think there's further refinement possible.
Ideally, IMO, I could host like 10 apps, but if you want to use one, you spin up your own tenant instance. The lambda/serverless/FaaS engine wouldn't actually spin up new runtimes; it'd use the same FaaS instances, but be fed your tenant context when it ran and only be able to access your tenant's stuff. That way, as a host, I roughly know and can manage what runs, but you still have the freedom to configure your own instance to a large degree.
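To make that concrete, here's a rough sketch of the shape I have in mind: one warm runtime, a per-invocation tenant context injected by the host, and a data store scoped to whichever tenant is calling. All of the names (TenantContext, ScopedStore, the toy notes app) are hypothetical and purely illustrative, not any particular FaaS platform's API.

```python
"""Sketch: multi-tenant FaaS dispatch -- one warm runtime, many tenants.
Isolation comes from the context the host injects, not from spinning up
a separate runtime per tenant. All names here are made up."""
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class TenantContext:
    tenant_id: str
    data_prefix: str        # e.g. an object-store prefix the tenant owns
    config: dict = field(default_factory=dict)  # per-tenant app config

class ScopedStore:
    """All reads and writes are confined to the calling tenant's prefix."""
    def __init__(self, ctx: TenantContext, backing: dict):
        self._prefix = ctx.data_prefix
        self._backing = backing
    def get(self, key: str):
        return self._backing.get(f"{self._prefix}/{key}")
    def put(self, key: str, value):
        self._backing[f"{self._prefix}/{key}"] = value

# One shared backing store and one shared pool of handlers: the host
# knows exactly what code runs; tenants differ only in injected context.
BACKING: dict = {}
HANDLERS: dict[str, Callable] = {}

def register(app: str):
    def wrap(fn):
        HANDLERS[app] = fn
        return fn
    return wrap

def invoke(app: str, ctx: TenantContext, request: dict):
    """Dispatch into the same warm handler, scoped to the caller's tenant."""
    return HANDLERS[app](request, ScopedStore(ctx, BACKING), ctx.config)

@register("notes")
def notes_app(request: dict, store: ScopedStore, config: dict):
    if request["op"] == "add":
        store.put(request["id"], request["text"])
        return {"ok": True}
    return {"text": store.get(request["id"])}

if __name__ == "__main__":
    alice = TenantContext("alice", "tenants/alice")
    bob = TenantContext("bob", "tenants/bob")
    invoke("notes", alice, {"op": "add", "id": "1", "text": "hi"})
    # Same handler, same runtime, but bob can't see alice's note:
    assert invoke("notes", bob, {"op": "get", "id": "1"}) == {"text": None}
```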
Then we need front ends that let you traffic-steer and anycast in fancy ways, so you can host your own but it falls back to me, or you have 10 peers helping you host and you can weight between them.
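Something like the sketch below, conceptually: a weighted pick among your own host and helper peers, falling back through the list when one is unreachable. The peer URLs and weights are made-up examples.

```python
"""Sketch: weighted peer selection with fallback -- the sort of traffic
steering a front end could do. Peer URLs and weights are made up."""
import random
import urllib.request

# Your own box first, then peers who help host, weighted by how much
# traffic you'd like each to carry.
PEERS = [
    ("https://home.example.net", 5),   # primary: self-hosted
    ("https://alice.example.org", 3),  # helper peers
    ("https://bob.example.org", 2),
]

def pick_peer(peers):
    """Weighted random choice among the remaining peers."""
    urls, weights = zip(*peers)
    return random.choices(urls, weights=weights, k=1)[0]

def fetch(path: str, timeout: float = 2.0) -> bytes:
    """Try a weighted pick first, then fall back through the rest."""
    candidates = list(PEERS)
    while candidates:
        url = pick_peer(candidates)
        candidates = [(u, w) for u, w in candidates if u != url]
        try:
            with urllib.request.urlopen(url + path, timeout=timeout) as r:
                return r.read()
        except OSError:
            continue  # peer down or unreachable; try the next one
    raise RuntimeError("all peers failed")
```

A real deployment would presumably do this steering at the DNS or anycast layer rather than in a client loop, but the weight-plus-fallback idea is the same.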
Operationalizing what we have already and finding efficient wins to scale is kind of putting the cart before the horse, since the fediverse et al. are so new, but I think the deployment/management model that lets us scale our footprint beyond ourselves is a crucial leap. And I think we are remarkably close, closer than we might think: the jump into a bigger, more holistic pattern is possible if we leverage the excellent serverless runtimes and operational tools that have recently emerged.