If you need zero downtime, you're already in a field where NOTHING is a piece of cake. Not your network, not your compute, not your storage, and I haven't even talked about the human aspect of "zero downtime".
Alright. But the GP comment claimed upgrading PG was a piece of cake. You claim it is not a piece of cake. So it sounds like you agree that the claim that upgrading PG was a piece of cake was misleading.
> the GP comment claimed upgrading PG was a piece of cake. You claim it is not a piece of cake. So it sounds like you agree that the claim that upgrading PG was a piece of cake was misleading.
GP claimed upgrading was a piece of cake, not that zero downtime upgrades are a piece of cake. The two claims aren’t interchangeable. The simple upgrade path is always available, though it may have downtime consequences you personally are unwilling to accept. And the complex upgrade path is complex for reasons that have nothing to do with PostgreSQL - it’s just as complex to do a zero downtime upgrade in any data store, because in all cases it requires logical replication.
So if anything it feels like you're the one being misleading, by acting as though GP made a more specific claim than they actually did, and by insisting that the hard case is hard because of PG rather than because of difficulty inherent to the zero downtime requirement.
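To make the distinction concrete: the zero downtime path means standing up a second cluster on the new major version and feeding it via logical replication. A minimal sketch, assuming the new cluster already exists, with made-up host and object names:

    # On the old primary: publish all tables.
    psql -h old-host -d mydb -c "CREATE PUBLICATION upgrade_pub FOR ALL TABLES;"

    # On the new cluster: copy the schema, then subscribe. The initial
    # data copy happens automatically when the subscription is created.
    pg_dump -h old-host --schema-only mydb | psql -h new-host -d mydb
    psql -h new-host -d mydb -c "CREATE SUBSCRIPTION upgrade_sub CONNECTION 'host=old-host dbname=mydb' PUBLICATION upgrade_pub;"

And that's the easy half. The cutover (sequences, in-flight writes, DDL freezes, rollback plans) is where the real complexity lives, and none of it is PG-specific.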
So if I tell you that upgrading pretty much all databases is a piece of cake, but don't include the caveat "unless you want to keep your data", would you say that is a fair statement?
If you claim that process X is trivial, you have to make some assumptions, right? Otherwise I could claim that going to the moon is trivial but leave out "assuming you have a rocket, resources, people and anything else you may require".
Claiming that something is a piece of cake as a broad statement without any details is meaningless at best.
> So if I tell you that upgrading pretty much all databases is a piece of cake, but don't include the caveat "unless you want to keep your data", would you say that is a fair statement?
Incredibly bad-faith comparison, this.
Many, many datastore deployments can tolerate 10 minutes of downtime every 4 or 5 years when their PG install finally transitions out of support. Data loss isn't even in the same universe of problems. It's reasonable to talk about how easy it is to upgrade if you can tolerate a tiny bit of downtime every few years, since most people can. It's utterly asinine to compare that to data deletion.
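For scale, that path is basically one pg_upgrade run. A sketch, assuming the new version's binaries are installed and its cluster is already initdb'd; the paths here are made up:

    # Stop the old cluster, upgrade in place with hard links (minutes,
    # not hours, since no data files are copied), start the new one.
    pg_ctl -D /var/lib/postgresql/12/data stop
    pg_upgrade \
        --old-datadir=/var/lib/postgresql/12/data \
        --new-datadir=/var/lib/postgresql/16/data \
        --old-bindir=/usr/lib/postgresql/12/bin \
        --new-bindir=/usr/lib/postgresql/16/bin \
        --link
    pg_ctl -D /var/lib/postgresql/16/data start

That's the difference in kind: a short, scheduled outage versus a replication-and-cutover project. Neither involves losing data.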