Yeah, I'm also wondering that. I've been looking for self-hosted PostgreSQL after Cockroach changed their free-tier license, but found the HA part of PostgreSQL is really lacking. I tested Patroni, which seems to be a popular choice, but found some pretty critical problems (https://www.binwang.me/2024-12-02-PostgreSQL-High-Availabili...). I tried to explore some other solutions, but found that the lack of a high-level design makes HA for PostgreSQL really hard, if not impossible. For example, without the necessary information in the WAL, it's hard to enforce a single primary node even with an external Raft/Paxos coordinator. I wrote some of this down in this blog post (https://www.binwang.me/2025-08-13-Why-Consensus-Shortcuts-Fa...), especially in the sections "Highly Available PostgreSQL Cluster" and "Quorum".
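To illustrate the gap I mean: below is a toy sketch of the naive "external coordinator" pattern, where the primary only accepts writes while it believes it holds a lease. All names here (`Lease`, `may_accept_writes`) are hypothetical stand-ins for etcd/Raft state, not any real API:

```python
import time

class Lease:
    """Toy stand-in for a lease granted by an external coordinator (e.g. etcd)."""
    def __init__(self, holder: str, expires_at: float):
        self.holder = holder
        self.expires_at = expires_at

    def held_by(self, node: str, now: float) -> bool:
        return self.holder == node and now < self.expires_at

def may_accept_writes(lease: Lease, node: str, now: float) -> bool:
    # The problem: between this check and the actual WAL write, the lease
    # can expire and a new primary can be elected. Without a term/epoch
    # recorded in the WAL itself, replicas have no way to reject the stale
    # primary's records after the fact.
    return lease.held_by(node, now)

lease = Lease(holder="node-a", expires_at=time.time() + 10)
assert may_accept_writes(lease, "node-a", time.time())      # current holder
assert not may_accept_writes(lease, "node-b", time.time())  # not the holder
```

The check-then-write race in the comment is exactly why an external coordinator alone isn't enough: the fencing decision lives outside the WAL, so the WAL can't be used to reject writes from a deposed primary.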
My theory of why Postgres still gets the hype is that either people don't know about the problem, or it's acceptable at some level. I've worked on a team that maintained an in-house database cluster (though we were using MySQL instead of PostgreSQL), and the HA story was pretty bad. But there were engineers who manually recovered lost data and resolved data conflicts, either during incident recovery or in response to customer tickets. So I guess that's one way of doing business.
Instead of giving up the strong guarantees of etcd, a better approach may be grouping some nodes together to create a tree-like structure with sub-clusters.
I don't think it's possible with ANY setup. All you get is that some replicas are more outdated than others. But they won't return two conflicting states, where ReplicaA says tx1 committed (but not tx2) while ReplicaB says tx2 committed (but not tx1). That's what the Long Fork and Parallel Snapshot anomalies are about.
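The distinction can be stated as a simple check: staleness means each replica's set of visible commits is a prefix of (here, a subset of) another's, while a long fork produces two incomparable sets. A minimal sketch, with hypothetical helper names:

```python
def is_prefix_consistent(visible_a: set, visible_b: set) -> bool:
    """True if the replicas merely disagree on staleness: one replica's
    visible-commit set must be a subset of the other's. Two incomparable
    sets (e.g. {tx1} vs {tx2}) would be the "long fork" anomaly."""
    return visible_a <= visible_b or visible_b <= visible_a

# ReplicaA is merely behind ReplicaB: allowed.
assert is_prefix_consistent({"tx1"}, {"tx1", "tx2"})

# ReplicaA saw tx1 but not tx2, ReplicaB the reverse: long fork, not allowed.
assert not is_prefix_consistent({"tx1"}, {"tx2"})
```

(Real snapshot-isolation anomaly definitions are stated over transaction histories rather than sets, but the subset check captures the intuition in this thread.)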
So Amazon Multi-cluster seems to replicate changes out of order?
Kinda. I think it's "just" PostgreSQL behaviour that's to blame here: on replicas, transaction commit visibility order is determined by the order of WAL records; on the primary, it's based on when the backend that wrote the transaction notices that the transaction is sufficiently persisted.
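A toy model of that difference, to make it concrete. The names and values are illustrative, not PostgreSQL internals; the point is only that the two orderings are derived from different inputs and can disagree:

```python
# WAL order: (txid, commit-record LSN). Replicas replay and expose commits
# in exactly this order.
wal = [("tx1", 100), ("tx2", 200)]

# On the primary, a commit becomes visible when its backend notices the
# flush completed; tx2's backend happens to notice first.
flush_noticed = {"tx2": 1.0, "tx1": 2.0}

replica_visibility = [tx for tx, _ in sorted(wal, key=lambda r: r[1])]
primary_visibility = sorted(flush_noticed, key=flush_noticed.get)

assert replica_visibility == ["tx1", "tx2"]
assert primary_visibility == ["tx2", "tx1"]  # visibility order differs
```

So a reader on the primary and a reader on a replica can observe the same pair of commits become visible in opposite orders, which is the out-of-order effect being discussed.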
Ever since CockroachDB changed their license, I've been searching for alternatives. PostgreSQL is an obvious choice, but is there a good HA solution? What do people usually do for HA with PostgreSQL, or do they just not care about it? I tested Patroni, which as far as I know is the most popular one, but found some HA issues that make me hesitate to use it: https://www.binwang.me/2024-12-02-PostgreSQL-High-Availabili...
> What people usually do for HA with PostgreSQL or do they just not care about it?
Patroni for most cases. At Heroku we have our own control plane to manage HA and fencing which works very reliably. I also like the approach the Cloud Native PG folks have taken with implementing it in the k8s API via the instance manager[1].
Other options like Stolon or repmgr are popular too. Patroni is, despite the Jepsen-style testing, widely used without issues in the majority of circumstances. I wouldn't overthink it.
Is it really prioritizing happiness tho? From Wikipedia:
> According to the World Happiness Report 2019, Bhutan is 95th out of 156 countries.
Not to mention its ethnic cleansing of the non-Buddhist population. There are definitely other things that have higher priority on the government's agenda than people's happiness.
I don't see a lot of AR/VR tech in the intro: there's no interaction with the real world, and there are no 3D objects other than virtual screens and virtual controls. It seems to be just a traditional portable computer with bigger screens. I can see the potential, but at the current stage it's hard to justify the price.
> it's a specific formula that applies equally to everyone.
That doesn’t mean the formula is reasonable or transparent. An AI is also a specific formula that applies equally to everyone. What matters is how the formula weights an individual’s differences.
> There are hard inquiries and soft inquiries
Why are there “hard inquiries”? Why should someone else inquiring about my credit affect my credit score?
>That doesn’t mean the formula is reasonable or transparent. An AI is also a specific formula that applies equally to everyone.
You can learn literally everything you need to know about how the FICO credit scoring systems works in probably two or three hours of dedicated research, even though the model itself is a trade-secret. When lenders deny credit, they're legally obliged to provide the specific reasons why.
This is not-even-in-the-same-ballpark as AI, where no-one can even tell you how inputs relate to outputs, not even the creators.
I certainly don’t love the system, but the difference is “checking credit score in order to take more credit (eg a loan or credit card)” (hard) vs “checking credit score for my own knowledge” (soft). This distinction feels reasonable to me.
How is it reasonable? Anyone checking your credit score already knows that you're applying for new credit, because you're applying for new credit with them.
Dinging your score over that only punishes you for soliciting credit offers from multiple lenders at once so you can choose the best terms. Which ought to be an antitrust violation.
That's usually not the case, though. I was just looking around at Chase's credit report tracking thing, and they have this to say on credit inquiries:
> The VantageScore credit score model takes rate shopping, e.g., for a mortgage or car loan, into consideration. All inquiries for mortgages, auto loans and major credit cards that appear in your credit file within a 14-day window are interpreted as a single inquiry.
I believe that's the case for the FICO score algorithm as well. I think a 14-day window is probably too short, but it likely does cover most situations.
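The windowing rule quoted above can be sketched in a few lines. This is only the coalescing idea (same-type inquiries within a 14-day window count once); the actual VantageScore/FICO scoring models are proprietary, and the function name and window handling here are my own illustration:

```python
from datetime import date, timedelta

def count_scored_inquiries(inquiry_dates: list, window_days: int = 14) -> int:
    """Count inquiries for scoring, coalescing any that fall within
    `window_days` of the inquiry that opened the current window."""
    scored = 0
    window_end = None
    for d in sorted(inquiry_dates):
        if window_end is None or d > window_end:
            scored += 1
            window_end = d + timedelta(days=window_days)
    return scored

# Four mortgage inquiries over ten days of rate shopping count as one.
shopping = [date(2024, 3, 1), date(2024, 3, 4), date(2024, 3, 8), date(2024, 3, 10)]
assert count_scored_inquiries(shopping) == 1
```

Note the design choice in the sketch: the window is anchored at the first inquiry, so an inquiry just past day 14 opens a new window and counts separately, which is the "probably too short" concern above.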
It's a defense against the double-spend problem. It minimizes potential timing issues that could allow someone to simultaneously take out multiple credit lines before lenders can properly evaluate the risk.
But it isn't, because the reduction in credit score from the query doesn't necessarily prohibit two simultaneous loans, it just makes the terms worse (and so makes default more likely).
And there are obviously better ways to prevent "double spend," like reporting the new loan being granted, which actually should reduce the credit score due to the new debt. It's not as if we can't make computers capable of updating this information in real time.
The interesting thing I've noticed is that opening a new credit account often doesn't actually get that information on your credit report. It sometimes only shows up after you've made the first payment (either loan or credit card). I wonder if that's why the system dings you for too many hard credit inquiries. I agree with you that there's no reason why this couldn't be fixed, though.
But if you're applying for, and repeatedly being rejected for, extension of credit by third parties that is absolutely something a pre-existing lender would reasonably consider when thinking about risk.
You have to understand that there is no "credit denied" feedback into the credit rating system. The standard pattern for an approved credit application is a hard pull followed by the opening of a new tradeline, both of which affect your score. If you're applying for credit but not being approved, that just shows up as a hard pull with no new line.
The FICO score is also set up specifically to ignore additional hard-pulls for the same type of credit within a bounded period; so if you go shopping for a mortgage (or an auto loan, or a student loan), then multiple queries within a 30 day period are coalesced, and only count once from a scoring perspective.
I don't think it's unreasonable for one's credit score to get dinged if half a dozen lenders check their score; that means the consumer went through the initial stages of trying to get multiple loans.
But more importantly the ding is maybe 10 points which is pretty much irrelevant to the “buckets” that define credit worthiness (>800, 700-800, etc).
Plus individual lenders can look at the report themselves and make their own determination.