Human judgement is like a house built on sand: it is provably feeble [0]. I've literally never in practice seen a human update their beliefs using Bayes' formula. I suspect we'll find that at some point fairly soon AIs will just have better judgement than us, because they can be programmed to incorporate formal statistical concepts while humans have to rely on evolving grey goop which we haven't quite mastered. I imagine it'll be almost comical watching human experts go up against a system that can actually intuit the difference between a 60% and a 70% chance of something happening in their risk calculations.
Humans will still have a role expressing preferences and subjective questions though. Questions like "how much risk do you want your investments to take?" or "does this look good?" ultimately can't be answered by AIs because they depend on the internal state of a human.
> I've literally never in practice seen a human update their beliefs using Bayes' formula.
Then you've never debugged anything genuinely difficult.
Moving from "Where did I screw up in my code?" to "Is this library broken?" to "Wait, that's not possible. Let's look at the compiler output on Godbolt." to "Are you kidding me? The SPI system returns garbage in the last bit for transactions of 8n+1 bits?" (BTW, Espressif, please fix that in the C6. Kthxbye.) is all about establishing ground truth and adjusting your Bayesian priors as you gather evidence.
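That debugging loop really is just Bayes' rule applied over a handful of bug-location hypotheses. Here's a minimal sketch in Python; the hypothesis names, priors, and likelihoods are all made-up numbers for illustration, not anything measured:

```python
def bayes_update(priors, likelihoods):
    """Return posteriors P(H|E) from priors P(H) and likelihoods P(E|H)."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Initial beliefs about where the bug lives. Like most people, you start
# out ~90% sure the bug is in your own code. (Assumed numbers.)
priors = {"my_code": 0.90, "library": 0.07, "compiler": 0.02, "hardware": 0.01}

# Evidence: a minimal repro still fails with your own code stripped out.
# How likely is that observation under each hypothesis? (Assumed numbers.)
likelihoods = {"my_code": 0.05, "library": 0.9, "compiler": 0.8, "hardware": 0.8}

posterior = bayes_update(priors, likelihoods)
# After one round of evidence, "my_code" drops sharply and "library"
# becomes the leading hypothesis; repeat with new evidence each step.
```

Each "Wait, that's not possible" moment in the story above is one of these updates: the evidence was wildly unlikely under the current favorite hypothesis, so probability mass shifts down the chain toward the library, the compiler, and finally the silicon.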
If there is one thing AI is currently shockingly bad at, it's updating its assumptions when it is confronted with evidence that they are incorrect. It will tirelessly spin its wheels (seemingly) forever laboring under a false assumption, running into dead end after dead end without ever coming to the conclusion that it should reexamine its assumptions.
Humans may not update like textbook Bayesians, but we read context, shift priorities, and act under pressure. Judgment isn't just math; it's lived experience and intuition applied in the moment. That's still hard to replicate.
[0] See also, the academic field of psychology