
>what about the prior probability of a good-faith statement in presence of a strong conflict of interest?

The definition of your prior would incorporate that.

>Having no ability to directly verify the truth, wouldn’t you require stronger evidence if you were to rely on that statement in your decision-making?

That is exactly the purpose of priors and Bayesian belief systems. If you could verify any truth directly, you would have no need for priors or Bayesian probability at all.

In none of these cases do you change the prior after evidence is obtained. You use the prior and a Bayesian update to form a new belief, which then serves as the prior for decision-making as future evidence arrives.

Retroactively changing the prior is demonstrably bad at reaching the proper conclusions over time. E. T. Jaynes's "Probability Theory" [1] gives detailed proofs of all of this in its first few chapters.
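As a concrete sketch of the point (all numbers are illustrative, not from this thread): the posterior from one update becomes the prior for the next, and chaining updates this way gives exactly the same answer as conditioning on all the evidence at once, so the original prior never needs retroactive revision.

```python
# Sequential Bayesian updating for a binary hypothesis H.
# All probabilities below are illustrative placeholders.

def bayes_update(prior_h, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) via Bayes' rule."""
    numer = p_e_given_h * prior_h
    denom = numer + p_e_given_not_h * (1.0 - prior_h)
    return numer / denom

prior = 0.10                         # initial P(H)
evidence = [(0.8, 0.3), (0.7, 0.4)]  # (P(E|H), P(E|not H)) pairs

# Sequential: each posterior becomes the next prior.
belief = prior
for p_e_h, p_e_not_h in evidence:
    belief = bayes_update(belief, p_e_h, p_e_not_h)

# Batch: condition on everything at once. The result is identical,
# which is why rewriting the original prior is never required.
num = prior
den = 1.0 - prior
for p_e_h, p_e_not_h in evidence:
    num *= p_e_h
    den *= p_e_not_h
batch = num / (num + den)

print(belief, batch)  # both equal 14/41, about 0.3415
```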

[1] https://www.cambridge.org/core/books/probability-theory/9CA0...



Say we are weighing the probability that the lab leak hypothesis is correct. Incoming evidence suggests a lab leak would be harmful to a wider array of institutions and individuals than originally assumed. We combine this new evidence with our previous belief about how likely a lab leak would be in the first place, a belief built on priors such as biosafety-violation and spillover history, plus statements made by the above-mentioned individuals and institutions (considered authoritative, but now found to be in a possible conflict of interest). That yields a new belief, which we then use as the prior the next time we assess the lab leak hypothesis. End result: the lab leak hypothesis being correct is assigned a lower prior likelihood and will require stronger evidence. Do we disagree on substance?
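One way to see the mechanism under discussion (a sketch with made-up numbers, not a claim about the actual case): evidence of a conflict of interest need not touch the prior at all. It changes the likelihoods assigned to the authorities' statements, so those statements move the belief less in the next update.

```python
# How a conflict of interest enters a Bayesian update.
# E = "authoritative sources deny hypothesis H"; numbers are illustrative.

def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) via Bayes' rule."""
    numer = p_e_given_h * prior_h
    return numer / (numer + p_e_given_not_h * (1.0 - prior_h))

prior = 0.2  # hypothetical P(H) from biosafety and spillover history

# Disinterested source: a denial would be unlikely if H were true,
# so observing one pulls the belief down sharply.
trusted = posterior(prior, p_e_given_h=0.2, p_e_given_not_h=0.9)

# Conflicted source: a denial is likely whether or not H is true,
# so the likelihood ratio is near 1 and the belief barely moves.
conflicted = posterior(prior, p_e_given_h=0.8, p_e_given_not_h=0.9)

print(trusted, conflicted)  # about 0.053 vs about 0.182
```

The prior (0.2) is the same in both cases; only the weight given to the testimony differs.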



