
I find it hilarious that rationalists have failed to notice, or to realize the consequences of, the fact that approximating an update to a Bayesian network, even to getting a probability that is within 49% of the true answer, is NP-hard.

The consequence is that for any moderately complex set of beliefs, it is computationally impossible for us to reason hard enough about any particular observation to correctly update our own beliefs. Two people who start with the same beliefs, then try as hard as they can with every known technique, may well come to exactly opposite conclusions. And it is impossible to figure out which is right and which is wrong.
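
To make the scaling problem concrete, here is a toy sketch of my own (not from any of the papers, with made-up numbers): exact inference in a discrete Bayesian network by brute-force enumeration. The sum runs over every joint assignment, so the work doubles with each binary variable you add, and the hardness results say that even settling for an approximation doesn't escape this in the worst case.

    # Toy illustration: brute-force exact inference in a Bayesian network over
    # binary variables. The sums range over all 2^n joint assignments, which is
    # why "just reasoning harder" stops scaling very quickly.
    from itertools import product

    def joint_prob(assignment, cpts):
        """P(x1..xn) as a product of conditional probabilities.
        cpts: dict mapping var -> (parents, function(parent_values) -> P(var=1))."""
        p = 1.0
        for var, (parents, cpt) in cpts.items():
            p1 = cpt(tuple(assignment[pa] for pa in parents))
            p *= p1 if assignment[var] else 1.0 - p1
        return p

    def posterior(query_var, evidence, cpts):
        """P(query_var=1 | evidence) by summing over all joint assignments."""
        variables = list(cpts)
        num = den = 0.0
        for values in product([0, 1], repeat=len(variables)):  # 2^n terms
            assignment = dict(zip(variables, values))
            if any(assignment[v] != val for v, val in evidence.items()):
                continue  # inconsistent with the evidence, skip
            p = joint_prob(assignment, cpts)
            den += p
            if assignment[query_var]:
                num += p
        return num / den

    # A tiny 3-node chain A -> B -> C, purely hypothetical numbers.
    cpts = {
        "A": ((), lambda _: 0.3),
        "B": (("A",), lambda pa: 0.9 if pa[0] else 0.2),
        "C": (("B",), lambda pa: 0.8 if pa[0] else 0.1),
    }
    print(posterior("A", {"C": 1}, cpts))  # exact, but cost doubles per added variable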

If rationalists really cared about rationality, they should view this as a very important result. It should create humility about the limitations of just reasoning hard enough.

But they don't react that way. My best guess as to why not is that people become rationalists because they believe in the power of rationality. This creates a cognitive bias toward finding ways to argue for the effectiveness of rationality, which leads to DISCOUNTING the importance of proven limits on what rationality can actually do. And succumbing to that bias demonstrates their predictable failure to actually BE rational.



> something something NP-hard.

Yes, in general. But the usual limits of "bounded rationality" make that result basically irrelevant. Most people don't hold a myriad of strong beliefs.

The problem is more like "the 10 commandments are inconsistent" and not that "Rawls' reflective equilibrium might not converge".


The point is that not reasoning brings worse outcomes. Reasoning doesn't have to be perfect to be worth doing.


That is an argument for learning how to reason. Which is not actually the point under discussion. Going back to the parent of my comment:

> One thing I've frequently noticed in the rationalist community is the belief that if we all just reason hard enough, we'll reach the same conclusions.

What I'm showing is that results like https://www.sciencedirect.com/science/article/abs/pii/000437... demonstrate that this belief is incorrect. Two people starting with the same priors, same observations, and same views on rationality may do the best they can and come to diametrically opposed conclusions. And a lifetime of discussion may be too little to determine which one is right. Ditto putting all the computers in the world to work on the problem for a lifetime.
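
To illustrate the flavor of that (a toy sketch of my own, not the construction from the paper, with made-up numbers): give two reasoners the same model, the same evidence, and the same approximation procedure, and they can still report different answers, with no cheap way to tell who is closer to the truth.

    # Toy illustration: two "reasoners" run the same approximate inference on
    # the same model and the same evidence, and still land on different answers.
    import random

    def sample_once(rng):
        # A tiny A -> B -> C chain with hypothetical numbers.
        a = rng.random() < 0.3
        b = rng.random() < (0.9 if a else 0.2)
        c = rng.random() < (0.8 if b else 0.1)
        return a, c

    def estimate_p_a_given_c(seed, n=200):
        """Rejection-sampling estimate of P(A=1 | C=1)."""
        rng = random.Random(seed)
        hits = kept = 0
        while kept < n:
            a, c = sample_once(rng)
            if c:           # keep only samples consistent with the evidence C=1
                kept += 1
                hits += a
        return hits / kept

    print(estimate_p_a_given_c(seed=1))   # one reasoner's conclusion
    print(estimate_p_a_given_c(seed=2))   # another's, from identical "priors"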

Real life is worse. We start with different priors and our experiences include different observations. Which makes stark disagreements even easier than when you start with an ideal situation of identical priors and experiences.

This result should encourage us to have humility about the certainty which can be achieved through rationality. But few rationalists show anything like that form of humility.


One of the good sides of NP is that you can validate a solution in polynomial time. So arriving at different conclusions is not a problem as long as you can recognize the reason.
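
For example (a toy sketch with hypothetical numbers): checking a claimed certificate for an NP problem like subset sum takes linear time, even though finding one may not.

    # Toy illustration: verifying a proposed certificate is cheap, even when
    # finding one is not.
    def verify_subset_sum(numbers, target, chosen_indices):
        """Polynomial-time check that the claimed subset really hits the target."""
        if len(set(chosen_indices)) != len(chosen_indices):
            return False                      # no index used twice
        if any(i < 0 or i >= len(numbers) for i in chosen_indices):
            return False
        return sum(numbers[i] for i in chosen_indices) == target

    numbers = [3, 34, 4, 12, 5, 2]
    print(verify_subset_sum(numbers, 9, [2, 4]))   # 4 + 5 == 9 -> True
    print(verify_subset_sum(numbers, 9, [0, 1]))   # 3 + 34 != 9 -> False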


a) You can verify that a solution is valid in polynomial time, but you can't verify whether or to what extent it's optimal. But even if this weren't the case...

b) The solution you're talking about is an update to the network. It's buried back in the network's construction, not directly visible in the network itself. Model "blame" is a thing, but not heavily researched or at all cheap, computationally.

That said, btilly's "getting an approximate probability answer that is within 49% of the real one, is NP hard" isn't exactly true either. That's a description of what it takes for an approximation algorithm to guarantee some factor, i.e. set a worst-case bound. In practice an approximation can still be nearly optimal on average.

I agree with the broader point, though.


True, there are lots of cases where an approximation can give provably good answers. But they usually require things like having probabilities bounded away from 1 and 0.

Unfortunately in the real world we actually are certain about lots of things. And as we add data, we tend to become more certain about previously uncertain things. Therefore we wind up with self-referential networks of beliefs that reinforce each other.

But sometimes there are two very different networks of beliefs, both of which are self-reinforcing. And a single data point can flip between them. Identifying which one is correct is computationally impossible. However when you encounter someone whose beliefs are very different, and you can find the feedback loops that draw each of you in different directions, there is a good chance that the differences between you cannot be resolved by pure logic alone.
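
Here is a toy sketch of what I mean by two self-reinforcing networks (my own framing, a dynamical caricature rather than a literal Bayesian network): two beliefs that prop each other up have two self-consistent resting states, and a tiny difference in the starting evidence decides which one you settle into.

    # Toy sketch: two mutually reinforcing beliefs have two stable fixed points,
    # and a small nudge in the initial evidence picks which basin you fall into.
    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def settle(initial_evidence, coupling=4.0, steps=100):
        """Iterate two mutually reinforcing beliefs to a fixed point."""
        b1 = b2 = sigmoid(initial_evidence)
        for _ in range(steps):
            b1 = sigmoid(coupling * (2 * b2 - 1))   # belief 1 leans on belief 2
            b2 = sigmoid(coupling * (2 * b1 - 1))   # and vice versa
        return b1, b2

    print(settle(initial_evidence=+0.1))  # settles near (1, 1): "believe both"
    print(settle(initial_evidence=-0.1))  # settles near (0, 0): "doubt both"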


I mean cases where the approximation isn't provably (or at least proven) good, but is still good in practice. This isn't uncommon with NP-hard problems or algorithms with bad worst-case behavior more generally, e.g. the Horspool variant of Boyer-Moore, or PCRE. It only takes one pathological case to wreck your guarantees, but those cases may be rare or even nonexistent in the real world.

Of course you actually could prove better bounds for an approximation algorithm given a specific universe of inputs. Your 'probabilities bounded away from 1 and 0' sounds like an example of that (presumably the 'special cases' from the Bayesian network reference, but I only read the abstract). What I see more often are empirical studies showing, say, > 98% optimality for some kind of max cover approach to a specific problem, even though the guarantee in general is < 64%.
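
As a rough sketch of that gap (my own toy instance, made-up numbers): greedy max cover only guarantees about 1 - 1/e, roughly 63%, of optimal in general, but on a random instance like this it usually lands much closer to the brute-force best.

    # Toy sketch of the worst-case vs. in-practice gap for max cover.
    import random
    from itertools import combinations

    def coverage(chosen_sets):
        return len(set().union(*chosen_sets)) if chosen_sets else 0

    def greedy_max_cover(sets, k):
        chosen, covered = [], set()
        for _ in range(k):
            best = max(sets, key=lambda s: len(s - covered))  # biggest marginal gain
            chosen.append(best)
            covered |= best
        return coverage(chosen)

    def optimal_max_cover(sets, k):
        # Brute force over all k-subsets; only feasible because the instance is tiny.
        return max(coverage(list(combo)) for combo in combinations(sets, k))

    rng = random.Random(0)
    universe = range(30)
    sets = [set(rng.sample(universe, 6)) for _ in range(12)]
    k = 4
    g, opt = greedy_max_cover(sets, k), optimal_max_cover(sets, k)
    print(g, opt, g / opt)   # ratio is typically far above the 0.63 guarantee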


The problem is that both are approximate updates. Each can verify in polynomial time that its update is not exact, and neither knows how to find the real answer.

Polynomial validation isn't going to help.



