
I mean cases where the approximation isn't provably (or at least proven) good, but is still good in practice. This isn't uncommon with NP-hard problems or algorithms with bad worst-case behavior more generally, e.g. the Horspool variant of Boyer-Moore, or PCRE. It only takes one pathological case to wreck your guarantees, but those cases may be rare or even nonexistent in the real world.
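
For concreteness, here's a minimal Python sketch of Horspool (my own toy version, not taken from any library). The skip table makes it fast on typical text, but an adversarial text/pattern pair forces O(nm) comparisons:

    def horspool_search(pattern, text):
        """Horspool substring search: index of first match, or -1."""
        m, n = len(pattern), len(text)
        if m == 0:
            return 0
        # Skip table: default skip is the full pattern length; characters
        # that occur in the pattern (except its last position) skip less.
        skip = {c: m - 1 - i for i, c in enumerate(pattern[:-1])}
        i = 0
        while i <= n - m:
            # Compare right to left at the current alignment.
            j = m - 1
            while j >= 0 and text[i + j] == pattern[j]:
                j -= 1
            if j < 0:
                return i
            # Shift by the skip value of the text character aligned with
            # the pattern's last position.
            i += skip.get(text[i + m - 1], m)
        return -1

    # Fast on real-world text, but degenerate inputs hit the O(nm) worst case:
    # horspool_search("b" + "a" * 999, "a" * 100000)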

Of course you could prove better bounds for an approximation algorithm given a specific universe of inputs. Your 'probabilities bounded away from 1 and 0' sounds like an example of that (presumably the 'special cases' from the Bayesian network reference, but I only read the abstract). What I see more often are empirical studies showing, say, > 98% optimality for some kind of max cover approach to a specific problem, even though the general worst-case guarantee is only 1 - 1/e, about 63%.
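
To make that gap concrete, here's a hedged Python sketch (a toy random instance I made up, not drawn from any of those studies): greedy max cover carries the 1 - 1/e worst-case guarantee, yet on small instances it usually lands much closer to the brute-force optimum:

    import random
    from itertools import combinations

    def greedy_max_cover(sets, k):
        """Pick k sets greedily to maximize the number of covered elements."""
        covered, chosen = set(), []
        for _ in range(k):
            # Greedy step: take the set adding the most uncovered elements.
            best = max(sets, key=lambda s: len(s - covered))
            if not best - covered:
                break
            chosen.append(best)
            covered |= best
        return chosen, covered

    # Tiny empirical check against the brute-force optimum.
    random.seed(0)
    sets = [set(random.sample(range(30), 8)) for _ in range(12)]
    k = 3
    _, greedy_cov = greedy_max_cover(sets, k)
    opt = max(len(set().union(*combo)) for combo in combinations(sets, k))
    print(len(greedy_cov) / opt)  # typically far above the ~0.632 worst-case bound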


