> But indeed: The abstract Hashtable Packing Problem is strongly NP-complete. We can therefore stop looking for optimal solutions and instead use heuristics (and still sleep well).
It depends on what you mean exactly.
Many instances of NP-complete problems are solved optimally all the time, often quite quickly. See e.g. integer linear programming.
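To make that concrete, here's a toy sketch (assuming SciPy >= 1.9 is available; the item data is made up): 0/1 knapsack is NP-complete, yet an off-the-shelf MILP solver returns a provably optimal answer for a small instance almost instantly.

```python
# Toy sketch, assuming SciPy >= 1.9; the items below are made up.
# 0/1 knapsack is NP-complete, but the MILP solver proves optimality
# for this instance essentially instantly.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

values = np.array([10, 13, 18, 31, 7, 15])    # hypothetical item values
weights = np.array([2, 3, 4, 7, 1, 3])        # hypothetical item weights
capacity = 10

res = milp(
    c=-values,                                             # milp minimizes, so negate the values
    constraints=LinearConstraint(weights.reshape(1, -1), 0, capacity),
    integrality=np.ones_like(values),                      # every variable is integer
    bounds=Bounds(0, 1),                                   # 0/1 decision variables
)
print(res.x, -res.fun)   # chosen items and the optimal total value
```

Worst-case it's still exponential, but typical instances rarely hit that, which is the whole point.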
I disagree. Any particular decision problem can be seen as an instance of an NP-hard problem, so that framing alone tells you nothing. If you know you're looking at a subset of some NP-complete family, you should try to work out whether that subset is itself NP-hard (which you could show by finding an NP-hard problem such that any of its instances can be converted to an instance of your problem).
A solver won’t be the optimal solution that you may (or more likely may not) discover for your particular problem. But it’ll be good enough in most cases.
A solver, like an integer linear programming solver, will find optimal solutions (or approximations, depending on what you ask for, and how long you give it to run).
Right. For some problems it’ll be optimal in execution time; for most it won’t be, and you may be forced to let it approximate. But that’s usually still good enough.
Which is distinct from spending time trying to find an optimal solution, in the general case.
In general, using a general-purpose solver won't be 'optimal in execution time'. (And in general, we have no clue what the optimal execution time for any NP hard or NP complete problem is, because then we'd also have solved P vs NP.)
> For some problems it’ll be optimal in execution time; for most it won’t be, and you may be forced to let it approximate. But that’s usually still good enough.
Yes, it depends on your problem and your application. For some problems, you can approximate well, and in some applications that's good. And in some other applications it's fine to occasionally not solve a problem at all.
> Many instances of NP-complete problems are solved optimally all the time ...
Okay, but the optimal solution of any NP-complete problem is still at least superpolynomial in complexity. If "optimally" also meant computationally feasible in the general case (polynomial), we would have proved P=NP.
That's at most true for worst-case instances, not necessarily for the instances you will see in practice. Many instances of many NP-complete problems can be solved rather quickly.
Use an SMT solver or look into mixed integer linear programming solvers for some examples.
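For the SMT route, here's a hedged sketch with Z3's Python bindings (the z3-solver package). The formulation, rows of occupied columns that each get an offset so that no two occupied cells collide in the packed array, is my own toy stand-in and may not match the article's exact packing problem.

```python
# Hedged sketch using z3-solver. The "rows of occupied columns packed by
# per-row offsets" formulation is a toy stand-in, not necessarily the
# article's exact hashtable packing problem.
from z3 import Int, Optimize, Distinct, sat

rows = {0: [0, 3], 1: [1, 2], 2: [0, 1, 4]}    # made-up sparse rows

opt = Optimize()
offset = {r: Int(f"offset_{r}") for r in rows}
size = Int("size")

for r, cols in rows.items():
    opt.add(offset[r] >= 0)
    for c in cols:
        opt.add(offset[r] + c < size)           # every packed cell fits in the array

# no two occupied cells may land on the same slot
opt.add(Distinct(*[offset[r] + c for r, cols in rows.items() for c in cols]))

opt.minimize(size)                              # smallest packed array the solver can prove
if opt.check() == sat:
    m = opt.model()
    print({r: m[offset[r]].as_long() for r in rows}, "size =", m[size].as_long())
```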
And equally, if "the optimal solution of any NP-complete problem is still at least superpolynomial in complexity" were true, then we would have proven P!=NP.
Excellent point. "Any" and "at least" were overstated. It should have been "all so far." But when setting expectations for at least the next few decades, I'm definitely going with the side that has all the evidence so far (even if it isn't proof).
I sit here wondering how Ryan Williams' treatise on Simulating Time With Square-Root Space could be applied to this problem [0]. I guess, to apply it effectively, one would need to express the packing algorithm as a tree-structured computation and implement the Tree Evaluation framework (Cook/Mertz), potentially integrating it with existing heuristic searches ... This would enable space-efficient exploration of hashtable configurations during precomputation, particularly useful for memory-constrained environments.
But it's still not clear to me whether this would be practically useful enough.
Hashtable packing can be solved in O(n) space: just try out all possible combinations of offsets. To improve on this with square-root-of-time space, you would need an algorithm that takes o(n²) time, but since hashtable packing is NP-complete, the existence of such an algorithm would imply P = NP.
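In code, that brute force is roughly the following (using a made-up "rows of occupied columns" formulation of the packing problem, purely for illustration):

```python
# Illustration of the linear-space brute force: enumerate every combination of
# row offsets up to a bound and keep the best collision-free packing.
# Exponential time, but only O(n) working space. The "rows of occupied
# columns" formulation is a made-up stand-in for the packing problem.
from itertools import product

rows = [[0, 3], [1, 2], [0, 1, 4]]                      # made-up sparse rows
bound = sum(max(cols) + 1 for cols in rows)             # stacking rows end to end always fits

best = None
for offsets in product(range(bound + 1), repeat=len(rows)):
    slots = [off + c for off, cols in zip(offsets, rows) for c in cols]
    if len(set(slots)) == len(slots):                   # collision-free packing
        size = max(slots) + 1
        if best is None or size < best[0]:
            best = (size, offsets)

print(best)   # (packed array size, offsets per row)
```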