
While theoretically very interesting, this seems to be pretty much a non-problem in practice: today's computers are fast, paragraphs are usually short, and imperfect results are not actually disastrous.



...in the same way that bubblesort is a "non-problem" for small input sizes?

The problem is that in the real world, especially with untrusted arbitrary input, you can easily cause these algorithms to hit their worst-case time. Combine that with the typical (in)efficiency of HLLs and you have a potential DoS vector. See the exponential backtracking behaviour of some regex engines for a related example.
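As a concrete sketch of that regex example (input sizes here are illustrative, not from the thread): the classic pattern (a+)+$ triggers catastrophic backtracking in backtracking engines such as CPython's re module, and each extra character roughly doubles the failure time:

    # Sketch: catastrophic backtracking in a backtracking regex
    # engine (CPython's re). Input sizes are illustrative.
    import re
    import time

    pattern = re.compile(r'(a+)+$')

    for n in range(16, 25):
        s = 'a' * n + 'b'  # trailing 'b' forces the engine to backtrack
        start = time.perf_counter()
        pattern.match(s)   # fails, but only after ~2^n attempts
        print(n, round(time.perf_counter() - start, 3))

On a typical machine the time roughly doubles with each added 'a', which is exactly the kind of attacker-controlled blow-up being described.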


It will always be important to sanitize and validate your input.


Sure, but that doesn't mean we should introduce more classes of problems to check against. Designing algorithms that handle all cases efficiently is often better than trying to filter out every input that triggers the cases a lazy algorithm can't handle.


Manuals and humanities texts can easily have paragraphs of more than 500 words. At n^2, that's 250,000 operations, which may take quite a while.


250,000 operations is nothing for a modern computer that routinely drives more than 1M pixels on its screen.
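For a rough sense of scale (a throwaway micro-benchmark; timings are obviously machine-dependent): even in plain Python, one of the slower mainstream languages, 250,000 trivial operations finish in tens of milliseconds:

    # Rough scale check: 500^2 = 250,000 cheap operations in
    # plain Python. Timings are machine-dependent.
    import time

    n = 500  # words in an unusually long paragraph
    start = time.perf_counter()
    total = 0
    for i in range(n):
        for j in range(n):
            total += 1  # stand-in for one unit of O(n^2) work
    print(total, 'iterations in', time.perf_counter() - start, 'seconds')

Each real unit of work in the algorithm under discussion costs more than an integer increment, of course, but the point about raw operation counts stands.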


But that is just ONE paragraph under ONE circumstance.



