A bad implementation of quicksort has an O(n^2) worst case, and that's not really a fair comparison. I'm not defending quicksort, just pointing out that a fair comparison should keep in mind that we can easily select better pivots and achieve best-, average-, and worst-case complexity of O(n log n) for quicksort.
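
For concreteness, a minimal sketch (mine, in Python) of what "better pivots" usually means in practice, median-of-three pivoting; this is a sketch of the idea, not a tuned implementation:

    # Median-of-three pivot selection: order a[lo], a[mid], a[hi] so the
    # median of the three lands at a[mid], then use that value as the pivot.
    def median_of_three(a, lo, hi):
        mid = (lo + hi) // 2
        if a[mid] < a[lo]:
            a[lo], a[mid] = a[mid], a[lo]
        if a[hi] < a[lo]:
            a[lo], a[hi] = a[hi], a[lo]
        if a[hi] < a[mid]:
            a[mid], a[hi] = a[hi], a[mid]
        return a[mid]

    def quicksort(a, lo=0, hi=None):
        if hi is None:
            hi = len(a) - 1
        if lo >= hi:
            return
        pivot = median_of_three(a, lo, hi)
        i, j = lo, hi
        while i <= j:                       # Hoare-style partition
            while a[i] < pivot:
                i += 1
            while a[j] > pivot:
                j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i += 1
                j -= 1
        quicksort(a, lo, j)
        quicksort(a, i, hi)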

Comparing sorting algorithms like this shows that we really care about more than just time complexity. We care about memory used or swapped (down to the byte), quirks of popular processors, and expected run times on real-world data.




You say that you're not "defending quicksort"--not that it really matters if you were, it's a great algorithm--but you are. And you're wrong.

We can't easily select better pivots for quicksort. For every deterministic pivot selection based on a fixed number of elements, an input can easily be constructed that drives it into quadratic behavior.[1]
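
To see the degenerate behavior in the simplest possible setting (this toy is mine, not the construction from [1]): a first-element pivot on already-sorted input produces one-element-versus-rest splits, so the comparison count grows as about n^2/2. [1] shows how to build killer inputs even when the pivot comes from a small sample such as median-of-three.

    # Toy quicksort with a naive deterministic pivot (the first element),
    # counting comparisons to expose the quadratic blow-up on sorted input.
    def count_comparisons(a):
        comparisons = 0
        def qsort(xs):
            nonlocal comparisons
            if len(xs) <= 1:
                return xs
            pivot, rest = xs[0], xs[1:]
            left, right = [], []
            for x in rest:
                comparisons += 1
                (left if x < pivot else right).append(x)
            return qsort(left) + [pivot] + qsort(right)
        qsort(a)
        return comparisons

    for n in (100, 200, 400):
        print(n, count_comparisons(list(range(n))))
    # 4950, 19900, 79800: roughly quadrupling each time n doubles (n*(n-1)/2)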

In order to select a "good enough" pivot for quicksort, you can't just examine 3 elements or 5 elements or 10 elements; you must examine 0.5N elements or 0.25N elements or 0.1N elements. If you pick the median of that subset of elements (which you must do in linear time, mind you! Have you read CLR's "median of medians" algorithm, the one you have to use because quickselect doesn't guarantee linear time?) then you're guaranteed to complete in O(n log n) time, but at what cost? You've massively increased both the complexity and the constant factors of your solution.
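
For reference, a compact sketch (mine, not the CLR pseudocode) of that guaranteed-linear-time selection, the groups-of-five "median of medians"; even stripped down like this, you can see where the extra machinery comes from:

    # Worst-case linear-time selection (median of medians).
    def median_of_medians(a):
        if len(a) <= 5:
            return sorted(a)[len(a) // 2]
        chunks = (a[i:i + 5] for i in range(0, len(a), 5))
        medians = [sorted(c)[len(c) // 2] for c in chunks]
        return select(medians, len(medians) // 2)

    def select(a, k):
        # k-th smallest (0-indexed): recurse only into the part that holds it.
        pivot = median_of_medians(a)
        lows   = [x for x in a if x < pivot]
        highs  = [x for x in a if x > pivot]
        pivots = [x for x in a if x == pivot]
        if k < len(lows):
            return select(lows, k)
        if k < len(lows) + len(pivots):
            return pivot
        return select(highs, k - len(lows) - len(pivots))

Plugging select(a, len(a) // 2) in as the pivot at every partition step gives the O(n log n) worst case, but each level of the recursion now pays for the group sorts plus a nested recursive selection, which is exactly the complexity and constant-factor cost described above.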

[1] http://www.cs.dartmouth.edu/~doug/mdmspe.pdf (M. D. McIlroy, "A Killer Adversary for Quicksort") is one example of how to do so.


> [...] but at what cost? You've massively increased both the complexity and the constant factors of your solution.

That's what you get for derandomizing your algorithms.
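
For completeness, a minimal sketch (mine, not in-place) of the randomized version: with a uniformly random pivot no fixed input is bad, and the expected running time is O(n log n) for every input, since an adversary would have to predict the random draws.

    import random

    def randomized_quicksort(a):
        # Expected O(n log n) on every fixed input: quadratic behaviour now
        # requires unlucky random pivot draws, not just a crafted input.
        if len(a) <= 1:
            return list(a)
        pivot = random.choice(a)
        left   = [x for x in a if x < pivot]
        middle = [x for x in a if x == pivot]
        right  = [x for x in a if x > pivot]
        return randomized_quicksort(left) + middle + randomized_quicksort(right)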


also merge sort is stable :-)
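
The stability comes straight out of the merge step: when keys compare equal, the element from the left half is taken first, so equal keys keep their original order. A minimal sketch (mine), with a tiny demonstration:

    def merge_sort(items, key=lambda x: x):
        # Stable: on equal keys the element from the left half is emitted
        # first, preserving the original relative order of equal keys.
        if len(items) <= 1:
            return list(items)
        mid = len(items) // 2
        left = merge_sort(items[:mid], key)
        right = merge_sort(items[mid:], key)
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if key(left[i]) <= key(right[j]):   # "<=" is what makes it stable
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        return merged + left[i:] + right[j:]

    print(merge_sort([("a", 2), ("b", 1), ("c", 1)], key=lambda p: p[1]))
    # [('b', 1), ('c', 1), ('a', 2)]: ('b', 1) stays ahead of ('c', 1)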



