Hacker News

Are there any standard benchmarks for POSIX systems, facilitating objective comparisons between schedulers? When I was reading the dinosaur book, I was continually impressed by the elegance of the solutions used in Solaris, with the scheduler being one of the highlights. However, a sense of elegance may be poorly calibrated, and I'd like to look at some hard data about how Solaris (today Illumos) stacks up against the BSDs and Linux. Sadly I'm not knowledgeable enough about operating systems to write a good set of such benchmarks myself, so I'd prefer to lean on the expertise of smarter people.



From years of reading Phoronix articles, scheduling is generally one area where Linux really shines compared to other OSs. There are particular workloads where some other OS does better, but not overall. And many of the problems described in this article are complaints about Linux trading off what's best for HPC users against approaches that are better on servers or user devices. Like, the overload-on-wakeup behavior is absolutely what you want on anything battery powered, even if it hurts in TPC-H.


Some of those tradeoffs are made by the distributions - the kernel has (always) offered various schedulers but you have to pick one.


> the kernel has (always) offered various schedulers but you have to pick one

Umm, the mainline kernel has had only the CFS scheduler available for the past 15 years. Sure, there are some out-of-tree options available, but those come with the usual problems of running out-of-tree patchsets.
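(For context: what the mainline kernel does expose is a choice of per-task scheduling *policies*, all handled by the one scheduler. A minimal sketch using Python's POSIX scheduling wrappers to see which policy a process runs under; on a stock Linux kernel, ordinary tasks report SCHED_OTHER, the policy CFS implements:)

```python
import os

# Query the scheduling policy of the current process via the POSIX
# sched_getscheduler() wrapper (0 means "this process"). On a stock
# mainline Linux kernel, normal tasks run under SCHED_OTHER, which is
# the policy implemented by CFS (EEVDF in very recent kernels).
policy = os.sched_getscheduler(0)

# Map the numeric policy back to its constant name for display.
names = {getattr(os, n): n for n in dir(os) if n.startswith("SCHED_")}
print(names.get(policy, policy))  # typically SCHED_OTHER
```

Realtime policies like SCHED_FIFO/SCHED_RR can be requested with `os.sched_setscheduler` (or `chrt` from util-linux), but that selects a policy, not a different scheduler implementation.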


Huh, maybe my kernels have always had patchsets, because I always get an option to change the scheduler (but never do).


Actually, the history of the push for pluggable schedulers in the kernel is a fascinating one, and one I recall watching unfold in the mid-2000s. There were out-of-tree schedulers and a pluggable-scheduler implementation put forward by Con Kolivas before Ingo Molnar introduced the CFS patch, and a lot of frustration that pluggable-scheduler patch sets were rejected up until that point.


I do know this for sure:

'So it's not just "better", it's "Better" with a capital 'B'. Nothing else out there comes even close. The Linux dcache is simply in a class all its own.' -Linus Torvalds

https://www.tag1consulting.com/blog/interview-linus-torvalds...

I do realize that the dcache is not directly related to the scheduler (though it will certainly impact it), but I trust that performance enthusiasts will go to great lengths to extend Linux's top benchmark results in TPC and elsewhere.

It has also not been widely reported that a) Oracle posted a top TPC-C score shortly after acquiring Sun, running 11g on Solaris 10/SPARC, and b) OceanBase has since beaten that by an order of magnitude.

To see both the OceanBase and Oracle 11g/Solaris scores, historical benchmarks must be enabled:

https://www.tpc.org/tpcc/results/tpcc_results5.asp?print=fal...


If you're looking at the same results I am, I see a system with 28x the CPUs being 23x faster, after 10 years of CPU development. And substantially more expensive in total cost, too? Are we looking at the same thing? Did I get the math wrong? (Always a possibility.) Yes, it's a much bigger topline number, but it doesn't seem very impressive given all the infrastructure differences.
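(The back-of-the-envelope math here is easy to check. Normalizing the score by CPU count, using the 28x/23x figures exactly as stated in the comment rather than re-derived from the TPC result pages:)

```python
# Per-CPU efficiency implied by the comment's figures:
# 28x the CPUs delivering a 23x higher TPC-C score.
cpu_ratio = 28
score_ratio = 23
per_cpu = score_ratio / cpu_ratio
print(round(per_cpu, 2))  # 0.82 -> per-CPU throughput is lower, not higher
```

So on those numbers the newer system does roughly 0.82x the work per CPU, before even accounting for a decade of hardware improvement or the cost difference.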


Individually, every performance benchmark tests against a defined, repeatable workload. If you think about it: if you don't benefit from the performance improvements, does it really matter to you? And if the difference is noticeable to you, what metrics are you using to detect it? Once you narrow that down, it becomes easy to come up with a workload to compare them.
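(As a concrete starting point, one classic scheduler-sensitive micro-workload is a pipe ping-pong between two processes, which exercises wakeup latency and context-switch cost. A minimal POSIX sketch, not a rigorous benchmark: no CPU pinning, warmup, or statistics, and the name `pingpong` is just illustrative:)

```python
import os
import time

def pingpong(rounds=10_000):
    """Bounce one byte between parent and child over two pipes,
    returning the average seconds per round trip (two context switches)."""
    r1, w1 = os.pipe()  # parent -> child
    r2, w2 = os.pipe()  # child -> parent
    pid = os.fork()
    if pid == 0:
        # Child: echo each byte straight back.
        for _ in range(rounds):
            os.read(r1, 1)
            os.write(w2, b"x")
        os._exit(0)
    start = time.perf_counter()
    for _ in range(rounds):
        os.write(w1, b"x")
        os.read(r2, 1)
    elapsed = time.perf_counter() - start
    os.waitpid(pid, 0)
    return elapsed / rounds

print(f"{pingpong() * 1e6:.1f} us per round trip")
```

Running the same harness on Illumos, the BSDs, and Linux would give one narrow but directly comparable data point; tools like hackbench and schbench are more serious versions of the same idea.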


> the dinosaur book

Do you have a link?




