Hacker News

Do FFmpeg and co. have "macrobenchmarks" as well? I would imagine software like that would have a diverse set of videos and a bajillion different encoding / decoding / transformation sets that are used to measure performance (time, CPU, file size, quality) over time. But it would need dedicated and consistent hardware to test all of that.


You don't actually need dedicated or consistent hardware, if you are willing to do some statistics.

Basically, you'd do a block design (https://en.wikipedia.org/wiki/Blocking_(statistics)): on any random hardware you have, you run both versions back to back (or even better, interleaved), and note down the performance.

The idea is that the differences between machines, and anything else running on them, are noise, and you design your experiment so that the noise affects both arms in the same way---at least statistically.

Downside: you have to do more runs and more statistics to deal with the noise.

Upside: you can use any old hardware you have access to, even if it's not dedicated. And the numbers are arguably going to be more representative of real conditions, and not just a pristine lab environment.
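The approach above can be sketched in a few lines of Python. This is a minimal illustration, not production benchmarking code: `version_a` and `version_b` are hypothetical stand-ins for the two builds being compared, and each loop iteration is one block in which both versions run back to back in random order, so whatever noise hits that block hits both arms alike. The paired per-block differences are then summarized with a rough normal-approximation confidence interval.

```python
import random
import statistics
import time

def bench(fn, n=100_000):
    """Time one run of fn on this machine (whatever machine that is)."""
    start = time.perf_counter()
    fn(n)
    return time.perf_counter() - start

def version_a(n):
    # Hypothetical baseline implementation.
    sum(i * i for i in range(n))

def version_b(n):
    # Hypothetical candidate implementation.
    total = 0
    for i in range(n):
        total += i * i

# Block design: each iteration is one block. Run both versions inside
# it, in random order, so shared noise affects both arms the same way.
diffs = []
for _ in range(50):
    arms = [("a", version_a), ("b", version_b)]
    random.shuffle(arms)
    times = {name: bench(fn) for name, fn in arms}
    diffs.append(times["b"] - times["a"])  # paired difference per block

mean = statistics.mean(diffs)
sem = statistics.stdev(diffs) / len(diffs) ** 0.5
print(f"mean diff {mean:.2e}s, ~95% CI "
      f"[{mean - 2 * sem:.2e}, {mean + 2 * sem:.2e}]")
```

If the confidence interval excludes zero, you have evidence of a real difference despite running on noisy, non-dedicated hardware; the pairing within blocks is what cancels the machine-to-machine variation.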





