If you're going to make a big claim about sort speed, tell me how the speed varies with the data. How do the algorithms compare when the data is already ordered, when it's almost (but not quite) ordered, when it's largely ordered, when it's completely random, and when it's in reverse order? This, along with the size of the dataset, is what we need to know in practice.
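As a sketch of what such a benchmark matrix might look like in Python (the dataset names and the 1% perturbation rate are my own choices, not any standard):

```python
import random
import timeit

N = 100_000

# Data shapes that stress a sort differently.
datasets = {
    "sorted":        list(range(N)),
    "nearly_sorted": list(range(N)),
    "random":        random.sample(range(N), N),
    "reversed":      list(range(N - 1, -1, -1)),
}

# Perturb ~1% of positions to get the "nearly sorted" case.
nearly = datasets["nearly_sorted"]
for _ in range(N // 100):
    i, j = random.randrange(N), random.randrange(N)
    nearly[i], nearly[j] = nearly[j], nearly[i]

for name, data in datasets.items():
    t = timeit.timeit(lambda: sorted(data), number=5)
    print(f"{name:13s} {t:.3f}s")
```

Swap `sorted` for the algorithm under test and the same table falls out; the interesting result is usually not any single number but how steeply each row differs from the others.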
Rather than, or at least in addition to, raw measured speed on a specific piece of hardware (which is often affected in hard-to-understand ways by niche optimisation choices and quirks of that hardware), I really like the choice to report how many comparisons your algorithm needed.
For example, Rust's current unstable sort takes ~24 comparisons per element on average to sort 10^7 truly random elements, but if those elements are instead chosen (still at random) from only 20 possibilities, it needs only a bit more than five per element, whether there are 10^3 elements or 10^7.
Unlike "on this Intel i6-9402J Multi Wazoo (Bonk Nugget Edition), here are my numbers", which is not very useful unless you also have an i6-9402J in that specific edition, these comparison counts get at a more fundamental property of the algorithm, one that transcends microarchitecture quirks which won't matter next year.
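For what it's worth, counting comparisons is easy to do yourself in Python: wrap the elements in a class whose `__lt__` increments a tally. (`CountingInt` is a hypothetical name of mine, and `list.sort` here is Timsort, not Rust's unstable sort, so the counts will differ from the figures above.)

```python
import random

class CountingInt:
    """Wraps an int and tallies how often the sort compares it."""
    comparisons = 0  # class-wide counter
    __slots__ = ("v",)

    def __init__(self, v):
        self.v = v

    def __lt__(self, other):
        CountingInt.comparisons += 1
        return self.v < other.v

n = 10_000
for label, pool in [("fully random", n), ("20 distinct values", 20)]:
    CountingInt.comparisons = 0
    data = [CountingInt(random.randrange(pool)) for _ in range(n)]
    data.sort()  # list.sort only needs __lt__
    print(f"{label}: {CountingInt.comparisons / n:.1f} comparisons per element")
```

Running this shows the same qualitative effect: far fewer comparisons per element once the input has only a handful of distinct values.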
"My boss wants to buy systems with the Intel i10-101010F Medium Core Platinum (with rowhammer & Sonic & Knuckles), can you buy this $20,000 box and test your program so I can write him a report?"
It does give some insight into what you seek, at least. For example: "We find that for smaller n ≲ 262144, JesseSort is slower than Python's default sort."
I'd like to see a much larger n, but the charts in the research paper aren't really selling JesseSort. I think that as more and more "sorts" come out, they all get more niche. JesseSort might be good for a particular dataset size and ordering/randomness, but from what I see, we shouldn't be replacing Python's default sorting algorithm.
Concorde development was a (UK & France) national project. They would have had easy access to military aircraft. Aircraft like the Lightning might only just have been able to intercept but would easily have observed pre-arranged tests.
It's almost inconceivable that the test flights would not have been closely recorded, especially the significant ones, including transonic and supersonic ops. Despite the best design and wind-tunnel work, you'd expect things to go wrong, and you really want to learn as much as possible from any incidents/events.
Unfortunately, all this happened well before the internet age, and so records and images are not so easily found :(
Seems like a fashion thing, but the Linux distros I've recently checked out all defaulted to a dark mode. Fine for night, but a pain to read normally :(
> Everywhere a C constant-expression appears in the C grammar the compiler should be able to execute functions at compile time, too, as long as the functions do not do things like I/O, access mutable global variables, make system calls, etc.
That one is easily broken. Pick a function that runs for a loooong time...

    int busybeaver(int n) {...} // pure function: returns max lifetime of an n-state busy beaver machine
    int x = busybeaver(99);     // a "constant expression" the compiler can never finish evaluating

    int c;
    do {
        c = getchar();          // ordinary runtime I/O, which could never be folded anyway
        if (c == '*') break;
        putchar(c);
    } while (1);
I don't see why this needs a new construct in languages that don't already have it. It's just syntactic sugar that doesn't actually save any work. The one with the specialized construct isn't really any shorter and looks pretty much the same. Both have exactly one line in the middle denoting the split. And both lines look really similar anyway.
Absolutely. Current Windows is truly horrible: ads, ads, ads, crash :( If I were designing something to be deliberately distracting, annoying, and confusing, it would look a lot like Windows!