I'm mildly surprised that the compiler compiles the code so literally that the binary blows up like that, with linear runtime to match. I would have guessed that for such straight-line execution of one of the simplest mathematical operations yielding a boolean, a modern highly-optimizing compiler would do something smarter with it.
What would it take to make the compiler optimize this code meaningfully?
(What matters is whether the compiler can turn the sequence of ifs into a switch statement; both compilers then have special logic to lower a dense switch into a lookup table.)
Interesting. I only know a little assembler, so I can't tell exactly what -O3 is producing there, but I know enough to see that it's doing some sort of branching between even/odd labels. Is that now a constant-size binary (even if still linear time)?
What GCC is doing at -O3 is encoding even (resp. odd) numbers in a bitmap.
43690 = 0xAAAA = 0b1010101010101010
21845 = 0x5555 = 0b0101010101010101
Then
sal rax, cl
test eax, 43690
jne .L3
.L3:                          # target of the jne above
mov edi, OFFSET FLAT:.LC1     # pointer to the string "odd"
call puts
jmp .L2
is equivalent to C:
if ((1 << number) & 0xAAAA) {
puts("odd"); // x64 is little endian
}
So execution is constant time: it only needs a shift and a test against two constant masks, and there's no loop.
It should be advantageous compared to a jump table because it avoids an indirect jump (though you now have conditional branches that might mispredict).