But didn't the original problem arise because the compiler assumed overflow could never happen and simply optimized out the "undefined behavior"? What's to prevent compilers from optimizing out this check for similar reasons?
The expression ((INT32_MAX / 0x1ff) <= x) is well-defined and cannot overflow for any value of x of type int32_t: it is just x compared against a compile-time constant. There is nothing to "optimize away", because there is no input that would invoke undefined behavior.
The original code was like this:
if (x < 0)
    return 0;
int32_t i = x * 0x1ff / 0xffff;
if (i >= 0 && i < sizeof(tab)) {
(x * 0x1ff / 0xffff) can only be negative if (x < 0), which is ruled out because the function would already have returned, or if signed overflow has occurred, which is undefined behavior, so anything may happen. The compiler can therefore remove the (i >= 0) check, because the only way it can be false is if undefined behavior has already been invoked.
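In effect, the optimizer is free to treat those lines as if they had been written like this (the dropped check shown as a comment):

if (x < 0)
    return 0;
int32_t i = x * 0x1ff / 0xffff;
if (/* i >= 0 && */ i < sizeof(tab)) {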
If you add the overflow check you quoted before the assignment, the compiler can still optimize away the (i >= 0) check with the same reasoning as before. Only this time the function returns before an overflow can occur, so removing the check is harmless.
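For illustration only, a minimal sketch of the repaired code with the guard placed before the multiplication; the function name, return type, and the definition of tab are my assumptions, not the article's actual code:

#include <stdint.h>

static const unsigned char tab[32] = {0};  /* stand-in for the article's table */

unsigned char lookup(int32_t x)
{
    if (x < 0)
        return 0;
    /* Branch away before the multiplication that could overflow. */
    if ((INT32_MAX / 0x1ff) <= x)
        return 0;
    int32_t i = x * 0x1ff / 0xffff;
    /* The compiler may still drop (i >= 0), but that is now harmless:
       every x that could have made i negative was rejected above. */
    if (i >= 0 && i < sizeof(tab))
        return tab[i];
    return 0;
}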
The point of the "sledgehammer principle" described in the article is that checks for UB must occur before the UB could be invoked, and must branch away if they fail. You obviously can't do this either:
int i = 2 / x;
if (i != undefined) {
    return i;
}
return 0;
Instead, you'll have to do something like:
if (x != 0) {
    int i = 2 / x;
    return i;
}
return 0;