> That's bad for code that can handle overflow and do something reasonable, like switch to bignum integers.
I see that as an argument for throwing exceptions or setting a flag on overflow, not for silently wrapping to negative values.
Defining signed overflow to wrap means that I can't enable runtime checking for it without erroneously flagging intended cases of wraparound. Denying that check to support the tiny percentage of developers who write bignum libraries strikes me as a poor tradeoff.
(Simply checking for a sign change isn't sufficient for implementing bignums anyway. It's trivially possible to multiply a large positive number by a small positive number and have it wrap not only past negatives back to positives, but to a number larger than the input.)