
It wouldn't produce an exception; it would not compile. The nice thing is that you can avoid range checking at runtime.

Exactly. Ada's modular types would be a good option here, if wrapping is what you actually want (my feeling is: most likely not, unless you are doing some low-level stuff). An alternative would be to rewrite the for loop in a functional or range-based style.
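For what it's worth, the closest thing to an Ada modular type outside Ada is probably Rust's `std::num::Wrapping`, which makes wrap-around the defined behavior of the type rather than an accident. A minimal sketch (the values here are just illustrative):

```rust
use std::num::Wrapping;

fn main() {
    // Wrapping<u8> behaves like an Ada modular type: arithmetic is
    // defined to wrap modulo 256, so overflow is intended, not a bug.
    let mut counter = Wrapping(250u8);
    counter += Wrapping(10u8);
    // (250 + 10) mod 256 = 4
    println!("{}", counter.0);
}
```

The point is the same as with Ada: the wrapping intent lives in the type, so readers of the code can't mistake it for an unnoticed overflow.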

In algorithmic code, you almost never want overflow. If you have a little function to calculate something, you want the intermediate variables to be big enough to perform the calculation, and at the end you cast the result down to the size needed (maybe the compiler can verify that, or maybe you know from some mathematical principle that the value is in a certain range and do it manually). In any case, I would want to be warned by the compiler if I am:

1. losing precision

2. performing a wrong calculation (overflowing)

3. accidentally losing performance (using bignums when avoidable)

1 and 2 can happen in C if you are not careful. 3 could theoretically happen in Python I guess, but it handles the int <-> bignum transition transparently enough that it was never an issue for me.
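To make points 1 and 2 concrete, here is a small Rust sketch (my own illustration, not from the thread): a plain `as` cast truncates silently, exactly the C behavior being criticized, while the checked alternatives surface the problem:

```rust
use std::convert::TryFrom;

fn main() {
    let wide: i64 = 300;

    // Point 1: a plain `as` cast silently truncates, like C.
    let narrow = wide as u8;
    assert_eq!(narrow, 44); // 300 mod 256 -- precision silently lost

    // The checked conversion reports the loss instead of hiding it.
    assert!(u8::try_from(wide).is_err());

    // Point 2: checked arithmetic turns overflow into a recoverable None.
    let x: u8 = 200;
    assert_eq!(x.checked_add(100), None);      // 300 doesn't fit in u8
    assert_eq!(x.checked_add(50), Some(250));  // in range, fine
}
```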



> It wouldn't produce an exception; it would not compile. The nice thing is that you can avoid range checking at runtime.

You're proposing a language where you cannot increment integers? I don't think that would be a very popular language.


You could increment integers so long as you make it clear what you will do in the overflow case. Either use bigints with no overflow, specify that you do in fact want modular behavior, or specify what you want to do when your fixed-width int overflows upon increment. That seems eminently sensible, instead of having overflow sit around as a silent gotcha enabled by default everywhere.
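This is roughly the menu Rust's integer methods already offer, so a sketch of "state your overflow policy at the call site" might look like:

```rust
fn main() {
    let x: u8 = 250;

    // Each call site names its overflow policy explicitly:
    assert_eq!(x.checked_add(10), None);          // overflow -> None, caller handles it
    assert_eq!(x.wrapping_add(10), 4);            // modular behavior, on purpose
    assert_eq!(x.saturating_add(10), u8::MAX);    // clamp at 255
    assert_eq!(x.overflowing_add(10), (4, true)); // wrapped value plus a flag

    println!("policies demonstrated");
}
```

None of these can be confused with an accidental wrap: the method name documents the intent, which is exactly the opposite of a silent default.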




