The named integer range thing is interesting. I guess it depends on what your goal is. Could you use asserts? Could you wrap the integer in an object and embed the restriction logic there?
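For the wrapping idea, here's a minimal sketch in Rust of what that could look like (the type name and API are made up for illustration); note the check only ever runs at runtime:

```rust
/// Hypothetical wrapper that only admits values in 1..=10.
/// The restriction lives in the constructor, so it is enforced at runtime only.
#[derive(Debug, Clone, Copy)]
struct Bounded1To10(u8);

impl Bounded1To10 {
    fn new(value: u8) -> Option<Self> {
        // Runtime range check; the compiler itself knows nothing about the range.
        if (1..=10).contains(&value) {
            Some(Self(value))
        } else {
            None
        }
    }

    fn get(self) -> u8 {
        self.0
    }
}

fn main() {
    assert_eq!(Bounded1To10::new(7).map(Bounded1To10::get), Some(7));
    assert!(Bounded1To10::new(42).is_none());
}
```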
Even in compiled languages, I don't think the benefits would outweigh the difficulties this creates for compiler optimization.
It starts with the question of how I would like to design my sequences:
- Is it the range (0, 10] in steps of 1?
- Is it the range (0, 2^10) in steps by the power of 2?
- Is it the range (0.0, 2PI] in steps of EPSILON?
How would the compiler engineer generalize this optimization?
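To make that concrete, here is a rough sketch in Rust (all names invented) of the validity checks those three shapes would need; each one is different, which is part of what a compiler would have to generalize over:

```rust
use std::f64::consts::PI;

/// (0, 10] in steps of 1: an ordinary bounded integer.
fn valid_small_index(n: u8) -> bool {
    (1..=10).contains(&n)
}

/// (0, 2^10) in steps by powers of two: only 1, 2, 4, ..., 512 qualify.
fn valid_power_of_two(n: u16) -> bool {
    n.is_power_of_two() && n < 1024
}

/// (0.0, 2*PI] in some float step: the bounds are easy to check at runtime,
/// but a "step of EPSILON" isn't even expressible as a compile-time parameter
/// in today's stable Rust, since const generics don't accept floats.
fn valid_angle(x: f64) -> bool {
    x > 0.0 && x <= 2.0 * PI
}

fn main() {
    assert!(valid_small_index(10));
    assert!(valid_power_of_two(512));
    assert!(valid_angle(PI));
}
```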
And then there's the question of whether I'd really be able to define them that precisely before runtime. Most applications are just dealing with lightweight structures nowadays, mostly serialized JSON, and even there you get enough fuck-ups[0] where such an optimization wouldn't help at all.
I can imagine the places where they really matter are heavy, memory-intensive data structures like deep neural networks, graphics or video and the like - but for the time being those just deal with tensors of data type floatX, and that seems to be fine AFAIK.
I mean, it'd be really nice if the smaller memory footprint came out of the box during compilation. But all the CLI tools written in Rust certainly don't have a use case that justifies putting this complication on the shoulders of compiler research.
You could do it with asserts or as a wrapped type, but both of those approaches are only checked at runtime. That means you get runtime errors instead of compile-time errors, and it limits the compiler's ability to exploit the constraint when optimising. Asserts also need to be added everywhere, by both you and the compiler. And wrapped types are annoying to use since you lose access to all the standard arithmetic operators.
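As an illustration of that ergonomics cost, here's what a hand-rolled wrapped type tends to look like in Rust (the `Percent` type is invented for the example); every operator has to be re-implemented by hand, and the check still only fires when the program runs:

```rust
use std::ops::Add;

/// Hypothetical wrapper for values in 0..=100.
#[derive(Debug, Clone, Copy)]
struct Percent(u8);

impl Percent {
    fn new(v: u8) -> Self {
        // Caught at runtime, not at compile time.
        assert!(v <= 100, "value out of range");
        Percent(v)
    }
}

// Arithmetic has to be re-implemented manually for the wrapper,
// and the range check still only happens during execution.
impl Add for Percent {
    type Output = Percent;
    fn add(self, rhs: Percent) -> Percent {
        Percent::new(self.0 + rhs.0)
    }
}

fn main() {
    let a = Percent::new(60);
    let b = Percent::new(70);
    let _c = a + b; // compiles fine; panics at runtime instead of failing to compile
}
```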
It would be better if the type system could encode types like this directly. Better for ergonomics and better for optimisation.
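For a taste of what that buys you, Rust's standard library already has one narrow case of a constraint encoded directly in the type: `std::num::NonZeroU32`. The compiler uses the "never zero" guarantee for layout optimisation, so `Option<NonZeroU32>` is no bigger than a plain `u32`:

```rust
use std::mem::size_of;
use std::num::NonZeroU32;

fn main() {
    // The "not zero" constraint is part of the type, so the compiler can use
    // the spare bit pattern (0) to represent None with no extra space.
    assert_eq!(size_of::<Option<NonZeroU32>>(), size_of::<u32>());

    // Construction is still checked, but the constraint then travels with the type.
    let n = NonZeroU32::new(5).expect("non-zero");
    assert_eq!(n.get(), 5);
}
```

A general integer range type would extend that same idea to arbitrary bounds.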