Yes and no. Rust adds its own set of hassles. You think building becomes a cinch? With Rust, you're probably fighting the compiler more than with C++. I'm sure some game developers would rather have an occasional crash they can fix down the road after their game is published than be forced to make a perfect system the first time. Remember, with game production, it's about time-to-market, not about perfect code.
Being able to cut corners on code hygiene cuts both ways for productivity, though: you don't want to realize a week before a deadline that you have a hard-to-find memory leak or crash. A lot of the time Rust development feels slow because you are fighting the compiler. On the other hand, once you run the program you often get that Haskell-y "it worked because it compiled" feeling.
With a normal OO language I often build/run just to spot the next place where I made some bad assumption that the compiler didn't catch.
I haven't experienced that feeling for anything but toy programs.
But concerning productivity: fighting the compiler sometimes means abandoning perfectly reasonable (and efficient!) designs just because the compiler doesn't like them.
I'm not aware of any type corsets that I think force good designs.
In a really clean design mistakes are not terribly hard to fix, even in a language like C. Granted in C they are in some cases harder to find in the first place, but there might not be any commercial interest in going beyond "it seems to work".
Having to fight the compiler over reasonable code that it doesn't like will happen with any type system; you don't need one with lifetimes for that. The question is of course where to draw the line (where the drawbacks outweigh the benefits). I really like how you can opt out of the guarantees, e.g. to wire up a doubly linked node you might just drop to unsafe, because you can verify the memory-safety implications of those two lines, but not of the whole program.
How effective (and thus popular) Rust will be for creating large systems on tight deadlines remains to be seen I suppose - if it isn't competitive with C++ in that respect, then I'd consider that a failure. And a surprise.
One thing that's hard to do in safe code is getting two &mut references out of a HashMap at the same time, even if you know the keys are distinct. That might matter to some design somewhere?
The type corsets of ML-family languages have been the best design experience I've ever had. My programming intuition has got a lot better by using them; it's much easier to spot that there's a subtle issue with the design if it shows up as friction in the types. Even in something like Python the lessons apply, though I have to devote a lot more attention to it since I can't rely on the compiler to show me. Just my subjective experience of course.
Maybe I am not smart enough, but it's never clear to me which invariants should be encoded in types. You can encode one invariant but then not the other, and the other way around. The choice often seems arbitrary, but it has a huge influence on the overall design.
Sum types are a major headache. Is there a good rule for when to use a sum type vs distinct types? The expression problem is very practically relevant to me.
Also the typical flat vs hierarchical data storage wisdom applies: trees and hierarchies are very much encouraged by HM type systems, but the choice of what gets to be parent and what gets the child is arbitrary and often turns out super limiting further down the line. Similarly, there's the choice of what to include in a hierarchy and what to put in a separate one.
Tables on the other hand, supported by light usage of manually coded lookup tables, have been the real game changer for me. When I'm back in a normal imperative language I can be so naturally productive and write efficient programs without relying on black compiler magic. I don't see how most of the invariants in my programs could ever be codified in a practical type system. They are so relational - they involve variables with very diverse lifetimes and expressions depending on dynamic values.
In the end, I feel writing assertions is just much better for me: within a few tries I can be reasonably sure my invariants hold, in the same language I use for coding, with the same values available. Meanwhile I would waste hours trying to codify a small fraction of them in an HM type system.
> Maybe I am not smart enough, but it's never clear to me which invariants should be encoded in types. You can encode one invariant but then not the other, and the other way around. The choice often seems arbitrary, but it has a huge influence on the overall design.
I find it best to let confidence guide me. If I'm not confident something's right, that's usually a sign I didn't type it enough. If I think I know it already, then it doesn't need more types. It affects the design, but it should affect the design, I would say; decisions about which invariants are important are design decisions.
> Is there a good rule for when to use a sum type vs distinct types?
If at some point you have a value that you know (and care) is one particular, well, type, make it a real type. If you only ever have values that could be one or the other and it doesn't matter which they are (or the section where it matters can be reasonably confined to a match block) then a sum type.
> but the choice of what gets to be parent and what gets the child is arbitrary and often turns out super limiting further down the line.
I find this is much less true in an immutable-by-default language. E.g. in the standard circle/ellipse example it becomes completely obvious which is the parent and which is the child.
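A sketch of why (using invented `Circle`/`Ellipse` structs): with immutable data there is no setter that could stretch one axis and break the invariant, so viewing a circle as an ellipse is always safe, and the direction of the relationship falls out naturally.

```rust
struct Ellipse { rx: f64, ry: f64 }
struct Circle { r: f64 }

impl Circle {
    // Safe precisely because nothing can later mutate rx and ry
    // independently and violate the "circle" invariant.
    fn as_ellipse(&self) -> Ellipse {
        Ellipse { rx: self.r, ry: self.r }
    }
}

fn area(e: &Ellipse) -> f64 {
    std::f64::consts::PI * e.rx * e.ry
}

fn main() {
    let c = Circle { r: 2.0 };
    let a = area(&c.as_ellipse());
    assert!((a - 4.0 * std::f64::consts::PI).abs() < 1e-9);
}
```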
> Tables on the other hand, supported by light usage of manually coded lookup tables, have been the real game changer for me. When I'm back in a normal imperative language I can be so naturally productive and write efficient programs without relying on black compiler magic. I don't see how most of the invariants in my programs could ever be codified in a practical type system. They are so relational - they involve variables with very diverse lifetimes and expressions depending on dynamic values.
Heh, this is the opposite of my experience. I find tables always confuse me, and are usually a sign that my model needs to have an intermediate entity - like the experience described in http://wiki.c2.com/?WhatIsAnAdvancer
> I'm sure some game developers would rather have an occasional crash they can fix down the road after their game is published than be forced to make a perfect system the first time.
> Remember, with game production, it's about time-to-market, not about perfect code.
In a way you are over-selling Rust, because it doesn't offer perfect code! I'm not sure why Mozilla would pay to build it if that's what it was about.
What it offers is a lower defect rate, which is something you can definitely leverage to improve productivity. A lower defect rate at any cost is clearly too expensive; but developers seem to have been able to absorb the complexity of C++ alright, and Rust can hardly be called more complex than C++.
To be clear, I didn't say Rust offered perfect code. But supposedly better code comes from Rust, as you're arguing.
While Rust as a language isn't more complex, the paradigms are different. Games often have trees and lists, which are a real pain in Rust. To do things right in Rust requires learning new ways of doing things - not something game devs want to spend time on. They've been working with the same horse for years, so they keep riding it.
This might be a large investment for the engine itself, but minimal investment for the game. In my experience, the largest, buggiest and most complicated parts of a game are all contained within the engine.