Go also refuses to compile variables that are declared but never used.
And honestly, I like that it forces the source code not to accumulate bloat: lines that are there but don't do anything and can be misleading.
And in Zig, the language server can be configured to automatically add and remove `_ = variable;` statements, so this is frictionless.
Meanwhile, Haskell introduced typed holes to let you compile more kinds of programs that aren't fully finished yet.
I'm all for preventing this kind of thing from making it into the main branch, but a compiler error by its nature blocks compilation, which forces weird shenanigans like commenting out lines of code or adding arbitrary meaningless uses. Why is that better than a compiler warning (made mandatory in CI)?
Yeah, in C you can at least add the unused attribute and grep for it later. In languages that support neither the `_` prefix nor an attribute meaning unused, you have to comment out parts of the code, which is really annoying and, I think, worse.
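A minimal sketch of that pattern, assuming GCC/Clang's `__attribute__((unused))` (the `UNUSED` macro name here is just an illustrative choice; C23 spells the same thing `[[maybe_unused]]`):

```c
#include <stdio.h>

/* Hypothetical UNUSED macro wrapping GCC/Clang's attribute; because it is
   a named marker, every suppressed variable is easy to grep for later. */
#define UNUSED __attribute__((unused))

int main(void) {
    UNUSED int debug_counter = 0;  /* kept around for a future debugging session */
    puts("compiles cleanly with -Wall -Wunused-variable");
    return 0;
}
```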
In some C code where I have to suppress an unused-variable warning (for example for the argument of a pthread callback), I just do `(void)arg` at the top of the function.
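Roughly what that looks like, as a minimal sketch (the `worker` function and its body are made up for illustration; build with `cc -Wall -Wextra -pthread`):

```c
#include <pthread.h>
#include <stdio.h>

/* pthread_create requires this exact signature even when the argument
   isn't needed; casting it to void discards it and silences the warning. */
static void *worker(void *arg) {
    (void)arg;
    puts("worker running");
    return NULL;
}

int main(void) {
    pthread_t tid;
    if (pthread_create(&tid, NULL, worker, NULL) != 0)
        return 1;
    pthread_join(tid, NULL);
    return 0;
}
```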
The solution is easy: have separate debug and release modes and only enforce the unused-variable check in the latter (though still provide a flag to disable the check, for the occasional debugging of an issue that only happens with the release config).
Also, automatic `_ = var` insertion is the worst of all: now, instead of the compiler showing me one by one the few variables I'm temporarily not using so I can fix them, I've made n modifications all across the codebase (remember, this is all recursive) that are syntactically correct and won't be flagged by the compiler.