Why do many programmers need tools to enforce something upon them? Why can't they take it slow, be mindful about what they write, and add a couple of layers as pre-commit hooks? Like a code formatter and maybe a linter?
This makes me sad. Programmers have the knowledge base to make some of the most sophisticated things bend to their will, and yet they say a programming language is bad because they can make mistakes with it. You can cut yourself while chopping onions, too, if you're not careful.
When I code C++, I generally select the structures/patterns I'm going to use, use them, and refactor my code regularly to polish it. Nobody enforces this on me, but this is how I operate. I add a formatter to catch/fix the parts I botched in the formatting department, and regularly pass my code through valgrind to see whether it's doing anything funny in the memory department.
> Why do many programmers need tools to enforce something upon them? Why can't they take it slow, be mindful about what they write, and add a couple of layers as pre-commit hooks? Like a code formatter and maybe a linter?
Why would I want to? Most of that is boring stuff that's hard to get consistently right and highly amenable to automation. Automation is what the computer is good at; it can have that job.
Like, why would I want to spend more mental bandwidth on tracking down whether every new goes with every delete than strictly necessary? Yeah, once in a while something is highly performance sensitive and it pays to design it just right for the use case. But it still tends to be surrounded with hundreds of other things that will do just fine with a smart pointer.
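To make that concrete, here's a minimal sketch of the trade (my illustration, not from the thread):

```cpp
#include <memory>

struct Widget { int id = 0; };

// Manual pairing: every `new` must meet its `delete` on every code path.
void manual() {
    Widget* w = new Widget{42};
    // ... an early return or exception between here and the delete leaks w
    delete w;
}

// Smart pointer: ownership lives in the type; the destructor runs on every
// path out of the scope, exceptions included, with nothing to track.
void automatic() {
    auto w = std::make_unique<Widget>(Widget{42});
    // ... no delete to remember
}
```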
> Automation is what the computer is good at, it can have that job.
A pre-commit hook is automation. It formats the code and gives you linting notes the moment you type "git commit".
> Like, why would I want to spend more mental bandwidth on tracking down whether every new goes with every delete than strictly necessary?
It's not more mental bandwidth for me, because I write the new, add the delete at the point it's needed, and then never think about it again. If I botch something, valgrind will tell me, even pinpointing the line number.
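For illustration (my sketch, not the commenter's code), this is the kind of slip valgrind reports, allocation site included:

```cpp
// leak.cpp -- a deliberate slip of the kind described above. Compiled with
// -g and run as `valgrind --leak-check=full ./leak`, the leak report points
// back at the `new int[16]` allocation site, file and line number included.
int main() {
    int* buffer = new int[16];
    buffer[0] = 1;
    // forgot: delete[] buffer;
    return 0;
}
```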
> ...other things that will do just fine with a smart pointer.
If that works for you, use it. I have no objections.
> So isn't that a tool that enforces something upon the programmer?
It's a tool I voluntarily add to my workflow, which works the way I want, when I want, according to project needs. It's not a limitation put upon me by the language/compiler/whatnot.
> What is the benefit of needing valgrind? Isn't it even better to have the task automated away so that the problem doesn't even come up?
In most cases, performance (think of HPC levels of performance). If I don't need that kind of performance, I can just use the stack or smart pointers.
If we're talking about moving to other programming languages, I'd rather not.
> It's a tool I voluntarily add to my workflow, which works the way I want, when I want, according to project needs. It's not a limitation put upon me by the language/compiler/whatnot.
I spent a lot of time maintaining a lot of Perl code. While it was generally very well written, it also made me a big fan of strict, demanding compilers. Perl is quick to write, but it can have very high debugging costs, since it'll let you get away with a lot that it shouldn't, like a function that is called with arguments but completely forgets to retrieve them.
Based on my experience, IMO any class of error that can be eliminated, should be.
So my modern approach is -Wall, -Werror, -Wextra, and anything else that can be had to that effect, all the time.
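As an illustration of why (my example, not the commenter's): the snippet below builds silently with default flags, but `g++ -Wall -Wextra -Werror warn.cpp` turns both slips into hard errors.

```cpp
// warn.cpp -- compiles without complaint by default,
// refuses to build under -Wall -Wextra -Werror.
#include <cstdio>

int sum(const int* values, unsigned count) {
    int total = 0;
    for (int i = 0; i < count; ++i)   // -Wsign-compare: int vs unsigned
        total += values[i];
    return total;
}

int main() {
    int unused = 0;                   // -Wunused-variable
    const int v[3] = {1, 2, 3};
    printf("%d\n", sum(v, 3));
}
```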
Most people are not you. Which, I guess, also answers your original question of "Why [do] many programmers [do things not the way I would do them]?". Incidentally, this is also the root of all the complexity in managing and coordinating people.
Every programmer knows the difference between the language and the tools surrounding it. This seems like you're trying to create a pointless semantic argument for no reason.
First of all: I am fallible. I do make mistakes, even when I concentrate. Secondly, I want to verify code others wrote. If a tool does the first pass quickly and automatically, I can quickly ensure some basic level of compliance and focus on the relevant parts.
In the end it boils down to: let the computer do what a computer does well, and do yourself the things a computer can't.
That doesn't mean one shouldn't think, but the computers are there and they are powerful, so use them. If the checker doesn't find anything: great. If it does: good that it's there.
The problem, of course, is that tools aren't particularly smart. When you create heavy-handed restrictions in your language, you're not just eliminating mistakes; you're eliminating tons of potential programs that make perfect sense. That is to say, you drastically reduce the expressiveness of the language.
That's why Rust, say, has an escape hatch. Unsafe Rust wouldn't exist if all you could write in it were mistakes. And async Rust is, to put it plainly, a pain in the ass.
These high-level tools are more like chemotherapy: you hope they kill more of the bad code before they kill you. They're not sophisticated, and it's fairly reasonable to prefer a language that lets you opt in to stricter safety rather than opt out.
The biggest problem with C++ isn't that you can't write clean, structured code with it.
It's that the language is so vast that the odds of any two developers working on two different projects agreeing on what that means are low. I programmed in C++ for a decade, then for a year or two at a second place, then picked it up again at a third... and all three places had nearly completely different best-practice protocols. Pre-processor macros are banned, but deep template abstraction and CRTP abound. Interface and implementation are separated even when templates demand they be in the same header chain, but here we do so by squirreling away the implementation in a separate file with a non-standard suffix instead of splitting it out but keeping it at the bottom of the same file. At my previous company, we used pointers to indicate that the called function could mutate the data... at my new firm, pointers in interfaces are just about banned unless absolutely necessary, and we indicate whether data could be mutated via const and documentation.
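A rough sketch of those last two house styles (my illustration, with hypothetical helper names):

```cpp
#include <cstddef>
#include <string>

// House style A: a pointer parameter signals "may be mutated";
// the caller writes trim(&name), so mutation is visible at the call site.
void trim(std::string* s) {
    while (!s->empty() && s->back() == ' ') s->pop_back();
}

// House style B: pointers in interfaces are avoided; mutability is
// expressed through const (plus documentation).
void trim(std::string& s);                   // may mutate
std::size_t width(const std::string& s);     // read-only, promised by const
```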
The language is broad enough to let the developer change almost anything, but it is unfortunately broad enough to let competent developers build competing and equivalently (un-)safe styles.
Because not everyone has the same values, not everyone has an engineering background (even though they like to call themselves engineers), and most programming projects are delivered by external companies that don't care about quality unless it is required by law (aka liability).
Look at it this way: every hour, people are coming across C++ for the first time. You can’t expect them to have the same discipline as seasoned programmers. The thing is that even if you teach them “the way”, there’s always a new batch who is clueless. You’re never going to get rid of them, and even the experts are going to make mistakes and take shortcuts.
Better to have sane strict defaults with an escape hatch for experts rather than an open range filled with footguns for any newbie to pick up.
Because even the most careful C++ programmer still makes plenty of mistakes. Not the "oops, I cut my finger" kind; those are usually found by linters and analyzers. More like the "there's a PCI breach and now we're liable for millions of dollars" kind: the semantics of C++ are hard to nail down, and ensuring that private data doesn't leak to other threads is almost impossible to get right by knowing only C++ and being careful.
You generally need to use higher-level tools for that kind of work: Coq + the Iris framework, for example, in order to prove that your system can freely share memory and not leak secrets so long as X, Y, and Z hold, etc.
Or you need to run a ton of tools like Jepsen to find if your system behaves the way you think it does.
What baking more of the specification language into the programming language does (i.e., better type systems) is enable us to have more of these guarantees earlier in the development process, when changes are cheaper and faster to make (at the expense of requiring more expertise to use them).
> Why do many programmers need tools to enforce something upon them? Why can't they take it slow, be mindful about what they write, and add a couple of layers as pre-commit hooks? Like a code formatter and maybe a linter?
Running a formatter and linter in a pre-commit hook is literally using tools to enforce things?
I noted this elsewhere in the thread, but I think I was unable to express myself very clearly.
What I really meant is tooling enforced externally on the programmer, in the form of the compiler, a development environment set up from elsewhere, or other guidelines, without any freedom to tune or disable them.
The layers I add are voluntary, just checks and balances I decided to add myself because I think they help me, not something enforced as part of the language or toolchain I'm forced to use.
IOW, a self-respecting developer striving to do a good job can continuously sharpen themselves, tuning their processes and fixing the problems they see as they go along their journey.
Perhaps pjmlp understood the gist of my comment; his answer pretty much hits the nail on the head. Honestly, I'm coming from a point where programming is more a passion that pays than a job for me, hence I have an inner drive to do my best and improve continuously. Not everyone shares the same set of values or inner drive about programming; some want to be railroaded into an environment where they can do the absolute minimum to get things done.
I agree with you. If those tools actually enforced good quality, it would be one thing, but what is actually being enforced is mediocrity, and the power play of some individuals who have decided for all the others what is good.
That is all well and good on a small team of senior people, but if your project has more than a handful of developers, with mixed experience levels, you want tools to enforce a minimum standard before even getting to code review.
> Why do many programmers need tools to enforce something upon them?
You can always count on the threads about C or C++ to have somebody ask questions like this.
As a hint, you won't see it asked in any other context. The answer is "all of them", no exceptions. It's been widely known for decades. In fact, C wasn't even universally used when people discovered this.
Most things learned don’t provide strict tools to enforce that you don’t use old practices where better modern practices exist. Do you question what the purpose is of learning most things?
Also, there are tools to look for old practices and suggest modern ones.
This is still the language that supports setjmp and longjmp and just documents that if you mix them with exceptions the behavior is undefined, right?
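For anyone who hasn't hit it, a minimal sketch of the hazard (mine, not the commenter's):

```cpp
#include <csetjmp>
#include <string>

std::jmp_buf env;

void risky() {
    std::string s = "has a non-trivial destructor";
    // Jumping out of this frame skips s's destructor. The standard makes
    // this longjmp undefined behavior: replacing it with a throw would
    // have invoked a non-trivial destructor.
    std::longjmp(env, 1);
}

int main() {
    if (setjmp(env) == 0)
        risky();
    return 0;
}
```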
You can't have a shameful past when you started shameful. ;) This language's roots are in "I wrote some extensions to simplify C, but I don't want to make them incompatible with C, so the extensions don't work coherently in all contexts and you only get sound code if you hold your mouth right", and it never actually got better, because nothing was ever removed to make the language more sound.
Well, it did. When it happened, it created other languages.
I'm still trying to clean up the mess from someone who knew what STL and smart pointers were, and then made their own broken smart pointers (their version of shared_ptr has a particularly nasty bug).
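The actual bug isn't shown above, but as a sketch of the classic failure mode such hand-rolled pointers tend to share (hypothetical code, not theirs):

```cpp
// Hypothetical hand-rolled shared_ptr -- not the code from the comment
// above. Copy assignment never releases the old target, and the reference
// count isn't atomic, so sharing across threads is a data race.
template <typename T>
class SharedPtr {
    T* ptr_ = nullptr;
    int* count_ = nullptr;  // std::shared_ptr uses a thread-safe control block
public:
    explicit SharedPtr(T* p) : ptr_(p), count_(new int(1)) {}
    SharedPtr(const SharedPtr& o) : ptr_(o.ptr_), count_(o.count_) { ++*count_; }
    SharedPtr& operator=(const SharedPtr& o) {
        // BUG: the old ptr_/count_ are overwritten without being released,
        // so every assignment leaks the previous target; self-assignment
        // quietly inflates the count so the object is never freed.
        ptr_ = o.ptr_;
        count_ = o.count_;
        ++*count_;
        return *this;
    }
    ~SharedPtr() {
        if (count_ && --*count_ == 0) { delete ptr_; delete count_; }
    }
};
```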
Visual Studio will yell at you if your code isn't conforming to the "C++ Core Guidelines" (those guidelines basically define what "Modern C++" even means).
Unfortunately it also yells at you when your *C* code violates the C++ Core Guidelines (at least that was the case a few years ago, when I permanently switched that "feature" off).
Many of the "Core Guidelines" are semantic requirements which are (provably) Undecidable, so even if tooling was created for them the tooling would necessarily have either false positives or false negatives (those are the only options, unless "both" counts as another option). In practice most of these are unaddressed, Microsoft understandably focused on checks which are never wrong and give actionable advice.
"Guideline support" does include libraries with classes that e.g. provide a slice which raises an exception on bounds miss, which is something, and it's certainly notable that the equivalently named C++ standard library feature is a foot gun† whereas the one in the guideline support library is not, so that's a good reason to adopt this. But VS does not in fact ensure you obey all the guidelines, it's very much a "best effort" approach even with that switched on.
† WG21 (The "C++ Standards Committee") seems to have concluded that because sometimes the fast way to do something is unsafe, therefore the unsafe way to do something must be faster... as if car executives noticed that 240mph crashes in their flagship sports car were often fatal and concluded that if they re-design their under-performing SUV so that the fuel tank explodes during any collision killing everybody aboard that'd improve its top speed...
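To make the footnote's comparison concrete, a small sketch (mine; the gsl half assumes the Microsoft GSL is installed):

```cpp
#include <span>

void demo(std::span<const int> s) {
    // std::span: operator[] is unchecked; if s has 3 elements, s[10] is
    // undefined behavior -- the foot gun described in the footnote.
    int a = s[2];
    (void)a;

    // gsl::span from the Guideline Support Library bounds-checks the same
    // access and fails fast on a miss instead of silently reading out of
    // bounds:
    //
    //   gsl::span<const int> g(s.data(), s.size());
    //   int b = g[10];  // fails fast rather than invoking UB
}
```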
Good point. Note that we can't necessarily tell it's an infinite loop, the constructive proof for this Undecidability reduces it to our old friend the Halting Problem, so we're in the land of the Busy Beaver.
It still does, given Microsoft's stance on security, and given that C++ used to be the main systems language, which is why C support languished until pressure from relevant customers forced them to pick it up.
I say "used to be" given the recent announcement that the Azure business unit is switching to Rust as the favoured systems language for new engineering activities.