It's worth noting that C++ in 2007 was a lot different from C++ in 2016. There were a lot of valid complaints back then which have since been addressed. (I know personally in 2007 I hated C++, but I find it pleasant to use now.) The compilers are massively improved, the STL is a lot more stable, tools and IDEs are vastly better, and some of the new language features are a huge help (lambdas, simple type inference, range-based loops, the override keyword, etc.). I'd still use C for an operating system, but things have definitely improved massively.
If I understand correctly, Linus was not only dismissing C++, but also the object-oriented programming paradigm. His main concern was abstractions that introduce inefficiencies, ones most of us would consider negligible when writing userland software.
What's interesting is Stroustrup proudly states that C++ provides "cost-free abstractions". I'm not too familiar with C++ implementations; is it fair to say that?
It's fair, in a sense that it's possible to implement cost-free abstractions in C++.
However, you have to know what you're doing: some of the C++ abstraction mechanisms will cause the compiler to insert executable code at places a C programmer might find unexpected (e.g constructor and destructor calls).
In short, you still have to learn how to use the language properly, and that's difficult, but definitely worth it.
Interestingly enough, some of the abstraction mechanisms provided by C++ (templates, inheritance, polymorphism, ...) can also be partially simulated in C with a lot of effort (the Linux kernel makes heavy use of them, macro-powered).
It's just that C++ makes those things easy to do ... allowing your incompetent teammates to easily shoot you in the foot.
> What's interesting is Stroustrup proudly states that C++ provides "cost-free abstractions". I'm not too familiar with C++ implementations; is it fair to say that?
C++ was invented when efficiency was measured in instruction counts and processor cycles, so he optimized his language in those terms.
Processors are much more complicated now, and cache alignment and branch prediction and pipelining are much more important for efficiency than counting the number of instructions something takes.
But even then C++ is basically cost free if used correctly. Of course I am not comparing it with some hypothetical hand optimized assembly, but specifically when considering cache alignment C++ has massive advantage in this area compared to languages like Java. Compared to C there is nothing in C++ that would make cache alignment more difficult by design.
If programmers weren't dogmatic it would be possible, but most are. It's the "we use boost/smart pointers/whatever for everything" programmers that produce the mess that I also wouldn't like to have in the kernel.
One example is calling virtual functions. There's an extra memory lookup to resolve the function call at runtime.
Almost all the time, that extra single memory lookup is irrelevant. But just occasionally, that cost can have an impact.
In principle it's no different to C - you need to understand the basics of what code the compiler is going to generate to do what you're asking it to. In C++ the constructs are much more complicated and that mapping becomes less clear.
Yes, and a little bit off topic but I say the same things about Java too. Java 8 is also great and I hated Java 6. But I love Java now and I like C++ too although much of my C++ looks like C.
Certainly, but the useful, stable subset of C++ with reasonable advantages over C with its cruft (e.g. setjmp/longjmp vs. exceptions) was already much larger than {}.
This seems like a dangerous appeal to authority to me.
Just because Linus said so doesn't make it true.
Especially when the context of this quote is almost 10 years old, mostly irrelevant to the language itself (crappy C++ programmers and rigid architectures) or obsolete (bugs in the STL and boost).
> "This seems like an dangerous appeal to authority to me. Just because Linus said so doesn't make it true."
What is "this" that you're referring to? (e.g., posting the link to Hacker News, the post making the HN frontpage, a specific comment on this discussion, something in reply to Linus, etc.)
Both of the key arguments Linus provided are pretty much solved; no wonder, as this article is nearly 10 years old. Also, Linus is a low-level programmer, where there still aren't many choices besides C and where fewer layers of abstraction are usually better.
> "Quite frankly, even if the choice of C were to do *nothing* but keep the C++ programmers out, that in itself would be a huge reason to use C."
His problem is assuming that there are no stupid programmers in C and that it can't be abused. I haven't had a career as long as Linus's, but I've seen people abuse C more than C++, writing their own inefficient implementations of extremely basic things.
This reminds me of my first mentor, who thought he was the target of the system, not the coders who need to keep their source in order. Fast forward a few years: the low-level unmaintainable code is no longer fast, and you still have to refactor everything even though you avoided the object model.
Anybody recall the InterViews[0] native C++ toolkit for X Windows, circa 1993?
At the time, it felt like a huge leap forward in ease of writing native GUI apps for X. The point being, C++ was a medium for lots of great ideas, including one of the early Design Patterns books [1].
Well, specifically he was trashing C++ for performance-sensitive systems applications. Fixing the speed of DVCS was a major goal of git. Plus, you know, kernels.
His better argument about C being good is that it's easier to look at a diff and know what's going on. This is because C has minimal non-local effects, where with C++ you have the ultimate in spaghetti programming: inheritance and overrides.
No, of course not. But if any overridden operators do exist in scope then you need to know about them in order to correctly interpret an expression. Or are you saying that overriding operators is bad practice in general and should never be used?
C has fewer "magic" features than C++, and so it's more often clear just by looking at some code what it does. I like C++, but only because I'm happy to only use the subset of the language which suits me, and I tend to avoid features which increase the cognitive load of reading my code.
Besides being unsafe, C is one of the few mainstream languages that doesn't have any form of overloading or non-alphanumeric identifiers.
I never understood why it is so hard to look at a + b and see it as a.+(b).
I also need to look at the documentation or source code to see if sum(a,b) does what I think it does.
Python, Ruby, Smalltalk, Ada, Lisp, Scheme, C#, C++, Swift, Rust, D, Scala, Clojure, Kotlin, JavaScript (ES7), ML ... developers in all of these seem to be able to cope with it; it's just C, Go, and Java devs who can't.
Given his stance against C++, he could use other languages or even write his own.
As someone who enjoys C++, in spite of its security flaws inherited from C, I see it as a victory for the C++ community that a C++ hater like Linus is forced, for pragmatic reasons as you say, to make use of it.
> In other words, the only way to do good, efficient, and system-level ...
He didn't say it was horrible for everything; he's talking about system-level programming. I have every confidence in the STL that std::sort will always work, but when you're writing code at the bit and byte level it's not a surprise that it's not the correct tool for the job.
Git falls into application, not system, software. Yes, it can be a crucial part of one's workflow, so we want it to be as fast as possible, but how inefficient could it have become had it used the object-oriented paradigm in C++?
Maybe we should just put the conclusion of these sorts of discussions somewhere in here as well:
This criticism applies essentially to any higher abstraction, not just to C++. Higher abstractions make it easier to do complex things, and thus they scale to bigger total software systems. The maximum complexity you can control using C++ is higher than the maximum complexity you can build into a C program before things spiral out of control.
However, higher abstractions work by figuring things out for you. Code in the libraries or compiler will decide when things happen, in what order, and how they relate to one another.
This has two consequences of note. First, they make it very hard to predict when things will happen. This is very good: you don't have to know or care in the vast majority of cases. But God help you when you do need to know. I find numpy code is similar: you almost never have to care about the internals, but when you do, the rabbit hole is miles deep. In a related effect, abstractions make it hard to predict the exact sequence in which things will happen, and mostly this does not matter. When it does matter, it is really, really hard to find.
Second, those abstractions mean that tiny, seemingly unrelated changes in the source code frequently result in large changes in the sequence in which things happen, and thus in very different runtime behavior.
These things are very good, and very bad, depending on your viewpoint. You want to get complex new things working? Higher abstractions are your friend. You want to get complex new things working fast? C++ is your friend.
If, on the other hand, you want a stable, productionized, bug-free implementation of simple processes that you have to maintain and keep running, stable, and predictable for long periods of time? Abstraction is the enemy. It will bite you in the ass time and time again. Don't use anything high-level.
There's a related problem: you need to understand the underlying abstractions. When working with C++ math libraries or numpy you quickly find this out. If you don't have a very good math education, those abstractions will make very little sense indeed. This means that a large cohort of programmers without the old-style "math first, programming second" education quickly run into insurmountable issues, not because the issues would be hard for a math PhD, but because they don't have that background. Hating abstractions is far easier than fixing your math understanding.
The right tool for the job.
As for the programmers, there are many incredibly good C++ programmers. As a rule of thumb, any successful long-lived language will have tons and tons of bad programmers using it. This has to do with managers trying to cut costs and the resulting effects on the marketplace for developers. It's (sadly) beginning to be true for python these days. Doesn't have anything to do with the language and it won't affect you if you don't screw up your hiring.
Don't use abstractions ... unless they're high enough that the compiler or runtime can optimize their implementation better and faster than you could, and the compiler/runtime environment actually does this (naturally, C++ doesn't and can't).
Higher level abstractions aren't necessarily the enemy of optimal code. Modern tracing jitted JS runtimes are great examples of how higher level code can be transformed under the hood by an intelligent compiler into more efficient concrete code at compile/runtime. The problem with C++ isn't that it exposes abstractions, it's that there is no way to make the abstractions responsive to where/how they are used, no simple automated way to optimize by use case, and there is no reliable tooling that exposes the cost of abstractions during development thus concealing their potential costs.
That being said, abstractions are probably the greatest single untapped source of massive performance gains, as they allow performance optimizations to be automated by machine learning and runtime performance analysis.