
Great job measuring initialization time; now how about showing real-world performance differences in actual large programs?


It is not about initialization time or even run-time performance, but about speed of compilation. When any non-trivial C++ program takes minutes to build even on a high-end workstation, all the benefits that might come from its "high level" features are moot.


Are you really saying that the time and energy you save coding and debugging by using high-level features aren't worth a few minutes of compile time? It won't even be a few minutes except for the first build, because of separate compilation.


That is exactly what I'm saying.

Compilation of C++ is surprisingly slow, mostly because compiling modern C++ code involves parsing tens of megabytes of headers full of constructs that are complex to analyze. The fact that C++'s grammar and semantics are incredibly complex does not help fast compilation either.

Separate compilation does not help much, because many simple changes that would affect only one file in other languages force you to recompile a significant subset of your codebase.

While you may save time coding, I don't believe that debugging C++ code is in any way easier than debugging C code; I think it is quite the reverse. While you eliminate some problems by using C++, you get a whole lot of other C++-specific problems (static initializers, unexpected effects of overload resolution, unexpected effects of automatically generated code, weird performance characteristics...).

Most of this can be worked around, but in that case you end up using a subset of C++ that could be more flexibly implemented on top of C as a few hundred lines of preprocessor macros and some coding conventions than by its own compiler.


> Compilation of C++ is really surprisingly slow.

There's definitely something to what you're saying here. Or rather, there was a few years ago. I don't think it really applies so much in 2011. Computers have gotten big and fast to the point where this argument is pretty meaningless.

Personally, I stopped caring about compilation time when I got a 3.6GHz i7 with eight in-flight threads. I'm OK with feeding the beast on this one--what I get from C++ is definitely worth it. (Debugging isn't particularly difficult, I find - you end up using a subset of C++, but there's no way in hell you could implement my subset of C++ in C and not hate working with it.)


Concur. C++ compilation speed was legitimately a problem five years ago. Improved compilers, faster computers, and multi-core compilation have made it moot.

Often you'll see a C++ basher bring up compilation speed and then in the same thread advocate python + C as the happy medium, which I find crazy. C++ is so much higher level than C that the need for scripting is often obviated. I've typically seen about 2x SLOC expansion rewriting perl/python as C++, including stuff like headers, so really not much more code at all, just 100x faster and a tiny fraction of the memory.

The other line you see, including elsewhere on this page right now, is that C and C++ are the right tool for the job in different domains. I really don't get this. C++ is straight-up a replacement for C. There is no C program that wouldn't be shorter and clearer rewritten as C++. The only occasion to not use C++ is when you don't have compiler support, which is extremely rarely a concern anymore. Even the lowest level embedded stuff is moving away from C in favor of C++.


Generally, people on Linux will be compiling using Makefiles, so just use make -j<num cores> to speed it up.


Using -j with C++ builds will always still be significantly slower than using -j with C builds.

Also, I believe the rule of thumb for the -j argument tends to be 2 times your number of cores. A surprising amount of build time is often spent on disk I/O, so you see worthwhile gains for quite a while running more jobs than cores.


GNU make also has -l (load limit), and make -j -l <number-of-cores> seems like a better idea; in my experience it works better. Still, I tend to just run make -j without any limit, which on modern systems works well enough.


Except when you use templates.


That's compilation time, not initialization time. Compiling C++ is tremendously slow compared to compiling C programs of equivalent complexity, and is a real issue even on modern hardware.


Also, using streams in C++ is more "flexible" compared to puts(), printf(), etc., because you don't have to specify the type. Of course, you trade performance for that flexibility.



