Hacker News

how about we stop the module and linker nonsense and just do it right?

stick everything in a single compilation unit automagically, and add a flag so that bad legacy code that reuses global symbols either produces errors/warnings (making the clashes easy to fix) or is accepted anyway (so we keep backwards compatibility).

that is just the "hacky" way of doing things, and it indeed works wonderfully. it does often break incremental builds - but only because we implement incremental builds wrong, which is similarly easy to fix.
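A minimal sketch of what this "single compilation unit" approach looks like in practice (file names here are invented for illustration): one "unity" source file that `#include`s every other source file, so the compiler sees the whole program as one translation unit.

```cpp
// unity.cpp -- the only file handed to the compiler.
// Every other .cpp is #included here, so the entire program is
// compiled as one translation unit and the linker has almost
// nothing left to deduplicate. (File names are made up.)
#include "parser.cpp"
#include "renderer.cpp"
#include "main.cpp"
```

Built with something like `c++ -O2 unity.cpp -o app`, replacing the usual compile-each-file-then-link pipeline.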



Separate compilation enables parallelism. This approach might save some time but also makes compilation single-threaded. I have many cores on my CPU. What are the others meant to be doing when I'm compiling?

So then what does the goal become? Splitting the codebase into 8 equally-difficult-to-compile translation units? What about someone with 6 cores, or 12? Or 32?


The point is that those separate processes do a whole lot of duplicate work that the linker must then deduplicate. There are definite pros and cons to each approach, but it is not as simple as "just use more cores" when that also means "duplicate a lot of work that you'll then have to clean up mostly serially".

I think devs working on large systems in soft-realtime domains are the ones to look at. There you'll often find build systems that are mostly unity builds, but that allow ad-hoc source files to be separately compiled, often with separate compilation options (such as with debug symbols and optimizations, etc.) that are too onerous to enable globally.

As far as problems with constructs like anonymous namespaces and name clashes are concerned, I think they're relatively easily resolved for new code, and these techniques are quite old. Many have been using this "new" approach for decades, so the "legacy" code is also clean in this regard.

But it's unfortunate that the net result is that anonymous namespaces are effectively useless. However, that's just case number 99 for "stuff added to the standard that you can't really use because of 5 or so other major problems with the language". And most new languages don't improve on any of this stuff, they just buy you off with other novel features -- admittedly sometimes quite intriguing ones.


The separate processes are only doing duplicate work if you put a lot of code into header files. In C this isn't a problem: you can easily keep code in source files. In C++, the design of templates effectively forces template code into header files.

Most "modern" languages are designed around building the whole world and statically linking it into your program. As you say, they don't improve on any of this stuff.


It's possible to thread the compilation at the function level. Example: Jai.


C++ compilation times can become quite slow even when you're doing what you describe manually (known as a "unity build", and not particularly rare in some niches), even if you avoid including the same headers multiple times. Of course, a lot of this depends on what features you use; template-heavy code is going to be slower to compile than something that's more-or-less just C fed into a C++ compiler.


^^^ This programmer gets it


I used to spend lots and lots of time finding and working on header-only libraries that didn't have other dependencies or weird linking requirements - you'd just `#include` them, and that was the code, and you could use it just like that. But in large projects this starts to get unwieldy, and the whole "every file is its own unit, and can depend on / be depended on by other units without issue" thing is actually super useful.





