It makes sense. As a project grows, you end up with O(n) headers, each included by O(n) different .cc files, which means O(n^2) header parses during compilation - and thus O(n^2) work for the compiler.
Merging everything into one big .cc file reduces the compilation job back to an O(n) task, since each header only needs to be parsed once.
It's stupid that any of this is necessary, but I suppose it's easier to hack around the problem than to fix it in the language.
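(For anyone who hasn't seen one: a unity or "jumbo" build is typically just a generated .cc that #includes every other source file, so the build compiles a single translation unit. Roughly - file names made up:

    // unity.cc - the build compiles only this translation unit,
    // so each header is parsed once instead of once per .cc file.
    #include "foo.cc"
    #include "bar.cc"
    #include "baz.cc"
    // ...and so on for every .cc in the project
)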
Which compiler parses the same header file multiple times in the same translation unit? Compilers have been optimizing around #pragma once and header guards for decades.
edit: ok, you meant that each header is included once in each translation unit.
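To spell out the optimization I was referring to: with a guard (or #pragma once), the compiler skips re-reading a header it has already seen within the same translation unit; the cost you're describing is parsing it again in every other translation unit. A typical guarded header, names made up:

    // widget.h
    #ifndef WIDGET_H
    #define WIDGET_H

    struct Widget { int id = 0; };

    #endif  // WIDGET_H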
Yep. Worst case, every header is included in every translation unit. Assuming you have a similar proportion of code in your headers and source files, compilation time will land somewhere between O(n) and O(n^2), where n is the number of files. IME in large projects it's usually closer to O(n^2) than O(n).
(Technically big-O notation denotes an asymptotic upper bound - but that's not how most people use the notation.)
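To put made-up but representative numbers on it: with n = 1,000 source files, each transitively pulling in most of the project's ~1,000 headers, a clean build parses on the order of 1,000 x 1,000 = 1,000,000 headers, while a unity build that sees each header once parses roughly 1,000.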