Some people might not know this, but Epic actually did a lot of this automatically within Unreal Engine by writing their own equivalent of CMake. The whole codebase goes through UnrealBuildTool, which auto-generates headers as well. It calls the compilers, linkers, etc. for each supported platform.
It allows them to enable the "unity build" stuff automatically, for example, and on a per-file basis as well. If you have a file checked out locally (in Perforce terms), it won't fold it into the usual unity compilation unit and will instead compile it on its own, since a file you've got checked out is likely one you're going to edit a lot.
Anyway, I thought it was an interesting data point, as I think it's quite unique, on the level of Boost using a modified fork of Jam at some point...
UBT is a blessing and a curse. I've spent a lot of time with it, and contributed to it. It's definitely not the fastest of build tools; the change detection takes a few seconds even on powerful hardware. The way it handles UHT (the codegen step of Unreal) isn't great either.
It has a nasty habit of getting itself stuck, and it's got a horrific architectural decision of only allowing one instance of the process to run at a time (and, see previous point, it gets stuck). Combine those two with modern IDEs that call the build tool in the background and you get a messy situation.
But the adaptive unity build is probably one of the coolest features I've ever seen in a build system
Do they offer the header generator as a standalone tool?
I figured this was possible but haven't seen anything that does it. I hate writing function signatures twice with subtle differences like default args and overrides. Waste of my time.
Is header generation substantially faster than compilation? I guess it skips codegen and instead replaces it with header generation, but isn't most of the time spent in parsing/semantic analysis? Or is header generation done in some hacky way that's much faster than full parsing?
I think the point of header generation is not to save compilation time, but to save developer time. Saving them from writing the same thing twice, in header and in implementation.
The header gen is much faster than compilation, and is just a code generator. My project takes about 20 minutes for a clean build, the header generation is about 10 seconds at the beginning.
It generates a header and a .cpp file for each header that it parses. It generates some pretty rudimentary C++ with a bunch of macros, and it looks very much like generated code. It's then compiled afterwards, but the nature of the generated code is fairly simple C++, so the cost isn't enormous.
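For anyone who hasn't seen it, the input UHT consumes looks roughly like this (an illustrative fragment using Unreal's real reflection macros; it only builds inside an engine module). UHT parses the UCLASS/UPROPERTY/UFUNCTION annotations and emits the matching .generated.h plus a .gen.cpp with the registration boilerplate, and those are what get compiled afterwards.

// MyActor.h - illustrative fragment of what UnrealHeaderTool parses
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "MyActor.generated.h"   // produced by UHT before the compiler runs

UCLASS()
class AMyActor : public AActor
{
    GENERATED_BODY()

public:
    UPROPERTY(EditAnywhere, Category = "Stats")
    float Health;

    UFUNCTION(BlueprintCallable, Category = "Stats")
    void Heal(float Amount);
};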
You don’t need to do a full clone to have multiple working copies of your repo checked out. You can use the git worktree command to create these working copies:
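# for example (the path and branch name below are just placeholders):
git worktree add ../myrepo-hotfix hotfix   # new working copy on branch "hotfix"
git worktree list                          # see all attached worktrees
git worktree remove ../myrepo-hotfix       # clean up when the task is done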
What is the advantage of worktrees over multiple clones? Using multiple clones is trivial - there are no new git concepts to learn - and works perfectly with all build tools, particularly ones that heavily use caching. What productivity improvements do worktrees bring?
Significantly less disk usage is the main one for me. If you have a workflow where each task has a worktree, and you have 10-15 tasks in progress, and the repo is a decades old monorepo, then storing only one copy of the history is a big savings.
A task could be any small thing that requires its own checkout for some possibly stupid reason, like running the regression tester or temporarily running an old version of a binary out of your dev checkout over NFS if there was a bad deploy.
Yeah, but if you work with a central repository, then with local git clones the clones hang off the local repository, not the central one.
Sure, you can change back origin to point to the central one again, but you still have to do a dance to sync branches among your local clones (and I’m not sure what happens to the hardlinks).
Worktrees, by contrast, are naturally just “views” of the same local repository, which may or may not hang off a central repository.
I work with massive repositories, and creating new worktrees works in an instant, because it reuses the existing object database instead of copying files. Saves disk space, too.
Also, all branches stay in sync between worktrees. For example, I do git fetch on one worktree, and remote branches are up to date on all of them.
I use them a bit at my work and the main reason is to reduce how much I’m fetching (which can take a couple of minutes). It’s also good for avoiding accidental inconsistencies between copies (Git will complain if you try to check out the same branch in MyProj and AltMyProj if they are both worktrees). You can checkout the same commit in two worktrees though so you can still do it in practice if really necessary.
Moving work/commits from one local clone to another is something I do regularly - just using vanilla git push/fetch.
I’ll grant that saving disk space is a concern for many - not for me - the largest repo I use is 2-3 million LOC with about 10 or 15 years history and the 4-5GB per clone doesn’t bother me.
I use worktrees, but some builds that use the git hash will fail in worktree checkouts, claiming that "it's not a git directory" (I forget the exact error, it's been a while); I haven't found a solution to this.
One way to get an error when retrieving a git hash is by building inside a Docker container. If you mount the root directory of a work tree, but not the “git-common-dir”, git will fail to give you the commit hash.
I'm having an intern start in the fall to work on integrating IWYU into the kernel's build system, kbuild.
One major deficiency in IWYU that we will have to solve is that IWYU only understands post-preprocessed sources.
If you have preprocessor guards for some code (as is done extensively in the Linux kernel) IWYU can recommend changes that break the build for other configurations than the one used when running IWYU.
how about we stop the module and linker nonsense and just do it right?
Stick everything in a single compilation unit automagically, and have a flag so that bad legacy code that reuses global symbols either produces errors/warnings (so it's easy to fix) or is ignored (so we can still have backwards compatibility).
That is just the "hacky" way of doing things, and it indeed works wonderfully, although it often breaks any kind of incremental build - this is because we implement incremental builds wrong, which is also easy enough to fix.
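For concreteness, the hand-rolled version of this is just one translation unit that includes every other source file (file names made up):

// everything.cpp - a hand-rolled "unity" translation unit.
// Compile only this file: the compiler sees the whole program at once,
// at the cost of any meaningful incremental build.
#include "audio.cpp"
#include "physics.cpp"
#include "render.cpp"
#include "main.cpp"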
Separate compilation enables parallelism. This approach might save some time but also makes compilation single-threaded. I have many cores on my CPU. What are the others meant to be doing when I'm compiling?
So then what does the goal become? Splitting the codebase into 8 equally-difficult-to-compile translation units? What about someone with 6 cores, or 12? Or 32?
The point is that those separate processes are doing a whole lot of duplicate work that the linker must then deduplicate. There are definite pros and cons to each approach, but it is not as clear-cut as "just use more cores" when it also means "duplicate a lot of work that you'll then have to clean up mostly serially".
I think devs working on large systems in soft-realtime domains are the ones to look at. There you'll often find build systems that are mostly unity builds, but that allow ad-hoc source files to be separately compiled, often with separate compilation options (such as with debug symbols and optimizations, etc.) that are too onerous to enable globally.
As far as problems with constructs like anonymous namespaces and name clashes are concerned, I think they're relatively easily resolved for new code, and these techniques are quite old. Many have been using this "new" approach for decades, so the "legacy" code is also clean in this regard.
But it's unfortunate that the net result is that anonymous namespaces are effectively useless. However, that's just case number 99 for "stuff added to the standard that you can't really use because of 5 or so other major problems with the language". And most new languages don't improve on any of this stuff, they just buy you off with other novel features -- admittedly sometimes quite intriguing ones.
The separate processes are only doing duplicate work if you put a lot of code into header files. In C this isn't a problem - you can put code into source files easily. In C++, the design of templates necessitates putting all your code in header files.
Most "modern" languages are designed around building the whole world and statically linking it into your program. As you say, they don't improve on any of this stuff.
C++ compilation times can become quite slow even when you're doing what you describe manually (known as a "unity build", and not particularly rare in some niches), even if you avoid including the same headers multiple times. Of course, a lot of this depends on what features you use; template-heavy code is going to be slower to compile than something that's more-or-less just C fed into a C++ compiler.
I used to spend lots and lots of time finding and working on header-only libraries that didn't have other dependencies or weird linking requirements - you'd just `#include` them, and that was the code, and you could use it just like that. But in large projects, this starts to get a bit unwieldy and the whole "every file is its own unit, and can depend / be depended on by any other units without issue" thing is actually super useful.
The power of the optimizations available to C++ is what makes it so fast (see how slow debug mode is vs -O2/etc), and what allows C++ to be fast in the face of common, easy-to-understand, but technically perf-hostile patterns. Bit-counting loops vs popcnt, auto-vectorization, DCE, RCE, CSE, CFG simplification, LTCG/LTO, and so on. These things let you write "high level" code/algos (to a point - there are some ways of doing "high level" paradigms that absolutely eviscerate the compiler's ability to optimize) and still get great hardware-level performance. This is so much more important overall than the time it takes to compile your program, even more so once you consider that such programs are often shipped once and then enter maintenance mode.
It doesn't really have anything to do with compatibility (well, not entirely, but the things that are the biggest obstacles to good optimization quality, and that are fixable, are things that need a system-level rethinking of how hardware exceptions happen). It just isn't reasonable to expect developers to know how to optimize, and it doesn't scale.
In many contexts, one should rarely pass -O2/-O3. A project that is built thousands of times during development may only be run on intensive workloads (where -O2 performance is actually a necessity) a handful of times by comparison. A dev build can usually be -O0, which can dramatically improve compilation time.
It depends. -O0 turns off a few trimming optimizations and can potentially cause more information (code or DWARF) to be included in the objects, which may eventually slow down compilation. In our large code base, we found that -O1 works best in terms of compilation speed.
In https://ossia.io with PCH, using clang, ninja, mold, and some artificial split in shared libraries for development builds, I get a compile-edit-run cycle of a couple seconds in general... I wouldn't say it's too much of a problem if you use the tools already available
I can't really understand this take. Compilation times are tiny in most projects and manageable in the large ones. It's perfectly parallel and modules and other improvements reduce it by another order of magnitude.
Ninja, Icecream, ccache (I personally don't use that one), LLD or mold, breaking up the largest compilation units, avoiding internal static libraries at least for debug builds, not choosing the maximum amount of debug info... can result in edit-compile-run cycles under five seconds. Time for clean builds strongly depends on project size and template usage, obviously.
Try avoiding the standard library. Some headers like type_traits are so large and complex it can add a few hundred milliseconds onto each CU that ends up including it.
And ccache support is not an option for some build systems. For instance, Visual Studio projects in general and the MSVC compiler require a lot of fighting to get it to work.
Also, ccache is notorious for not supporting precompiled headers.
I wrote a comment in the linked thread as to why I really dislike Unity builds. Some folks make it work for them, and they have some attractive/tempting properties, but I think the siren's call of unity builds should be utterly rejected whenever possible.
I agree, in particular because of the way unity builds can break code that relies on internal linkage to avoid conflicts between symbols. But it's nevertheless a technique that cmake supports out of the box and that can be enabled by flipping a flag (the `CMAKE_UNITY_BUILD` variable, or the per-target `UNITY_BUILD` property).
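A minimal sketch of the breakage (made-up files): each source defines its own internal helper, which is fine as separate TUs, but once they're concatenated into a unity TU the two anonymous namespaces merge and you get a redefinition error.

// a.cpp
namespace { int scale() { return 2; } }    // internal linkage, private to a.cpp
int doubled(int x) { return x * scale(); }

// b.cpp
namespace { int scale() { return 10; } }   // same name, also "private"... until unity
int tenfold(int x) { return x * scale(); }

// unity.cpp - both anonymous namespaces now live in one TU,
// so the second scale() is a redefinition error.
#include "a.cpp"
#include "b.cpp"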
Yea, honestly these types of numbers really aren't surprising, but usually when you profile a build and dig into why the build perf was so bad to begin with, you generally find stuff like template abuse, bad header organization, and countless other untold sins that were better off fixed anyways.
Chromium once supported unity builds. For me it sped up builds from 6 hours down to 1 hour 20 minutes on a Windows laptop. And Chromium tries to keep their headers reasonable.
Chromium eventually dropped support for such builds. At Google the developers have access to a compilation farm that compiles within something like 5 minutes. With such a farm, unity builds make things slower, as they decrease parallelism. So Google decided to drop support rather than deal with the very occasional compilation breakage.
Bad header organization is perhaps the gravest yet unspoken cardinal sin of both C and C++ development.
I lost count of the number of times I had to deal with cyclic dependencies accidentally introduced by someone adding #includes indiscriminately.
I see no reason to believe this will actually be the case based on anecdotal evidence from folks I know who have tried out the early module implementations. I’m hoping they are wrong, but I do not think that modules or anything else is going to result in a significant speedup for the average case, and large C++ compile times will remain a perpetual feature of the language.
Vulkan-Hpp has a module interface file[1] that just exports a bunch of `using` declarations. The speedup is significant, because the Vulkan-Hpp headers are immense: `vulkan_structs.hpp`[2] is 116000+ lines on its own.
With the module, the giant list of names is compiled once into a large file, and then imported into consumers as necessary.
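The shape of such an interface file, heavily abridged and with a made-up module name, is roughly this: the heavy headers are parsed once when the module is built, and consumers just import the result.

// vulkan_names.cppm - sketch only; the real Vulkan-Hpp module is far larger
module;
#include <vulkan/vulkan.hpp>      // the 100k+ lines are parsed once, here

export module vulkan_names;       // made-up module name

export namespace vk {
    using vk::Instance;
    using vk::Device;
    using vk::ImageCreateInfo;
    // ...thousands more using-declarations
}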
I think that says more about Microsoft's poor optimisation of #include <iostream> semantic caching.
There's no good, fundamental reason why #include caching can't be done with mechanisms similar to import, resulting in similar performance.
Certainly, modules are cleaner than headers. That's a language improvement and a fine reason to switch to them. Headers leak more definitions, macros etc that are part of the implementation, and their interpretation is affected by definitions before the header is included.
But in normal situations those header semantic leaks need only add a very small or negligible time overhead to symbol resolution compared with modules, assuming both start from a compiler-friendly precompiled form. Articles I've seen explaining why modules are faster tend to talk about parsing headers repeatedly, or, if not parsing, still processing a precompiled form in some complicated way, as opposed to modules storing a compiler-friendly data structure. But those are merely describing how things are already done, not what's possible.
Exactly. They have been cautious about claiming whether modules will actually speed up compilation when they arrive. They might be a Visual Studio-only win for speed. (Still useful for their simplicity over include files.)
This reminds me of an article I once read here about writing code in such a way that headers were included once and only once, but it required you to manually write the code in such a fashion. The catch was that it was distinctly not #pragma once, nor did the style require the use of header guards.
I'm wondering if anyone here remembers that article.
Dunno which article it was, but if you're saying that header guards don't completely alleviate the compile times, you're correct (In order to hit the guard or #pragma, the file still has to be opened and read).
The way to do it is to never have any `#include` directives in headers. Put them all in the implementation source files.
You have a function in `foo.h` which returns a `bar_t` type that is defined in `bar.h`? Then each source file (.cpp or .c) that includes `foo.h` must first include `bar.h`.
While this does not completely alleviate the problem of reading the same header 10x in a project with 10 source files, it does mean that at least you don't read the same header 50x in a project with 10 source files.
Each header is read once, and only once, for each translation unit. When using header guards or #pragma once, a single header file will still be read multiple times, because headers can (and usually will) include other headers, so the same file gets pulled in via multiple paths.
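A tiny made-up example of the style being described:

// bar.h - defines the type, includes nothing
#ifndef BAR_H
#define BAR_H
struct bar_t { int id; };
#endif

// foo.h - uses bar_t but deliberately does NOT include bar.h
#ifndef FOO_H
#define FOO_H
bar_t make_bar(int id);
#endif

// foo.cpp - every source file spells out its prerequisites, in order
#include "bar.h"   // must come before foo.h so bar_t is known
#include "foo.h"
bar_t make_bar(int id) { return bar_t{id}; }

// main.cpp - the same rule applies to every consumer
#include "bar.h"
#include "foo.h"
int main() { return make_bar(7).id; }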
Pimpl greatly reduces build times by removing includes from the interface header, at the expense of a pointer dereference, and it's a tried and true technique.
Another old school technique which still has some impact in build times is to stash build artifacts in RAM drives.
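For reference, the basic pimpl shape looks like this (a minimal sketch, names made up): the header only forward-declares the implementation, so edits to widget.cpp never force users of widget.h to recompile.

// widget.h - stable interface; no implementation headers leak out
#include <memory>

class Widget {
public:
    Widget();
    ~Widget();                     // defined in the .cpp, where Impl is complete
    void draw();
private:
    struct Impl;                   // forward declaration only
    std::unique_ptr<Impl> impl_;   // the pointer dereference mentioned above
};

// widget.cpp - the only file that needs the heavy includes
#include "widget.h"
#include <vector>                  // stand-in for an expensive dependency

struct Widget::Impl {
    std::vector<int> points;
    void draw() { /* real work */ }
};

Widget::Widget() : impl_(std::make_unique<Impl>()) {}
Widget::~Widget() = default;
void Widget::draw() { impl_->draw(); }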
It strikes me as something of a language flaw that without pimpl, any addition/removal/modification of private member functions, or even renaming of private member variables, triggers full recompilation of every source file that needs to use a particular class. I understand that changes to the size of the object should do so (even if it's just new private member variables). But if the rules of the language make it possible to determine that a particular change to a class definition can't possibly affect other compilation units that use that class, then why can't the compiler automatically determine that no recompilation is necessary, e.g. by calculating and storing a hash of everything about a class that can potentially have an external effect, and comparing that value rather than relying on timestamps...
> strikes me as something of a language flaw that without pimpl any addition/removal/modification of private member functions, or even renaming of private member variables triggers full recompilation of every source file that needs to use a particular class.
It is not a language flaw. C++ requires types to be complete when defining them because it needs to have access to their internal structure and layout to be in a position to apply all the optimizations that C++ is renowned for. Knowing this, at most it's a design tradeoff, and one where C++ came out winning.
For the rare cases where these irrelevant details are relevant, C++ also offers workarounds. The pimpl family of techniques is one of them, type erasure is another, and protocol classes with final implementations are clearly a third. Nevertheless, the fact that these techniques are far from widely used demonstrates that there is zero need to implement them at the core language level.
> It is not a language flaw. C++ requires types to be complete when defining them because it needs to have access to their internal structure and layout to be in a position to apply all the optimizations that C++ is renowned for. Knowing this, at most it's a design tradeoff, and one where C++ came out winning.
This statement is incorrect. "Definition resolution" (my made up term for FE Stuff(TM) (not what I work on)) happens during the frontend compilation phase. Optimization is a backend phase, and we don't use source level info on type layout there. The FE does all that layout work and gives the BE an IR which uses explicit offsets.
C++ doesn't allow two phase lookup (at least originally); that's why definitions must precede uses.
There isn't a good reason why private methods should be exposed in the header. This makes refactoring the class implementations much more of a pain in the butt than plain structs + global functions. You can always add internal functions to the implementation without triggering recompilation of all dependents, while you can't do the same with private methods.
And private methods aren't exactly "rare cases". The situation is bad to the point that most good codebases make less use of classes, and many average code bases avoid adding private methods and resort to code duplication to a degree.
> There isn't a good reason why private methods should be exposed in the header. This makes refactoring the class implementations much more of a pain in the butt than plain structs + global functions.
Irrelevant. Private member functions aren't mandatory or required, and when developers decide to use them they explicitly state the class needs to export their symbols.
Those developers who somehow feel strongly about private member functions have a multitude of techniques to meet their needs, such as using protocol classes and moving private stuff to concrete implementations, or using non-member functions, either with internal linkage or stashed in anonymous namespaces.
I don't see the point of complaining that something used wrong doesn't work right, while purposely ignoring the myriad of options that do meet your arbitrary requirements. I mean, this aspect of C++ has been around for at least three decades. Don't you think that if it was a problem someone would already have noticed it?
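For what it's worth, the non-member-functions-with-internal-linkage option mentioned above looks roughly like this (names made up); the helper lives only in the .cpp, so adding or changing it never touches the header.

// progress_bar.h
class ProgressBar {
public:
    void set(int percent);
private:
    int percent_ = 0;
};

// progress_bar.cpp
#include "progress_bar.h"

namespace {                        // internal linkage: invisible to other TUs
int clamp_percent(int v) { return v < 0 ? 0 : (v > 100 ? 100 : v); }
}

void ProgressBar::set(int percent) {
    percent_ = clamp_percent(percent);   // changed freely, no header edits
}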
If you use "class" / private members, you have to use methods to access the members. That, or a friend "class" containing static methods (a namespace won't do). Private methods are annoying because you're forced to expose implementation details, and are forced to duplicate function signatures in the code. Friend classes are annoying because you still have to name them in the class declaration, and having to use them with static methods leads to stylistically inconsistent code and you end up with a weird dummy class that isn't meant to ever be instantiated.
AFAIK there isn't a nice way to deal with this other than simply not using private members and coding in a simple C-like style. I don't think you've shown a way, either. I don't know what you mean by "protocol classes", but if you mean abstract classes with virtual methods that need to be overridden, those are a bad solution because they add the overhead of vtables without any technical need or benefit (unless you want runtime polymorphism and vtables are exactly what you want).
Private methods do need to be in some sort of privileged "single source of truth" for the class definition though - if you were permitted to have a partial class definition in the "public" header file that defined public/protected methods plus all variables, and then a single other partial class definition in a separate header or compilation unit, it would go some way towards solving the issue.
But I still don't quite see why compilers can't make assumptions about what changes to class definitions can possibly require recompilation of other compilation units (even if it only applies at certain optimisation levels).
Private methods, unless they are virtual, do not to my knowledge have any bearing on the ABI of a class, nor need they be known to users of the class, so listing them in the class declaration seems nonsensical.
The reason why they have to be listed anyway could be 1) a vague idea of "consistency" with e.g. public methods, and of generally enforcing access control only centrally from the class declaration, or 2) the idea of overriding the implementation in an inherited class. As far as I'm concerned, both are bad reasons to impose such an annoying limitation on the user of the language.
The whole point of private methods is that they have privileged access to the implementation details of the class (i.e. private member variables and other private methods). So there still needs to be a restricted place they can be defined - what are you proposing?
Mark the function as part of the implementation of the class. Simply prefixing the name with the name of the class (as is already done) could be enough. Optionally, require a keyword.
I don't see what the big deal is; the saying is that we guard against Murphy, not Machiavelli.
Or go the same route as namespaces -- mark start and end of the class implementation code (can be repeated) and nest functions inside.
There are other options if we ditch the C/C++ compilation model. Though arguably that isn't just bad -- it's an extremely simple way to achieve separate compilation without requiring a separate (probably binary, compiler-specific) representation for compiled interfaces. The latter could probably speed up incremental builds considerably, but it's possibly slower for clean rebuilds because of dependencies.
So you're saying it's perfectly fine for someone to use any class from any library (including the standard library) and define their own "implementation" private methods for such classes, that then access implementation details of that class?
How is that even protection against Murphy? (who sees other examples of such things and assumes that's just the "done thing").
For me, knowing that I can change the implementation details of a class and it having no possible impact on whether other code compiles or how it behaves (assuming I maintain the same "public" behaviour) is absolutely a fundamental language feature. All I'm arguing for is that compilers should be able to make the same assumption - only the private implementation details have changed, so it's unnecessary to recompile other code that happens to include the header file defining some of those implementation details.
If you're worried about these things (I am not), simply restrict the private method to be called only from other class methods. Wait, that's how private methods work already...
Ok I admit to feeling a bit stupid now - you're absolutely right of course. On that basis I don't see why it should be necessary to declare private method signatures inside the class definition at all. I can't really see any reason it shouldn't simply be allowed to define (non-virtual) private class methods anywhere you like - as you say, if someone else other than the original class-author does so, they wouldn't be able to call the function anyway!
So what is a good justification for the current language design? I did find one SO post suggesting if your suggestion were possible, the overloading rules would probably have to change, but that doesn't seem like an insurmountable hurdle.
To be clear, what you're suggesting is that the header file (foo.h/hxx/hpp) would have:
#include <string>

class Foo {
public:
    Foo();
    void doYourThing();
private:
    int _privInt;
    std::string _privStr;
};
Whether or not some sort of keyword is needed to mark the private functions as such is stylistic I suppose - I'd prefer it were there, but I'm used to C# where the access specifier is part of every member declaration anyway. But it's certainly not necessary - the compiler would just assume "private" if the declaration is not part of the class definition.
And yes, someone else could come along and put
void Foo::anotherPrivateFunction() {
}
in their own code, but they'd never be able to call it anyway, so no harm done (arguably compiler could treat an "unreferenced" private function as an error in itself, but certainly the linker would just strip it out).
Obviously one downside of the above is that if you wanted friend classes to be able to call such private methods, they'd either have to forward declare them, or you'd put them into a separate "foo_private" header file, but again, I'm not seeing why that's a big problem.
If your internal representation is stable, you can put private functions in a private friend class that is only defined in the cpp file: `private struct access; friend struct access;`.
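A sketch of that pattern (names made up): the nested `access` struct is only declared in the header and defined in the .cpp, so new "private methods" can be added there without touching the header.

// counter.h
class Counter {
public:
    void bump();
private:
    struct access;                 // defined only in counter.cpp
    friend struct access;
    int value_ = 0;
};

// counter.cpp
#include "counter.h"

struct Counter::access {           // the "private methods" live here
    static void add(Counter& c, int n) { c.value_ += n; }
};

void Counter::bump() { access::add(*this, 1); }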
Interesting idea; I can't say I've seen that pattern used, and obviously the code ends up slightly more verbose (no automatic "this" argument), but in principle it could be better than the pimpl technique.
The primary reason for pimpl is to ensure binary compatibility for C++. Qt uses it extensively precisely for this reason. Reduced compilation time is just a nice bonus.
I'll be glad if I never have to see PIMPL ever again. It makes tracking down the actual implementation of some functionality so much harder than it has to be.
> I'll be glad if I never have to see PIMPL ever again. It makes tracking down the actual implementation of some functionality so much harder than it has to be.
Not really. There are two main flavours of pimpls: one where the impl class is a pod that only holds member variables, and one where the impl class holds both member variables and private member functions.
On both, the implementation can and does stay in the very same translation unit. On the former, the code stays in the very same class, without any change.
You only experience the sort of problems you describe if you bring them upon yourself. That's hardly the idiom's fault.
I’m curious: do you use a `const std::unique_ptr<Impl>` or just a `std::unique_ptr<T>` or do you have a template that provides value semantics for the `Impl`? If I used PImpl a lot I’d make a `value<T>` that encapsulates ownership with value semantics.
And conversely, if you are using classical polymorphism, you can get essentially the effect of PImpl by having an abstract base class with a static factory function that returns a unique pointer, then implement that factory function by in the cpp file having a concrete class that is defined there and returned as a `std::unique_ptr<IBase>`. That gives you separation of API from implementation without a memory indirection, but you then can’t have it as a value type.
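A rough sketch of that arrangement (names made up):

// engine.h - only the abstract interface is public
#include <memory>

class IEngine {
public:
    virtual ~IEngine() = default;
    virtual void start() = 0;
    static std::unique_ptr<IEngine> create();   // factory, defined in the .cpp
};

// engine.cpp - the concrete type never appears in a header
#include "engine.h"

namespace {
class Engine final : public IEngine {
public:
    void start() override { /* implementation, free to change */ }
};
}

std::unique_ptr<IEngine> IEngine::create() {
    return std::make_unique<Engine>();
}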
> if you are using classical polymorphism, you can get essentially the effect of PImpl (...)
No, not really. That's only an option for the rare cases where you control the whole class and all its members, and you can break any ABI whenever you like by converting any class to a protocol class.
In the real world, pimpls are applied to member variables that were encapsulated in the past but whose definition you now want to remove from the interface, or to classes that are generated with code generators at compile time. It makes little sense to replace a class with a protocol+implementation+factory just because you want to get rid of a member variable or you need to consume an auto-generated class.
I feel you're just adding noise to an otherwise interesting discussion.
If all you have to add is a handwave about "Pimpl has so many other drawbacks" and a "google them" when asked to substantiate your baseless claim, I feel it would be preferable if you sat this discussion out.
The noise you add is particularly silly, as all your references point to is the pointer dereferencing drawback I already pointed out, when you claimed there were more.
> all your references point to is the pointer dereferencing drawback I already pointed out
I think maybe you didn't read through both the items I linked or the many others that come from a simple google query. One of my links for example points out the memory fragmentation issues, which can also affect performance, as another commenter here has also pointed out. There's more to the story than pointer de-referencing or memory context -- many drawbacks worth knowing about.
There is nothing baseless here; there are pros and cons. But it's not in good form to ask people for details that are easily looked up on a topic as well-known as this one. We are not a reference source for you.
Heap fragmentation is another. Pimpl works, but it's really papering over limitations in the preprocessor and header model that leak details to downstream consumers.
For the rare cases where potential pimpl users care about heap allocations, there's a pimpl variant called "fast pimpl" which replaces the heap allocation with an opaque member variable that holds the storage in which the impl object is constructed. Since C++11 the opaque member can be declared with std::aligned_storage.
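A C++17 sketch of the fast-pimpl variant (names and sizes made up; note that std::aligned_storage was deprecated in C++23, where a suitably aligned byte array does the same job):

// gadget.h - no heap allocation, but size/alignment must be maintained by hand
#include <cstddef>
#include <type_traits>

class Gadget {
public:
    Gadget();
    ~Gadget();
    void poke();
private:
    struct Impl;                                     // defined in gadget.cpp
    static constexpr std::size_t Size  = 64;         // must stay >= sizeof(Impl)
    static constexpr std::size_t Align = alignof(std::max_align_t);
    std::aligned_storage_t<Size, Align> storage_;    // the impl lives inside the object
    Impl* impl();                                    // defined in gadget.cpp
};

// gadget.cpp
#include "gadget.h"
#include <new>                                       // placement new, std::launder

struct Gadget::Impl { int counter = 0; };

Gadget::Impl* Gadget::impl() {
    return std::launder(reinterpret_cast<Impl*>(&storage_));
}

Gadget::Gadget() {
    static_assert(sizeof(Impl) <= Size, "bump Size in gadget.h");
    new (&storage_) Impl();
}

Gadget::~Gadget() { impl()->~Impl(); }
void Gadget::poke() { ++impl()->counter; }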
> If you don't care about heap allocations then why are you using C++ :)
Because more often than not it is a made-up problem that at best is classified as premature optimization.
Take, for example, Qt and how it uses its UI form classes. You can include its autogenerated code by composition, by inheritance, or through a pimpl. You are here complaining that everyone should care about heap allocations. Well, in this case they happen only at app start and behind a standard C++ smart pointer, and it's basically only used to instantiate a factory to create UI elements.
Can you give me any concrete reason why anyone should care about allocating a few bytes on the heap, behind a smart pointer, at app start?
Sometimes there are real problems, but sometimes there are made-up problems that have no relevance or meaning.
I've had to deal with fragmentation at some point or another across most titles I've shipped and other larger programs. Small allocations are actually worse in some ways because they incur a lot of fine-grained fragmentation. My favorite was logic in a fairly well-known game engine that would allocate a generic name for an entity and then immediately rename it to something unique. That 5-byte allocation, left behind and multiplied by every entity, cost us ~30MB of a 256MB fixed memory address space.
If you don't deal in those sorts of domains that's fine, but don't dismiss them; those are some of the primary use cases for native languages. Pimpl is a hack; it's a clever hack in the context of C++, but it's not something intrinsic to native development. For instance, it's a non-concern for a language like Rust (which just handles this whole domain much better, not having to inherit a bunch of legacy behavior).
There's a bunch of feedback in adjacent threads that you seem to be ignoring so I suspect I won't change your mind here but I would encourage you to be a bit more open and consider that there are use cases where things like this matter rather than dismissing them offhand.
> Exactly this. Pimpl idiom is just a bandaid and typically goes against the main benefits of the language.
No, not really. The pimpl idiom is a basic technique that is employed to "improve the software development lifecycle", which is a description straight from Microsoft.
Even Microsoft's Herb Sutter himself has quite a few articles on the virtues of pimpls and how to implement them.
If you think they are wrong, are you planning on reaching out to them to correct them and to try to educate them on the matter? That would be something.
Herb Sutter's main discussions on this are nearly 15 years old lol. Like I said above, pimpl went out of style maybe a decade ago due to advancements in language design, learning about its flaws, etc.
This is a meaningless statement. As others in this discussion have already stated repeatedly, pimpl is a very basic and fundamental C++ idiom that is pervasive in applications, and some frameworks are notorious for using it extensively even right now, as is the case with Qt.
At most, all you can claim is that you have developed an irrational dislike of pimpls, but developers are renowned for making poor decisions and even introducing bugs, so that's worth what it's worth. I mean, some developers even in this day and age still complain about the whole notion of design patterns. Are those kinds of opinions worth taking into consideration?
You asked "which ones?", I answered with some drawbacks to note, mentioned there are pros and cons, answered your question (with links you didn't fully read).. and you proceeded to go on a defensive tirade accusing me of noise and baseless claims (while you reference deprecated language features), etc. Have a nice day.
What leads you to believe that protocol class + concrete implementations + factory can replace pimpls?
I'm going to make it simple for you: take a Qt widget. You define its widget layout with Qt Designer, save a UI form file, and get Qt's UIC to generate a header file that implements that widget tree. You have three options for bringing that object into your widget's header: inheritance, composition, or a pimpl. So you swore off pimpl. Pray tell, how do you use a pure virtual interface in this case?
> What leads you to believe that protocol class + concrete implementations + factory can replace pimpls?
What leads you to believe they can’t?
> I'm going to make it simple for you: take a Qt widget. You define its widget layout with Qt Designer, save a UI form file, and get Qt's UIC to generate a header file that implements that widget tree. You have three options for bringing that object into your widget's header: inheritance, composition, or a pimpl. So you swore off pimpl. Pray tell, how do you use a pure virtual interface in this case?
Never worked with Qt. Can you give me a FooBar example where pimpl is superior to a pure abstract class, please?
Assuming you are using C++20, which, as of now, few codebases use.
At work we now mostly use C++11 and C++14; some projects are C++17, but I am not aware of a single C++20 project. I think all active projects that were C++03 or below have been migrated to something more recent.
I was expecting its Java-based nature to be a problem but in practice found bazelisk to be remarkably self-contained and fast.
Bazel has issues but Java isn't one of them.