
I think I might be getting confused about the point you're making here. To me, pre-existing knowledge and the decades-long legacy of C++ feel like much stronger arguments against breaking changes than for making breaking changes to improve the language. I do agree with you that a lot of the new features being introduced don't feel super necessary, but I'm guessing that the stance of the people in favor of them is that adding them isn't a huge problem either, given that it can be done without breaking anything. My perception is that C++ has already been a fairly large language for a while, and that most codebases already develop a bit of a dialect of which features to use or not use (which you allude to as well), so I could imagine they expect people who don't like the new features to just ignore them.

I think I'm most confused by the last part of what you're saying. A significant breaking overhaul of the language feels pretty much the same as saying "screw it" and starting from scratch, just with specific ergonomic choices closer to C++ than to Rust. Several of the things you cite as strengths of the language, like inline assembly and pointers, are still available in Rust, just not outside of explicitly unsafe contexts, and I'd imagine that an overhaul of C++ to improve memory safety would end up needing to make a fairly similar compromise for them. It just seems like the language you're wishing for would end up with a fairly narrow design space, even if it were objectively superior to the C++ we have today, because it would have to give up the largest advantage C++ does have without enough unoccupied room to grow into. The focus on backwards compatibility doesn't seem to be a claim that it would necessarily be the best choice in a vacuum, but a reflection of the state of the ecosystem as it is today, and a perception that sacrificing it would mean giving up C++'s position as the dominant language in a well-defined niche to compete in a new one. This is obviously a subjective viewpoint, but it doesn't seem implausible to me, and given that we can't really know how it would work out unless someone tries, sticking with compatibility feels like the safer option.



The biggest question I have around the viability of breaking changes in C++ is whether you can compile some code with a newer breaking standard, some with an older standard, and link them.

Headers would be a problem given that they're textually included into multiple translation units, but it's not insurmountable; today a header is effectively limited to the oldest standard of any translation unit that includes it, and under a new standard that breaks compatibility it would be limited to the subset valid in both the old and new standards.
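
To make the constraint concrete, here's a rough sketch of how mixing standards per translation unit already works today (file names and flags are illustrative, and it assumes the toolchain keeps name mangling and the ABI stable across the two modes):

    // common.h -- included from both TUs, so it must stay within the subset
    // accepted by every standard it is compiled under
    #pragma once
    int add(int a, int b);

    // old_tu.cpp -- built with e.g.  g++ -std=c++14 -c old_tu.cpp
    #include "common.h"
    int add(int a, int b) { return a + b; }

    // new_tu.cpp -- built with e.g.  g++ -std=c++23 -c new_tu.cpp
    #include "common.h"
    int main() { return add(2, -2); }

    // g++ old_tu.o new_tu.o   -- links fine today because mangling and ABI agree;
    // a hypothetical breaking standard would have to preserve that property

A breaking standard adds the extra wrinkle that the header's contents also have to parse the same way under both dialects, which is exactly the subset restriction described above.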

EDIT: ironically modules (as a concept) would (could?) solve the header problem, but they've not exactly been a success story so far.


There was a proposal for “epochs” (the feature that Rust now calls “editions”), but it was effectively rejected by the committee. Not permanently, but there are some open questions that the initial proposers aren’t interested in following up on, in my understanding.


> EDIT: ironically modules (as a concept) would (could?) solve the header problem, but they've not exactly been a success story so far.

Because they are little different from precompiled headers. import std; may be nice, but in a large project you are likely to have your own defines.hpp file anyway (which is going to be precompiled for a double-digit reduction in compile times).
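
For what it's worth, a minimal sketch of the two setups being compared (the defines.hpp contents are made up for illustration, and import std; assumes a toolchain with C++23 standard library module support):

    // defines.hpp -- today: project-wide header, precompiled once and
    // force-included into every translation unit
    #pragma once
    #include <string>
    #include <vector>
    #include <memory>
    // ...project-wide macros, aliases, common utilities...

    // main.cpp -- with modules: the whole standard library as one importable
    // unit, built once, much like a PCH
    import std;   // C++23

    int main() {
        std::vector<std::string> v{"hello", "modules"};
        return static_cast<int>(v.size()) - 2;   // exits with 0
    }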

Ironically too, migrating every header in an executable project to modules might slow down build times, as dependency chains reduce the parallelism factor of the build.



