Given how embracing AI is an imperative at tech companies, "a link to something" is itself likely to be a product of LLM-assisted writing. The entire concept of checking through the internet becomes more and more recursive with every passing moment.
I am not sure a slow pace is as beneficial as you say. I scrolled through the error handling issue brought up in this comment section ( https://github.com/ziglang/zig/issues/2647#issuecomment-2670... ) and it's clear that the only thing that happened there was that communication on the issue was hindered. I come from the C++ side, and our "ISO C++ committee" language development process leaves a lot to be desired. Now look at the error handling that they did pass in C++23 (std::expected). It raises some questions about how slow you can be while still appearing to move forward.
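For reference, a minimal sketch of what that C++23 error handling looks like (hypothetical parse_port function, just to show the shape of std::expected):

    #include <expected>
    #include <string>

    // C++23 std::expected: return either a value or an error, no exceptions.
    std::expected<int, std::string> parse_port(const std::string& s) {
        if (s.empty()) return std::unexpected("empty input");
        int value = 0;
        for (char c : s) {
            if (c < '0' || c > '9') return std::unexpected("not a number");
            value = value * 10 + (c - '0');
        }
        if (value > 65535) return std::unexpected("out of range");
        return value;
    }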
Disclaimer: I would like to see Zig and other new languages become viable alternatives to C++ in gamedev. But I understand that it might happen long after I retire =)
Judging from a lot of comments, people find -O0 code useful. May I ask those of you who do: how is it useful to you? The question comes from the following experience: if we have lots of C++ code, then all of the standard library and our own abstractions become zero-cost only after inlining, and inlining implies at least -O1. Why do people even build at -O0 for large projects? And if a project is not a large one, then build times should not be that much of a problem.
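A toy illustration of what I mean by "zero cost only after inlining" (hypothetical view type, roughly what std::span gives you):

    #include <cstddef>

    // A trivial "zero-cost" abstraction: an unchecked view over an array.
    struct view {
        const int* data;
        std::size_t size;
        const int& operator[](std::size_t i) const { return data[i]; }
    };

    long sum(view v) {
        long s = 0;
        for (std::size_t i = 0; i < v.size; ++i)
            s += v[i];  // -O1+: operator[] inlines to a plain load;
                        // -O0: every access stays a real function call.
        return s;
    }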
In AoT compilation, unoptimized code is primarily useful for debugging and short compile-test round trips. Your point on C++ is correct, but test workloads are typically small so the cost is often tolerable and TPDE also supports -O1 IR -- nothing precludes using an -O0 back-end with optimized IR, so if performance is relevant for debugging/testing, there's still a measurable compile-time improvement. (Obviously, with -O1 IR, the TPDE-generated code is ~2-3x slower than the code from the LLVM-O1-back-end; but it's still better than using unoptimized IR. It might also be possible to cut down the -O1 pass pipeline to passes that are actually important for performance.)
In JIT compilation, a fast baseline is always useful. LLVM is obviously not a great fit (the IR is slow to generate and inspect), but for projects that don't want to roll their own IR and use LLVM for optimized builds anyway, this is an easy way to drastically reduce the startup latency. (There is a JIT case study showing the overhead of LLVM-IR in Section 7/Fig. 10 in the paper.)
> And if a project is not large one then build times should not be that much of a problem.
I disagree -- I'm always annoyed when my builds take longer than a few seconds, and typically my code changes only involve fewer compilation units than I have CPU cores (even when working on LLVM). There's also this study [1] from Google, which claims that even modest improvements in build times improve productivity.
I mean, my colleagues work hard to keep our build times around 3 minutes for a full build of a multimillion-line C++ codebase that we rebuild, plus code the same size or a few times bigger that is prebuilt but provides tons of headers. If I were constantly annoyed by build times longer than a few seconds, I probably would have changed my career path a couple of decades ago xD.
I am all for faster -O1 build times, though. Point taken.
> I think most programs will build in "safe release" mode
Do you have any citations to support this 'safe release' theory? There are not many Zig applications, and not many of them document their decisions. The one I could find [1] does not mention 'safe' anywhere.
> Standard optimization options allow the person running zig build to select between Debug, ReleaseSafe, ReleaseFast, and ReleaseSmall. By default none of the release options are considered the preferable choice by the build script, and the user must make a decision in order to create a release build.
But for more opinionated recommendations, ReleaseSafe is clearly favored:
> ReleaseSafe should be considered the main mode to be used for releases: it applies optimizations but still maintains certain safety checks (eg overflow and array out of bound) that are absolutely worth the overhead when releasing software that deals with tricky sources of input (eg, the internet).
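For readers coming from C++, a rough hand-rolled analogue of the checks ReleaseSafe keeps enabled (this is only an illustration, not how Zig implements them):

    #include <cstdio>
    #include <cstdlib>
    #include <vector>

    // Overflow check: trap instead of silently wrapping or invoking UB.
    int checked_add(int a, int b) {
        int out;
        if (__builtin_add_overflow(a, b, &out)) {  // GCC/Clang builtin
            std::fprintf(stderr, "integer overflow\n");
            std::abort();
        }
        return out;
    }

    int main() {
        std::vector<int> v{1, 2, 3};
        // Bounds check: .at() verifies the index, unlike operator[].
        int x = v.at(2);
        return checked_add(x, 1);
    }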
It sounds like you have rose-tinted glasses on. I think your glass is half full if you recheck the actual compiler versions and features, and mine is half empty in gamedev.
Anecdata: a year or so ago I was in a discussion about whether the beta C++20 features on our platforms were good enough to use at scale. Targeting several platforms makes it not a sum but an intersection of partial implementations. Anyway, it looked positive until we needed a pilot project to try it. One of the projects came back with 'just flipping the C++20 switch with no changes causes a significant regression in build times'. After confirming that it was indeed not an error on our side, the conclusion was kind of obvious: a proportional increase in remote compilation cloud costs for a few minor features is a 'no'. A year later the support is no longer beta but still partial across platforms, and there have been no build-time improvements from the community. YMMV of course, because gamedev mostly targets closed-source platforms with a closed set of build tools.
> One of the projects came back with 'just flipping C++20 switch with no changes causes significant regression on build times'.
I think this just proves that your team is highly inexperienced with C++ projects, which you implicitly attest to by admitting this was the first C++ upgrade you had to go through.
Let me be very clear: there is never an upgrade of the C++ version targeted by a project that does not require full regression tests and a few bugs to squash. Why? Because even if the C++ side of things is perfectly fine, libraries often introduce all sorts of unexpected issues.
For example, I once had to migrate a legacy project to C++14, and flipping the compiler flag to C++14 caused a wall of compiler errors. It turned out our C++ was perfectly fine, but a single library behaved very poorly with a constexpr constructor it enabled conditionally under C++14.
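The pattern usually looks something like this (hypothetical library header, not the one I actually hit):

    // third_party/lib_value.h -- hypothetical dependency.
    // The constructor gains constexpr only when the library detects C++14,
    // so flipping the language flag silently changes which declarations
    // downstream code compiles against.
    #if __cplusplus >= 201402L
    #  define LIB_CONSTEXPR constexpr
    #else
    #  define LIB_CONSTEXPR
    #endif

    struct lib_value {
        int v;
        LIB_CONSTEXPR lib_value(int x) : v(x) {}
    };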
You should understand that upgrades to the core language and standard library are exceptionally stable, and a clear focus of the standardization committee. But they only have a say in how the core language and standard library should be. The bulk of the code any relatively complex project consumes is not core language + stdlib, but third-party libraries and frameworks. These are often riddled with flags that toggle whole components only in specific versions of the C++ language, mainly for backwards compatibility. Once you target a new version of C++, that often means you replace whole components of upstream dependencies, which in turn often requires fixing your code. This happens very frequently, even with the likes of Boost.
So, what you're complaining about is not C++ but your inexperience in software engineering in general. I mean, what is the rule of thumb about major version upgrades?
I am sorry for the confusion. It's fine to get some downvotes if it's not what people like to see. I was not complaining. The message was purely informational, from a single point of view: a) game platforms still have only partial C++20 support in 2025; b) there are features in the C++ standard that do not fit the description 'god-send'.
> One of the projects came back with 'just flipping C++20 switch with no changes causes significant regression on build times'
Given that C++20 introduced modules, which are intended to make builds faster, I think just flipping the C++20 switch with no changes and checking build times should not be the end of evaluating whether C++20 is worth it for your setup.
> Given that C++20 introduced modules, which are intended to make builds faster
Turning on modules effectively requires that all of your project dependencies themselves have turned on modules. Fail to do so, and a lot of the benefits start to become hindrances (Clang is currently debating going to 64-bit source locations because modularizing in this manner tends to exhaust the current 32-bit source locations).
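For context, the minimal named-module setup looks like this (hypothetical file names); the build-time win only shows up once importers stop re-parsing textual headers, which is why unconverted dependencies matter so much:

    // math.cppm -- module interface unit
    export module math;

    export int add(int a, int b) { return a + b; }

    // main.cpp -- importer: no textual inclusion, no header re-parsing
    import math;

    int main() { return add(2, 3); }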
Tbh I don't have the exact numbers from 2024 at hand. I remember that the decision was unanimous. A build-time increase is a very sensitive topic for us in gamedev.
There are different levels of confidence in junior programmers' code in different languages. For C++ it is one of the lowest possible.
If thousands of HN readers suddenly decided that they need to start their 10+ years of learning C++ with an immediate contribution to the Ladybird project, it would not really be helpful, right?
It would be a weird kind of bad situation if literally thousands of juniors with little to no experience/understanding of programming simultaneously started contributing learn-as-you-go C++ code to Ladybird.
In the perhaps more plausible situation that two or three people with a reasonable foundation in CS and/or a bit of professional programming experience decide to learn C++ in order to help Ladybird, I think it would work out quite fine.
I had to think carefully about whether I would ever have agreed with the 'plausible situation' at any point in my career. And the answer is no. If they really needed 2-3 people, they would have adjusted their sponsorship/donation plans and picked those people up full time. There are costs to bigger teams and wider contributor networks that are rarely advertised.
But ofc what do I know about browsers, I am just a gamedev. From my PoV (studio tech director), in custom game engines juniors mostly commit acts of wanton destruction in the name of curiosity. And then leave for better-compensated industries anyway.
In my opinion, folks inciting random contributions from a webdev crowd unfamiliar with C++ are not helping. And those who are familiar should know better than to do random drive-by features.
I don't understand how they convinced him that code with a limited set of aliasing patches remained 'correct'. Did they do 'crash ratio' monitoring? Disclaimer: I had to do that in gamedev post-launch for a couple of projects. Pretty nasty work.
Normally "let me google it for you" is impolite on this site but I hope not in this case. Here we go:
"Intel Thread Director leverages machine learning to make intelligent decisions about how to utilize the different types of cores in a hybrid processor, leading to better overall performance and power efficiency."
Feel free to unsee it and ban me.
Disclaimer: I work in gamedev on engine and performance.
Imagine going back to when 12th gen was released and posting your post. Alas, nothing has improved over the 5 generations of hardware since then, each of which required a complete PC rebuild. Buying Intel for gaming is like a test of ignorance now. It might be a decade before any trust in the brand can be restored /imho.
Not sure what you're talking about; NT/Linux have been well aware of the P/E cores and how to schedule among them for the past handful of generations.
I also moved to AMD (5800X3D), due to X3D alone being a massive improvement for simulations. Intel is still better in the laptop space and just outright more available (though I'm Mac-only for day-to-day laptop usage).
I am talking about gaming workloads being less efficient on particular E-core-enabled CPUs. My point is that Day 1 was generations ago, and gaming workloads suffer just as they did on that Day 1. The linked article does not mention running anything on Linux, so I'm not sure why you bring it up. Note that the linked article sidestepped those issues by disabling E-cores.
Afaik Windows delegates most of the scheduling work to "Intel Thread Director".
What makes you sound optimistic about the "how to schedule" part? Do you have any links I can follow?
(meta) I am probably wasting my time commenting on the linked article here. Nobody does that /s.
I don't think the measurements support the conclusion that well.
What I want to have when I see measurements like those:
I want the language abstract machine and the compiler not to get in the way of how I want code to perform on certain platforms. This is currently not what I get at all. The language is actively working against me. For example, I cannot read a full cache line from an address because my object may not span that far (see the sketch below). The compiler has its own direction at best. This means there is no way to specify things, and all I can do is test compiler output after each minor update. In a multi-year project such updates can happen dozens of times! The ergonomics of trying to specify things are actually getting worse. The example with assembly matches my other experiences: the compiler now ignores even intrinsics. If it wants to optimize, it does.
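A sketch of the cache-line case (hypothetical struct; the same issue applies to SIMD loads via intrinsics):

    #include <cstdint>
    #include <cstring>

    // Hypothetical hot-path struct: 24 bytes, sitting somewhere inside a
    // 64-byte cache line.
    struct Particle { float x, y, z, vx, vy; std::uint32_t flags; };

    // What I'd like to express: "read the whole cache line this object
    // lives in". Under the C++ abstract machine the bytes past the end of
    // the Particle belong to no object I'm allowed to access, so this read
    // is undefined behavior and the compiler may assume it never happens.
    void read_line(const Particle* p, unsigned char out[64]) {
        auto line = reinterpret_cast<std::uintptr_t>(p) & ~std::uintptr_t{63};
        std::memcpy(out, reinterpret_cast<const unsigned char*>(line), 64);
    }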
I can't run to someone else for magic library solutions every time I need to write code. I need to be able to get things done in a reasonable amount of time. It is my organization's development process that should decide whether the solution I used becomes part of some library or not. It usually means that efforts covering only some platforms and only some libraries are not as universally applicable to my platforms and my libraries as folks at language conferences think /s.