Modern C++ gamedev: thoughts and misconceptions (vittorioromeo.info)
221 points by ingve on May 16, 2020 | 217 comments


After years of experience writing code for games, I find that the dumbest code is the best code. It doesn't matter if it's C# or C++, whenever I've used something like reactive extensions or template metaprogramming, it has been a terrible mistake every single time. Make your code simple, dumb and verbose all the time. Avoid using any complex abstractions or any overcomplicated syntactic sugar and you'll have a codebase that anyone can jump into and quickly be able to add features without introducing bugs (or at least making them less likely). This matters more than anything else.


> Make your code simple, dumb and verbose all the time

This is a frequently encountered argument, and sure, if you look at any single line, it looks very obvious what it does. But I would argue that verbosity and lack of abstraction have severe drawbacks for a programmer's ability to understand the overall codebase, and are disastrous for long-term maintainability.

You start out with 20 identical pieces of boilerplate code, and a few years later, you have 20 subtly different pieces of code. Good luck guessing whether the differences were intentional, or accidental. Good luck refactoring the code.


I think you confuse simplicity with overt verbosity.

There is no reason simple code can't have higher-level constructs. The interface to them is just likely very domain specific, and you don't start with them.

But after you find yourself copying the same sort of code to a third place, you usually notice a pattern, extract that pattern (with no abstract frills attached) to a single implementation that can be used everywhere, and move along.


"no abstract frills attached" I can't tell what you think a frill is, but I can't square that statement with the rest of your post, and with the OP. Higher-level constructs might require language features like template metaprogramming, or a level of indirection. There are a few ways in which you can have "the same sort of code".


A 'frill' in this case is any construct that is not obvious to a person who has programmed a few years in the particular language. For example, this definition makes most C++ templates used outside of STL-like usage very frillic.

"Higher-level constructs might require language features like template metaprogramming, or a level of indirection."

I think we have different definitions of what "higher level" means. To me it means a particular pattern has been identified in the code and lifted to an implementation that needs less thinking and fewer lines of code.

You can have quite high level clever program logic using the basic algorithmic toolbox - the basic containers and large zoo of well known algorithms to operate on them - the array, the list, the map and the graph.


> To me it means a particular pattern has been identified in the code and lifted to an implementation that needs less thinking and fewer lines of code.

To give you a sense of how I think about this, we can just focus on control flow. One option for control flow is very uniform and easy to grasp for anyone, including a programmer from 1950: there are [conditional] gotos and labels. Another option is if/else, while, for, try/catch, yield, return f(),...

So what gives? Particular patterns of gotos/labels were identified and lifted into a situation that needs fewer lines of code. Does it need less thinking? That's where it gets tricky. It's obvious to me that the programmer in the 1950s will look at try/catch and require much more thinking than if they just had goto/label code in front of them.
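
For instance (a toy sketch of mine, not from the article): the same sum written with both toolboxes.

    // 1950-friendly version: labels and conditional gotos only.
    int sum_goto(const int* data, int n) {
        int total = 0, i = 0;
    loop:
        if (i >= n) goto done;
        total += data[i];
        ++i;
        goto loop;
    done:
        return total;
    }

    // The "lifted" construct: the looping pattern has a name and a shape.
    int sum_for(const int* data, int n) {
        int total = 0;
        for (int i = 0; i < n; ++i) total += data[i];
        return total;
    }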

Template metaprogramming (I don't understand what usage is distinct from STL-like usage) is exactly about identifying a particular pattern and literally lifting a concrete type into a type parameter so that now you have a related family of code. Analyzing the code requires higher level and lifted thinking, the same way that manipulating algebraic expressions instead of concrete numbers does.

That's what I mean by higher level.
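
A minimal sketch of that lifting (my own toy example): a function over the concrete type int becomes a family of functions over a type parameter T.

    template <typename T>
    T clamp_to_zero(T value) {
        // The concrete int has been lifted into the type parameter T, so one
        // definition now covers a whole family of related code.
        return value < T{} ? T{} : value;
    }

    // clamp_to_zero(-3)    == 0    (T = int)
    // clamp_to_zero(-0.5f) == 0.0f (T = float)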

I agree that code should strive to reuse "fundamental" container types (implementation, or at least interface), but I don't see the connection to the current conversation, aside from the feeling that using those containers without lifting the types (whether in your mind, or in the language) is impossible.


> Template metaprogramming (I don't understand what usage is distinct from STL-like usage) is exactly about identifying a particular pattern and literally lifting a concrete type into a type parameter so that now you have a related family of code.

I believe GP was referring to the myriad techniques for using templates not as containers, but as type-level functions, and composing compile-time programs that rely on SFINAE, variadic templates, and function overloading rules to express functional programs in the C++ template language. You can find examples in Boost in various areas.

One somewhat spectacular example is boost::spirit/boost::qi, which allow you to define parsers with a DSL directly in C++ (e.g. using `*c` as '`c` 0 or more times').
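
A tiny sketch of the "templates as type-level functions" flavour (illustrative only, not taken from Boost): a trait that maps T to T*, except when T is void, with the branch selected via SFINAE.

    #include <type_traits>

    // Primary template: the "else" branch of the type-level function.
    template <typename T, typename = void>
    struct add_pointer_unless_void { using type = T*; };

    // SFINAE-selected specialization: the "if T is void" branch.
    template <typename T>
    struct add_pointer_unless_void<T, std::enable_if_t<std::is_void_v<T>>> {
        using type = T;
    };

    static_assert(std::is_same_v<add_pointer_unless_void<int>::type, int*>);
    static_assert(std::is_same_v<add_pointer_unless_void<void>::type, void>);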


If you're doing foo a lot, you make a foo() function. That doesn't mean you have to create a pure virtual FoolikeOperation class and FoolikeOperationFactory, a concrete ActualFooFactory and an ActualFooOperation class.


Suppose you do foo a lot. And sometimes you need to do either foo or bar inside of baz.

You can pass a flag to baz, to choose either foo or bar. Now you have a closed set of possibilities. If you want to extend the functionality, e.g. with a plug-in, or have any other reason to want to avoid committing to the choice, then you either need

1. first class functions, so that you can pass in foo or bar or whatever.

or if you don't have first-class functions, then you need a

2. FooLikeFactory to make a FooLike object based on a runtime value (e.g. read from a configuration file), and then you can call your FooLike object from baz.

I like the quote that design patterns are bug reports against a language. The factory stuff you're talking about doesn't just exist for fun. It solves an actual problem. I hate Java as much as you do, I'm sure, but I value understanding the reason the patterns exist before I decide to just use a language where I don't need to do any of that stuff.
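
To sketch option 1 with hypothetical foo/bar/baz (in C++ any callable will do, so no factory is needed):

    #include <functional>
    #include <iostream>

    void foo() { std::cout << "foo\n"; }
    void bar() { std::cout << "bar\n"; }

    // baz stays open to extension: any callable can be passed in,
    // including something loaded from a plug-in.
    void baz(const std::function<void()>& op) {
        // ...surrounding work...
        op();
    }

    int main() {
        baz(foo);
        baz(bar);
        baz([] { std::cout << "something from a plug-in\n"; });
    }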


Oh, for sure, you can still find yourself in a situation where that kind of heavyweight design pattern is appropriate. And if you do, then by all means use it. I think they're just saying don't jump straight to the top of the tower of abstraction when the first step or two up the staircase will do what you want.


Yes, this very much :)


This is the theoretical argument, but in practice when is the last time you encountered 20 identical pieces? DRY is so prevalent that pushing just 2 identical pieces is now rare in my experience. Professionally 99% of my code is used exactly once in one place. I’m not a library or framework developer, I don’t want components by default, I just want to implement a business rule in the most simple, robust and understandable way.


20 is probably an exaggeration, but I see nearly identical (long) blocks of code fairly often at work. the project has been in development for a very long time, and there have been periods where people were not terribly disciplined about DRY. once this happens, it can be pretty hard to refactor; like GP said, it's hard to tell what differences are merely superficial and which handle subtle edge cases.

a common pattern I see that leads to this:

1) construct a relatively expensive object. 2) use that object to do A, B, and C (each of which require setting up their own smaller objects)

there are a lot of different places where people want to do A, B, or C, but not necessarily all of them. but people are reluctant to break A, B, C out into their own helper functions because of the cost to construct the object and possibly the very large number of parameters that need to be passed. with enough time/effort, it is possible to detangle this stuff and encapsulate it more sanely, but it's usually easier to just follow the existing pattern.


> DRY is so prevalent that pushing just 2 identical pieces is now rare in my experience.

I want to work where you work.

I've seen literally 4 separate implementations of the same UI component in the last week. It's obvious that they all started out from a common base (e.g. by looking at variable names, function names, etc). However, over time, each component has diverged, as each project that the component was copy-pasted into just made changes willy-nilly to their copy of the component, rather than recognizing that they have a copy of a shared component and refactoring changes upstream or building common abstractions upstream.

Now I've been tasked with doing the refactoring work to make these components DRY, and I can already tell that it's going to be far more work than management has anticipated.


No coding approach will ever solve an issue caused by a lack of discipline.


I no longer think that saying is helpful.

The problem is that every problem is caused by a multitude of causes, and there’s usually no clear way to distinguish issues caused by lack of discipline from issues caused by the wrong coding approach or style.

Different styles make different demands on how disciplined a programmer must be in order to write correct code. My experience in well-managed large code bases with decent-sized teams is that the recommended approaches and coding styles will evolve to address problems that could be solved by asking programmers to be more disciplined.

So that’s the problem with saying “no coding approach will ever solve an issue caused by lack of discipline”—it only makes sense if you already have a good idea of which issues are caused by a lack of discipline, and if you already understand that, then the saying doesn’t help you. The key insight here is to understand when changing your coding approach may allow you to write correct code with less effort (and discipline). But this can’t be distilled into an aphorism, so it won’t get quoted.

An example is holding a lock to access a certain field. If you’re “disciplined” you can just leave a comment on the field that lock X must be held to access it. But in practice, you want to solve that with code analysis. Another issue is using scoped locks so early returns don’t screw you. You can easily argue that these issues aren’t caused by lack of discipline—but if you travel back in time 20 years or so, “discipline” might be the only tool you have to solve them.
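
A sketch of what the scoped-lock version of that discipline looks like (toy example): the lock is released on every return path, so early returns can't screw you.

    #include <mutex>

    std::mutex m;
    int shared_counter = 0;  // the old rule "hold m to access this" lived in a comment

    bool increment_if_below(int limit) {
        std::lock_guard<std::mutex> lock(m);        // released on every return path
        if (shared_counter >= limit) return false;  // early return is safe
        ++shared_counter;
        return true;
    }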


The compiler constantly saves me from errors that I would've made because of "lack of discipline". That's the main purpose of various coding abstractions: to turn coding errors that you would otherwise spend your "discipline" points on into compilation errors instead.


Broadly, that's not true. The correctness benefits of type safety could also be provided by discipline alone.


Not so much 'discipline' as mastery. You have to know your trade, and there will never be a way around it.


The mastery comes in at the specification level, though -- or at least, it should.

Ideally, a program's specification and implementation would be one and the same. The more the program departs from a plain-language specification, the more room for error exists, and the more discipline is required to avoid those errors.

IMO, what we need are better specification languages and better tools to compile them, not better programming languages. I believe we've gone as far as we can go with the latter. Some might even say that watershed was crossed in the COBOL era.


Matches my experience as well in the hobby gamedev realm. I can give an anecdote: I’m involved in a “private server” dev community for an old niche MMO from 2003. There exist 3 codebases: first the original code written in a windows-style C++ with heavy use of inheritance, custom collections, Hungarian notation, etc. The second is a collaborative open source version that is written in a naive/basic C++ style (e.g. C w/ classes) and the third is a from-scratch reimplementation in modern C++ with all the bells and whistles, cmake, heavy use of templates, etc.

Despite the modern C++ version being the highest quality and the original windows-style version being the most fully featured, the vast majority of people use and prefer the rinky-dink basic C++ version. Simply for the reason that you don’t need to be a senior-level dev to contribute meaningfully to the code.


That's not really giving the full picture of ROSE online!

- The official server (arcturus) is awful to work with code-wise. But all the decently big private servers use it because at one point we only had the binaries of it and it worked out of the box. When the source started to leak too, it was easier to continue forward with that thing.

- The "simple version" which I am assuming you refer to is os(i)rose. This was the only thing you would get BEFORE the official server got leaked. It had some momentum simply for being there since roughly 2006. It was based on Brett19's code which at the time was a 14-something teenager. The same brett that now works on a fully modern C++ codebase that is decafemu.

- The modern C++ version, which if I remember correctly is worked on by a few folks from osrose, came out something like 2 years ago. Past 2015, the momentum for the game has been close to none. So yes, no one will even spin up that codebase.


Some other anecdotes:

The official server was stolen due to horrible code practice (C++-wise and software engineering in general), like having plain SQL injections when creating characters. The worst! This was one of the reasons that made the company behind the game (TriggerSoft) go bankrupt. The game was full of security holes back in 2005. This broke the game's economy due to a few cheaters, created a few horrible roll-backs and such. This drained the game's player base.

The "simple C++ server" osrose was also plagued by security issues and technical issues. Up to a point that people preferred to patch the official server with dll-injections + assembly rather than trying to make this "simple C++ server" work.


So, naive code attracts naive programmers? I'm not sure this is the ringing endorsement you take it to be.

I should also add that Hungarian notation is the prototypical example of dumb, verbose code, that wants to make individual lines easier to understand by dragging type information into every single variable name.


Or you know, people who respect their time.


> I should also add that Hungarian notation is the prototypical example of dumb, verbose code

If used wrongly. Joel Spolsky wrote a whole post on it[0], but the TL;DR is that you should use the notation to differentiate between variables of the same type. For example, you might have world coordinates and object coordinates in a game script. Correctly used Hungarian notation would denote them, for example, with `wPosX` and `pPosX`. Even though they're both int (or float), you can easily see that you shouldn't assign one to the other.

Using them to notate types, however, as in `iPosX`, is completely useless. I fully agree with that.

[0] https://www.joelonsoftware.com/2005/05/11/making-wrong-code-...


For dynamic languages sure. For strongly typed languages it's better to just use the type system to prevent those kinds of things. C++ doesn't have great support here (lots of boilerplate needed) & usually people reach for Boost Units or Boost strong types but it's not that hard (https://www.fluentcpp.com/2016/12/08/strong-types-for-strong...). Mozilla is also exploring this specifically for coordinate spaces & whatnot too (https://research.mozilla.org/2014/06/23/static-checking-of-u...).
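
A minimal hand-rolled sketch of the strong-type idea (no Boost, hypothetical names): world-space and object-space coordinates become distinct types, so the compiler rejects the mix-up that the prefixes only hint at.

    struct WorldX  { float value; };
    struct ObjectX { float value; };

    void move_in_world(WorldX x) { /* ... */ }

    void example() {
        WorldX  wx{10.0f};
        ObjectX ox{2.5f};
        move_in_world(wx);    // OK
        // move_in_world(ox); // compile error: no conversion from ObjectX
    }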


There is no need for Hungarian notation because types are way too complex in C++ for that. On top of that, an IDE tells you the type right away if you want.


Could you please share the name of the game?


Could be Lineage 2, at least I am aware of multiple private server implementations there. And it came out in 2003. But then again it's far from being a niche game.


ROSE online


Sounds like Mangos for world of warcraft


Is the project you mentioned SWG:ANH by any chance ?


> heavy use of templates

That is not something you want in modern C++. Quite the opposite, in fact, and many projects avoid Boost for that reason.

Templates should be used when needed, no more.


>I find that the dumbest code is the best code

Not always, though.

See every bug and exploit with C arrays or pointers that exists because C devs think even minimal attempts at safety are too complicated or slow, or old-style PHP code that builds SQL queries out of printf strings directly from POST values, or probably countless other examples in other languages. C++ code that uses raw pointers instead of references or that uses std::vector but never actually bothers to bounds-check anything.

It's entirely possible for code to be too dumb for its own good.


> Not always, though. See every bug and exploit with C arrays ...

That can also be perceived as a flaw of the language design in that it does not allow one to write dumb, safe and fast code. Which such languages exist today?

edit: fixed typos


I would say Java, which I suspect is bound to be a bit of an unpopular opinion here (I expect more startup people than enterprise denizens on HN); but the more I think about it, the more sure I am that it fits the bill:

- Dumb: you bet. The inclusion of lambdas has shaken things a little, but usually Java code is straightforward with little space for "cleverness". The counterpoint to this, of course, would be the memes about AbstractFactoryFactoryImplementationSubclassDispatcher class names and all that, which IMHO does not represent much actual Java code out there. There are good reasons why big corps prefer Java, and readability is one of them. As a programmer, I've found it easier to jump into a Java codebase I didn't know much about than in any other language. And this has happened even when I had little experience in Java.

- Safe: yes. You have to go out of your way to be unsafe in Java. Memory allocation is done for you and the try-with-resources idiom is almost as good as C++ RAII.

- Fast: also yes. Usually about 2x or 3x the run time of C/C++ code, sometimes even less.


> Fast: also yes. Usually about 2x or 3x the run time of C/C++ code, sometimes even less.

not commenting on the Java part as I believe it's usually faster than that (though not so sure when you see the years of hoops that Minecraft java had to go through to stop being so damn slow all the time...) , but it's kinda frustrating to be fighting for microseconds almost daily and then hear people saying that 2x slower is fast... 2x slower means going from 100fps to 50fps which ... well, gets you fired ?


I know this thread is about game development, but not everyone has hard deadlines to churn out frames. The comparison is useful because it separates Java from many other languages that are possibly 10x slower, which make them unsuitable for a huge number of domains where Java can still be useful.


While Java isn't perfect, it was more a lack of skill on the part of the Minecraft developers than anything else.

Using classes for everything with a full OOP approach, instead of DOD and ECS, with tons of new in hot paths, no wonder it had performance issues.

This was discussed in some Minecraft forums,

https://www.reddit.com/r/programming/comments/2jsrif/optifin...

It basically boils down to

> Why is 1.8 allocating so much memory? This is the best part - over 90% of the memory allocation is not needed at all. Most of the memory is probably allocated to make the life of the developers easier.

HFT is as demanding as games and it makes use of Java; however, I think that Java developers with such skills would rather have an HFT salary than what game devs earn on average.


How would avoiding GC even work? As far as I know that Valhalla thing still isn't there - wonder if it ever comes. Last I used it, you could only have "structs" together with GC. Maybe there is just no practical way to do this? What prominent examples are there?

I remember a story of HFT trading software written in Java. Supposedly it had big issues with GC. That's why they built a system where multiple threads would attempt the same operation, and the first thread wins. This approach reduces the likelihood of a GC ruining the timings. Funny story.

My bachelor's thesis involved writing software in Java that would manage dozens or hundreds of millions of small objects. These objects were all instances of the same class; they contained only three ints. It was very slow, and especially in an OOM situation the GC would work for more than a minute before finally giving up. I changed the software to use SoA instead of AoS - moving from a huge array of these objects to three int[] arrays. Since ints aren't boxed, that left me with only 3 objects instead of many millions. The code was uglier for it, but the performance was another world. Unfortunately, such a change is not practical if you have many classes.

That was 5 years ago with Java 8. Disclaimer: I haven't followed Java since then. I know next to nothing about it.


But that is exactly what you do when going after performance in game development, even in C and C++, and it isn't less ugly by using those languages instead of Java.

There is an EA available for Valhalla and there is now the roadmap to incrementally bring such features into the platform. Java 14 has a new native memory support as experimental and it might reach stable already by 15.

https://jdk.java.net/valhalla/

https://cr.openjdk.java.net/~briangoetz/valhalla/sov/01-back...

https://openjdk.java.net/jeps/370

The reason it is taking so long is the engineering effort to keep ABI compatibility, namely how to keep 20-year-old jars running in a post-Valhalla world, while at the same time migrating value-like classes into real value types.

Java's biggest mistake, from my point of view, was to ignore the GC-enabled systems languages that had value types, non-traced references, and AOT compilation from the get-go; then again, I guess no one on the team imagined that 25 years later the language would be one of the choices in enterprise computing.

Back to Minecraft, the game isn't Crysis or Fortnite in hardware requirements, so a language like Java is quite alright for such a game; what isn't alright is what the new development team, apparently lacking experience, eventually ended up doing to the game engine.

If one is to believe a couple of posts like the one I referred to.


In C and C++ you don't need to make int-arrays. You can group data that is accessed together in structs and keep arrays of these structs.

With regard to performance, there must be some fine art in splitting structs into smaller structs and keeping them as parallel arrays, but there is also a limit to it. At some point you will need too many pointers to point at the same position in all these arrays.

I've never cared to split a lot, since it makes code harder to read. My guideline has always been to optimize for modularization: In OOP there tend to be large objects containing links to "all" related information. That violates the rule of separation of concerns. With parallel arrays you get perfect separation of concerns. One parallel array doesn't even need to know that there are others.
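
For reference, the two layouts side by side (illustrative sketch):

    #include <vector>

    // AoS: group the data that is accessed together into one struct.
    struct Particle { float x, y, z; };
    std::vector<Particle> particles_aos;

    // SoA: parallel arrays, one per field; each "column" can be iterated
    // without pulling in the others, and doesn't need to know they exist.
    struct Particles {
        std::vector<float> xs, ys, zs;
    };
    Particles particles_soa;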

> Back to Minecraft, the game isn't Crysis or Fortnite in hardware requirements, so a language like Java is quite alright for such a game; what isn't alright is what the new development team, apparently lacking experience, eventually ended up doing to the game engine.

I'm not in a position to judge, and I've never even played it, but it seems to me that Minecraft has a lot of voxels to maintain. Also massive multiplayer requirements?


Fair enough, however regarding massive multiplayer requirements most game shops are anyway using Java or .NET on their backends, as you can easily check going over their job adverts.

As on the client side, proper coding plus offloading stuff into shaders already goes quite far.

And even in the debatable point that Java isn't the best language for any kind of game development, well maybe Mojang would never have taken off if Markus had decided to prototype the same in something else.

Nowadays the Java version is only kept around due to the modding community, as the C++ version is found lacking in this area.


Just to add something I forgot to mention in my previous comment that I think is worth mentioning.

Naturally at some level you will have a class full of native methods as FFI to OS APIs or libraries written in C or C++.

From that point of view, I still consider Java a better option as a game engine scripting language than Python/Lua/JavaScript, because you get strong typing, there is still dynamic loading, a good set of AOT/JIT/GC infrastructure and more control over memory layout than those languages allow for.

Naturally that is a matter of personal taste.


Indeed, despite all criticisms Java as a technology is one of the best things that has happened to the software industry, next to Linux.


Java won because of its humungous and stable standard library with the full backing of Sun (and Sun was a huge presence at the time). It was quite a joy to have pretty much all the functions you could ever need (not really but it felt like it at least) at your fingertips without having to do manual dependency management.

As a technology it really didn't have much to give. Object Pascal was born in 1986, Ada in 1980 with language support for design by contract, JIT with Lisp in 1960 and Java came in 1995.


It wasn't just Sun, though they did the heavy lifting marketing-wise. The Apache Project also hopped on the Java train way early and produced a crapton of libraries which made developing internet applications way easier, especially for corporate schlubs who might have been previously exposed to Microsoft (or, ugh, IBM) systems but didn't have access to the internet folklore Unix gurus had.


They did the legwork to figure out browser Applets as well.


Cleverness is more a function of a programmer's habits and attitude than of the language: one tries to be clever in any language.

C++ offers "efficient" ways to be clever, with varied and difficult challenges that can be addressed in relatively little difficult code; some are good or harmless (e.g. aligning struct fields to cache lines or concise towers of useful templates) and some are bad (e.g. flaky homemade not-too-smart pointers and almost-STL-compatible containers).

Java, on the contrary, facilitates the creation of large, boring and easy to read generic object-oriented tumors, that become satisfactorily clever only when they go very far (e.g. one more layer of indirection than everyone else) or reach theoretical limits (e.g. nothing left to invert control of).


- Dumb: you bet.

Java as a language is dumb, but as an overall platform is not. In a way it is a two-layer platform: you have the outer layer, a boring language that is used by a lot of people to write most of the code, then you have a hidden layer, that most people ignore, made of bytecode manipulation, runtime code generation and language agents that let you do cool things like adding compile-time nullness checks, generate mapping classes to avoid writing a lot of boilerplate code or good old ORMs that automatically generate your sql queries for you.

Such functionalities are not exactly easy and straightforward to use, but in my opinion it is a good thing: they are there and can be used, but for most programmers will be hidden behind a few "magic" annotations.

This is in contrast to other languages where the advanced functionalities are "all over the place" and every programmer must be aware of them (I'm thinking of C++ and Common Lisp for example).

If you have a teams of great programmers you may achieve better results with the latter approach, but for the average company the Java approach is better because you can have average programmers write boring code while taking advantage of a few clever tricks here and there by using libraries/frameworks written by better programmers.


Cleverness in this case means:

- Too many jumps in code logic instead of serial logic

- Premature optimization 1: messy code (using lots of unclear variables etc), this is common within calculations, and games have quite some of those.

- Premature optimization 2: failing to properly architect

- Single-character variables. Java IDEs default to fullvars

- Bad function/method names

- Not expressing what you mean (for i= vs foreach)

- Too many abstraction layers / IoC

- Too many side effects / complex code (interwoven code)

- Callback hell

Nr 1 is important.. You need to be able to "follow the code". If that requires you to create a DSL in order to code something async in a serial way, then by all means do so.

For example with: network (duh.), but also in games: character dialogs, animations etc. etc.


  The inclusion of lambdas has shaken things a little
I'm curious, do you consider lambdas as more or less "dumb"? Because I consider them the dumbest, simplest and maybe best way to do polymorphism. In a way, OOP's whole shtick was about not using them and instead extend stuff with classes.


It's less dumb than usual Java code in the sense that it's less obvious. Java Lambdas are anonymous, inline implementations of single method interfaces (or abstract classes with a single abstract method). In classic Java you would instantiate an explicit object, from an explicitly named interface (anonymous classes were still allowed, but at least the code would have the name of the implemented interface and the overridden method). This made the code more explicit, therefore more cumbersome but also more obvious. I like lambdas because they make the code considerably less cumbersome but only a little less obvious. But they do make the code a little less obvious, and as such, I would say that they make Java less "dumb".


Golang is a pretty good example.

> The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.

~ Rob Pike


> Which are such languages in existence today?

Zig will be there soon.


Zig first needs to have code samples in the documentation that compile.


It's not really the fault of programmers or the language but a pernicious and stubborn failure on the part of the C Standards Committee.


This sounds unnecessarily dismissive of C programmers. I see a lot of C programmers that shit on C++ but are intrigued by Rust, e.g. Linux kernel developers allowing modules to be written in Rust.


Yes, Rust is making a smart marketing decision by capitalizing on C programmers' antipathy towards C++ (and everyone else's).

I don't know that this is, ipso facto, evidence of anything about C programmers other than that they really hate C++, though.


Your view sounds very jaded to me. Maybe Rust is liked more by C programmers because they prefer its approach, rather than C++'s? Calling it a marketing ploy seems without merit to me.


I'm a C++ developer who occasionally writes Rust.

The experience of writing new code in one is very, very similar to the experience of writing new code in the other, right down to the compile times. The lifetime analysis in Rust is nice and pretty far ahead of what static analyzers can do in C++, but Rust Generics are a pretty weak approximation to Templates. Rust has better Browser integration, C++ has Qt. One imagines the languages will catch up to one another on these fronts. C++ has Inheritance, Rust settles for Interface Polymorphism (one can reasonably prefer either).

The one really big difference here is actually cultural - the Rust community all agrees on Cargo, and it's a bit happier to compile the world and distribute static binaries, which removes massive headaches for both the developer and the end user while setting the language up for an eventual, Electron-style tragedy of the commons where a user is running like 8 Rust apps with their own copies of the same 14 libraries resident in memory (but that's a good problem to have because it means you've displaced C/C++ as the linguae francae of native app development).

I guess the other really big difference is that there is no legacy Rust code.

I like C++, but I can understand hating it. But if you have written new code in C++17, and hated it ... I suspect you are going to hate writing Rust too. And if you love Rust and hate C++ ... I suspect what you hate is legacy C++ from 2005.

Finally, I was explicitly not concluding anything about C Programmers beyond that they hate C++.


As someone who built a career in C++, I like that Rust's generics are a poor approximation of templates (and that includes having worked with some of the modern C++ features). I have lost months of my life to the increased compile times from Boost on the applications I've worked on.

C++ also makes it way too easy to reach for shared_ptr instead of unique_ptr, leading to all sorts of unfortunate things. Rust makes that much harder, and RefCell/Rc/Arc push towards designs that are "single owner", which I've found scale out much better once you move into programs of significant complexity.

C++ still wins in portability on some platforms but I have a hard time picking it for anything greenfield at this point.


Right now the Rust ecosystem still isn't as mature as C++'s in what concerns integration with Java and .NET, and GPGPU programming - the domains I care about.

However with the support of companies like Microsoft, Rust will eventually get there.

By the way BUILD 2020 will have Rust sessions.


I would pick C++ for anything to do with high-performance linear algebra. There are a few other domains (desktop GUI, CAD) where I don't trust the Rust library ecosystem.

But, yeah, there are a ton of domains (notably embedded) where I would want Rust.


I agree with most of what you're saying here except for the point on generics versus templates. I wouldn't say generics are an approximation of templates at all, templates are something in between generic programming and macros and that leads to them being hazardous, slow, and generally speaking unergonomic.

Rust's generics allow for some seriously powerful abstractions to be built in a very clean and readable way, although there can be friction with stuff that would be simple with templates in C++ and quite verbose in Rust.

Maybe concepts will change that.


> I wouldn't say generics are an approximation of templates

Templates (at their origin) are nothing more than generics, and a pretty clean, powerful and zero-cost way of doing generics.

What you call the "hazardous, unergonomic" macro style is not the template system itself. It is mainly due to all the 2005-style SFINAE hacks that have been invented by abusing template properties.

SFINAE in C++ is nothing natural, it's at best a dangerous trick to have compile time resolution/execution.

Fortunately all of that should progressively die with C++17 constexpr, for the good of humankind.


Non-type Template Parameters ("const Generics") are on the short-term road-map for Rust. Small step from there to recursion and compile-time factorial.


> C++ code that uses raw pointers instead of references or that uses std::vector but never actually bothers to bounds-check anything.

When is the last time you accidentally mutated a raw pointer? My opinion is that references are just another C++ feature that solves a non-existent problem and has severe disadvantages. And that is non-orthogonality / combinatoric type system explosion. I've seen more than one codebase that consisted to a large degree of adapters for calling the same functionality.


I wish this kind of thinking would die. 9 times out of 10 when I'm working on a large code base at (any company I've worked at so far), the main problem I have is with disorganized, messy code with poor abstractions, state everywhere, functions that are too long, functions that are too short, references to objects everywhere with no regard for lifetime and the list goes on and on. The times I have been stumped with clever syntax are few and far between. Almost never have I said "oh man I wish they didn't use a std algorithm/container here, makes the code more obfuscated!".

Yes I have seen cases where classes or functions are unnecessarily generic, adding templates when you only needed to support a specific type (YAGNI).

But in the end, for most bad code I look at, I completely understand its syntax. It's the semantics of this so-called "dumb" code that prevent me from modifying it or fixing a bug in it for days until I actually understand the rat's nest of ideas expressed in the code.

I think using features like const as much as possible, preferring return by tuple rather than multiple in/out parameters, and a bunch of other modern C++ features more often than not makes code bases simpler rather than the other way around.


You made me realise that the term "dumb code" isn't exactly what I mean. I want to advocate writing code that is very easy to statically understand. You should be able to look at a piece of code without needing to read too much around it to understand what it is doing. One of the best ways of doing this in my experience is keeping things simple (or "dumb" as I put it earlier). Using simple abstractions, keeping code organised and avoiding global state also help in this regard.


It's the endless copying-and-pasting that gets you.

Fix a bug somewhere ... pray to the Nasal Demons you got there before more than two developers duplicated it elsewhere.


C++ fold expressions, the main C++ feature the article covers, are simple and dumb (at least, enough to use them) but not verbose, and definitely harder to get wrong than a much more verbose for loop.
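
For instance (my own toy example, not the article's): a variadic sum as a fold expression versus the more verbose loop.

    #include <initializer_list>

    template <typename... Ts>
    double sum_fold(Ts... xs) {
        return (0.0 + ... + xs);  // one expression, no mutable state
    }

    template <typename... Ts>
    double sum_loop(Ts... xs) {
        std::initializer_list<double> values{static_cast<double>(xs)...};
        double total = 0.0;
        for (double x : values) total += x;
        return total;
    }

    // sum_fold(1, 2.5, 3) == sum_loop(1, 2.5, 3) == 6.5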


I love fold expressions, but if you're inside a variadic template, you've long left the realms of "simple and dumb". IMO.

I mean, they're only readable to people who have dabbled in variadic templates in their free time. That's how many people on your (future) team?


> I mean, they're only readable to people who have dabbled in variadic templates in their free time. That's how many people on your (future) team?

This line of reasoning is vacuously true for any syntax and semantics though. Move semantics and rvalue references are only readable to people that have taken the time to understand them -- they're undoubtedly useful though.


Move semantics and rvalue references are too complex and error prone to be useful in general code.

It is best to use them only in performance sensitive places and containers.


I strongly disagree. Move Semantics allow you to communicate ownership information at API boundaries with the type system.

C APIs come, of necessity, with tons of documentation about who is deleting what, when. Or, you know, maybe they don't and you have to learn the hard way. std::unique_ptr (implemented with move semantics) largely solves this problem.

And you can imagine notions of ownership more complex than "I'm deleting this at some point" (maybe "I'm versioning this object now, don't worry about it"). If you want to encode these transfers of ownership into your API, that's Move Semantics!
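
A small sketch of what I mean (hypothetical Texture/Renderer types): the signature itself documents the ownership transfer, no prose needed.

    #include <memory>
    #include <utility>

    struct Texture { /* ... */ };

    class Renderer {
    public:
        // "I am taking unique ownership of this texture."
        void adopt(std::unique_ptr<Texture> tex) { owned_ = std::move(tex); }
    private:
        std::unique_ptr<Texture> owned_;
    };

    void example() {
        auto tex = std::make_unique<Texture>();
        Renderer r;
        r.adopt(std::move(tex));  // explicit hand-off; tex is now empty
    }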


If you read carefully, I said use move semantics to implement containers. That includes std::unique_ptr (a container for pointers with a deleter).

The point is that you shouldn't be using rvalue ref parameters, std::forward, etc. in most of your code. Even std::move should be fairly rare.


Any time you want to pass a named unique_ptr across an API boundary, you'll need to std::move it


...which you should avoid doing as much as possible.

Passing std types across API boundaries is a code smell.


What? That's one of the primary motivations!

"I have created an object and will pass its unique ownership to you." -> std::unique_ptr

"This routine needs a function that takes two ints and returns a float (without putting all my code into headers)." -> std::function<float(int, int)>.

Can you elaborate in what circumstance you should not pass std::types across API boundaries?


The heap allocation is an implementation detail.

std::function is useful in some situations, but "without putting all my code into headers" is not a good argument.


Or you know, anyone who has used them in their day job writing C++. Just like literally every language feature.


I just fixed a segfault the other day because one of our new hires fresh from college is eager on using modern c++ and didn't put parentheses at the correct place in his fold expression.


It sounds like an interesting bug, can you elaborate? On the surface it sounds like your new hire merely used fold expressions to call functions and operators that were already treacherous on their own.


Sorry, I can't look it up right now, but trying to reconstruct it in my head it must've been something like

    y = (f(x1), f(x2), f(x3))
vs

    y = f(x1, x2, x3)
Not entirely sure anymore though. But it was something about causing side effects but throwing away return values with the ,-operator and involved function calls, I think
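
For illustration only (not the actual bug, which I don't remember exactly): with parameter packs, the placement of the parentheses is the difference between a comma fold that calls f once per argument and a single call that receives the whole pack.

    #include <iostream>

    // A hypothetical variadic f, so both spellings compile.
    template <typename... Ts>
    void f(Ts... xs) {
        std::cout << "f called with " << sizeof...(xs) << " argument(s)\n";
    }

    template <typename... Ts>
    void call_each(Ts... xs) {
        (f(xs), ...);  // fold over the comma operator: f(x1); f(x2); f(x3);
    }

    template <typename... Ts>
    void call_once(Ts... xs) {
        f(xs...);      // plain pack expansion: one call, f(x1, x2, x3)
    }

    int main() {
        call_each(1, 2, 3);  // three calls, one argument each
        call_once(1, 2, 3);  // one call with three arguments
    }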


Nothing in templates is 'simple and dumb': it may look so but it isn't: try to write some and you'll inevitably make some mistakes, and the errors are really awful!


As I've learned more about programming I've been leaning more and more in this direction too. So very much of the complexity and abstraction we build into software is gratuitous and unnecessary. The real art of writing software is writing simple, obvious code that also happens to be fast and efficient.


> I find that the dumbest code is the best code.

This is the idea behind Golang, isn’t it? That everything should be written out explicitly and not hidden behind abstractions. Some people love that, others hate it.


I'd love to try Golang, but as a game developer, the GC makes it a no-go due to perf. I begrudgingly accept C# due to Unity. GC is a different topic altogether, but has also always been a pain. You end up writing code to avoid allocations, at which point you ask "Why am I not just writing C++?" I also discovered GC makes bad programmers worse by allowing them not to care about ownership, enabling them to develop systems in an ad-hoc way. Some programmers need a segfault screaming at them to make them realise they are writing terrible code.


While I’m not suggesting using Go for game development, don’t most games include code written in a language with garbage collection? If not C#, then a scripting language like Lua?


And everyone wishes they hadn't :P Even modders that use Lua have to write code that avoids allocations, caching and reusing all the objects they can. I can't stress enough how much time is spent on optimising code to avoid GC pauses. The main goal of my C# coding style is avoiding unnecessary allocations at all costs. I'm not exaggerating.


Weirdly enough I actually have a 3D project that I started in C# that I moved over to Go so I have a little experience with it. So far it's been much less painful on the GC front. Basically the GC pauses are around 0.5 ms even on very large heaps [1], so that changes the conversation from "everything must be manually managed / pooled in a language designed to offer zero help with this in order not to drop frames" to "you'll make frame rate as long as you can leave some performance on the table and don't go crazy on hot paths" which was a lot easier to live with.

It's also much easier to avoid allocations to begin with than C# - pointers can be used as interface types without boxing (and taking interior pointers to fields / array elements is allowed), and it's possible to allocate regular Go objects / arrays in unmanaged memory and feed it into any API.

Obviously it's not really designed for it and the ecosystem isn't there but using it hasn't been completely terrible so far.

[1] https://blog.golang.org/ismmkeynote


Not to mention acceleration is an issue with the go/c ffi


Some games have servers, too. And many of those are request-response (not "hold socket open") servers - which are ideally suited for GC.


There is always a middle ground as there are other native languages. In regard to C++, you can limit yourself to a convenient feature set you are comfortable with.

C++ can be a real bitch and commit one to a mental institution or it can be very helpful if one does not try to play PhD.

I write servers and some other stuff in C++ and find it incredibly easy to use. I just do not do any esoteric things.


It's just so frustrating that they keep adding things that scratch no itch I've ever had as a programmer, while leaving out incredibly obvious things that would make my life easier and my code safer.

Named parameters, for instance. It's insane that function parameters still can't be specified in any order with name=value expressions. That would have saved numerous bugs over the years, but apparently I'm the only one who thinks so. When a language such as C++ is harder to use than Verilog, somebody has screwed up badly.


Here's a link to the named parameters spec. http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n417...


Named arguments are a good example of a simple feature that C++ still lacks, and cannot be effectively 'faked' without it being built into the language.


They can be faked to an extent with structs, but the reality of default arguments is that you have to be aware of them anyway so that you don't get behavior you don't expect. When there are default arguments you are building up assumptions and expectations that can become problems when they aren't right.
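
A sketch of the struct workaround (hypothetical names; with C++20 designated initializers the call site reads almost like named parameters, though fields must still appear in declaration order):

    #include <string>

    struct WindowOptions {
        int width = 1280;
        int height = 720;
        bool fullscreen = false;
        std::string title = "untitled";
    };

    void create_window(const WindowOptions& opts) { /* ... */ }

    void example() {
        // Only the "named" fields that differ from the defaults are spelled out.
        create_window({.width = 1920, .height = 1080, .fullscreen = true});
    }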


> Avoid using any complex abstractions or any overcomplicated syntactic sugar and you'll have a codebase that anyone can jump into and quickly be able to add features without introducing bugs (at least less likely).

The thing is, what counts as a complex abstraction changes with time. At one time, floating point, especially transcendental functions, would have been a complex abstraction. Functions at one time were a complex abstraction. Classes and polymorphism were a complex abstraction. Pointers are a complex abstraction for a lot of people. Linear algebra is a complex abstraction. Transforms are a complex abstraction.

Many times “complex abstraction” just means “an abstraction I am not familiar with”.

Back in the 80’s most games were written in assembler. I am sure many people thought that C was a complex abstraction. I mean, instead of just doing a jmp to a location, now you had a stack and a calling convention and which registers to save and restore...

Since “complex abstraction” is often code for “unfamiliarity”, education, like what the original article is doing is very helpful in moving the state of the art forward. As people become familiar with new abstractions, that becomes the new baseline for “simple” code.


> Many times “complex abstraction” just means “an abstraction I am not familiar with”.

A lot of times it can also mean "an abstraction that my development environment is unaware of or otherwise has deficient tooling for"


Yup, I think everyone goes through a phase of enjoying creating ‘elegant’ abstractions but you steadily learn that nuts and bolts are way preferable. That’s why I’m way more excited about Zig as a language to write games with than modern C++.


I've worked on a bunch of engines, both big AAA and small indie scaled ones, and i agree with you. It is actually from this experience that i have a hard dislike for C++'s "auto" outside of cases where you can't do otherwise - it makes it hard to understand exactly what is going on in code you didn't write (and is sometimes error prone). Sure, IDEs can show you the type if you mouse over (at least some of them), but if the type is explicitly written you do not need to do that and you can just read, instead of pausing, moving the mouse over the auto, reading the type, then moving the mouse to the next (if any), etc. And that is assuming you are reading the code inside an IDE - it doesn't work if you are reading the code in a web-based code review tool that at best can show you syntax highlighting (and it is exactly in that environment where you want the code to be at its most understandable).

Now not all features are bad, lambdas are OK when used as local functions and can make the code more readable if the alternative is to define some static function outside the current method (e.g. you want to pass some custom filter or comparator). They can certainly be abused though, but it is one of those cases where their usefulness is greater than their abuses (and i can't say the same for "auto").

For the example given... it might be a bad example, but honestly i was looking at that code for a bit and i simply cannot read it - i do not understand what is going on just by reading the code, i'd need to run it in a debugger and go through it step by step (and i've actually written texture atlas packers before). It completely fails to sell me on the "fold expressions" and "parameter packs" and it certainly doesn't look at all "elegant" to me (but note that it might be that the example is awful, not the language feature itself).

And it did make me only skim through the rest of the article, though, since after it completely failed me on all fronts in the introduction bits, i couldn't get the feeling that i have any common ground with the author.


You're exaggerating. There's nothing complicated in the code - it looks straightforward to me, and I've never written an "atlas packer". The only slightly confusing line is the one before the last: I believe you should not use side-effects when unpacking the fold expression; if you need an external state, use a proper loop construct (EDIT: although, to be fair, it might be impossible in this case, unless you can reify the fold into runtime, or get a special-purpose loop). But everything else is straightforward, and I could understand the algorithm just fine. The higher-level constructs, which people seem to hate in this thread, make the code shorter and more general, and also very familiar to people using (properly) higher-level languages - for example, the code here is very similar to how you'd write a macro using syntax-rules in Scheme. If it performs the same or better than other, more explicit and verbose, ways of writing the same algorithm, then it's a win overall, and the approach should not be dismissed just because you're not familiar with the features used. Well, I managed to read this snippet just fine while I don't know modern C++ at all (last worked with C++ when Y2K was still a thing), so a professional C++ developer should be able to grok this effortlessly.


Eh, no, i'm not exaggerating. I really have a hard time following the flow of the posted code. I can get a rough idea of what it is doing by ignoring most of the Modern C++-isms, but i still can't tell you with confidence that i know exactly what is going to happen (...and i'm not asking for an explanation, btw, that is besides the point :-P).

I mean, sure, if i take that code and run it through a debugger - perhaps while also crossreferencing the features it uses at cppreference.com - then i'd be able to follow it. However at that point any relevance to readability would have been thrown out of the window long ago.


Well, readability is like this. It's not a property of the code alone, it's an emergent property based on both the code and your knowledge as a reader. It's very, very subjective, which our little disagreement here proves. Basically, the same code using the same feature can be both insurmountable wall of text and an elegant, readable solution - depending on your background, current knowledge, and personal taste (among other factors).

For me, this whole spread/fold feature is easy to grok, because I've worked with many similar features elsewhere. In this case, the feature looks almost identical to how `...` is handled in one of the Scheme macro systems, syntax-case (and syntax-rules, by extension)[1]. As mentioned, the use of spread on a comma operator with a side-effect is tricky and maybe too clever, but, otherwise, I don't see anything out of ordinary.

> perhaps while also crossreferencing the features it uses at cppreference.com

That's the thing - if you already knew the meaning and syntax of these features by heart, you'd find the code using them very readable. You also wouldn't need to step through it in the debugger, because there's really not much happening there in terms of control flow.

In general, "readability" is simply a bad word to use: it's too overloaded and means too many things to too many (kinds of) people. Every language can become readable to you if you put enough effort into it; and no, the amount of effort needed is also dependent more on your prior knowledge than on the language in question. So it's just too subjective to be useful as a metric for anything, unfortunately.

[1] https://docs.racket-lang.org/reference/stx-patterns.html#%28...


It isn't just about knowledge, but also about how much knowledge you'd need to keep in your head just to read something - the less the code requires from you before you even start reading it, the more you can focus on understanding the code itself. And even when you know about the features shown, it still is hard to follow the flow. I mean, i do know about lambdas in C++ and have used them a lot, but i can still find it harder to follow code that uses them extensively with the flow jumping around as, e.g., calls to other functions call back to local lambdas.


> but also about how much knowledge you'd need to keep in your head just to read something

Yeah, but the effect of this is greatly overstated most of the time. As I said, given enough effort, you can learn - and learn to keep in your head - anything. It matters in the short term, while you're learning, but in the long run, once you've learned and internalized all the required information, it stops being relevant.

It's probably harder to learn to read and write kanji instead of the Latin alphabet. For most people, that difference matters for a few years in their childhood, but once they have the characters drilled into them, it no longer matters: they can read and write as well as any Westerner.

The same is true for (natural) languages: some are inherently more complex and hard to learn than others, yet once you become fluent, you stop noticing the complexity. You simply speak, read, and write your thoughts directly, without thinking about grammar and spelling too much.

It's also visible in sciences and engineering. Mathematical notation is especially notorious: not only every symbol can have multiple meanings, but you're expected to also guess which meaning was intended from other symbols and text around. That's on top of introducing hundreds of made-up words for equally made-up concepts, like a "number", or "monoid in the category of endofunctors".

Finally, it manifests in programming and programming languages. In various ways. For example, there are some people who use APL, K, or J - because it's "easier to read and keep in your head a single line of APL than a 500 loc of equivalent C". If given a chance, they will tell you that something like this is of course very readable and straightforward:

    ⍝ John Conway's "Game of Life".
    life←{↑1 ⍵∨.∧3 4=+/,¯1 0 1∘.⊖¯1 0 1∘.⌽⊂⍵}
You just need to learn a few things first, and that may be hard, but once you do - I'm told - reading and writing code this way becomes effortless, and a thousand times more efficient than writing in C.

Basically, if you're going to be switching languages every year, then yes, there's a difference between having to learn the language for a month rather than six before you can ship something. On the other hand, if you're going to stick with a language for a decade or two, then the long learning process becomes irrelevant, as it's dwarfed by the rest of the time where you actually use the language.

> it still is hard to follow the flow

It may be hard if you're not familiar with the common patterns of using higher-order functions. HOF and lambdas are not GOTO: there's a structure there, it's just richer than the basic set of if/for/while statements. You could call such a structure an FP equivalent of OOP design patterns.

> calls to other functions call back to local lambdas

Yeah, but that's also true for every abstraction, starting with a procedure definition. Also, you don't need lambdas to have this problem: it's enough to register a procedure as a signal handler, or to register an event handler in some async framework. When you pass a comparator function to `qsort`, you similarly don't know when and how that function will be called, even though it's a named procedure.
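
For illustration, a minimal sketch of what I mean with qsort (plain standard-library calls, nothing exotic):

    #include <cstdlib>

    // a named comparator: the caller still has no idea when or how qsort will invoke it
    static int compare_ints(const void* a, const void* b) {
        const int lhs = *static_cast<const int*>(a);
        const int rhs = *static_cast<const int*>(b);
        return (lhs > rhs) - (lhs < rhs);
    }

    void sort_values(int* values, std::size_t count) {
        std::qsort(values, count, sizeof(int), &compare_ints);
    }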

To summarize: no matter the language, you can learn it, you can fit all of it in your head, and you can make it readable for you. It requires effort, which is an investment: it might not be worth your while, depending on your circumstances. However, if you encounter code you don't understand or have trouble reading because you didn't invest enough time into learning the language, that's not the code's (or the features') fault. Just be honest with yourself and don't blame others for what is the result of your own conscious decision.

Also of note: yes, the features often do differ in their complexity, and the differences influence the readability (for lack of a better word). However, to see this and to be able to compare, you have to first learn the features in-depth.


Because there is as yet no proper for loop for argument packs or tuple-like objects, fold expressions over the comma operator are unfortunately the next best thing.

Edit: also, the built-in comma operator discards its lhs; it is pretty much always used for its side effects.
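
For anyone unfamiliar, a minimal sketch of what that looks like in practice (C++17):

    #include <iostream>

    // "for each argument in the pack": a fold over the comma operator,
    // where each operand is evaluated purely for its side effect
    template <typename... Args>
    void print_all(const Args&... args) {
        ((std::cout << args << '\n'), ...);
    }

    // usage: print_all(1, "two", 3.0);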


Yeah, I figured it might not be currently possible. I was thinking about something like Scala's HList[1], which provides map/flatMap methods (which enable for-loops) for tuples, among other functionality.

[1] https://github.com/milessabin/shapeless/wiki/Feature-overvie...


boost.fusion, boost.hana provide similar functionality (i.e. arbitrary runtime or compile time transformations over tuple-like objects) but they are relatively large dependencies and it is not worth it just for a tuple for-each.
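
For just the tuple for-each case, a small std::apply + fold sketch (C++17) usually does the job without pulling in those dependencies:

    #include <tuple>
    #include <utility>

    template <typename Tuple, typename F>
    void tuple_for_each(Tuple&& tup, F&& f) {
        // unpack the tuple into a pack, then fold the call over it
        std::apply([&](auto&&... elems) {
            (f(std::forward<decltype(elems)>(elems)), ...);
        }, std::forward<Tuple>(tup));
    }

    // usage: tuple_for_each(std::make_tuple(1, 2.0, "three"), [](const auto& x) { /* ... */ });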


I wonder how widespread the dislike of "auto" is in C++ (I've seen it in a few places), when the equivalent type inference has become fairly standard and preferred in other languages like C#, Rust, Go, etc...


With Java I've seen coding standards saying that you can use `var` only when the type is obvious from the declaration. For example:

    var person = new Person();
    var car = selectCarById(carId); // Car
If the type is not obvious, it should be declared explicitly.


I'd avoid the second example, since you'd need to know the return type of selectCarById to know what will actually be returned - the name doesn't help, as it might return something like a "ref<Car>". For example, in a game engine I worked on a couple of years ago, all resource pointers were passed around wrapped in a special template that handled automatic resource management. Methods would still be called something like "GetMesh", but what you'd get wouldn't be a "Mesh" but a "TResRef<Mesh>". Since in other places in the engine you'd work with "raw" Mesh types, unless you knew what GetMesh returned - and a programmer who normally worked on a completely different subsystem with its own rules likely wouldn't - you might expect "auto mesh = foo->GetMesh()" to be a "Mesh" when it is actually a "TResRef<Mesh>".
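
A rough sketch of that surprise (using the TResRef/GetMesh names from above; the bodies here are just placeholders):

    template <typename T>
    struct TResRef { T* ptr = nullptr; /* automatic resource management would live here */ };

    struct Mesh { /* ... */ };
    struct Model { TResRef<Mesh> GetMesh() { return {}; } };

    void example(Model* foo) {
        auto mesh = foo->GetMesh(); // reads like a Mesh, is actually a TResRef<Mesh>
        (void)mesh;
    }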


This is also common in the C++ community. Clang-tidy has an auto fix for this that can be applied to code bases.


I sometimes see people stating exactly this, but then writing:

`doSomethingToACar(selectCarById(carId));`

Kind of weakens the argument. I'm not sure what the best approach is, but I'm usually OK with auto even when the type is not explicitly known - when reading code, I don't really need to know what exact type a variable has ("it's a car, goddamnit, it says so in the name!"), just how it's used (and then meaningful function names become very important).


Traditionally C++ code is often considered harder to read than code in these other languages, and the "excessive" use of 'auto' does not make understanding code easier. Still, according to my observations the split in opinions on this is about 50/50; mine is that the use of 'auto' improves the "genericity" of code (on par with the use of templates) and its amenability to refactoring with less chance to make a mistake. As to the readability of code, it also improves due to not having to repeat yourself as often - as long as the names of the variables remain self-describing or are clear from the context.


I'd also say that using auto makes your code easier to read, especially when you are the user of generic code. Looping through a vector where you need to keep track of the iterator, for example.

    for(auto iter = vec.begin(); iter!=vec.end(); iter++)
    for(std::vector<project_namespace::class_name>::iterator iter = vec.begin(); iter!=vec.end(); iter++)


These days there is also:

    for (auto i : vec)


Yeah, this is exactly what i dislike - unless the declaration of "vec" is somewhere close by (and assuming it isn't itself "auto" :-P) you have no idea what "i" is.

Especially when that "auto i : vec" should have instead been "auto& i : vec" or "const auto& i : vec" and now you are at best wasting cycles and at worst writing to copies that will soon be discarded, ending up with a bug that can be very hard to spot.
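
A small illustration of that last pitfall:

    #include <string>
    #include <vector>

    void clear_names(std::vector<std::string>& names) {
        for (auto name : names) name.clear();   // clears copies - names is untouched, a silent bug
        for (auto& name : names) name.clear();  // clears the actual elements
    }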


I love the idea of auto range-based iteration but it's full of warts like the one you mention. Recently I found myself wanting an iterator over a combinatorial family.

Generation of a single solution: 3 easy lines (calling on a few hundred lines of goofy math that actually describes the structure, but that's common to all of these approaches)

Writing a for-loop to fill a std::vector of solutions -- about 10 lines of a familiar stack-walking pattern which could confuse a novice.

Making a fake container that defines a begin() and end() along with a nested iterator class: about 20 lines of necessary boilerplate, another 20 lines to replicate the stack-walking, now sprinkled about the boilerplate. The novice is completely bewildered, so we add another 10-20 lines of comments to explain it.
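
For concreteness, this is roughly the shape of that boilerplate, shown on a trivial counting range rather than the combinatorial family:

    // a minimal "fake container" exposing begin()/end() with a nested iterator class
    struct CountingRange {
        int first, last;

        struct iterator {
            int value;
            int operator*() const { return value; }
            iterator& operator++() { ++value; return *this; }
            bool operator!=(const iterator& other) const { return value != other.value; }
        };

        iterator begin() const { return {first}; }
        iterator end() const { return {last}; }
    };

    // usage: for (int i : CountingRange{0, 10}) { ... }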

So I have this strong urge to keep the first two implementations in place, just to provide a gentler ramp. But I won't use the code in the end, so it would only add maintenance overhead, so a lone tear rolls down my cheek as I delete the clear, readable code.

In python, this is often as easy as changing square brackets to parentheses to change a list comprehension into a generator.


I usually write "auto& i : vec" out of reflex, but left the reference out to visually match what the parent had, which used no reference. (That was an iterator, though, so it doesn't have this issue.)


If the type is not obvious one can also write

    for(Class entry : container)
This is still an uncontroversial improvement over having to typedef or use auto for the iterator.


Sure, that is what i'd probably write myself too.


I think pretty much everyone agrees with using auto for iterators and duplicated types (casts, initialisation).

The debate is about all the other cases.


There is definitely disagreement in C# as to the proper usage of var.


Perhaps the people who dislike auto in C++ would also dislike the equivalent feature in other languages but they just happen to not work in them?

I know i do not use any of the languages you mention, for example - and if i did, i'd explicitly write any type names.


> Perhaps the people who dislike auto in C++ would also dislike the equivalent feature in other languages but they just happen to not work in them?

How do you reconcile that world view with the fact that people are shipping billions of lines of code that obviously work, in languages where until recently you couldn't even write a type anywhere (JS, Python)?


I'm not sure what is there to reconcile or even what world view you refer to. Personally i do not use these languages much and when i do it is usually very short code and looks very different to code i'd write in a language with static strong typing.


I don't think so. auto adds more complications in C++ than var or let do in other languages. Consider "const auto& a = x" vs "auto a = x". What exactly is the type of a? It depends.
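
A quick illustration of the "it depends":

    #include <string>

    void example(const std::string& x) {
        auto a = x;         // a is std::string - a copy; const and & are dropped
        auto& b = x;        // b is const std::string& - constness is preserved through the reference
        const auto& c = x;  // c is const std::string&
        (void)a; (void)b; (void)c;
    }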


Auto makes it harder to know the type in question; whether C++'s auto is slightly more cryptic than var or let in other languages doesn't really matter that much if what you dislike in the first place is not knowing the type.

But honestly i can only talk about me here, i can't guess why some imaginary other developer who dislikes a feature does dislike it.


I think it was welcomed with open arms by less experienced devs because it made code easier to compile, and rightly so. C++'s compiler messages are off-putting.

For others, it was worrisome because it made code easier to compile, and rightly so. Just because it compiles doesn't mean it's correct.


An all or nothing approach to auto ends up being silly for reasons of clarity and specificity in types. Auto with compound types makes programs easier to read and write, especially if an IDE is there to expand complex type information. If the type is small, basic, intrinsic, etc. then auto can be a hindrance.


> An all or nothing approach to auto ends up being silly

I already wrote that there are some cases where auto is necessary (usually when used with more recent C++ features).

> especially if an IDE is there to expand complex type information

And i also already wrote that this information is not only often cumbersome to obtain but also such an IDE is often not available - e.g. in a web-based code review tool which also happens to be an environment where you want the code to be most understandable.


What I'm saying is that auto is very useful and not an exotic or niche feature, it just works the best when not using it in places where a type definition is already small or direct.

This usually means types that are from inside the scope of another class. Compound types that are used frequently can actually be aliased.

Also, writing programs that are clear when read as plain text is great, but I don't think that should ever be a higher priority than what it is like to work with inside an IDE. The days of writing programs in Notepad are thankfully over. Languages aren't the only way to make programming easier, and they aren't even where the low-hanging fruit is. People get caught up in languages, but tools can help much more without the herculean effort of redoing decades of work, so I lean on them whenever possible.


Well, i already wrote about my thoughts on auto, so i do not see a reason to repeat them.

However, regarding IDEs, you're still ignoring that code isn't only worked with inside IDEs - I already mentioned the code review tool case twice... have you ever worked on a team where code reviews are done through a web-based tool? Or used a source control program to check the differences between commits that someone else made long ago (they may not even be at the company anymore), where the diff tool obviously has no idea about types and such?

There are many reasons for why you need to work with code outside of an IDE and none of them have to do with using Notepad to write the code.


Nobody thinks "I hate having simple code, I'm going to replace it with a complex one!"

It's always "ugh, this code is a copypasta convoluted mess. I'm going to replace it with a simple solution".


That's why even when using C# I wanted to have some of C's simplicity. Keep it simple. As Da Vinci said, "simplicity is the ultimate sophistication".


Is the code really that complex? Overly verbose, maybe, but it's really not that hard to follow, even for someone who doesn't know much about C++ templates.


Isn't that true for, like, any code? Don't turn your code into an academic exercise. You are not writing it for the compiler - you are writing it for me.

It's not that different from the English language either. Laying out your thoughts in a clear and structured way is the real skill. Start using words and expressions that I have to constantly google for, and I will hate you very quickly.


I agree 100%, here you can see the result of an all-nighter getting my dumb engine to finally render models properly with GPU skinning (the problem was vertices with 5 weights, I'm dumb too!): http://talk.binarytask.com/task?id=5959519327505901449


I agree wholeheartedly. I spent so much time needlessly trying to be smart.


Yet, it is important not to abandon the hope to become smarter. Learning new tricks never gets old.


I think it's best trying to be smart in areas that will stand the test of time, not the latest lasagna architecture or library hype.


Template metaprogramming is not 'smart'. In fact, it's the dumbest and simplest programming language, not counting esolangs.

You just need to know functional programming.

Know your tools, people.


This advice is true across all areas of programming.

I always say it as "write code that you could debug at 2:00 am drunk."

Be simple, do not be clever and be clear.


The tool is only as good as the one who uses it.


Reading the twitter thread he mentioned in the introduction gave me a very bad impression of the author. He started a nonconstructive flamewar, called all constructive criticism misguided, and thereby shit on half a dozen video game development VETERANS. The arguments by Omar in particular are worth reading much more than this article.

Also, all his code examples are bad. No one would build a texture atlas the way he did, for example, and in any practical algorithm one wouldn't be able to use folds like he did - suddenly one would have to use something else entirely, like a for loop or std::accumulate().


Yeah, the weird thing is that this is not an atlas packer! It's a very crude long texture with arbitrary bounds, and most GPUs have maximum texture dimensions, so given enough particles this would have broken. Even a simple skyline algorithm to pack rectangles would have worked far better, and it's hard to see how to fit that into the modern C++ style the author shows.


Bloomberg has a higher hiring bar than most, if not all, game studios. I like how you fall back on seniority, while the video game industry isn't exactly known for top-tier talent.


It takes a special kind of arrogance to assume that you know better than the people who've been successful in a completely unrelated field.

It reminds me of the arrogance displayed in the Twitter thread, which is a shame. I think like someone said, experience is valued only by those who have it.

I've been there too though, I was that arrogant young programmer once thinking all these old-timers weren't a match for my technical skills. It usually passes.


While I do agree with various points in the article, it is kind of funny that the author works in the financial sector and has, apparently, no experience working on big games developed in long stretches by hundreds of people at the same time: for example, this article about "C++ and gamedev" shows examples from his own Quake VR codebase, an insanely cool but clearly one-man project started and carried forward by himself alone...


So? Doesn't mean people have to be toxic about a code snippet he posted because he found it interesting. Personally, I'm glad he posted about fold expressions, because I'm not up to date on C++17 yet and I found it quite interesting. He didn't deserve the replies he got and I feel that the only reason this article is about gamedev at all is because the gamedev community were the ones who jumped on him (and probably only because he mentioned his Quake VR project).


Does the author have no experience developing C++ on large teams or are you asserting that large team game dev is unique compared to other industries?


> or are you asserting that large team game dev is unique compared to other industries?

Well, that would be a good question to ask the author, since he titled the article "modern C++ gamedev" and not simply "modern C++", although to me what is discussed seems general enough, and "gamedev" here just happens to be the type of project the code comes from... To me, it does not look like he makes any significant point about how "modern C++ could serve specifically games development". IMHO he has noticed that people in games react strongly to content about C++, and he benefits from the additional exposure he gets by putting "gamedev" in his C++ articles/tweets/anything (which is something I am not judging, btw).

Personally, I do think that when your code is simulating a whole parallel universe in a not-super-high-level language, you might end up with challenges regarding readability and flexibility of the raw code that other software might not encounter. Not really being part of that industry, I feel I can't know for sure, and therefore I have applied the same reasoning to him, since he is also not part of that industry, it seems.


I have experience with large scale high performance C++, albeit not in games specifically. The recommendations in my experience are very sane, conservative even. I think at this point the onus is on critics to substantially respond to the points given.

I don't think questioning credentials is particularly elucidating here. I don't see any reason the entire thesis of the article is flawed.

As to the throwaway reference to gamedev, the article is in response to shade thrown by gamedevs, attempting to concede specific concerns and propose solutions.


That's the thing though. Gamedevs aren't throwing shade; that's a strawman. Look at the twitter thread mentioned in the introduction and you'll see a bunch of civil gamedev industry veterans offering constructive criticism, only to be shut down with "that's misguided" and no counter-argument. It gets old fast, and it's been like this for many years. I'm not surprised that they stop caring.


If you are making the point that Twitter arguments are counterproductive, I agree. I will concede all objections about tone because my life is too short to get dragged into Twitter tone policing.

I do find the recommendations interesting and would like to keep the conversation about them. I would especially like to avoid gatekeeping and No True Scotsman arguments in a thread about code.


No, I did not make the point that Twitter arguments are counterproductive at all; on the contrary, this one looks very productive, EXCEPT for the author.

So now you're strawmanning me as well, then saying I'm gatekeeping and No True Scotsman-ing. I don't appreciate that at all and am therefore out.


- winterismute questioned the author's qualifications to have a position on writing C++ for game dev. That is gatekeeping and No True Scotsman.

- At least someone on Twitter called some tweets "retarded", etc., so OP thought to continue the conversation by ignoring nonsense and restating concerns in a healthier tone. This post is his attempt to be productive, apparently.

- Any perceived slight against you, you have inferred. No offense intended.

- Perhaps HN discussions are only marginally better than Twitter if people can't avoid making discussions about fold operators personally.


> winterismute questioned qualifications to have a position on writing C++ for game dev. That is gatekeeping and No True Scotsman

Well, not really. Some of the people on twitter (and the author himself) basically pointed out that "one of the main reasons we do not use that coding style is that, despite the advantages, when you need to work on a codebase that requires both performant simulation of the planet and rapid experimentation with mechanics at the same time, by hundreds of de-localized devs, you end up seeing many of its limitations". If you want to handle this argument, you need a clear idea of what the priorities are in such a scenario, which I doubt the author has. Even more interestingly, the article does not argue "this is how I write modern C++, I think it will benefit all industries"; it specifically seems to argue that this style helps gamedev in particular, despite showing basically no domain-specific observations, nor anything that convinces me he knows what the critical problems are in the "triple-A game" scenario... I don't think this can be called gatekeeping if I am not convinced - even just from reading the article - that the author has a good grasp of the fundamental problems involved in the development of big games, can it?


The piece makes the point that the features presented have negligible to no downsides in the context of gamedev (and other contexts as well), and your argument doesn't refute that other than by assertion. It instead tries to disqualify the relevant points based on the credentials of the author.

While making big assumptions about the author's experience in the process. It's plausible he has experience in complex, performance-sensitive, highly collaborative C++ as well. The whole argument hinges on partial information and disbelief, really.


There was a good comparison about those floating around:

https://imgur.com/a/u1N4Fpy

For me the code on the right is far more readable and easier to understand.


The code on the right feels like it was written in mildly bad faith to me.

  - consts are dropped
  - variable definitions are merged onto multiple lines
  - usefully named constants like `nPixels`, `nBytes` are elided
  - the `idx` lambda is inlined
The net effect, for an initial skim, is that the code on the right looks terser and simpler. But in reality many of the "short cuts" hurt the long term quality of the code.


I love C++17 fold expression, but...

Local consts are not particularly useful, especially for things like ints. Compilers know it's const, and a reader doesn't need to worry about it in a local scope.

width, height usually come in pairs, so putting them on the same line is usual.

nPixels and nBytes are not used more than once so they are not useful abstractions, and the pattern of `width * height * bytes_per_pixel` is so common that there is no ambiguity about what it does.

The idx lambda is probably a distracting abstraction, which requires the reader to think of it in a different context than the immediate question of which pixel goes where. Again, the pixel-moving pattern on the right is so common that it's familiar to most people working in similar areas, and in terms of error-proneness neither is better than the other.


When I read that it's const, I can "drop it" in my mind. It's there, I know where the value was set, and I don't have to keep skimming for updates. It isn't for the compiler, it's for me, the next guy who has to read your code.

Same thing with doing two separate things in the loop. Not only can it prevent compiler optimisations from time to time (the C++ compiler is very clever... but many times it gives up) - I've had speed increases from breaking such loops up into several - but I also have to search through and mentally separate the concerns. Very easy to do when I'm writing the code (and I'd probably do the same out of laziness), but quite annoying when reading someone else's code that does it.


If you as a reader need const to make sure a local variable is not changed, that is usually a symptom of the function being too long to take in at a glance. And quite often, as code changes, I do need to make some local variable non-const, and it quickly becomes annoying to fiddle back and forth enforcing const-ness on every local variable.

The argument about optimization is almost certainly premature optimization. Most of the time how you write a loop doesn't matter. You only find out what matters via profiling and refactor accordingly.


> If you as a reader need const to make sure a local variable is not changed this is usually a symptom of this function being too long to see at a glance

If the compiler can enforce an invariant (constness), why cede that functionality in the hope that you can ensure it yourself?

By the same line of argumentation you should do away with the static type system.


Most people hate const at first (myself included), but once you get used to it, it really does reduce mental load. Not having to glance around is precisely the point; it's not a big thing, but it does help.


I stopped using it for locals after it increased my mental load. The maintenance cost of const on all local variables is huge during a refactor - it feels like you're fighting it just to get things done. And in most cases where the type of a variable matters, I do have to glance around anyway - like when I need to remove a variable, change its type, or refactor its dependent variables - so const doesn't really help for those cases.


If you have to assign to a const variable during a small refactor, then maybe it shouldn't have been const in the first place? I'm struggling to imagine examples where this is a real problem


So you prioritise the writing of the code instead of the reading.

In that case, it makes sense.


> Compilers know it's const

I can still assign to it. Sure, you can argue that if the code is complex enough that you might accidentally do that without it being obvious, then the code should be simplified or split up, but that's beside the point. If it's const, the compiler will complain; if it's not, it will silently let me. Nobody can write ideally-factored code all of the time, and following good, consistent practices helps. Also, I hear arguments like that all the time, but so much of actual real-world shipped code breaks these "rules", so I'd rather be pragmatic and choose a style that improves otherwise imperfect code.

It's also as much a hint to future me that the variable is not intended to be modified.


Regardless of the individual merits of the changes (and I do disagree with you), it's not reasonable to say "folds are overcomplicated, here's a version without folds which is simpler" whilst also making a range of unrelated changes to reduce code size.


Folds are not complicated. They are just not familiar to the intended readers.

The changed version is idiomatic in image processing or similar areas that deal with pixels. Being idiomatic makes it familiar to read, and easy to change.


But the wider context here -- and the reason the changed version was presumably written -- is the accusation that folds are overcomplicated.

It's under that lens that I am criticizing the changed code.


Not just folds. Also the lambda and the inner loop which is memcpy instead of range3. It’s about every part of the function, including but not limited to the folds.


I agree with the author on many things - parameter packs, however... I've looked at them several times. Even after getting a grasp of their syntax, I've almost never been in a situation where they could be used. As he correctly observes, it's a compile-time thing.

> In my particular scenario, all the texture file paths are hardcoded

That feels like a very unique case. Usually data is given in a vector or something and then you're out of luck anyways. Plus the syntax is very unintuitive (think about your colleagues), debuggability is zero.

Also the argument about constness feels a bit contrived. Sure with this you can write const and feel good, but the other version is hardly a bad nonconst. You can wrap things in functions or whatnot. The function seems to be a bit long anyways.


Parameter packs are quintessential compile time lists: you can pass them around, transform them, destructure them…they are really quite useful when you need to make sure certain things are done at compile time. For example, I recently used them heavily to generate code at compile time for a virtual machine I designed, all from a single instruction architecture that I encoded in the type system.


> Even if you get a grasp on their syntax, I've almost never been in a situation where they could be used.

Parameter packs can make some ugly code substantially simpler, IMHO. For example, I contributed some changes to the Godot C++ bindings a couple of years ago that made a number of super common functions variadic instead of having to create and pass in collections (eg debug printing, specifying argument types when registering signals, stuff like that). While not a strictly necessary change, it makes the resulting code easier to read. Parameter packs allow this.
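
To give a flavour of the idea (a hedged sketch, not the actual Godot bindings code): a variadic debug print that folds over <<, so callers don't have to build a collection first:

    #include <iostream>

    template <typename... Args>
    void debug_print(const Args&... args) {
        (std::cout << ... << args) << '\n'; // C++17 fold over <<
    }

    // usage: debug_print("position: ", x, ", ", y);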


Variadic templates are essential for simple, argument forwarding template functions. std::vector::emplace_back is a very simple example, this kind of use comes up now and then during work.

[1] https://en.cppreference.com/w/cpp/container/vector/emplace_b...

edit: This also shows that you don't have to be able to write variadic templates to reap the benefits of it. You can enjoy the benefits while using a library that uses it.
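
If it helps, the forwarding shape in minimal form (a sketch around std::vector rather than a hand-rolled container):

    #include <utility>
    #include <vector>

    template <typename T, typename... Args>
    T& emplace_into(std::vector<T>& vec, Args&&... args) {
        // perfectly forward the whole pack to T's constructor inside the vector
        return vec.emplace_back(std::forward<Args>(args)...);
    }

    // usage: emplace_into(people, "Ada", 1815); // assuming a matching Person constructor exists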


The key to understanding techniques based on templates is to realize that C++ templates are just another programming language that (a) is functional, (b) is interpreted at compile time, and (c) where values are C++ types.
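
A tiny example of that framing - a compile-time "function" defined by pattern matching, whose values are types:

    #include <cstddef>

    template <typename... Ts> struct TypeList {};

    template <typename List> struct Length;       // the "function" is declared...
    template <typename... Ts>                     // ...and "defined" by matching on the list's shape
    struct Length<TypeList<Ts...>> {
        static constexpr std::size_t value = sizeof...(Ts);
    };

    static_assert(Length<TypeList<int, float, char>>::value == 3);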


Sure I've written variadic templates. But for this situation it feels quite forced.


I find the twitter thread especially sad, since I see a lot of people genuinely trying to explain that compile times and familiar syntax are more important to them, but the author seems adamant that they just don't know what modern tech is and that they should learn the "right" approach.


I feel that most of the problems with debuggability are (still) tooling problems. No, I do not want to debug the standard library most of the time; please let me optimize that while I keep my own part unoptimized.

Hopefully modules will allow mixed optimization levels for template headers.

Metaprogramming also lacks a good debugging story, but I would be happy to be proven wrong.

In my opinion these are not language problems, but tooling problems.


There are designs and usage choices that make the tooling problem orders of magnitude harder. You still have to use the tools you have, today, while the tools of tomorrow arrive. The debate is about what and why (some of) those choices are taken or rejected by different people.


His personal blog post has eight obtrusive ads plus a Donate button (to an engineer in finance in London.)

I get the sense the author's deliberately stirring controversy.


"This is my personal website. It's statically generated by a C++14 program".... classic


His Twitter profile pic isn't helping either.


Maybe he changed it, but.. what is wrong with his twitter profile pic? It looks pretty ordinary to me. Am I missing something?


Sorry, meant the banner pic on his profile page, at top.


What's wrong with his profile pic?


I’m often disappointed by how disrespectful people can be in programming discussions. It’s one thing to discuss tradeoffs but ridiculing peoples’ choices or speaking in absolutes is not helpful. There’s no One True Way to program.


I always wondered whether it's programming in particular that attracts a lot of people that think in only black and white, or whether it's the same in other industries as well.


I know this kind of turns things up to 11, but about 5 years ago there was a book published called "Engineers of Jihad." A bunch of news articles were written when it was published. I have zero idea how reputable it is, or whether it's since been debunked (I never read the book itself or heard any follow-up). My recollection is that one assertion offered was that many engineers are predisposed because engineering has logical, straightforward order and hierarchy, while most of life is a lot messier.

Separately, I think programming itself is solitary and predisposed to people who don't focus on building social skills (myself included). I think online (pseudonymous) communication can introduce all sorts of problems.


The author of this article has made the Dive into C++11/14 series, which is about using modern C++ for Game development.

[1] https://www.youtube.com/playlist?list=PLTEcWGdSiQenl4YRPvSqW...


Really the "problem" here is that C++'s functional constructs will never allocate memory, giving them somewhat strange signatures that don't really lend themselves well to typical functional concepts. At compile time there is no such goal, and as such some of these constructs can only be found there.


According to https://en.cppreference.com/w/cpp/algorithm/stable_sort:

> This function attempts to allocate a temporary buffer equal in size to the sequence to be sorted. If the allocation fails, the less efficient algorithm is chosen.

Or is this not what you consider a "functional construct"?


It's more of an issue with things like std::transform or std::copy_if. Ergonomically, it's nice if these kinds of functions either allocate and return a new container, or return an iterator that yields the elements of the result.

But the C++ versions of these take the result location as an argument. It makes chaining them together a hassle because you have to create all the intermediate containers explicitly.

I think there are good reasons for the STL to work this way, but it can make programming in a functional style pretty inconvenient compared to a lot of other languages.
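
Concretely, something like this - every step needs its own explicit destination:

    #include <algorithm>
    #include <iterator>
    #include <vector>

    std::vector<int> doubled_evens(const std::vector<int>& input) {
        std::vector<int> evens;
        std::copy_if(input.begin(), input.end(), std::back_inserter(evens),
                     [](int x) { return x % 2 == 0; });

        std::vector<int> result;
        std::transform(evens.begin(), evens.end(), std::back_inserter(result),
                       [](int x) { return x * 2; });
        return result; // one intermediate container per step in the "chain"
    }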


Ranges will help with some of the composability issues of standard library algorithms.

[1] https://en.cppreference.com/w/cpp/ranges


I know some basic C++ and STL. Is there anything like streams in C++? I mean, something as easy as vec.stream().map(lambda).fold(T::operator+), so it wouldn't require any more allocations than those done by copy constructors.


The upcoming ranges library will provide exactly this! It uses | for composing operations.

See e.g. here: https://github.com/ericniebler/range-v3/tree/master/example
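
Roughly what that ends up looking like (a sketch using the C++20 std::ranges subset; range-v3 spells it similarly):

    #include <numeric>
    #include <ranges>
    #include <vector>

    int sum_of_squares(const std::vector<int>& vec) {
        auto squared = vec | std::views::transform([](int x) { return x * x; });
        return std::accumulate(squared.begin(), squared.end(), 0); // no intermediate vector needed
    }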


That site is awful on mobile and filled with ads. Makes it impossible to read.


He got upset when people with 15+ years of experience - working at DICE, or directors of tech at Activision - pointed out that modern C++ is overcomplicated and that they've been through the phase of liking abstractions and moved on. Funny how he blanked out the names of those guys and some other, milder replies.

Then he wrote a blog post on it.


I would personally use a for loop without size_t. It is short and simple and works as intended; anyone can understand it.


Do you mean with decltype(std::declval<const Image&>().width) as the article suggests? Or with an ad-hoc typedef, e.g. Image::size_type? Or with a fixed type that should work correctly? This use of auto is particularly valuable.


Honestly, I would just use auto. I actually favor heavy use of auto; if you've got a modern IDE, auto is a really great feature.


I stopped reading at "I would like this to happen at runtime".

I got burnt so many times by runtime craziness already that the only thing I want happening at runtime is trivial code running from top to bottom.

But if you enjoy runtime hacks, more power to you! You must be a much better developer than me.


The mentioned cppcon presentation by Dan Saks is awesome. Highly recommend watching even if you are not interested in C or C++ : https://youtu.be/D7Sd8A6_fYU


I don’t think this really covers the whole topic, but yeah the Twitter thread is toxic in the exact way most threads are tbh...

I've been using C++ for about 15 years now and, tbh, I dislike just about everything beyond 11/14. The cognitive load of reading code has just skyrocketed. The simple basis of the issue is the meaning of existing syntax elements being "overloaded" - &, &&, [] in my mind. It is not complicated or profound reasoning - it's simply that now you need to decipher more context to understand what is being expressed. I've been materially disappointed to spend 5-10 minutes reading a small bit of unfamiliar code only to learn that what it ultimately expressed was truly trivial. It just makes you feel like C++ has become like the toxic parts of academia/math, where everything is expressed in the most complex way possible to establish superiority over others.

The comments about debuggability are true. STL sources/template stacks are absolutely terrible to work with. Luckily with the STL, 99%+ of the time the bug is in your usage, so it doesn't matter much. To be fair, that is a trade-off - I feel like I'm on vacation debugging C# or Python, but that is (almost) an intrinsic benefit of interpreted languages, with the well-known associated costs. But templates... the cognitive load of having to understand (simply read) and debug code from an abstract definition introduces significant pain in terms of debuggability and general usability.

“C++ is not the STL” - true but c’mon - whatever is built-in mostly defines the stuff that you can be sure is portable, and rely on as a standard practice/resource in large projects. Very few sane people want 5 different implementations of a vector or string from god knows where with god knows what bugs. In the industry for c++, there is serious need for the _option_ to “reinvent the wheel” of standard libs, but it shouldn’t be that this is necessary to achieve baseline performance or usability in common use cases.

I am of the same opinion as at least one other below - simplicity of implementation and readability of language are king in the “real” world. What’s better than the beauty of a meta-programmed, absolute masterpiece of modern C++ and zero-cost abstractions? A program that I can understand in 10 minutes two years after it was written, or better yet that can be understood and debugged by 90% of skill levels instead of 10%.

This conversation could, and does, go on without end. My net-net conclusion so far is that modern C++ has done more harm to itself than good because it is trying to be too much. When the creator of the language can’t even keep up with it enough to call himself an expert, it’s a pretty obvious red flag that things have gone off the rails.


Kind of agree, and the root cause may be backwards compatibility, so another construct gets invented which is slightly different.

The other day I explored std::promise and std::future, only to realise that all they are under the hood is a semaphore (which starts with a count of zero) and a pointer. promise.set_value() updates the pointed-to memory and releases the semaphore, while future.get() waits for the semaphore and reads the memory. My workaround was just as many lines of code, with abstractions that are no more complex. So what's the point of promises/futures? Async adds spawning a thread to the mix, yet another unneeded abstraction. Coroutines add more. Just be done with all this, add a proper actor model, and call it a day.
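
Something along those lines, for a single-use value (a sketch assuming C++20 <semaphore>; std::promise of course also handles exceptions and other corner cases):

    #include <semaphore>
    #include <utility>

    template <typename T>
    class OneShot {
        std::binary_semaphore ready{0}; // starts at zero: get() blocks until set() runs
        T value;                        // note: requires T to be default-constructible; fine for a sketch

    public:
        void set(T v) {                 // producer side
            value = std::move(v);
            ready.release();
        }

        T get() {                       // consumer side: waits, then reads
            ready.acquire();
            return std::move(value);
        }
    };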


I don't understand why he shows the loop-based version using 2 for loops instead of just one. I usually work in higher-level programming languages, but I can't believe that cache hits or something else make it faster than finding the sum and the maximum both while going through all images only once.

And having just one loop makes it simpler too.


Choosing C++ over C means his programming abilities aren't very good.


The fact that you seem to think C is blanket superior to C++ means that your programming abilities aren't very good. There are many reasons to choose C++ over C, especially if those two are your only options.


Is this comment a joke or intended to be serious?


The author mentions that you shouldn't use std::, but offers no alternative other than "write your own version that is shorter and better".



