I once had a prospective series of PRs that would have increased the performance of the Godot renderer, fixing considerable CPU bottlenecks in scene rendering. Reduz didn't like the changes, and they went into the "will be fixed for 4.0" deflection. To this day most of those performance fixes aren't there, and 4.0 is slower than the prototype I had. My interaction was very much not positive.
Even then, I believe that Godot's leadership is doing a great job; it's almost comparable to the amazing process Blender has. This post looks to me like a ridiculous statement. Reduz and the other Godot leads get constant pestering from hundreds of people daily, and Godot has thousands of bug-report issues on its GitHub repo.
The Godot project and W4 spend their money wisely. I know some freelance developers who were hired to implement specific features. Someone like Reduz simply doesn't need to scam anyone: if he wanted money, he could likely earn more as a low-level C++ engineer at another company than what he pays himself from the W4 funding. That funding is being used to turn Godot into a "real" game engine with console support, which is needed for the engine to be taken seriously by commercial projects and not just small indies or restricted projects. Setting up a physical office with a place to keep all those console development kits costs money, and hiring developers experienced on those platforms is not cheap.
The way I see it, the Godot project often develops features in a "marketing" fashion, which clashes quite directly with people using it for serious projects. Unity has a very similar issue. We get development effort spent on fancy dynamic global illumination while classic static lightmaps (needed for lower-end hardware) are rejected outright, and even basic must-have features like level-of-detail and occlusion culling, which every engine has, are missing. I think this is what makes the poster cyberreality in the linked forum so angry. It's one of the engine's big faults, but as a development strategy it's not that bad. Those fancy big features attract a lot of users, who can then provide feedback and bug reports, submit PRs with their fixes, and of course bring funding and hype. The main team makes sure the engine's architecture is good and ships some fancy big features, while the bugfixing and more niche/professional features are left to the community to PR. GitHub is filled with technically "better" engines with a total of one user.
Vkguide deals with the basics, starting from scratch. The book above can be a great read after one completes vkguide, as it shows how to implement some advanced features. I helped review the book, and I think it should have been clearer that it's aimed at people who already know graphics at a high level and want a refresher or an overview of some new state-of-the-art techniques.
I wrote extensively about that as part of my Vulkan guide: https://vkguide.dev/docs/gpudriven
The TL;DR of it is that the CPU uploads scene information to the GPU, and the GPU then performs culling and batching by itself in a series of compute shaders. In most games the scene is 99% static, so you can upload it at the beginning and never touch it again. This decreases CPU-GPU traffic by orders of magnitude and, used right, gives you 10x or more performance improvements. Unreal 5's Nanite tech is based on this concept.
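As a rough CPU-side sketch of the idea (struct layouts and names here are my own illustration, not from the guide): the scene lives in a GPU buffer as an array of per-object records, and a culling compute shader walks that array and writes compacted indirect draw commands. The shader's per-thread work is shown as a plain C++ loop for clarity:

```cpp
#include <vector>
#include <cstdint>

// Hypothetical per-object record, uploaded to a GPU buffer once at load.
struct ObjectData {
    float center[3];
    float radius;       // bounding sphere
    uint32_t meshIndex; // which mesh/draw parameters to use
};

// Mirrors VkDrawIndexedIndirectCommand: in the real pipeline the GPU
// writes these itself, and the CPU never sees them.
struct DrawCommand {
    uint32_t indexCount, instanceCount, firstIndex;
    int32_t  vertexOffset;
    uint32_t firstInstance;
};

// What each compute-shader thread does: test the object's bounding
// sphere against the frustum (one plane here for brevity) and append
// a draw command for survivors.
std::vector<DrawCommand> cullAndBatch(const std::vector<ObjectData>& scene,
                                      const float plane[4]) {
    std::vector<DrawCommand> draws;
    for (const ObjectData& obj : scene) {
        float dist = plane[0] * obj.center[0] + plane[1] * obj.center[1]
                   + plane[2] * obj.center[2] + plane[3];
        if (dist > -obj.radius) { // sphere at least touches the half-space
            // 36 is a placeholder index count for the illustration.
            draws.push_back({36, 1, 0, 0, obj.meshIndex});
        }
    }
    return draws;
}
```

Because the object data never changes, the only per-frame CPU-to-GPU traffic is the camera/frustum data; everything else stays resident.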
An important detail about this movie: it's rendered using the EEVEE renderer, which is essentially a "game-style" realtime renderer. It's not using path tracing on a massive machine to calculate the final image.
Nanite on consoles already uses mesh shaders. On PC it emulates them through some trickery, but having it use real mesh shaders would be only a small change in the code.
I believe mesh shaders are the future of the geometry pipeline for graphics, especially given that both of the new consoles have an implementation of them. Freeing the mesh pipeline from the legacy of OpenGL and similar APIs is going to allow a lot of new techniques and implementations.
Mesh shaders are, in a way, a new version of what the PlayStation 2 did back in the day, where a vector processor would output triangle lists to be rasterized. Its flexible mesh pipeline allowed very interesting techniques, like the multi-layer meshes used for fur in Shadow of the Colossus, or the various types of smooth LOD interpolation seen in other games. With mesh shaders, we can now emit geometry from a programming model that is more or less a normal compute shader: each threadgroup outputs a small mesh directly into the GPU's rasterizer units, so the flexibility is quite great.
Game developers are running into the limits of the current vertex-shader model, as you can see from trickery like doing mesh processing in compute shaders and writing the final indices into a big linked list through parallel compaction and similar techniques. With mesh shaders, all of that becomes both faster and simpler to write. Techniques like Nanite become a fair bit faster using mesh shaders than emulating them by writing indices to memory, as it avoids that whole memory and synchronization round trip. Completely removing the widely disliked geometry shaders is also a great feature.
It's great to finally have a cross-platform implementation of mesh shaders, but I think the implementation leaves a bit to be desired, given that it seems to be a direct Vulkan translation of the DX12 version. I'm not sure how it's going to scale to hardware other than the two big desktop vendors'.
The biggest issue with mesh shaders here is that the cache sizes and recommended counts vary by GPU. To take the most advantage of mesh shaders and get great speed out of them, there are fast-path limits for triangle counts, vertex counts, and shader memory usage. Those vary by vendor, so I'm not sure how developers are going to handle that correctly other than shipping multiple versions of each shader per vendor.
I find them cool because, in a way, it's like going back to the Amiga days, but with better hardware support for this kind of thing, or to Cell-style programming, as you point out.
In a way this is already what companies like Otoy are doing by using CUDA for 3D rendering.
After Nanite, it seems to me that mesh shaders are just a transitional step before everything inevitably moves to rasterization done primarily in compute. Sticking to fixed-function hardware doesn't make sense unless that hardware is faster.
(As I understand it, Nanite does use mesh shaders as a fallback for larger triangles, but those naturally require less oomph in the hardware pass.)
I believe that thinking of design patterns as something you "use" is one of the biggest mistakes marketed to new programmers. A lot of them are niche, or only really exist to cover for a lack of features in languages: the Singleton pattern, for example, has no real use in C++ because you can just use a global pointer instead. It makes people think more about the code than about the problem they are trying to solve in the first place, and often ends with a lot more code than is needed, for no good reason. I've seen a lot of young programmers misled considerably by articles like this, myself included.
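To make the Singleton point concrete, here is a minimal C++ sketch (the `AudioSystem` name is just an illustration): the pattern's boilerplate buys you nothing over a plain global set up explicitly at startup.

```cpp
#include <cassert>

// The Singleton pattern: private constructor, static accessor.
class AudioSystem {
public:
    static AudioSystem& get() {
        static AudioSystem instance; // constructed on first use
        return instance;
    }
    int volume = 100;
private:
    AudioSystem() = default;
};

// The alternative from the comment above: a plain global pointer,
// initialized explicitly at startup. Same single instance, less
// ceremony, and initialization order is under your control.
struct Audio { int volume = 100; };
Audio* gAudio = nullptr;

void initAudio(Audio& a) { gAudio = &a; }
```

With the global-pointer version you can also swap the instance for tests or shutdown cleanly, which the classic Singleton makes awkward.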
I see design-patterns books and articles as more of a bestiary than a toolkit. Their purpose is not to be a repository of prebuilt solutions (patterns don't solve anything by themselves); their main purpose is to give names to a good number of very common code patterns so you can communicate about them with other programmers.
I can't agree more. Design-pattern resources often omit the fact that "design patterns are missing features of programming languages." It's not common knowledge yet (at least outside HN), and I hope people talk about this more.
Take the visitor pattern as an example. Say we have M types, each with N operations. OO languages like Java are good at extending M: you just add another class. But they are bad at extending N, because the obvious way is to add the operation as a method on every type. The visitor pattern inverts the problem, making it easy to extend N while hard to extend M. In fact, the visitor pattern more or less reinvents the plain old "function": just write a function that handles each type, and you achieve the same thing.
I find it's always more readable when I just use a function. Java has no exhaustive type checking, so the visitor pattern makes sense there, but I often see people still use the pattern in, say, TypeScript without a second thought.
What if we want to extend M and N at the same time? There's a sophisticated design pattern for that called object algebras [0]. At this point it's pretty clear that features like type classes in Haskell or traits in Rust solve the problem in straightforward ways, without wrestling with design patterns.
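The "visitor is just a function" point can be seen in a C++ sketch using `std::variant` (the shape types are hypothetical): the new operation is one free function, with no Accept/Visit plumbing in every class, and `std::visit` still checks exhaustiveness at compile time.

```cpp
#include <variant>

// The "M types".
struct Circle { double r; };
struct Rect   { double w, h; };
using Shape = std::variant<Circle, Rect>;

// Small helper to build an overload set from lambdas (C++17).
template <class... Fs> struct Overload : Fs... { using Fs::operator()...; };
template <class... Fs> Overload(Fs...) -> Overload<Fs...>;

// Adding one of the "N operations" is a single free function; if a new
// alternative is added to Shape, this fails to compile until handled.
double area(const Shape& s) {
    return std::visit(Overload{
        [](const Circle& c) { return 3.141592653589793 * c.r * c.r; },
        [](const Rect& r)   { return r.w * r.h; },
    }, s);
}
```

This is the trade the thread describes: adding operations is now trivial, while adding a type to `Shape` forces you to revisit every such function, which the compiler conveniently points out.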
> A lot of them are niche use or only really exist to cover lack of features in languages (...)
This take reflects a failure to understand the concept of design patterns, their usefulness, and their role in software engineering.
Design patterns are not features missing from a toolkit or language; this is perhaps the biggest misconception regarding design patterns. It's immaterial whether a language supports a given programming construct in its core language or standard library. The whole point of design patterns is that they represent higher-level programming constructs that pop up often in implementations and are frequently used to address common problems.
Think about it for a second: do futures, promises, callbacks, or shared pointers cease to be design patterns, or lose any usefulness, if these features are supported by the core language? Is the observer pattern no longer a design pattern in C# or Java once it has been implemented in them as a first-class citizen?
Or do we understand what these techniques involve just by mentioning these keywords?
Clearly, the main problem plaguing design patterns is people who feel entitled to criticize things they don't know and have clearly failed to grasp even the basics.
>Design patterns are not features missing from a toolkit or language. This is perhaps the biggest miscomprehension regarding design patterns.
It's a relatively popular opinion that some design patterns are used because some languages lack features that could achieve the same thing in a "better" way.
So what's a design pattern, in your opinion?
For me, a design pattern is just an approach to some specific problem, very often associated with some $implementation_example in order to communicate more effectively.
A design pattern's purpose is to improve communication by naming solutions to "common" problems.
> It's relatively popular opinion that some design patterns are used because some languages lack some features that can achieve similar stuff in "better" way.
You're confusing a couple of unrelated issues there. Just because you need to implement a design pattern yourself when a language or framework doesn't provide its own implementation, it quite obviously does not follow that design patterns only exist if you implement them yourself. That's a terribly silly misconception, and it demonstrates a misunderstanding of the very basics of what a design pattern is.
Just to be absolutely clear, even Wikipedia defines a design pattern as "a general, reusable solution to a commonly occurring problem within a given context in software design", and as a "description or template for how to solve a problem that can be used in many different situations." What exactly is there in the definition that ties this to an implementation?
>What exactly is there in the definition that ties this to an implementation?
Nothing in the definition itself, but I believe there are very popular implementations (mostly one) of a given design pattern that people are aware of and associate with that pattern,
and I guess that may kind of improve communication? idk.
WoW used to run its update loop only twice per second, which leaves a lot of time for logic. The WoW game logic is very simple, as players can't really affect the world beyond the enemies they are fighting at that moment. Also, players are spread around the world, which limits the N^2 problem of players sending data to other players. Events like the opening of the Gates of Ahn'Qiraj broke servers by clumping too many players into one location.
The game also used to do a lot of logic on the client; collision checks, for example, only ran client-side, which allowed easy fly hacks. The server really only broadcasts player positions to other players and manages the combat logic, which is basically turn-based.
On private servers whose code you can actually read (MaNGOS), people manage to run 4000+ players on a single gaming-spec machine without much trouble. It's more of a design problem compared to something like Minecraft, where the simulation is much more detailed. In some research projects on WoW private servers, people have reached 20,000 simulated players on a single machine.
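A hedged sketch of the kind of design that keeps the broadcast cost down (my own illustration, not actual MaNGOS code): bucket players into grid cells and relay a position update only to players in nearby cells, so cost scales with local density rather than N^2 across the whole shard.

```cpp
#include <unordered_map>
#include <vector>
#include <cstdint>
#include <cmath>

using PlayerId = uint32_t;

// Bucket players into square grid cells; a position update is relayed
// only to players in the same or adjacent cells, not to all N players.
class InterestGrid {
public:
    explicit InterestGrid(float cellSize) : cellSize_(cellSize) {}

    void place(PlayerId id, float x, float y) {
        cells_[key(x, y)].push_back(id);
    }

    // Everyone in the 3x3 cell neighborhood around (x, y): the set of
    // players who should receive this player's position update.
    std::vector<PlayerId> recipients(float x, float y) const {
        std::vector<PlayerId> out;
        int cx = cell(x), cy = cell(y);
        for (int dx = -1; dx <= 1; ++dx)
            for (int dy = -1; dy <= 1; ++dy) {
                auto it = cells_.find(pack(cx + dx, cy + dy));
                if (it != cells_.end())
                    out.insert(out.end(), it->second.begin(), it->second.end());
            }
        return out;
    }

private:
    int cell(float v) const { return (int)std::floor(v / cellSize_); }
    int64_t pack(int cx, int cy) const {
        return ((int64_t)cx << 32) | (uint32_t)cy;
    }
    int64_t key(float x, float y) const { return pack(cell(x), cell(y)); }

    float cellSize_;
    std::unordered_map<int64_t, std::vector<PlayerId>> cells_;
};
```

This is why clumping events break things: when everyone stands in the same few cells, the recipient set degenerates back toward all N players.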
Parts of this pipeline have been used since 2015. You can see most of the techniques explained here: https://www.advances.realtimerendering.com/s2015/aaltonenhaa... What Nanite does is continue this line of development by combining it with visibility buffers (which decouple rasterization from materials) and adding an incredibly fancy LOD system and custom rasterization for small triangles.