
As the lead developer of Bevy Engine (which uses wgpu), I'm excited about this release for a number of reasons:

* A smaller (and less abstracted) codebase means that we can more easily extend wgpu with the features we need now and in the future. (ex: XR, ray tracing, exposing raw backend apis). The barrier to entry is so much lower.

* It shows that the wgpu team is receptive to our feedback. There was a point during our "new renderer experiments" where we were considering other "flatter" gpu abstractions for our new renderer. They immediately took this into account and kicked off this re-architecture. There were other people with similar feedback so I can't take full credit here, but the timing was perfect.

* A pure Rust stack means that our builds are even simpler. Combine that with Naga for shader reflection and compilation (see the sketch after this list) and we can remove a lot of the "build quirks" in our pipeline that come from non-Rust dependencies. Windows especially suffers from this kind of build weirdness, and I'm excited to no longer need to deal with it.

* The "risk" of treating wgpu as our "main gpu abstraction layer" has gone way down thanks to the last few points. As a result, we have decided to completely remove our old "abstract render layer" in favor of wgpu. This means that wgpu is no longer a "bevy_render backend". It is now bevy_render's core gpu abstraction. This makes our code smaller, simpler, and more compatible with the wider wgpu ecosystem.

* There is a work-in-progress WebGL2 backend for the new wgpu. This should ultimately remove the need for the third-party bevy_webgl2 backend (which has served us well, but has its own quirks and complexities).
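
To give a flavor of the pure Rust shader path, here is a rough sketch of parsing and validating WGSL with Naga. Treat it as illustrative only: it reflects roughly the 2021-era naga API and WGSL attribute syntax, and both shift between releases.

    // Parse WGSL source into Naga's IR (old [[attribute]] WGSL syntax).
    let module = naga::front::wgsl::parse_str(
        "[[stage(fragment)]] fn main() {}",
    )
    .expect("WGSL parse failed");

    // Validation doubles as reflection: the returned ModuleInfo describes
    // the module's entry points, bindings, and types.
    let _info = naga::valid::Validator::new(
        naga::valid::ValidationFlags::all(),
        naga::valid::Capabilities::empty(),
    )
    .validate(&module)
    .expect("validation failed");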



Wow, this feels like a win-win-win for wgpu, Bevy, and the Rust ecosystem in general.


> This means that wgpu is no longer a "bevy_render backend". It is now bevy_render's core gpu abstraction.

Experience has shown me that you will eventually regret this decision


Snarky answer first: my experience tells me that I won't regret this decision :)

Real answer: You can use this argument to justify abstracting out literally anything. Clearly we shouldn't abstract everything; the details of each specific situation dictate which abstractions are best (and who should own them). Based on the context I have, Bevy's specific situation will almost certainly benefit from this decision.

Bevy needs an "abstract gpu layer". We are currently maintaining our own (more limited) one that lives on top of wgpu. Both my own experience and user reports indicate that this additional layer provides a worse experience: it is harder to maintain, harder to understand (by nature of being "another layer"), it lacks features, and it adds overhead. Wgpu _is_ a generic abstraction layer fit for an engine. If I were building an abstraction layer from scratch (with direct api backends), it would look a lot like wgpu.

I understand the internals. I have a good relationship with the wgpu project lead. They respond to our needs. The second anything changes to make this situation suboptimal (they make massive api changes we don't like, they don't accept changes we need, they add dependencies we don't like, etc.), I will happily fork the code and maintain it. I know the internals well enough to know that I couldn't do better, and that I could maintain it if push comes to shove.

The old Bevy abstraction layer adds nothing. In both situations we are free to extend our gpu abstraction with new features / backends. The only difference is the overall complexity of the system, which will be massively reduced by removing the intermediate layer.


A situation will inevitably arise where you'll be unable to support a new platform in a reasonable amount of time, because implementing support for that platform yourself in wgpu would be a significant undertaking and/or distraction. Community progress on supporting that platform in wgpu will lag, either because you're the only stakeholder interested in it, or because supporting it in a non-hacky way would require significant internal restructuring of wgpu, or would slow down or add complexity to the other main platforms. The specific situation is hypothetical, but something like it will likely happen eventually.

Maintaining a fork of wgpu simply for your own project will also likely be more effort than necessary, since wgpu is a more general API than Bevy requires.

The core issue is that Bevy uses a subset of wgpu, so your own focused and limited abstraction layer will almost always be easier to implement and maintain. Some call this "the rule of least power": https://www.w3.org/2001/tag/doc/leastPower.html

The meta-core issue is that you and the wgpu team aren’t 100% aligned on your goals. Maybe you’re aligned on 80% but the 20% will eventually come back to haunt you.


Adding new "backends" to an abstraction always includes the risk of not being compatible with the current abstraction and requiring re-architectures. This would be true with our own abstractions as well. I agree that two separate projects often have different goals, but wgpu's goals are an (almost) complete subset of our goals: cross platform modern gpu layer that cleanly abstracts Vulkan/Metal/DX12 and best-effort abstracts older apis, rust-friendly api surface with RAII, limited "lowest common denominator" defaults that run everywhere with opt-in support for advanced features and lifting lowest-common-denominator limits. The biggest divergence is their increased need for safety features to make wgpu a suitable host for WebGPU apis in browsers, but this is something that still benefits us, because it might ultimately allow Bevy apps to "host" less trusted shader code.

We've been building out the new renderer and we have already used a huge percentage of wgpu's api surface. It will be close to 100% by the time we launch. This proves to me that we do need almost all of the features they provide. Bevy's renderer is modular and we need to expose a generic (and safe) gpu api to empower users to build new render features. Wgpu's entire purpose is to be that API. I know enough of the details here (because I built my own api, thoroughly reviewed the wgpu code, and did the same for alternatives in the ecosystem) to feel comfortable betting on it. I'm even more comfortable having reviewed bevy-user-provided wgpu prs that add major new features to wgpu (XR). If you have specific concerns about specific features, I'm happy to discuss this further.

You can link to as many "rules" as you want, but solving a problem in the real world requires a careful balance of many variables and concerns. Rules like "the rule of least power" should be a guiding principle, not something to be followed at all costs.


Since another commenter had concerns about wgpu backends for the PS4/PS5/other consoles being proprietary due to SDK restrictions (and, consequently, the Bevy PS4/PS5/other console ports being proprietary), I will ask: does this mean that Bevy for consoles will cost money (apart from the console SDK cost)? Will Bevy for consoles be source-available, as in, developed exactly like current Bevy but under a non-open-source license?

Or actually: is it feasible to license console-specific Bevy code as MIT/Apache and have the only proprietary bits be the console SDK? (This means having Bevy, an open source project, call a console SDK in the open - is that allowed?)

Those are my main concerns regarding Bevy.


We will need to restrict access to console backends to comply with console developer contracts. We would only be able to give our code to other people who have been approved to look at the "proprietary console code".

That being said, there isn't a requirement to charge for the code. Kha (another open source project much like wgpu) offers free console support. You just need to reach out to them and request access. I would like to follow their model if I can. But it all really depends on who does the work and the terms they decide to release it under.


I think wgpu is amazing. But if I take the previous commenter seriously, I'd think about the PS5, Switch/Switch 2, etc. as places where someone will have to write a wgpu implementation (non-open-source, since those SDKs don't allow it) if you ever decide to ship on those platforms.


wgpu's recent license change was specifically done to allow these implementations to exist for consoles.


As someone who's developed and maintained platform backends for all sorts of obscure platforms (actually, it's still my day job!), I don't think wgpu backends are going to be difficult to develop for any of the remaining platform APIs still alive.

It's a pretty well-thought-out API that mirrors most of the modern rendering abstractions seen in game engines.


> platform APIs still alive.

The APIs that currently exist are not the cause of the potential issue I am raising.


And without knowing what those future APIs are like, you can't design a future-proof backend API to handle them. You'll design it for past problems, which will be different from future ones.


What you can do is design a backend for your project that is less general and tailored to your specific problems, so as to increase the likelihood that it will be easy to implement on new platforms that may arise. This is why I mentioned the "rule of least power".

For example, using drawTriangle(p1, p2, p3) instead of drawPolys(TYPE_TRIANGLE, point_list, n_points). The former is unequivocally easier to implement; the latter requires more complexity.
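
Sketched as Rust traits, purely to illustrate the shape of the two interfaces (hypothetical names, not any real library):

    // The "least power" interface: trivial to implement on any backend.
    type Point = [f32; 3];

    trait NarrowDraw {
        fn draw_triangle(&mut self, p1: Point, p2: Point, p3: Point);
    }

    // The more general interface: one entry point, but every backend must
    // now handle every primitive type and arbitrary-length point lists.
    enum PrimitiveType {
        Triangle,
        TriangleStrip,
        Line,
    }

    trait GeneralDraw {
        fn draw_polys(&mut self, ty: PrimitiveType, points: &[Point]);
    }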


And it has been explained to you repeatedly that wgpu is already more or less "least power" - that Bevy already uses most of the API surface and will soon be using nearly all of it.


Wgpu is the equivalent of "Turing complete" for a graphics API. I think you're not fully grokking the principle of least power if you consider that a "least power" graphics abstraction.


The thing about graphics in general is that typically you want to use whatever the underlying hardware gives you to get the best performance. This can conflict with the rule of least power, but so what?

You can’t optimize for everything, so you decide what tradeoffs make the most sense.


I think you, and likely most people reading this discussion, are missing the point. Wgpu is fine and even great as a general-purpose API, but it's inappropriate as an application-specific graphics abstraction. It should absolutely be used to implement such an abstraction, though.

Compare this to the Rust compiler. Rust uses MIR, an intermediate IR that retains Rust-specific semantics, to run Rust-specific optimizations before compiling down to LLVM bitcode. If it used LLVM bitcode as its native IR, it would be difficult to implement Rust-specific optimization passes. In this analogy, an application-specific graphics API is MIR and wgpu is LLVM.


Sure, but if you’re never going to do any MIR-level optimizations, then there’s no reason to implement it.

Similarly for graphics APIs. If at some point in the future you need to support a platform not supported by wgpu, you can add the abstraction layer at that time. Doing it before then doesn’t really buy you anything. You might never need it, and if you do, you’ll have a much better idea of what it should look like once you know what the new platform actually looks like.


I know you don't really want to hedge too much on your example, but do note that in your attempt to simplify, you accidentally turned an API which can, in theory, draw many triangles into one that can only draw one triangle at a time. What if I want to parallelize the work of drawing triangles?

Trying to aggressively simplify without understanding the full design space or surface area is not really a great idea.

In the worst case, if you never use, for instance, Timestamp Queries in your engine, you can half-ass a backend and just no-op the implementation. Lots of game engines do that kind of thing; I've seen plenty where half the graphics API implementation was no-op stubs, because we never needed that functionality on that platform.
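
A sketch of that stubbing pattern (hypothetical trait and type names, just for illustration):

    // Hypothetical backend trait for GPU timing queries.
    trait GpuTimestamps {
        fn write_timestamp(&mut self, query_index: u32);
        fn resolve_timestamps(&mut self) -> Vec<u64>;
    }

    // On a platform where the engine never reads timestamps, the whole
    // feature can be stubbed out instead of contorting the abstraction.
    struct StubTimestamps;

    impl GpuTimestamps for StubTimestamps {
        fn write_timestamp(&mut self, _query_index: u32) {
            // Intentionally a no-op.
        }

        fn resolve_timestamps(&mut self) -> Vec<u64> {
            Vec::new() // callers treat "no data" as "unsupported"
        }
    }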


> What if I want to parallelize the work of drawing triangles?

What if your specific application cannot provide inherently parallel workloads and has no need for an abstraction that can accommodate parallel rendering? In that case porting your app to a new platform requires implementing functionality your application does not need.


> What if your specific application cannot provide inherently parallel workloads and has no need for an abstraction that can accommodate parallel rendering?

Okay, but... what if it does?

It's one thing to talk about "this could be simpler if you don't need more general functionality." But that's also just kind of an assumption that the functionality actually isn't needed. The odds that you have to support a platform/application that both can't handle parallelization and that will be substantially held back by the option even just existing -- I'm not sure that those odds are actually higher than the odds that you'll run into a platform that requires parallelization for decent performance.

It feels kind of glib to just state with such certainty that a cross-platform game engine is never going to need to draw arbitrary polygons. That doesn't seem to me like a safe assumption at all.

I agree with GP here:

> Trying to aggressively simplify without understanding the full design space or surface area is not really a great idea.

In many cases, needing to no-op some functionality for one or two platforms may end up being a lot better than a situation where you need to hack a bunch of functionality on top of an API that fundamentally was not designed to support it. It's a little bit annoying for simpler platforms, but simpler platforms are probably not your biggest fear when thinking about support. The first time you need to do something other than linearly draw triangles, on any platform you want to support at all, even just one of them, the API you propose suddenly becomes more complicated and harder to maintain than a single `drawPolys` method would be.

This is not saying that abstraction or narrowing design space should never happen. It's just saying, understand what the design space is before you decide that you're never going to need to support something. I expect that the Bevy core team has spent a decent amount of time thinking about what kinds of GPU operations they're likely to need for both current and future platforms.


> It feels kind of glib to just state with such certainty that a cross-platform game engine is never going to need to draw arbitrary polygons.

It’s an example.


There are only like 21 resource types in the whole WebGPU API. Which of these do you think are going to go away?

* Adapter, Device, Queue, Surface

* Buffer, Texture, TextureView, Sampler, QuerySet, BindGroup

* ShaderModule, BindGroupLayout, PipelineLayout, RenderPipeline, ComputePipeline

* CommandEncoder, RenderPassEncoder, ComputePassEncoder, RenderBundleEncoder, RenderBundle, CommandBuffer

Adapter, Device, and Surface are abstractions that would be needed no matter what the GPU architecture. What else is there that's even fishy... QuerySet, maybe? The various kinds of encoders are already abstractions on some platforms, and I guess there are cases where there might be better resource binding abstractions, but I'm not sure what could cause this set of supported features to change on the hardware other than completely dropping the rasterization pipeline (in which case you can just emulate everything with compute shaders and buffers anyway). And the relationships between these structures are pretty well-specified, I doubt that will change either.
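
To make the "well-specified relationships" point concrete, here is roughly the chain from ShaderModule down to Queue submission, assuming a device and queue are already in hand. This is a sketch against a ~0.10-era wgpu API and WGSL attribute syntax; both have shifted in later releases.

    // Old [[attribute]]-style WGSL; newer versions use @compute etc.
    const SHADER: &str = "[[stage(compute), workgroup_size(1)]] fn main() {}";

    let module = device.create_shader_module(&wgpu::ShaderModuleDescriptor {
        label: None,
        source: wgpu::ShaderSource::Wgsl(SHADER.into()),
    });
    let pipeline = device.create_compute_pipeline(&wgpu::ComputePipelineDescriptor {
        label: None,
        layout: None, // None = derive PipelineLayout/BindGroupLayouts from the shader
        module: &module,
        entry_point: "main",
    });
    let mut encoder =
        device.create_command_encoder(&wgpu::CommandEncoderDescriptor::default());
    {
        let mut pass = encoder.begin_compute_pass(&wgpu::ComputePassDescriptor::default());
        pass.set_pipeline(&pipeline);
        pass.dispatch(1, 1, 1); // renamed dispatch_workgroups in later versions
    }
    queue.submit(std::iter::once(encoder.finish()));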

So at that point, at least if you want to stay cross platform, you're talking about things like the specific implementation of buffer mapping or tracing or whatever, which is the sort of thing it's relatively easy to add as an extension (or refactor to make more configurable).


> The core issue is that bevy uses a subset of wgpu so your own focused and limited abstraction layer will almost always be easier to implement and maintain

This isn't obviously true. WebGPU API surface is fairly small (unlike Vulkan), and Bevy may easily use most of it.

Generally speaking, depending on another library always involves some amount of trust. Going the zero-trust route is also possible, but it is much less effective.


> This isn't obviously true. WebGPU API surface is fairly small (unlike Vulkan), and Bevy may easily use most of it.

Small is not the issue. Generality is.


Keep in mind that wgpu is intended to be something you can build a game engine on and still achieve performance at least comparable to going through something like Vulkan directly. The WebGPU API is intended to be optimizable enough that even low-level applications don't need to reach for their own solutions, and the WebGPU team have worked very hard to find universal and efficient abstractions over the underlying drivers. It also provides unsafe opt-outs for cases where the goal of safety clashes too much with the performance needs of some applications (such as using precompiled shaders).
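
For the precompiled-shader case specifically, the escape hatch looks roughly like this. This assumes a wgpu version with Features::SPIRV_SHADER_PASSTHROUGH (around 0.10 and later; details vary), and `spirv_bytes` stands in for your precompiled SPIR-V blob:

    // Unsafe opt-out: hand wgpu precompiled SPIR-V directly, skipping
    // Naga translation (requires Features::SPIRV_SHADER_PASSTHROUGH).
    let module = unsafe {
        device.create_shader_module_spirv(&wgpu::ShaderModuleDescriptorSpirV {
            label: None,
            source: wgpu::util::make_spirv_raw(spirv_bytes),
        })
    };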

Given all this, what do you think is missing that makes it "too general" for use as the basis of a game engine? Many of the features you're probably thinking of are a nonstarter if you want to be cross-platform, and being efficient cross-platform is a key motivating reason to use wgpu in the first place.


> Given all this, what do you think is missing that makes it "too general" for use as the basis of a game engine?

My concern isn’t that wgpu’s level of generality makes it unsuitable for implementing a game engine. It’s of course perfectly suitable for that. The concern is around the pitfalls of not architecting your game engine in terms of your own limited graphics abstraction (which might use wgpu as a backend).


You’ve made like 15 comments on this thread with this idea. Cart is intent on sticking to his approach even after listening to your point of view.

I feel like we could just put this conversation on ice and check back in after 6-12 months. Bevy release notes always reach the top of HN. If they go back to creating an abstraction over wgpu, you can say “I told you so” then. And similarly, if Bevy is able to use this approach to provide support for Android, iOS, and the web (in addition to existing support for Windows, Linux, and Mac), then you need to own up to that.


> You’ve made like 15 comments on this thread with this idea.

Yet other people besides Cart in my thread still fail to properly understand the idea, and instead think I am suggesting not using wgpu at all.

> you can say “I told you so” then

My aim isn’t to say “I told you so”; it’s to make sure people accurately understand the point I am making. Cart seems to understand the point I am making, so I have made no further comments to him.


I understand your point, but I don’t think there’s any value in including an extra layer of abstraction before you need it. Sometimes it’s clear that you need it from the start. But sometimes it’s not.

And if you end up needing to support something sufficiently different from what can fit into wgpu’s API, the right point of indirection might just be the rendering system. E.g., you might just write a separate renderer for the new platform, so you can write to its model in the most efficient way.


That depends.

I would agree if it were a dependency on a third-party closed-source solution, but wgpu is a thriving and well-maintained OSS project.

Nothing prevents you from forking it and treating it like your own code in a catastrophic scenario.


> Nothing prevents you from forking it and treating it like your own code in a catastrophic scenario.

It’s much easier for deployment and distribution purposes to add a level of indirection in your codebase than to maintain a fork of a large project. Maintaining a fork is a full-time job in itself, especially for active projects, where you’ll constantly need to merge new changes and keep up with the evolution of the project’s internals via the mailing list or other means.



