What you can do is design a backend for your project that is less general and tailored to your specific problems, so that it's more likely to be easy to implement on new platforms that may arise. This is why I mentioned the “rule of least power”.
For example, using drawTriangle(p1, p2, p3) instead of drawPolys(TYPE_TRIANGLE, point_list, n_points). The former is unequivocally easier to implement; the latter requires more complexity.
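To make the comparison concrete, here is a rough sketch of what those two shapes of API might look like in Rust. All names here are made up for illustration and aren't taken from any real engine; the slice stands in for the point_list/n_points pair.

```rust
/// A point in 3D space (hypothetical, purely for illustration).
pub struct Point {
    pub x: f32,
    pub y: f32,
    pub z: f32,
}

/// Kinds of primitive a general-purpose backend would have to handle.
pub enum PrimitiveType {
    Triangle,
    Line,
    Quad,
}

/// The narrow, app-specific interface: a new platform only has to know
/// how to draw one triangle from three points.
pub trait NarrowBackend {
    fn draw_triangle(&mut self, p1: Point, p2: Point, p3: Point);
}

/// The general interface: more flexible, but every backend now has to
/// support every primitive type and arbitrary batch sizes (the slice
/// carries its own length, standing in for `n_points`).
pub trait GeneralBackend {
    fn draw_polys(&mut self, ty: PrimitiveType, points: &[Point]);
}
```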
And it has been explained to you repeatedly that wgpu is already more or less "least power" - that Bevy already uses most of the API surface and will soon be using nearly all of it.
Wgpu is the equivalent of “Turing complete” for a graphics API. I think you’re not fully grokking the principle of least power if you consider that a “least power” graphics abstraction.
The thing about graphics in general is that typically you want to use whatever the underlying hardware gives you to get the best performance. This can conflict with the law of least power, but so what?
You can’t optimize for everything, so you decide what tradeoffs make the most sense.
I think you and likely most people reading this discussion are missing the point. Wgpu is fine, even great, as a general-purpose API, but it’s inappropriate as an application-specific abstraction for a graphics backend. It should absolutely be used to implement that application-specific layer, though.
Compare this to the Rust compiler. Rust uses MIR as its intermediate compiler IR for Rust-specific optimizations, because it retains Rust-specific semantics, before it compiles down to LLVM bitcode. If it used LLVM bitcode as its native IR then it would be difficult to implement Rust-specific optimization passes. In this case an application specific graphics API is analogous to MIR and wgpu is analogous to LLVM.
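As a hedged sketch of that layering: the trait and type names below are hypothetical (this is not Bevy's actual design), though `wgpu::Device` and `wgpu::Queue` are real wgpu types. The app-specific layer plays the role of MIR, with wgpu sitting underneath as the LLVM equivalent.

```rust
/// An application-specific rendering interface: it only exposes the
/// operations this particular application actually needs.
pub trait SceneRenderer {
    fn draw_mesh(&mut self, mesh_id: u32);
    fn present(&mut self);
}

/// One backend implements that interface on top of wgpu. Porting to a
/// platform wgpu doesn't cover means writing a new `SceneRenderer`,
/// not reimplementing wgpu's whole surface area.
pub struct WgpuSceneRenderer {
    pub device: wgpu::Device,
    pub queue: wgpu::Queue,
    // pipelines, bind groups, etc. elided
}

impl SceneRenderer for WgpuSceneRenderer {
    fn draw_mesh(&mut self, mesh_id: u32) {
        // Encode a render pass for the mesh and submit it via self.queue.
        // (Elided for brevity.)
        let _ = mesh_id;
    }

    fn present(&mut self) {
        // Acquire the surface texture and present it. (Elided.)
    }
}
```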
Sure, but if you’re never going to do any MIR-level optimizations, then there’s no reason to implement it.
Similarly for graphics APIs. If at some point in the future you need to support a platform not supported by wgpu, you can add the abstraction layer at that time. Doing it before then doesn’t really buy you anything. You might never need it, and if you do, you’ll have a much better idea of what it should look like once you know what the new platform actually looks like.
I know you don't really want to hedge too much on your example, but do note that in your attempt to simplify, you accidentally turned an API which can, in theory, draw many triangles into one that can only draw one triangle at a time. What if I want to parallelize the work of drawing triangles?
Trying to aggressively simplify without understanding the full design space or surface area is not really a great idea.
In the worst case, if you never use, for instance, Timestamp Queries in your engine, you can half-ass a backend and just no-op the implementation. Lots of game engines do that kind of thing. I've seen so many game engines where half the graphics API implementations were no-op stubs because we never needed the functionality on that platform.
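A rough sketch of what that stub pattern tends to look like, with made-up trait and type names:

```rust
/// Hypothetical engine-side interface for GPU timing.
pub trait GpuProfiler {
    fn begin_timestamp_query(&mut self, label: &str);
    fn end_timestamp_query(&mut self, label: &str) -> Option<f64>;
}

/// Backend for a platform where the engine never needs timestamp queries:
/// every method is a stub, and the port still compiles and ships.
pub struct NullProfiler;

impl GpuProfiler for NullProfiler {
    fn begin_timestamp_query(&mut self, _label: &str) {}

    fn end_timestamp_query(&mut self, _label: &str) -> Option<f64> {
        None // no timing data available on this platform
    }
}
```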
> What if I want to parallelize the work of drawing triangles?
What if your specific application cannot provide inherently parallel workloads and has no need for an abstraction that can accommodate parallel rendering? In that case porting your app to a new platform requires implementing functionality your application does not need.
> What if your specific application cannot provide inherently parallel workloads and has no need for an abstraction that can accommodate parallel rendering?
Okay, but... what if it does?
It's one thing to say "this could be simpler if you don't need more general functionality." But that's also just an assumption that the functionality actually isn't needed. I'm not sure the odds that you have to support a platform or application that both can't handle parallelization and would be substantially held back by the option even existing are actually higher than the odds that you'll run into a platform that requires parallelization for decent performance.
It feels kind of glib to just state with such certainty that a cross-platform game engine is never going to need to draw arbitrary polygons. That doesn't seem to me like a safe assumption at all.
I agree with GP here:
> Trying to aggressively simplify without understanding the full design space or surface area is not really a great idea.
In many cases, needing to no-op some functionality for one or two platforms may end up being a lot better than having to hack a bunch of functionality on top of an API that fundamentally was not designed to support it. It's a little bit annoying for simpler platforms, but simpler platforms are probably not your biggest fear when thinking about support. The first time you need to do something other than linearly draw triangles on any platform you want to support, even just one of them, the API you propose suddenly becomes more complicated and harder to maintain than a single `drawPolys` method would be.
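To put that asymmetry in concrete terms (reusing the hypothetical NarrowBackend/GeneralBackend traits sketched earlier in the thread): the narrow call can always be written as a one-line wrapper over the general one, but not the other way around.

```rust
// The narrow call falls out of the general one as a trivial adapter.
impl<T: GeneralBackend> NarrowBackend for T {
    fn draw_triangle(&mut self, p1: Point, p2: Point, p3: Point) {
        self.draw_polys(PrimitiveType::Triangle, &[p1, p2, p3]);
    }
}

// The reverse has no such shortcut: emulating a batched draw_polys on top
// of draw_triangle means looping over single-triangle calls, which gives
// up any chance of handing the whole batch to the GPU at once.
```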
This is not saying that abstraction or narrowing design space should never happen. It's just saying, understand what the design space is before you decide that you're never going to need to support something. I expect that the Bevy core team has spent a decent amount of time thinking about what kinds of GPU operations they're likely to need for both current and future platforms.