This situation reminds me of the days when we, at the W3C HTML5 WG, were trying to sneak an SQL specification under the HTML5 "umbrella". And one particular flavor of it: SQLite's SQL as it was defined at the moment of writing.
It didn't get through, for many good reasons, as we know.
As for GPU exposure to the Web...
It makes sense only once we have a stable and unified GPU abstraction. As of now, DirectX 12, Vulkan and Metal are close but different.
Like WebGL, which is (more or less unified) OpenGL. But even that looks too foreign to HTML/CSS/script: immediate-mode rendering inside the architecturally retained display model of web documents.
And conceptually: the HTML5 umbrella is large but not infinite.
3D rendering is too far from HTML's "endless flat text tape" model.
I remember those <applet> days, when the browser was used as a component delivery platform for stuff that does not fit into the DOM and structured-yet-styled text. That was conceptually the right way "to grasp the immensity".
These days, with WebAssembly, we have another incarnation of the <applet> idea, and I think GPU functionality belongs to it rather than to HTML: GLSL expressed in WebAssembly bytecode terms rather than in JS.
Web standards have to be backward compatible, and I doubt that the current, still-ugly GPU paradigms will survive in the long run. Say tomorrow someone comes up with a practical voxel-based system instead of the current abstract vector ones: what will we do?
> Say tomorrow someone comes up with a practical voxel-based system instead of the current abstract vector ones: what will we do?
For one thing, any new approach will still have to solve the same problems around data transfer and formats that make up a lot of current APIs.
And for another, no matter how good this hypothetical new approach is, the old one will still be around to handle existing content and workflows forever anyway.
The SQL situation is a bummer. It made sense to avoid a monoculture, but IndexedDB is a near-useless wreck due to its terrible performance, and it has no meaningful querying primitives at all, so you get to implement them yourself and the performance is even worse.
Wish we could've just had two SQL implementations. There are others out there: Microsoft ships JET and an embeddable version of MSSQL, and someone could've embedded Postgres or something. As-is, people who care about performance are just going to compile SQLite down to wasm/asm.js and run it in a worker.
Canvas is immediate-mode rendering and already breaks out of the DOM. There are also popular movements to make the DOM appear more as an immediate-mode abstraction (React). In contrast, most retained-mode 3D abstractions are seen as pretty bad.
The majority of game engines' scene graphs are retained-mode 3D abstractions.
Everyone thinks they can do better with an immediate-mode API; then they start building some kind of data structure to track what needs to be drawn and when, with the result resembling the traditional joke about half-implemented Lisps or ORMs, but applied to retained-mode rendering.
Sure, some experts manage to get it right, and those are the ones that get to write AAA game engines, but many don't.
>> The majority of game engines' scene graphs are retained-mode 3D abstractions.
... but they are not built on an abstract retained mode API. That would be the wrong level of abstraction for a Web API that people can build on properly. I think that's the point here.
Unfortunately, Web APIs tend to be far too high level while missing out on low-level hooks, like the disaster that is Web Audio (and media playback in general).
Everyone needs their own retained-mode API for sure; it's just too different to generalize. Ideally, you would want to take your model and directly turn that into your UI, adding dependency tracking or domain knowledge to handle invalidation. React and its ilk make it easier to do just that.
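For instance, a minimal sketch of that model-to-UI mapping (React.createElement and ReactDOM.render are the real APIs; "model" and "mountPoint" are illustrative placeholders):

    // The UI is a pure function of the model; React's retained tree and
    // diffing handle the invalidation for you.
    function ItemList({ model }) {
      return React.createElement(
        'ul', null,
        model.items.map(item =>
          React.createElement('li', { key: item.id }, item.name)));
    }
    // Re-rendering with a new model only patches the DOM nodes that changed:
    ReactDOM.render(React.createElement(ItemList, { model }), mountPoint);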
> "This situation reminds me days when we, at W3C HTML5 WG, were trying to sneak SQL specification into HTML5 "umbrella". And one particular flavor of it - the SQLite's SQL as it was defined at the moment of writing. Haven't got through for many good reasons as we know."
Can you elaborate on the arguments against a dialect of SQL being available for use with local storage? I can only think of arguments in favour of it. Would be good to understand the grounds on which the idea was dropped.
The lesson of the Web is that when you expose some interface for people to use, they will start depending on documented features, undocumented features, and downright bugs of the first popular implementation. Second and subsequent implementations will need to spend a bunch of time reverse-engineering those bugs so they can be documented and reliably implemented in future so that existing websites keep working.
The only alternative is to make sure all new interfaces have multiple, popular implementations simultaneously, so that authors can't afford to take advantage of bugs in one implementation. This is the sort of activity you see in WHATWG these days.
The problem with "a SQL dialect" being available for use on the Web is that every browser intended to use SQLite as the backend. Nobody wanted to invest the time and effort to write a second, compatible, equally-reliable implementation; nobody wanted to exhaustively research and document bugs and flaws in that specific version of SQLite; nobody wanted to commit to back-porting security fixes from later versions, or forward-porting the required bugs to later versions.
And since nobody wanted to do the work to make it possible, using SQL for local storage remains impossible.
The W3C attempted to say that version X.Y.Z of SQLite would be the SQL standard for the web, mandating 1) that it be frozen in time, and 2) that the only way to be compliant was to put that exact SQLite, bit for bit, into every browser.
As much as I wanted it, and was bummed that Mozilla protested so much, they were doing the right thing in saying that SQLite cannot be used as a standard; it isn't a spec.
What one can do is compile SQLite with Emscripten to wasm, or write a SQL engine in JS, and use that in the browser. That is totally fine.
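For example, a rough sketch with sql.js, the Emscripten-compiled SQLite (assuming its initSqlJs() entry point; the schema and query are purely illustrative):

    // Load the compiled SQLite, then use it like any embedded SQL engine.
    initSqlJs().then(SQL => {
      const db = new SQL.Database();   // in-memory database
      db.run("CREATE TABLE t (id INTEGER, name TEXT)");
      db.run("INSERT INTO t VALUES (1, 'hello'), (2, 'world')");
      const res = db.exec("SELECT name FROM t WHERE id = 2");
      console.log(res[0].values);      // [ [ 'world' ] ]
    });

Move the same code into a Worker and post results back if you don't want queries blocking the main thread.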
As someone who works on a product that compiles SQLite to JS to read a SQLite-based file format in the browser: this is a terrible, terrible "solution" and doesn't scale for files larger than X MB, where X is a device-dependent limit based on the memory available to the browser.
HTML and CSS are slowly losing ascendancy as the main render surface for web apps. The canvas provides a standard pixel buffer for both JS and compiled applications, so HTML's limitations are no longer a restriction.
It's not about eye-candy: the Web has become an application platform (against all resistance), and it can't rely on standards bodies to deliver every bit and piece required. Even in 2018, web applications are shunned by many professionals for their poor quality and performance.
A lot of HN comments dismissed my prediction[1] that WebAssembly will bring opaque "compiled applications" that treat the canvas as a "standard pixel buffer", allowing adblocking CSS and request filters to be bypassed.
A lot of people seem to be focusing on the benefits of new technological changes ("no longer a restriction"), when they should first be concerned with the potential risks that change will create.
>my prediction[1] that WebAssembly will bring opaque "compiled applications" that treat the canvas as a "standard pixel buffer", allowing adblocking CSS and request filters to be bypassed.
They could already do that. It's called serving the whole page as an image.
I don't believe that's true for any modern browsers. They all use hardware accelerated rendering in the common case of drawing to a canvas that will be composited on-screen, and make a copy into process memory only when you access the pixels with a call like getImageData().
What are you talking about? You’re able to access random pixels, yes, but first you need to copy them to normal RAM. Canvas 2D and WebGL are both hardware accelerated.
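To make the readback point concrete, a sketch (WebGL shown; a 2D context's getImageData() hits the same wall):

    const gl = canvas.getContext('webgl');
    // ...draw calls execute GPU-side; no pixel copy exists in process memory...
    const pixels = new Uint8Array(canvas.width * canvas.height * 4);
    // readPixels() is the moment the browser has to synchronize with the GPU
    // and copy the framebuffer into normal RAM:
    gl.readPixels(0, 0, canvas.width, canvas.height,
                  gl.RGBA, gl.UNSIGNED_BYTE, pixels);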
I’ve actually noticed a return recently to simple “HTML native” UI design on the web. Not as many people trying to emulate an “OS native” experience by adding a bunch of extra DOM elements to render borders and effects and such.
Eh, not for WebGL it won't. Nothing implements DX other than high-powered desktop GPUs.
Will it live on? Sure, we will still have games and Xboxes. But if you're going to pick a standard that can work on mobile and desktop, there is no contender other than Vulkan.
2. Both of them... If you are a gamer, you are going to Steam (and Valve is not a friend of the Windows store idea). I'm not happy with this situation either, as I prefer GoG, and GoG is a distant second.
3. Doom, Wolfenstein II, F1 2017, The Talos Principle from the other side of the genre spectrum, or the upcoming Star Citizen. The reverse is more true: there's no DX12 game worth playing that's not also a Vulkan game.
1. Not everyone is rich enough to buy flagship phones.
2. Lots of gamers just use Xbox or PS4 (no Vulkan there). And on PC, not everyone uses Steam. Plus, Microsoft has already started being more aggressive regarding games on Windows 10, with the Age of Empires remaster being the first example.
3. Well, it is a matter of taste; not everyone craves FPSes. Then there is also the small matter that Vulkan is not supported on Xbox anyway, while DX 12 is, with much better developer tooling.
1. Then a game console is not all that relevant either.
2. Age of Empires is Microsoft's game in the first place. Of course they will want their assets to use their technologies; 3rd-party adoption is a rounding error. On the PC, except for 1) hardcore indie game players and 2) games locked to the publisher's platform, everyone uses Steam. In the second case, where games are exclusive to publishing platforms (Origin, Uplay), those platforms have a similar attitude to the Windows store as Steam does.
Windows store is an existential threat to them. They will ignore it as long as possible.
3. Sure, that's why I mentioned The Talos Principle (a puzzle game). The Xbox API will be handled exactly as the PS API is.
I have this idea that if they split off WebAssembly, WebGL and WebAudio into an external format for applications, it would be easier to implement than a full browser.
Your idea is actually really cool (very tech noir).
But that's not what my idea is. My idea isn't really a browser. It's just a separate format/mimetype (.app or .game) for an app runtime.
Small projects could implement it. We could have it embeddable as an object in browsers. Clients other than major web browsers (Gopher, Dillo) could embed it. Embedded devices (Roku) could include support for it. It could even be used on physical media like SD cards or DVDs.
Here's what I'm trying to reconcile: if the idea is to break off the awesome subset of multimedia web tech, because it would be easier to implement than a full browser -- I 100% agree! -- then why is there a need for a new mimetype/format?
Many things could be done better than the current HTML-as-laundry-list approach, but if we invent a new format, that adds the extremely difficult problem of getting everyone on board, rewriting things for it. The brilliance of asm.js (which gave birth to WASM) was that everyone had already implemented it before it existed.
Like you say, it would be a subset. It's all just standard WebAssembly. The APIs would of course need to be made callable from WebAssembly, but that's already planned for browsers anyway.
Until browsers supported the mimetype, they could just be served as .wasm files. Not that the mimetype is important at all; I just think it more clearly states its intended use.
You could probably do this now as a standalone Node app, if you want more explicit control over which APIs are available and where code can be run from.
The browser is indeed quite complex (and not just because of the massive historical baggage), but its job is to give the user control while safely downloading untrusted code and running it locally.
So, if your goal is to just have a simple standalone app I think you could stay largely compatible with APIs available in the browser environment.
> Easier implementation is the goal. There are currently only four companies working on a web implementation.
Easier implementation of a browser? You might find it interesting to see what Servo has chosen to implement and what they have not. Some things you'd think would be easily removable (such as document.write) turn out to not be so simple to skip.
One of the most valuable things about the Web is the care taken around backwards compatibility.
I do think it'd be quite interesting if you had a user agent that did the DOM differently (not sure what you have in mind specifically re: "documents didn't automatically gain the same privileges as applications") and focused just on providing a GL canvas and audio APIs.
I think you might find that these APIs aren't quite as nice when it comes to re-implementing things that CSS and DOM make easy, and it'd be hard for such a browser to really compete with existing browsers given the backwards-compat situation on the web (mandating GL would leave some devices behind, and web authors as a whole don't really adapt all that quickly).
In any case I think it might still be useful as a reference implementation / proof-of-concept on how minimal a web user agent can be, if it was just focused on hosting applications.
There's interest in wasm-land about having "non-web embeddings", which wouldn't assume things like JS APIs exist at all.
I think in that sort of world, you could probably find nicer APIs to target than WebGL and WebAudio... however, if you don't mind still having a JS interpreter available, then it'd probably be easy to build this sort of thing today using Node.
I've used WebGL to visualize scientific data. It's nothing special, but the size of the data was large enough that anything like SVG or even Canvas was just slow. There is a lot of interesting stuff you can do even just in 2D in WebGL, because you're closer to the hardware.
What's annoying is that you don't get double precision or any of the compute stuff in WebGL currently. The interesting stuff isn't just graphics, but also doing computations directly in the browser.
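The core pattern is just uploading the data once into a vertex buffer and letting the GPU draw it as points. A rough WebGL 1 sketch (shaders and names are illustrative; error handling omitted):

    const gl = canvas.getContext('webgl');
    const vsSrc = `
      attribute vec2 pos;                    // one data point in clip space
      void main() {
        gl_Position = vec4(pos, 0.0, 1.0);
        gl_PointSize = 2.0;
      }`;
    const fsSrc = `
      precision mediump float;
      void main() { gl_FragColor = vec4(0.2, 0.5, 1.0, 1.0); }`;
    function compile(type, src) {
      const s = gl.createShader(type);
      gl.shaderSource(s, src);
      gl.compileShader(s);
      return s;
    }
    const prog = gl.createProgram();
    gl.attachShader(prog, compile(gl.VERTEX_SHADER, vsSrc));
    gl.attachShader(prog, compile(gl.FRAGMENT_SHADER, fsSrc));
    gl.linkProgram(prog);
    gl.useProgram(prog);
    // Upload all points once; subsequent redraws never touch the CPU copy.
    const points = new Float32Array(2 * 1000000);  // x,y pairs in [-1, 1]
    gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
    gl.bufferData(gl.ARRAY_BUFFER, points, gl.STATIC_DRAW);
    const loc = gl.getAttribLocation(prog, 'pos');
    gl.enableVertexAttribArray(loc);
    gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);
    gl.drawArrays(gl.POINTS, 0, points.length / 2);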
Do you have any resources on how you did that? I am developing a web app and I have to visualize a ton of datapoints in a bunch of charts and all the charting libraries I've tried are way too slow. I've been looking for something that could utilize WebGL to speed things up, but haven't found anything so far.
The future of AR/VR lies in 3D worlds. If the web wants to be part of that future, developers will need access to powerful 3D APIs. Otherwise, the web will miss another tech wave like it missed mobile (as native mobile was the clear winner over web mobile).
There's an ICO for that.[1] Really. They want to offer browser based virtual reality which mines tokens in the background. The initial token sale is in progress now. The virtual reality system appears to be total vaporware.
Monero isn't efficiently mined on GPUs (it works better on CPUs which is why it's so popular for browser mining already), but other cryptos with equihash algorithms, sure.
I see it more as a way to use the GPU to do calculations from JavaScript at speeds similar to what C/C++ gives crypto-currencies. Yes, you can create a Native Client (NaCl) application, but that is only going to work in Chrome/Chromium browsers and their compatible variants; Electron and NodeWebkit/NWJS based projects are the most common. But NaCl still doesn't give you direct GPU access because of the sandboxing.
Well, as a small developer I'd love to see a higher-level, easier-to-use API (than Vulkan) that can be used as a modern, cross platform replacement for OpenGL. If it could be used both in a browser and standalone even better. If it was available outside of a browser (even as a Vulkan/Metal wrapper), I think it could become a no-brainer replacement for where OpenGL ES is used today.
I understand that engine developers can get some (small?) percent improvement from an ultra-low-level API that exposes more platform-specific details. But they almost always support multiple APIs natively, and already use the low-level ones where available. There may be a small performance benefit (over a higher-level API) in the browser, but I don't think the browser is, in the near future, a likely target for ultra-heavyweight apps/games, which usually are multi-GB downloads anyway. And keep in mind that if smaller software sticks with WebGL because of API complexity, that might be a big performance loss for the user.
But a higher level API would have great benefits everywhere! Right now the only real cross-platform graphics API is OpenGL ES 3.0, which is becoming more and more obsolete, with little sign of the situation changing. No compute shaders, no AZDO, etc. Any move beyond that feature set now requires multiple APIs, shaders etc. And the easiest way for a small developer to get those features is still to skip Vulkan and to stick with GL ES 3.1+ on Win/Linux/Android and choose Metal on Mac/iOS. Of course on the web for those features there are no options at all.
I think if a new API was available that was easy to use and truly cross platform (including web), it would be the obvious first API to implement for all new graphics software. And this would be a much larger benefit than an unknown performance improvement that is accessible mostly to engine developers.
If you are making a commercial product, Vulkan + MoltenVK should be perfectly serviceable as a target. There are also free-software attempts to make such a wrapper.
At this point, so long as Apple is in your target market, there will never be an everywhere API because Apple does not want there to be one. The whole point of Metal is to make your life harder so developers currently writing for Apple first are less likely to port their software elsewhere.
Yeah, that seems like it might be the best option right now, and it doesn't look like it will be too expensive either. What I probably would like more is the reverse, a Metal wrapper for Vulkan, since it seems so much easier to get started with Metal. Too bad Metal is a Swift/Obj-C API, so it's not straightforward to make it cross platform.
I don't know how much bad faith I want to assume on Apple's part, since there are probably some legitimate technical reasons why they don't want to support a lower level API (and they came out with Metal first). Vulkan is such a pain to use anyway that it's probably mostly used by engine developers who generally don't have a problem supporting multiple APIs like Metal / DirectX etc.
But at the very least, it would be nice if they upgraded their OpenGL version, since they already support that and it's only them holding back some of the newer features.
Vulkan is almost 1:1 equivalent to "modern" OpenGL except for the need for a more advanced allocator. You had to write the exact same thing in OpenGL if you wanted decent performance, except through an entire translation layer.
I don't think anyone who actually worked on a serious engine thinks it's somehow harder to use. Sure, there's more boilerplate and it might get pretty difficult to port something to it, but that's something else entirely.
Except one also gets to compare Vulkan with Metal, DX 12, LibGNMX, LibGNM, NVN, all of them designed with a bit more developer friendliness and tooling in mind.
Which, I guess, is where the resistance is coming from; hence the need for such a presentation.
Maybe I could have explained this better since it's getting downvoted. Right now the two big modern graphics APIs are Metal (iOS / Mac) and Vulkan (Win / Linux / Android). These aren't the only ones, there are more for game consoles, UWP apps, etc, but they are arguably the most important ones.
Metal and Vulkan are not at the same level of abstraction. If you look up the code needed to draw a triangle on the screen (maybe the most basic graphics task), it is far longer and more difficult in Vulkan than in Metal. Vulkan is a lower-level API: it gives engine developers more flexibility, at the cost of making basic things time-consuming and complicated.
OpenGL was the previous cross platform API. WebGL is almost identical to OpenGL. OpenGL still runs on Mac / iOS, but Apple has stopped supporting newer versions. The newer versions of OpenGL have closed the gap some with Vulkan and Metal (it won't catch up entirely, but it added some important features like compute shaders and it's easier to use it efficiently). OpenGL is still easier to use than Vulkan. The problem is the newer versions are not cross-platform, since Apple wants to focus on Metal.
Apple does not want to support Vulkan. Metal came out before Vulkan did, and it is a higher-level API. It's arguable if Apple should support it or not, but that's how it is. Microsoft also wants to focus on DirectX 12 (their API).
I was making the argument that a higher level API, Metal-style, would be a good base for the new web standard. Metal couldn't be used directly, at the very least it would have to be changed from Swift/Obj-C. But the idea of roughly basing it on Metal as mentioned in the article doesn't seem unreasonable, even Vulkan was based on a previous AMD technology called Mantle.
A low-level standard like Vulkan is hard to use directly, its adoption will depend mostly on people using frameworks / engines that use Vulkan. It's possible that due to its low-level nature there would be some performance advantage, although games using Metal also seem to get good performance on iOS. The disadvantage of Vulkan is that it is much harder to use than WebGL.
A fair amount of the WebGL content is not web specific. It is possible to write OpenGL content and compile it for desktop / mobile and the web. The most popular game engines, Unity and Unreal, both support compiling to the web and desktop from the same codebase.
In my view, there is no good replacement for OpenGL, now that new versions are not cross platform. Vulkan is much more work and does not work on Apple platforms, while Metal is easier but only works on Apple platforms. If they could provide a standard that is both easy and cross platform (by providing a C API library in addition to the web standard), it would provide the best of both worlds. The main downside is that it might leave some performance on the table compared to a low level API, and it would be yet another standard (which is why it would be important to provide a native library too, so developers have the choice of coding to only one API).
I would add to your remark that the only reason OpenGL ES ever took off was Apple.
Before iOS, most mobile devices were having their own experiments with 3D APIs; the Nokia N95 was the very first with an OpenGL ES compatible GPU, but it was thanks to iOS games that it ever took off.
Apple was also pursuing Quickdraw 3D before the NeXT acquisition, so not too keen on OpenGL anyway.
Their OpenGL adoption was mostly survival related; now that they are back on top, they can afford to dictate their own 3D APIs, just like all the other console vendors.
I don't understand why this is a priority when WebGL is still so rough. Maybe we wouldn't need a new API for performance if WebGL worked better. There seems to be lots of room for improvement. My WebGL programs were much slower and were harder to write than the native versions of the same programs.
We should also probably sort out the native low-level APIs before setting the standard for the web, because otherwise we're building on top of a big mess. Though, my impression is that the WebGPU initiative is basically just another battleground in that struggle. I don't have any faith that this is being done for the good of users. It's just strategic ground to capture.
Because WebGL is an evolutionary dead end, for a variety of reasons. The initial idea was to track OpenGL ES, but that isn't really true anymore. Because of Windows, WebGL has to stick to a subset that can be easily translated to Direct3D. Because of GPU process sandboxing, anything that flows back from GPU to CPU is a huge synchronization problem. On top of that, mobile GPU drivers continue to suck badly, which means most of the extensions that were supposed to expand WebGL's scope into this decade are still out of reach, with 50% or less support in practice.
On the flipside, the native low-level APIs have all diverged. Vulkan, D3D12 and Metal each made different decisions, so it's pretty much inevitable that a 4th standard will have to be created to unify them. It will probably be higher level than any of the 3, and it will still be subject to strong sandboxing limitations.
Personally I think the big issue is that people stare themselves blind at the traditional graphics pipeline. Modern renderers have evolved past it, with various compute-driven and/or tiled approaches commonplace now. They're a nightmare to implement against current APIs, because a small change in strategy requires a rewrite of much of the orchestration code. Mapping your desired pipeline onto the hardware's capabilities should be a compiler's job, but instead people still do it by hand. Plus, for GPU compute-driven graphics to be useful for interactive purposes beyond just looking pretty (i.e. actual direct manipulation), you absolutely need to be able to read back structured data efficiently. It's not just about RGB pixels.
There's an immense amount of potential locked inside, but the programming model is a decade or two out of date. Only AAA game companies and the vendors themselves have the resources to do novel work under these constraints. Everyone else has to throw together the scraps. Even the various attempts at LISPy GPU composition fall short, because they don't attempt to transcend the existing pipeline.
I would like to hear more about this. I was quite surprised how difficult things were when I started dabbling in OpenGL, and I thought that there has to be a better way. I know that there are libraries that build on top of OpenGL and the like, but then it's always a sacrifice of the power that you could have. It seems weird to me that it is so difficult, because conceptually it seems the model could be closer to the CPU/memory model that everyone is already familiar with. You just have some RAM and some processor(s) that are going to do some computations, right? Although I guess what really makes it a mess is that there needs to be a connection between what the GPU and the CPU are doing. I don't know, I was a bit surprised by how difficult it was. Perhaps I just don't understand it well enough.
To attempt to explain (desktop) GPU architecture: you don't just have memory and a bunch of individual cores on a GPU like you would on a CPU. You've got memory, texture sampling units, various other fetch units, fixed-function blending/output units, raster units, a dispatcher, and then a ton of processing elements. These are all things the programmer needs to set up (through the graphics API). Each of those processing elements runs several warps (wavefronts in AMD terminology), each of which contains 32 or 64 threads (vendor-dependent) that all have their own set of registers. The warp holds the actual instruction stream and can issue operations that occur on all or some of those threads. Branching is possible, but pretty limited unless it's the same for every invocation. So the programming styles/models are incompatible from the start.
Then the real problem is: since all shader invocations share those fixed-function units, if you need to reconfigure them to use a different set of textures, buffers, shaders, etc., you have to bring the whole operation to a complete halt, reconfigure it and restart it. And, contrary to popular belief, GPUs are the exact opposite of fast: each shader invocation takes an enormous amount of time to run, which is traded for throughput. Stopping that thing means waiting for the last pieces of work to trickle through (and then, when starting back up, waiting for enough work to be pushed through that all the hardware can be used efficiently), which means a lot of time doing little work.
So if you're trying to deal with the above, any notion of keeping things separate and clean (in terms of what the hardware sees, anyway) immediately goes out the window. That's why things like virtual texturing exist: to let you more or less pack every single texture you need into a single gargantuan texture and draw as much as possible using some God-shader (and also because heavy reliance on textures tends to work well on consoles). Then you also have to manage to make good use of those fixed-function units (which is where tiled rasterizers on mobile GPUs can become a problem), but that's a relatively separate thing.
Also: transferring data back and forth in itself isn't necessarily that bad in my experience (just finicky); it's usually the delays and synchronization that get you.
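The practical upshot for whoever is driving the API, schematically (none of these names are a real API; it's just the shape of code every renderer ends up writing):

    // Sort draw calls by pipeline state so the expensive reconfiguration
    // (new shaders/textures/blend state = draining the whole machine)
    // happens as rarely as possible.
    draws.sort((a, b) => a.stateKey - b.stateKey);
    let boundKey = null;
    for (const d of draws) {
      if (d.stateKey !== boundKey) {
        bindPipelineState(d.state);  // stall: in-flight work must trickle out
        boundKey = d.stateKey;
      }
      submitMesh(d.mesh);            // cheap while the state stays bound
    }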
I agree, yet I still don't understand how WebGPU is going to get buy-in from Apple and Microsoft if previous attempts at defining cross-platform APIs could not. Any web API will be open and cross-platform. If WebGPU is well-designed, it could very well be adopted as the next OpenGL.
I would rather target WebGPU and write my program once rather than implement my logic three times in Metal, DirectX and Vulkan. But, Apple and Microsoft don't want me doing that, so why would they support WebGPU?
Canvas and WebGL are lovely in the way they quickly let you get something on the screen... Pixel pushing on canvas is easy and accessible.
It makes me wonder if anyone has created some sort of port to a standalone app, with no browser involved, where you could use JavaScript/canvas API/WebGL to draw pixels on a canvas-like surface... without the fatness of the browser. Just spawning some window. That would be a lovely scripting/game-dev environment, maybe with some SDL bindings or whatever. Anyway, just rambling: does such a project exist? Anyone know? :)
I built exactly that a few years ago: V8 with WebGL bindings that passed through to OpenGL ES. It was to enable WebGL experiences on the GearVR, where a full browser wouldn't cut it.
I've got a sample using an OS X window, but it should be easy enough to do the same for Windows.
It's been sitting in a private repo since then, but I can give you access if you'd like.
That being said, I think a better alternative for a thin graphics scripting environment would be Haxe with Lime or snowkit (which provide OpenGL, SDL and windowing). I've used these in the past and loved working with them.
I think the web browser is the modern terminal, and it should make use of all the capabilities of the device, including hardware-accelerated rendering. I think the current WebGL is too low level, though, and only needed if you want to make your own 3D engine. If it were a higher-level 3D renderer, it would be easier for browser vendors to make secure and optimize, and easier for developers to use.
How many people do you think are actually interested in running the client in browser? I have Folding@Home on about 5-6 machines and I can't imagine a situation where I could run a web client but not a full client.
It's not an incredibly common use case, but many computers exist simply to show a dashboard or visualization. It would be excellent to say "drop this script tag on your page" and you instantly turn that Mac Mini into less of a waste.
If you just care about GPU compute and not much about the shape of the API or its overhead, then something like OpenGL ES 3.1 compute shaders or WebCL would be much easier to reach than WebGPU, technically.
Mozilla won't support WebCL in favor of compute shaders, but that announcement was over 4 years ago and there still is no general compute in browsers: https://bugzilla.mozilla.org/show_bug.cgi?id=664147
The OpenGL compute shader feature is not needed for doing GPU compute. It's just another type of shader that is not connected to other GL rendering that may be happening at the same time. People have been doing GPGPU with the traditional shader types for a long time. And WebGL 2.0 is a huge upgrade over 1.0 from a GPGPU point of view.
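For anyone who hasn't seen the pattern: pack the input into a texture, render a full-screen quad into a framebuffer-attached texture, and let the fragment shader compute one output element per pixel. A sketch (quad and program setup omitted; outputTex is assumed to be a texture you created earlier):

    // Float textures are an extension in WebGL 1:
    gl.getExtension('OES_texture_float');
    const fsSrc = `
      precision highp float;
      uniform sampler2D u_data;   // input data packed as RGBA texels
      uniform vec2 u_size;        // texture dimensions
      void main() {
        vec2 uv = gl_FragCoord.xy / u_size;
        gl_FragColor = texture2D(u_data, uv) * 2.0;  // the "computation"
      }`;
    // Render into a texture instead of the screen:
    const fbo = gl.createFramebuffer();
    gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                            gl.TEXTURE_2D, outputTex, 0);
    // ...draw a full-screen quad with fsSrc, then gl.readPixels() the result
    // or feed outputTex into the next pass.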
It is possible to perform some computations using OpenGL ES 3.0 / WebGL 2.0, but many types of operations (e.g. anything that involves random-access writes) are impossible, and many others (anything that normally requires shared memory) are very inefficient. Programming GPU through WebGL 2.0 is akin to programming desktop GPUs pre-CUDA: it is too intricate to take off.
A compute shader extension for WebGL 2.0 would be cool, but it would require porting a large part of OpenGL ES 3.1: OpenGL ES 3.0 / WebGL 2.0 doesn't even include random-access buffers (SSBOs).
I agree that "too intricate" is the other main problem in WebGL uptake. But we're not even seeing WebGL versions of textbook GPU applications that are straightforward to implement with the tools WebGL 2 gives.
For example, here are WebGL compatibility stats from a site that has counters on technical web sites and graphics programming web sites: http://webglstats.com/ - As you can see, WebGL 2 compatibility is only at 40%, despite having been enabled in stable Firefox/Chrome for over a year. WebGL 1, a 7-year-old standard, is now at 97%. (And even for the nominally WebGL-enabled browsers, users often report browser or OS crashes, so the percentages are upper bounds.)
From an armchair quarterback position, if I wanted to effect GPU compute uptake, I'd work on compiler tech and tools targeting WebGL GLSL.
Maybe GPUs need to be fundamentally redesigned to allow multi-user access with enforceable security boundaries. I'm sure this would be highly non-trivial to implement, but it would be great for other scenarios like shared access to GPU compute resources in "the cloud".
That has existed for over a decade. In fact, that's sort of the point of Vulkan et al.: with all user code being behind an MMU context, it's safe to provide a more console-like 'bare metal' API. There are bugs, but the infrastructure is all there. The end goal is that you can only crash your own process.
The issue with this in the browser is that the API isn't part of JavaScript/WebASM, and just exposing it in the same way would allow you to subvert the sandboxing of the VM.
This is called GPU virtualization, and there have been experiments ("Sugar" [1]) with using it for WebGL. We looked into it and consider it promising but not urgent. It addresses some of the security concerns at the cost of performance (and implementation complexity), but the portability issues are unchanged.
What does that even mean? It's an Xbox; they all literally have the same variations of hardware. There aren't 8000 different build specs; there are at most, what, 8? How could compute sharing provide any possible enhancement in experience at all? If anything, it's eating frame budget, and it shouldn't.
When it was first announced, there was talk of off-loading certain tasks to (presumably) Azure. That may be what OP is thinking of.
I'd also wonder if you could share the power of any Xbox One Xs connected to a multiplayer session, given the gap between an original Xbox One and the X is rather large (seems like it'd be far more trouble than it's worth though).
Net Neutrality, right? It doesn't matter what the page does, it only matters that you are allowed to visit any page on "The Inter-Web" that you like, right?
Safari has "tab paused/reloaded due to high power consumption" (good), Chrome has "auto-mute tabs" extension (which I have turned on), I have "tab suspender" extension installed.
Basically, I'm trying to be a responsible consumer:
- this tab would like access to your hard drive files
- this tab would like access to your video camera
- this tab would like access to your microphone
- this tab would like to play sounds
- this tab would like to download more than 5mb of data
- this tab would like animation/movement
- this tab would like to use your CPU a lot
- this tab appears to be using a lot of your battery
- this tab would like to use your GPU (at all)
- this tab would like to maintain state > 24hr
the_internet.js is actually potentially really hostile (suck down 9999 MB at full speed, ddos@1.2.3.4, while(1){alert(1)}, mine_bitcoin( $hacker_wallet )), and I am much in favor of treating it as untrusted by default (low access, limited # of CPU cycles) until "trusted" (i.e. the Android permissions swap: I give you the executable, you give me the permissions).
YouTube? Yes to whatever they ask.
ShadyWebsite.1234.some-random-domain.ru? You can d/l 300kb and can't do anything else (ie: web 1.0/no-script).
While each "tab" in a web-browser attempts to provide "safe" access to the computer resources, it is still not "permitted" access to computer resources. Whatever the browser defines as "safe" is 100% ok, which has to work equally well for WASM-unreal-tech-demo-castle as well as cnn.com.
I'd prefer cnn.com only had 300kb download, no external domains, no sound, no battery, no cpu, etc.
As I come to _trust_ cnn.com more (to the same level as YouTube), I would then permit sound by default, permit large downloads, permit GPU, permit animation/video/etc.
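If I could write that policy down, it might look something like this (entirely hypothetical; no browser exposes such an API):

    // Hypothetical per-origin resource budgets, defaulting to distrust.
    const policy = {
      'youtube.com': { download: Infinity,   sound: true,  gpu: true  },
      'cnn.com':     { download: 300 * 1024, sound: false, gpu: false },
      '*':           { download: 300 * 1024, sound: false, gpu: false }
    };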
It has the same downside as WASM. Running precompiled bytecode is bad for security, as it always has been.
It was no more than a year ago that a remotely exploitable WASM hole was exposed (derivatives of Spectre and co.). Knowledgeable people said that an ISA-level hole that can be exploited remotely over the web would be "a one-minute global IT disaster" if somebody were to propagate it through a big adnet or paid-traffic scheme.
As for WebGL as it is now, I remember numerous sites that froze/crashed/rebooted both Linux and Windows systems, which means that the prime suspect was a buggy shader, as that is the only thing resembling raw instructions that can be passed to the GPU through WebGL.
You are being downvoted, but it is important to remember that graphics APIs were not developed with security as a first class requirement. They tend to be large, arcane, and often interfacing with large binary blob drivers on the system. Their threat surface is enormous. IMHO it is just a matter of time until exploits for WebGL and the like start showing up regularly.