If I could just use my favourite language and not feel like a second-class citizen, then I'm not sure there would be anything else to complain about as a developer, really. A mark-up bytecode so that we could forget about the nightmare of HTML and CSS as well?
And remember the nightmare of lazy Java class loading, or huge SWF apps with distract-the-user loading splash screens?
The Web evolved to do incremental/progressive rendering; it's one of its best aspects. Undumping a frozen Dart heap was a goal to speed up Gmail, but over long thin pipes (my "LTE", most of the time), incremental/progressive wins.
Sure, games come in big packages and good ones are worth the download time. For most other apps, small is beautiful. iOS app updates still suck, and not just because they're only semi-automatic.
Fast-forward a few years, and imagine if a browser engine were nothing more than a WebAssembly engine and a "default" HTML/CSS implementation. You could replace that engine with anything you like written in WebAssembly, doing all its own rendering and input using the lower-level APIs provided by WebAssembly. So, every browser gets the latest rendering and elements, or at the very least polyfills via WebAssembly rather than JavaScript.
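In miniature, the idea looks something like the sketch below: the browser exposes only low-level primitives as wasm imports, and the page ships its whole engine as a module. The import names (`draw_rect`, `on_pointer`) are invented for illustration, and an empty 8-byte wasm module stands in for a real downloaded engine.

```javascript
// Minimal valid (empty) wasm module, standing in for a downloaded rendering engine.
const engineBytes = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]);

// Hypothetical low-level primitives a "bring your own engine" browser might
// expose instead of HTML/CSS. These names are made up for the sketch.
const lowLevelAPI = {
  env: {
    draw_rect: (x, y, w, h, rgba) => { /* rasterize directly to the surface */ },
    on_pointer: (callbackIndex) => { /* deliver raw input events to wasm */ },
  },
};

WebAssembly.instantiate(engineBytes, lowLevelAPI).then(({ instance }) => {
  // A real engine would export something like instance.exports.render(url).
  console.log("engine loaded; exports:", Object.keys(instance.exports).length);
});
```

The `WebAssembly.instantiate`/`validate` calls are the real browser API; everything behind the `env` imports is the speculative part.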
To me, this sounds more or less like what Mickens et al. are aiming for with Atlantis [0][1].
The browser becomes a simple kernel which knows how to execute a bytecode (their so-called "Syphon Interpreter").
The browser itself provides much simpler, lower-level APIs for doing I/O.
To actually render a page you still need a layout engine, a renderer, and maybe a scripting runtime. The difference is that these components are provided _as bytecode at runtime_; they're not shipped as part of the browser itself.
Your page then specifies the environment it needs by requesting those components, and you let that environment go to work on whatever page you served.
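A page's "environment request" might look something like the fragment below. To be clear, this is an invented illustration, not Atlantis's actual syntax; the point is only that the layout engine, renderer, and runtime are named and fetched as bytecode rather than baked into the browser.

```html
<!-- Hypothetical manifest (not real Atlantis markup): the kernel fetches
     these components as bytecode before handing over the page. -->
<environment>
  <component type="layout"   src="https://example.com/engines/layout.syp" />
  <component type="renderer" src="https://example.com/engines/render.syp" />
  <component type="runtime"  src="https://example.com/engines/script.syp" />
</environment>
```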
You can already build awful, non-semantic HTML today, with a giant pile of <div> tags, CSS, and JavaScript. The web hasn't fallen apart.
Similarly, just because it'll be possible to implement a full web engine efficiently in WebAssembly doesn't mean sites will suddenly stop paying any attention to standards or accessibility.
As for adblockers, I can think of several ways to block ads in such a world, including hooking the sockets API to block or sandbox access to ad-serving hosts, or providing hot-patches that stub out common ad-serving libraries. It'll get harder, but not impossible.
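The first of those approaches can be sketched in a few lines: wrap the page's network entry points and refuse requests to known ad-serving hosts. The blocklist hosts and helper names here are made up for illustration; a real blocker would hook lower-level APIs too, not just `fetch`.

```javascript
// Illustrative blocklist; real lists (EasyList etc.) have thousands of entries.
const blockedHosts = new Set(["ads.example.com", "tracker.example.net"]);

function isBlocked(url) {
  try {
    return blockedHosts.has(new URL(url).hostname);
  } catch {
    return false; // not an absolute URL; let it through
  }
}

// In a browser, an extension would install this wrapper before page scripts run.
const realFetch = globalThis.fetch;
globalThis.fetch = function (resource, init) {
  const url = typeof resource === "string" ? resource : resource.url;
  if (isBlocked(url)) {
    return Promise.reject(new TypeError("blocked by adblocker"));
  }
  return realFetch.call(this, resource, init);
};
```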
"The web hasn't fallen apart" for sighted users. Those non-semantic, unusable-to-screen-reader sites are in fact an accessibility disaster for blind users, who can find it nearly impossible to use some sites.
(And a disaster for the companies that build them and get sued later.)
Maybe a common "screenreader toolkit" will emerge for sideloading a non-visual interface, instead of expecting every designer to remember to interleave ARIA hints at magic places in an otherwise fundamentally visual interface, keep it all in sync as the design evolves, and so on.
> "The web hasn't fallen apart" for sighted users. Those non-semantic, unusable-to-screen-reader sites are in fact an accessibility disaster for blind users, who can find it nearly impossible to use some sites.
I'm saying that div-itis is possible today, but it's a bad idea, and as far as I can tell many new sites still use semantic markup when available.
Along the same lines, I would expect that even if people implement a web engine in wasm, many will still use semantic markup.
That said, I agree that people don't pay enough attention to accessibility, and should. But I don't think wasm or wasm-based rendering engines will make that worse.
Even if you use only divs, the actual text is there in UTF-8 to parse out of the DOM, today.
If you go all-in on wasm to do "your own rendering", external software won't know where to find a "DOM" or how to understand it, unless some new de facto standard emerges, a reinvented HTML for wasm rendering engines?
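The contrast above in miniature: even markup that is nothing but `<div>`s still carries its text as plain UTF-8, so external software can recover it, whereas a wasm engine painting pixels leaves nothing comparable to scrape. The function below is a crude stand-in for walking the DOM's text nodes.

```javascript
// Pure div-itis, but the text (including non-ASCII) is right there in the markup.
const divSoup =
  '<div class="a"><div>Hello,</div><div> world café <div>42%</div> off!</div></div>';

// Crude stand-in for walking DOM text nodes: strip tags, collapse whitespace.
function extractText(html) {
  return html
    .replace(/<[^>]*>/g, " ")
    .replace(/\s+/g, " ")
    .trim();
}

console.log(extractText(divSoup)); // "Hello, world café 42% off!"
```

A screen reader uses the real accessibility tree rather than regexes, of course, but both depend on the content existing as text at all.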
(This is more a rant against "let's reinvent HTML/CSS in canvas" than against a VM-based JavaScript replacement in general. Even the latter sounds a bit terrible for the open web; imagine what the web would have looked like if JavaScript had never existed and a Flash/Java .swf/.jar engine was the only way to do scripting in a webpage.)
What I can tell you for sure is that most sites are accessible to the extent that HTML makes accessibility the default, and the instant people start doing stuff that's not accessible-by-default (JS checkbox widgets, for instance), it almost always gets ignored and dropped.
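The checkbox case shows the gap concretely. The native element is accessible for free; the JS-widget version only works if the author remembers every attribute and wires up keyboard handling and state changes by hand, which is exactly the part that "gets ignored and dropped". The `role`, `aria-checked`, and `tabindex` attributes below are real WAI-ARIA mechanics; the class name is a placeholder.

```html
<!-- Accessible by default: role, state, and keyboard support come free. -->
<label><input type="checkbox" checked> Subscribe</label>

<!-- The JS-widget version: invisible to assistive tech unless the author
     adds all of this and also scripts Space-to-toggle and state updates. -->
<div class="checkbox" role="checkbox" aria-checked="true" tabindex="0">
  Subscribe
</div>
```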
So when you start talking about ignore-the-DOM stuff, my strong suspicion is that it would all be completely and totally inaccessible.
While a search engine may very well run the WASM, it'll still need to interpret the output. How do you rank a running game of Candy Crush vs. the Candy Crush website in Google results?
Actually, I think this might be a great opportunity to separate applications/data that don't need to be indexed by search engines from those that do. Of course, we have to be careful that we don't end up with walled-off content in applications, which can and already does easily happen; that isn't truly in the spirit of the sharing web that we have all come to know and love.
There are at least two distinct types of content - structured/unstructured content that should be exposed and can be indexed via some known mechanism by external visitors, and application content that perhaps need not be indexed.
This might be a time to evolve/establish what those mechanisms are for the betterment of both application development and content development/sharing.
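One such mechanism already exists: schema.org structured data, which lets an application render itself however it likes (wasm, canvas, whatever) while still exposing its indexable content to crawlers in a machine-readable block. The URLs and values below are placeholders.

```html
<!-- The app draws its own UI, but publishes its indexable content
     as JSON-LD for external visitors like crawlers. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example article inside a wasm-rendered app",
  "author": { "@type": "Person", "name": "A. Author" },
  "url": "https://example.com/articles/1"
}
</script>
```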