
> If you built an applet in one of these technologies, you didn’t really build a web application. You had a web page with a chunk cut out of it, and your applet worked within that frame. You lost all of the benefits of other web technologies; you lost HTML, you lost CSS, you lost the accessibility built into the web.

But that's also true of an application which relies on WebAssembly (or JavaScript): it loses all the benefits of the web, because in a very real sense it's no longer a web site, but is instead a program running in a web page.

Neither WebAssembly nor JavaScript is document-oriented; neither is linkable; neither is cacheable. It's Flash, all over again — except at least with Flash one could disable it and sites were okay. With WebAssembly and JavaScript, every site uses them for everything, meaning we get to choose between allowing a site to execute code on our CPUs, or seeing naught but a 'This page requires JavaScript' notice.

It is the return of Flash, and that's a bad thing. We thought we'd won the war, but really we just won a battle.




I envision horrible "all WASM" websites, just like the old "all Flash" websites, that won't have accessibility, can't be linked into, etc. Worse, I envision this as being another step in the ad blocker arms race. Inevitably there are going to be websites that package an entire WASM-based browser that will need to be used to access the site, nullifying client-side ad and script blockers. I can see the pitch now-- "Keep your existing website but add our tools to prevent ad blockers!"

(Edit: Typos. I should know better than to post from my phone by now. Grrr...)


This is a criticism that would be more suited to the Canvas API than the WASM API. WASM is still meant to drive the DOM API which is still as introspectable as before.

[EDIT]: Steve is right of course, and I misspoke here, "WASM is still able to drive the DOM" is closer to what I meant to say.


I agree with your first sentence, but not your second: wasm is meant to access all platform APIs, not just the DOM ones. Canvas is part of the platform as well.


I think we will start to see a lot of all-in-one frameworks that use wasm for constraint-based layouts so people don't have to learn CSS. I hope I'm wrong, but I can definitely imagine something like this coming from the enterprise java/.net types.


I sure hope we do. CSS has had 20 years and is still the most error-prone way of doing layout I've ever seen.


I don't think so. Accessibility, links, ad blocking etc. behave exactly the same with wasm as with JS.

What difference do you see?


Not if you don't target the DOM. If I could have a "browser" in a browser that targets canvas or WebGL, then I cannot block it, only at the network level.


You won't be able to block it on a network level either when it's running on a locked-down platform like iOS and using an eventual iteration of TLS that prevents man-in-the-middle inspection. This feels like yet one more step in the direction of "you don't own your computer anymore".


Except iOS has some of the best ad-blocking available, with OS-level extensions that are almost impossible to get around. So your point doesn't really make a ton of sense.


The OS-level tools won't be able to inspect the websockets-based channel that the browser-in-browser uses to communicate with its back-end. It will all be opaque TLS-encrypted traffic to the OS. The native browser will be hosting a canvas element that will host the UI and running WASM code and that'll be all the OS will see.


I was thinking more of the DNS level, but with DNS encryption even that falls on its face.


DNS encryption is done by the OS, which you control, so you could still null route ad servers.
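For what it's worth, a null route doesn't even need DNS tooling; a hosts-file entry does it (domain below is hypothetical):

```
# /etc/hosts — resolve a (hypothetical) ad server to a black-hole address
0.0.0.0  ads.example.com
```

Every app that goes through the OS resolver then fails to reach that host, regardless of what the browser does with the response.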


You can use canvas or webgl from JavaScript too, so there's no change here specific to wasm.


Except JS alone wasn't good enough for that; WASM, especially as a compilation target, seems to make the task of embedding a browser within a webpage easier.


You can, but wasm should be an order of magnitude faster than JavaScript, and this makes it possible to run all kinds of "heavy" apps in the browser. Some kind of client that outputs to WebGL should be doable in this case; in pure JS it would be too slow/expensive.


Documents are documents.

Apps are apps.

Sometimes both are in the browser.

It'd be great if all "documents" had an HTML version, with minimal JS. For accessibility, searching, deep linking, etc.


Like what Flipboard created? React Canvas rendered the site directly to canvas.

https://engineering.flipboard.com/2015/02/mobile-web


How the hell do you retain any accessibility when rendering a custom UI in Canvas?


You don't. They reinvented their own CSS and DOM.


There are sites where accessibility isn't much of a concern (esp things like games) or where it would be easily handled (eg. an image aggregator). For the rest, it seems inevitable that another (probably not-quite-compatible) accessibility layer gets built on top (for example, using the Qt accessibility model when compiling Qt into a canvas).

Overall though, I think wasm shouldn't be replacing HTML/JS.


It is already possible with JavaScript; WASM doesn't change anything. And the fact that, although this is possible with JavaScript, it isn't pervasive should, I think, assuage this fear.


I agree WASM doesn't bring in anything fundamental to this picture that isn't already there with JS. But that is no comfort.

In the ancient web world, the site author wrote HTML to describe the data she wanted presented and the browser took care of making it accessible. But authors (especially companies) wanted detailed control of how their sites looked, so they turned to flash etc.

JS has long been re-playing this trend in slow motion -- moving away from web pages being interactive documents presented by the GUI app called browser and towards them being stand-alone GUIs like in flash.


SEO is too big of a concern nowadays for a resurgence of black-box websites, especially if they depend on large audiences and ad revenue.


But you can still get social media traffic. I think it's very possible that will help black-box web sites return.


I suspect some appeal of Electron apps is that the user can't block ads or scripts running in what's basically a website.


To be frank, I'm surprised Widevine hasn't been used in conjunction with DoubleClick/Google Ads to force websites to show adverts.

Sure, there's "fuckAdblock" but that shortly spawned "FuckFuckAdblock". It's a whole different case when the very browser prevents the content from being tampered with.


My position on this is basically, WebAssembly is no different than JavaScript here. If you think JavaScript ruins this property, well, the web was only in the form you describe for four years, and has existed this way for 23 years now.


The focus on driving WASM performance in the browser platforms, combined with the ability to transpile more languages to WASM, pushes the barrier-to-entry lower. Yes, these concerns aren't specific to WASM, but the platform is being made more capable of hosting this kind of troubling code, and more attractive to developers who would develop these things.


Compiling C++ or Rust to use in a web page is much more complicated than just writing Javascript. I can't see how that lowers the barrier to entry. Your argument seems to be that you can do more with those platforms because they're more performant, which yeah, to a point I guess, but Javascript is already plenty fast for making whatever obnoxious dreams people want to come true and the web seems to have survived it fine.


If you compile C++ or Rust to native targets anyway, I fail to see how it's more complicated.


I don't think the barrier to entry or ease of development is really the issue when we're talking about ad networks.


Ad networks, no-- I agree with that. I'm more concerned about entire websites becoming "apps", complete with browser-in-a-browser functionality (with the inner browser's behavior being completely under the control of the site operator).

It would be an interesting experiment to transpile a less complex browser, like Arachne, over to WASM as a proof of concept to demonstrate how awful this kind of future would be. (Yet another "if I had some free time" wishes... >sigh<)


> It would be an interesting experiment to transpile a less complex browser, like Arachne, over to WASM as a proof of concept to demonstrate how awful this kind of future would be. (Yet another "if I had some free time" wishes... >sigh<)

Don't. Most people will ignore the demonstration, but someone greedy will fork the project, build a library out of it, and start selling as a product to ad networks and media companies.


I agree. Somebody is going to open that Pandora's box, though. I'm glad to see that I'm not the only person who is concerned. I think it's an eventuality, however. Few young developers today have had to deal with walled gardens and don't understand how bad they are. Worse, today's platforms give an unprecedented amount of control to the platform owner to the detriment of the hardware's actual owner, and developers seem more than willing to help create those mechanisms of control. What's going to happen when nobody is left who actually owns their own computer?


Yup. That's what I'm worried about.

And people growing up with today's web-first, mobile-first computing model have no clue of the power and capabilities computers have. With data being owned and hidden by apps/webapps, limited interoperability, nonexistent shortcuts, and little to no means of automating tasks, people won't even be able to conceive of new ways to use their machines, because the tools for that aren't available.


You just gave me a horrible vision of a robotic hand perched over a smartphone screen being programmed to touch the screen to "automate" tasks because nobody will know any better. (Of course that would never work because our smartphones have front-facing cameras and software to detect faces and verify that we're alive... >sigh<)


Yeah, this is the input equivalent of the analog loophole :).

Now ordinarily, on PCs, you do that by means of simulated keypresses and mouseclicks, using scripting capabilities of the OS or end-user software like AutoHotkey. In the web/mobile-first, corporate-sandboxed reality, I can't imagine this capability being available, so Arduino and robot hand it is.

(But yeah, bastards will eventually put a front-facing depth-sensing camera, constantly verifying the user, arguing that it's for "security" reasons.)


Ad networks are the next Macromedia.


That is certainly not true. The knowledge base you need to even start compiling to WASM is far greater than just JS.


I think it's reasonable to assume that there will be many efforts to create tools and libraries that make it easier. It will become less difficult with each passing day.


The web long ago became not only a document store but also a thin client platform for distributing full client applications to end users. That cat is out of the bag and is not going to be stuffed back in.

WASM is really just a cleaner, faster, more elegant way of running alternative languages to JavaScript in the browser. It replaces transpilers that turned languages like Java or Go into ugly JavaScript blobs that were basically machine code. It will save bandwidth and improve performance but otherwise doesn't change much. Note that transpiled and uglified JavaScript is already "closed source," so nothing changes there. Anything can be obfuscated.


I do see your point!

I am however scared that HTML will go the way of Gopher. Why would anyone care to maintain boring hypertext documents when we can have the app of the day? Marketing departments everywhere tend to turn the web into Blinkenlights.

How many support documents from 15-20 years ago are you still able to find using the old links? So many sites are working as dumb front-ends for a database.

Information retrieval and persistence over time is not something many worry about.

The cat is for sure out of the bag. I just hope what was still can survive.


>I am however scared that HTML will go the way of Gopher. Why would anyone care to maintain boring hypertext documents when we can have app of the day.

JS or Wasm can't create documents by themselves, they still need a DOM. Even if it's a 2D canvas or some WebGL canvas, it's still a DOM element. Or even if it's just an iframe that loads some blob, on the top level it's still a DOM element. And as such it can be inspected and controlled.


> And as such it can be inspected and controlled

Not if the content is decrypted by EME that's not fully controlled by the browser.


I think marketing departments would quickly notice that most crawlers won't execute all the fancy Blinkenlights.

I would assume that it will take a while for tooling in any other language to get to JavaScript's level. I think WASM will mainly be used to support the latter: doing some excessive calculations... and yeah, excessive Blinkenlights.


You'd be surprised; marketing departments generally do not have a clue about that specific type of thing. Hell, eBay's operations apparently doesn't, from my experience. It's incredibly easy to game marketing, and internet marketing is mindlessly easy without the invasive stalking.


> The cat is for sure out of the bag. I just hope what was still can survive.

I hope so too, but as a member of predatory and territorial species, the cat will most likely keep on killing everything else around it.


Yes, but only for about four hours a day, because naps.


Exactly. Well said.


WebAssembly is linkable, in the dynamic-linking sense: https://webassembly.org/docs/dynamic-linking/

> WebAssembly enables load-time and run-time (dlopen) dynamic linking in the MVP by having multiple instantiated modules share functions, linear memories, tables and constants using module imports and exports. In particular, since all (non-local) state that a module can access can be imported and exported and thus shared between separate modules’ instances, toolchains have the building blocks to implement dynamic loaders.

The code is fetched via URLs so you can link to it in that sense, too.

It's also cacheable: https://developer.mozilla.org/en-US/docs/WebAssembly/Caching...
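A sketch of what that load-time linking looks like from the JavaScript side, with two tiny hand-assembled modules (the byte arrays are written out by hand only to keep the example self-contained; real toolchains emit them for you):

```javascript
// Module "lib" exports a function `add`; module "app" imports it as env.add
// and exports `add5`, which calls add(x, 5).
const libBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type 0: (i32,i32)->i32
  0x03, 0x02, 0x01, 0x00,                               // one function, type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b                    //   local.get 0/1, i32.add
]);

const appBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x0c, 0x02,                                     // two types:
  0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f,                   //   0: (i32,i32)->i32
  0x60, 0x01, 0x7f, 0x01, 0x7f,                         //   1: (i32)->i32
  0x02, 0x0b, 0x01,                                     // import section:
  0x03, 0x65, 0x6e, 0x76, 0x03, 0x61, 0x64, 0x64,       //   "env" "add"
  0x00, 0x00,                                           //   func, type 0
  0x03, 0x02, 0x01, 0x01,                               // one function, type 1
  0x07, 0x08, 0x01, 0x04, 0x61, 0x64, 0x64, 0x35, 0x00, 0x01, // export "add5"
  0x0a, 0x0a, 0x01, 0x08, 0x00,                         // code section
  0x20, 0x00, 0x41, 0x05, 0x10, 0x00, 0x0b              //   add(local 0, 5)
]);

const lib = new WebAssembly.Instance(new WebAssembly.Module(libBytes));
// The "dynamic link": one instance's export wired into another's import.
const app = new WebAssembly.Instance(new WebAssembly.Module(appBytes), {
  env: { add: lib.exports.add }
});
console.log(app.exports.add5(10)); // 15
```

In a browser you would normally fetch each module by URL (WebAssembly.instantiateStreaming) rather than inline the bytes, which is also where the "fetched via URLs" point above comes from.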


I believe the parent comment was referring to hyperlinks, not dynamic linking.

The point was more that once webpages become applications running on the client (think single page apps), the natural document metaphor of web pages and the tooling built on it (hyperlinks, forward/back, bookmarks, history) falls apart unless you do extra work to ensure that experience is maintained.


> the natural document metaphor of web pages and the tooling built on it (hyperlinks, forward/back, bookmarks, history) falls apart unless you do extra work to ensure that experience is maintained

But not everything needs to be a document. Sometimes the thing you're working with really is an application and not a document.

To me, one of the biggest problems with the current web is that we've commingled "app stuff" and "document stuff" so badly that browsers have been forced to become a shitty, inferior X-server (or Operating System outright), instead of being really good browsers. Browsers for browsing is great... browsers as a UI remoting protocol, is a bit janky.


Ah, thanks.

"Clickable."

Because you certainly can link to the wasm and JS code that come with WebAssembly instantiations.


I think the GP means "links" as in "clickable links", not "binary linker/loader".


I think he meant linkable in the web sense, ie, hyperlinks.


Sometimes a program running in the browser will be valuable when it's full window.

Would I complain if I could run a full version of Word or Excel in the browser? The browser would become a universal interface in another way and decrease our reliance on particular operating systems.


> Would I complain if I could run a full version of Word or Excel in the browser? The browser would become a universal interface in another way and decrease our reliance on particular operating systems.

I for one would, because the browser is an absolutely shitty interface. You're still forced into the "there are tabs, which contain sandboxed documents" model of use. Interoperability is nonexistent, integration with machine capabilities is superficial and completely opaque to the user, the data model is hidden (where is my localStorage equivalent of the file browser again?), everything assumes you're constantly connected - it's a corporate wet dream, but for individuals, it's a nightmare.


Nothing's perfect, though. If operating systems aren't, I wouldn't expect browsers to be either.

Creating mobile- and/or offline-first experiences for individuals isn't a pipe dream; it was possible and happened in the 90's, when connectivity (dialup) informed content (largely offline or downloaded).

I'm not looking at replacement, only reasonable substitutes, which I think will become useful similar to using Google docs on mobile and web.


> where is my localStorage equivalent of the file browser again?

The Firefox developer tools have a "storage" tab that lets you inspect the content of various databases associated with a website.


Default-disabled, read-only and scope-limited to domains your current tab works with, but I guess it's better than nothing.


In my experience, the application-on-browser products consume far more CPU and RAM than the application-on-OS products. For me, that's a pretty big deal: I need the laptop to run as long as possible on a charge. Right now, I would complain if I _had_ to run a full version of Word or Excel in the browser.

Perhaps Web Assembly will drive this power usage down. But as it stands now, I actively avoid more than one of these app-on-browser products at a time.


Well, modern JS is MORE performant than classical scripting languages in benchmark cases, but the fact is that your browser freezes on half of the JS CRUDs that do data processing, while an analogous Perl application works at near light speed in comparison.

In half of cases like that, stuff like sorting, list comparison, and deduplication is done in a way that would score a low mark even by the standards of a first-year university program.

This is telling of web development industry's approach to doing business.

The most horrid examples of "LAMP sweatshops" of 10 years ago pale in comparison to what the industry has devolved into these days.

My own experience being an involuntary webdev for 3 years left me with the following impressions:

1. Webdev is the largest commercial development niche in the whole tech industry. Everything else pales in comparison. It is also about making money quickly. A webapp or even a promo page SPA for a major consumer brand these days can cost up to $100k easily. $100k does not seem a lot to most people here, but such money can be well offered for a 1 month project for a team of 6-8 professionals.

2. The industry is dominated by shops with 20 to 30 people headcount. Web dev studios generally don't scale much above that because of talent flight. Loss of a single senior dev who supervises hordes of lowest-tier mule coders is often the end of the business for most of these companies.

3. People from the "big dotcom" world are near oblivious to the ways of small web dev shops. For people who began their careers at 60k-a-year internships, getting into the shoes of a person who does coding for 30k a year is impossible.

4. Talent flight and turnover is real.

5. This is all about really expensive quick and dirty code.

6. "The big dotcom" type of companies tried time and time again to tap into the market to extract rents, and with the exception of Macromedia nobody ever succeeded. This is the reason Adobe is lobbying for unusable, unwieldy APIs in hopes of selling tooling for it.


If I could ask, where do you live? Your experiences don't reflect my own.


I practiced for nearly 3 years in Canada, and continued for half a year after in China.

Quit webdev a year ago, now working in engineering consultancy.


I'd hope compiled binaries can run more efficiently than dynamically compiled Javascript over time.

Right now my mobile device is often taxed by JavaScript that insists on running in the background.


> decrease our reliance on particular operating systems

By replacing it with a poor simulacrum of an operating system. Browser APIs are an inefficient subset of what libc and BSD sockets offer.

And they provide near-zero interoperability with native applications. No filesystem access (beyond the clunky save-one-file dialog), no CLI, no IPC, nothing. That means browsers are building on top of operating systems while not interoperating with them.


> No filesystem access

This is a step forward not backwards. The security model of allowing apps access to your full filesystem (assuming your user has access) is flawed. It leads to apps storing data in funny places, reading files they shouldn't, and general mayhem. Requiring the user to explicitly allow the app to access the file is a good thing.

There are some use cases that are hard to support (like being able to open all the files in a folder). But people are working on a solution.[1]

> No IPC

WebRTC while not the same and far more overhead (due to TCP sockets vs OS level sockets) can function very much like IPC. And there is nothing stopping a process running in a different browser (or even no browser at all) to connect to a webapp using WebRTC locally.

Additionally, if a new window is opened by Javascript and both pages are in the same domain + port (or subdomains of the same domain and you have access to the parent domain) you can communicate between the windows with simple Javascript function calls. And since browsers are moving towards a 1 process per window setup this is essentially IPC.

> That means browsers are building on top of operating systems while not interoperating with them.

While I can't argue with that. So is X Window. The abstraction between app and OS is a thick gray line not a thin black one.

[1] https://developer.mozilla.org/en-US/docs/Web/API/FileSystemD...


> This is a step forward not backwards. The security model of allowing apps access to your full filesystem (assuming your user has access) is flawed. It leads to apps storing data in funny places, reading files they shouldn't, and general mayhem. Requiring the user to explicitly allow the app to access the file is a good thing.

Most apps being limited to their little part of the filesystem is not a problem. The problem is, now as a user, I can't access those files. I can't view them in a form that suits me, I can't use other applications to operate on them. The true form of the data is forever hidden from me, a secret of the application that "owns" it.


IMO that's a fairly easily solved problem. Browsers can add "localstorage browsers", you might even be able to do it in a browser extension.

I'd also love it if they gave that ability.


But that's the wrong direction. Instead it should map to a file tree that you can explore with your native file explorer and text editor. The browser becomes a silo for your data, inaccessible by every other application.


> The true form of the data is forever hidden from me, a secret of the application that "owns" it.

But that's been true for almost all users, and not just webapp users, forever.


Not necessarily. In the world of desktop software, most users know what a file is, and know that all the data of what they've been working at the moment on is contained within such a file. They know they can move this file around and possibly send to whomever they want. They also know that a file can be opened by multiple applications.

SaaS and web kill that.


I was thinking of all of the files in proprietary, and particularly binary, formats. Maybe some users know that even those files can be opened by multiple applications, when that's even true, but I suspect even more users don't even realize that almost all of their data is stored in a file somewhere, let alone where that file is in the filesystem and in what format it's stored.

Given the ubiquity of Word document and Powerpoint presentation files and the like, most users I'll grant you are aware of the files themselves, and the fact that they can be attached to an email. I'll even grant that a large fraction of those same users could answer 'yes' to the question 'Could these files be opened by another application?'. But almost none would be capable of doing anything with those files without an application that handles everything for them.

I don't dispute tho that an awareness of, let alone existence of, files in a filesystem is a significant benefit and not having access to them is a (relatively) significant loss.


> The security model of allowing apps access to your full filesystem (assuming your user has access) is flawed.

You are neglecting the option of exposing a limited subview of the filesystem, like containers do.

> But people are working on a solution.[1]

The big red box on top says it's not on standards-track.

> WebRTC while not the same and far more overhead (due to TCP sockets vs OS level sockets) can function very much like IPC.

Can I send open file descriptors like I can with unix domain sockets? Can I share memory for low-latency atomics? Futexes?

> So is X Window.

Maybe if you're remoting X, but few people do that these days. In practice, X applications have access to the same machine that they are drawing on.


> Your are neglecting the option of exposing a limited subview of the filesystem like containers do.

No I'm not. I said the limitation is a step forward. I didn't intend to imply it is perfect. It is not at all perfect.

> The big red box on top says it's not on standards-track.

Correct, but most standards started as experiments by the browsers. I think it qualifies as "people are working on it" but means it is probably far from being standardized.

> Can I send open file descriptors like I can with unix domain sockets? Can I share memory for low-latency atomics? Futexes?

No. But you already knew that. But it does allow for data communication which in my opinion solves the 80% use case for IPC. From my experience (YMMV) the features you described while useful are not needed for most consumer apps.

Don't let perfect be the enemy of good.


> Don't let perfect be the enemy of good.

The problem isn't perfectionism, but that at least some of us believe that things are moving in the wrong direction - towards making vendors own everything, and end-users in control of nothing.


I wasn't implying replacing operating systems, but rather having the ability to substitute them, similar to how web apps can substitute for native apps.

I'm still optimistic that new forms of applications will emerge from this. There are serious pieces needing fleshing out, like file access.

The insecure interoperation between browsers and operating systems can perhaps be reimplemented through a newer, more secure interface like wasm or its API.


Yeah, or a full version of a monero miner...


Different pseudo-VMs, I mean browsers, operate differently even on the same specs for various technologies (CSS, JS). They already act effectively like "particular operating systems," except they're less efficient and more obnoxious to work with.


[flagged]


This comment breaks a handful of guidelines and is not civil or substantive.

https://news.ycombinator.com/newsguidelines.html


Potentially fun questions: are there any “DOM-native JavaScript games”? I.e., games that manipulate the DOM for their “graphics”—or even have hypertext in place of graphics—rather than running in a canvas?

The only example I can think of is the Twine engine for Interactive Fiction.


Well there's this.

https://github.com/mozilla/BrowserQuest

Doesn't work in Safari.


You should look into Crafty; it's a JS game engine which can output to either the DOM or canvas. I'm not sure how popular it is anymore, but quite a few games used it. There are demo games here: http://craftyjs.com


Compiling to Wasm will only get easier. It's only hard now because the target is new and people are still adapting the tooling. There is no reason why it would be any harder than compiling for a machine.

Wasm will almost certainly lead to UI frameworks for the Web. JS people try very hard to get similar stuff, but the language is just not good enough; at the same time, the desktop people who have this stuff are clamoring for some way to use the same on the Web. People are already working on those frameworks, by the way.


Yes, it's bad for document markup, but I wouldn't waste time coding the next Excel in HTML and CSS; I'd go straight to a GUI language with guaranteed cross-platform rendering.




