I don't see the need for NaCl when there is asm.js and Emscripten. For fast, lightweight native sandboxing, we now have Docker. NaCl seemed like a great idea, but it has since been overtaken by other initiatives.
PNaCl's toolchain is better for embedding existing native applications in the browser. There's a lot of code out there which isn't written in JavaScript, and it would be nice to apply the same sandboxing guarantees to it. Again, it's not that you can't compile C to asm.js (Emscripten does work), but I think the PNaCl toolchain is just better.
I'd prefer to think that the most efficient, rather than the most advanced, tools win. Look at the hammer, axe, knife, nail: they are pure utility, expressed as output divided by complexity of use.
Should it? Both compile to intermediate representations which are distributed, then compiled to machine code by the browser. I see no reason PNaCl should be any faster.
Well, you have benchmarks in that talk: PNaCl is 1.2x native speed; asm.js is ~2x. Single-threaded. With multiple threads, PNaCl will smoke asm.js, since the latter has no threads at all.
A 2x performance difference means your phone lasts 3 hours instead of 1.5 hours when playing that game. This is a huge, huge difference. Battery-powered devices make performance extremely important.
1. Good primitive data types (e.g. full width machine integers)
2. SIMD
3. Tail calls
4. Threads
5. Structs
6. Gotos
7. Memory management
Once you add these, you lose the supposed advantages of asm.js in practice. You lose the light weight. You lose realistic backwards compatibility, because programs using these features will run so slowly on normal JS VMs that they're unusable, or, in the case of tail calls, will overflow the stack on a normal JS VM.
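To make the integer point concrete, here's a hedged sketch (function name invented) of what a compiler targeting today's asm.js has to do without native 64-bit integers: split every i64 into two 32-bit halves, so a single machine add becomes several JS operations.

```javascript
// Hypothetical sketch: a 64-bit add emulated with two 32-bit halves,
// the kind of expansion a compiler has to emit for i64 arithmetic.
function add64(aLo, aHi, bLo, bHi) {
  var lo = (aLo + bLo) | 0;
  // Detect carry out of the low word with an unsigned comparison.
  var carry = (lo >>> 0) < (aLo >>> 0) ? 1 : 0;
  var hi = (aHi + bHi + carry) | 0;
  return [lo, hi];
}
```

On real hardware this is one instruction; here it's two adds, a compare, and bookkeeping on every 64-bit operation.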
No, an asm.js that supported all of these would still just be JavaScript, which is far more lightweight than PNaCl. Besides, these features dovetail with things we want for JavaScript anyway: SIMD, tail calls, full width machine integers are things all JavaScript programs need. By implementing them once and for all we help both regular JavaScript and asm.js.
> You lose realistic backwards compatibility because programs using these features will run so slow on normal JS VMs that they're unusable
No, we are compiling programs that use these already, and they are not so slow as to be unusable. For example, Unreal Engine 3.
> No, an asm.js that supported all of these would still just be JavaScript, which is far more lightweight than PNaCl.
What makes asm.js "far more lightweight" than PNaCl? And what does "lightweight" even mean here?
> No, we are compiling programs that use these already, and they are not too slow as to be unusable. For example, Unreal Engine 3.
Do you mean the Epic Citadel demo, which runs fine on smartphones? So you made it run more or less smoothly on beefy x86 desktops; how much of an achievement is that? Is it that "OMG, I run my game from the 80s inside the browser with HTML5 CANVAS!!!" thing again?
> It's much simpler than LLVM and reuses the components of the JavaScript engine that already must exist in browsers.
But LLVM IR has complexity for a reason - you need to be able to generate efficient code from it for multiple architectures. As I mentioned elsewhere, "reusing JS components" is a very unfortunate party line because it keeps us tied to this one JS forever and ever, and we should try to see beyond that.
The Relooper solves this problem for compilers, and it is extremely effective in practice. Besides, you can always solve the lack of goto with code duplication, with no loss in performance.
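To picture the fallback, here is a hedged sketch (not literal Relooper output, and the function is invented): when control flow can't be recovered as structured loops, compilers emit a dispatch loop over basic-block labels, which is how `goto`-heavy code ends up expressed in JS.

```javascript
// Sketch: arbitrary control flow (gotos between blocks) expressed
// as a labeled dispatch loop, with `label` naming the next block.
function collatzSteps(n) {
  n = n | 0;
  var steps = 0, label = 1;
  loop: while (1) {
    switch (label) {
      case 1: // block A: test
        if ((n | 0) === 1) { label = 4; continue loop; }
        label = (n & 1) ? 3 : 2; continue loop;
      case 2: // block B: n is even
        n = (n / 2) | 0; steps = (steps + 1) | 0; label = 1; continue loop;
      case 3: // block C: n is odd
        n = ((3 * n) + 1) | 0; steps = (steps + 1) | 0; label = 1; continue loop;
      case 4: // block D: exit
        return steps | 0;
    }
  }
}
```

JS engines handle this fine, but it's exactly the pattern where a native `goto` (or duplicated blocks) would be cheaper, hence the trade-off being debated.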
Except code duplication destroys performance, because the instruction cache isn't infinite. I think we can agree that the relooper isn't the optimal solution to this problem... it's a compromise that works pretty well, but it's a compromise nonetheless. It also means that you need to involve the LLVM toolchain anyway, just like in PNaCl. I don't know whether PNaCl or asm.js is the better solution, but what I do know is that both are compromises, and both have advantages and disadvantages.
asm.js is a nice hack and a commendable technical feat, but it's an absolutely horrible way to move forward. Do we really want to look back 10 years from now and see that we picked some crooked-and-twisted subset of JS as the bytecode to which native code is compiled, just because "hey, it's JS and it already works"? Have we learned nothing from decades of horrible backwards-compatibility-laden platforms?
If one were to design a portable bytecode today, would they come up with asm.js? Not in a lifetime (or two). PNaCl is way, way closer to a sane design for such a bytecode. Yes, it has its problems, but these appear to be incremental and can be solved with an investment of time and effort (which Google seems willing to make). At its _core_, PNaCl is the right approach to this problem. It uses a subset of the hugely successful and important LLVM IR to represent bytecode, an IR that's way more suitable for the task than a subset of JS. And the sandboxing model PNaCl uses also makes way more sense than asm.js's use of typed arrays to represent the heap. Where is your memory model, asm.js? Do you think it's an easy problem to solve? I don't see threads with shared memory coming to asm.js in _years_, while PNaCl already has them. Say what you will, but threads are important. Games use them all over the place, and so do other applications.
Remember, speed means battery life for phones and tablets (it's only a matter of time until PNaCl runs on Android, who doubts it?) So 2x native vs. 1.15x native is a huge difference.
Again, as a hacker I can see the coolness of asm.js, and kudos to Mozilla for creating it. It's a great interim approach to ensure that native code can truly run everywhere, in all browsers. But looking into the future, I hope it will die and PNaCl (or some derivative of PNaCl) will take over. Even today it lets you compile C++ code and run it at nearly native speed, with threads, on several architectures and several OSes, in a secure way! Other browsers should definitely adopt it. Although I understand if Google doesn't worry much about it: Chrome is the most popular browser today, and its lead is growing. Lack of adoption by other browsers may end up being _their_ problem, not Google's.
> It uses a subset of the hugely successful and important LLVM IR to represent bytecode. IR that's way more suitable for the task than a subset of JS.
Speaking as someone who works with LLVM IR on a daily basis, I really dislike the idea of shipping compiler IR to users. Believe it or not, asm.js is actually significantly closer to the ideal bytecode that I would ship, except for surface syntax.
> And the sandboxing model PNaCl uses also makes way more sense than asm.js's use of typed arrays to represent the heap. Where is your memory model, asm.js? Do you think it's an easy problem to solve?
You could implement asm.js in the exact same way. There's nothing stopping you from doing that.
> I don't see threads with shared memory coming to asm.js in _years_, while PNaCl already has them.
Years? Not according to any discussions I've been privy to.
> So 2x native vs. 1.15x native is a huge difference.
I have not verified the PNaCl performance numbers, but I would be surprised if they counted compilation time. The asm.js performance benchmarks do count compilation time. So I suspect the apples-to-apples gap is actually much smaller.
>It's much simpler than LLVM and reuses the components of the JavaScript engine that already must exist in browsers.
But Google say they used a simplified subset of LLVM IR. In addition, as far as I understand, PNaCl has its own target triple which pins down things like endianness and pointer size, which makes the IR much more portable.
> You could implement asm.js in the exact same way. There's nothing stopping you from doing that.
Yes, I don't dispute that you can keep evolving JS to be closer to a real bytecode; I'm just arguing that, in my opinion, this is not the right way to go. Moreover, I think it will fail. People tried to retrofit the JVM as a bytecode for C/C++ too, and where did those efforts lead?
In contrast, LLVM IR /is already/ an IR for C/C++. In its full form it's a compiler IR, but making it more stable and portable seems like a much smaller task than retrofitting all the features needed for native execution onto JS.
> I have not verified the PNaCl performance numbers, but I would be surprised if they counted compilation time. The asm.js performance benchmarks do count compilation time. So I suspect the apples-to-apples gap is actually much smaller.
Compilation time matters only the first time an app is run, Google said. If you have a game or a large application, you run it more than once. Since their compilation generates a native binary, they just need to load it the next time, so there's zero compilation time. I don't know how their first-time compilation will compare to asm.js, but all subsequent runs are likely better.
I don't see how Docker fits in here. Don't get me wrong, it's a great piece of technology, but it's very platform-specific and has nothing to do with the web as a platform in its current state.
This seems like a strange assertion to me. Why not? You can do amazing things with static analysis and runtime optimization, and that operates on the AST / some intermediate state, not the source code. If anything, higher-level languages (i.e., JS as opposed to bytecode) lend themselves to more automatic optimizations than lower-level ones, because intent is encoded better.
Asm.js is lower level than LLVM IR. In LLVM IR the data structures and memory allocation are still visible; in asm.js it's just opaque code working on a giant heap array. For example, think about how you'd do pointer analysis on asm.js...
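To make that concrete, here is a hypothetical sketch (names invented) of Emscripten-style output: a C struct field store becomes an untyped write at a computed offset into the single heap array, so the type information a pointer analysis would rely on is simply gone.

```javascript
// All of the program's memory is one typed-array "heap".
var buffer = new ArrayBuffer(0x10000);
var HEAP32 = new Int32Array(buffer);

// Hypothetical compiled output for `p->y = 3`, where p points to
// `struct { int x; int y; }`: the field access is just "p + 4 bytes".
function setY(p) {
  p = p | 0;
  HEAP32[(p + 4) >> 2] = 3; // nothing here marks p as a struct pointer
}

setY(8); // writes 3 at byte offset 12; the struct type existed only in C
```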
The optimizations are already done ahead of time, by Emscripten. There is no need to do them on the client, and doing so just increases compilation time for no reason.
In fact, shipping the types to the client is harmful, because it increases the download size. This is one of the ways asm.js is, IMHO, a better bytecode than LLVM IR…
PNaCl seems like a much saner stack; turning JS into assembly never made sense to me if you want to get the most out of the hardware (with the resulting power savings and/or performance gains).
To be fair, asm.js isn't really JS. It's more like a low-level DSL. The fact that it happens to be a subset of JS is just a convenience for backwards compatibility.
At any rate, from the perspective of the developer, it doesn't really make a difference. Either way, developers are interacting with a C/C++ compiler, not directly with the browser engine.
I fear the limitation that it has to be a subset of JS must lead to performance trade-offs, through the lack of 64-bit integer types or single precision floating point / SIMD vector arithmetic for instance.
This is not a fundamental limitation of asm.js. For example, there is already an extension introduced for compiled code that can be optionally used for improved performance: Math.imul. Similar extensions can be introduced later.
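For instance, this is exactly the gap `Math.imul` fills: plain JS multiplication goes through 64-bit doubles, so a 32-bit product whose exact value exceeds 2^53 gets rounded and its low bits are lost, while `Math.imul` keeps exact 32-bit integer semantics.

```javascript
// 0x7fffffff squared is about 2^62, well past the 2^53 limit of
// exact double arithmetic, so plain `*` rounds the result.
var x = 0x7fffffff;
var viaDouble = (x * x) | 0;   // 0: the true low bits were rounded away
var viaImul = Math.imul(x, x); // 1: exact 32-bit integer multiply
```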
With the trade-off that code size is greatly increased and that performance when falling back to pure JavaScript will be dog slow. If the performance of an application is only acceptable on the subset of browsers that implement asm.js, then the portability advantage over PNaCl isn't worth that much.
The only actual numbers I've seen, from Emscripten author Alon Zakai, show gzipped, minified Emscripten JS output as comparable in size to gzipped native code:
> If the performance of an application is only acceptable on the subset of browsers that implement asm.js then the portability advantage over PNaCL isn't worth that much.
The Unreal Engine demo has proven that, even for games, these limitations don't result in "dog slow" performance on browsers that don't support asm.js.
It's the same interface for the developer, but instead of being run directly on the CPU, it's run in a JavaScript virtual machine that interprets it or eventually compiles it to some kind of native code. That's very hard to overcome from a performance standpoint.
I don't know what the difference between running "directly on the CPU" and "compiling to some kind of native code" is. If you mean that the Web page delivers raw machine code to the browser, that describes NaCl, not PNaCl.
Regardless, it is incorrect that asm.js is first interpreted before being compiled. asm.js is compiled ahead of time exactly as PNaCl is.
This doesn't sound right. It's a subset of JavaScript; you are still shipping JavaScript to the browser, which is then parsed and then JITted in said browser. That's an extra step compared to bitcode, where you just JIT it as is. Granted, it's specifically crafted to be easier to optimize, but it is still JavaScript that you are sending to the browser.
I feel like someone is trying too hard to make JavaScript into something it isn't and shouldn't be. JavaScript does its thing pretty well; why is there no room for other tools in the toolbox?
The contents of Bitcode/Reader/ in LLVM are about 3,000 lines of code alone, and that does not include the definition of LLVM data structures, the validator, etc.
asm.js is translated into native code before it's run, exactly like PNaCl, which is delivered in a non-executable format that has to be translated to the native architecture before it's run. The two technologies are just different standards for the exact same underlying idea.
Except LLVM bitcode seems much more able to expose the capabilities of the underlying CPU compared to asm.js. I guess we'll just have to wait and see what the benchmarks say.
Unfortunately there are a number of downsides which will inhibit cross-browser adoption and standardisation. I would view NaCl and PNaCl less as being for the open web, and more as an attempt to position Chrome as a replacement for desktop OS / application platforms.
PPAPI and LLVM are not necessarily suitable for cross-browser adoption or standardisation. LLVM is a single implementation, and so, like SQLite, it is likely it would not be accepted. It is also not particularly architecture-independent nor language-agnostic. Low-latency polyglot JIT on LLVM is an open research area.
PPAPI is a single vendor's view of a suitable API. It focuses on a single vendor's needs. The track record isn't there.
I think in the long run we're all better off if they fight it out and see who wins on the technical merits in the end. In the meantime, just make sure the native code you're writing can target either/or and serve the right version based on browser detection.
I have really high hopes for PNaCl, but I'm skeptical that it will "get there". Getting other browser vendors to adopt it seems to be an uphill battle so far, and figuring out the tradeoffs between security and power will probably be a sticky point as well.
To me, the most interesting thing about PNaCl is that it'll be the first real test of LLVM as a portable assembly bitcode, rather than just a compiler IR. There are arguments for why LLVM may not be such a good idea, but given the momentum it's been making, I can only see good things happening for LLVM if more people use it.
This is true of LLVM bitcode in general, but PNaCl specifies an abstract machine that defines pointers as 4 bytes long. It also specifies little-endianness, etc.
Only WebGL is a standard, whereas PNaCl is a Google-only endeavor. If it doesn't catch on, they might abandon it just as they have abandoned other projects of theirs.
Whereas they cannot abandon WebGL if they want to have a standards-compliant browser.
At least Google released a final version of O3D that was implemented on top of WebGL, providing a transition path. When they dropped Gears, they said:
We realize there is not yet a simple, comprehensive way to take your Gears-enabled application and move it (and your entire userbase) over to a standards-based approach. We will continue to support Gears until such a migration is more feasible...[1]
How did that turn out?
They filed a Chromium ticket[2] that was moved to an html5rocks.com ticket[3] four months later that was closed as WONTFIX a month after that.
My issue is not that they discontinue projects. My issue is how they often go about it.
Well, in some parts of the world, "p" is slang for methamphetamine. I'm guessing here that the OP rather means urine. So, let us all boil our urine into salt!
I really wish that someone other than Google would take this up (whatever Google would need to do for that). C has of course been a great choice for performance-sensitive programs for years and it is quite a mature language with mature tools. It doesn't make sense for everyone to be forced to use JavaScript as the only option, forever, just because of convention.
Once it's actually possible to use PNaCl apps in web pages, I think it will see dramatically faster adoption. It's mostly a question at this point of whether they manage to ship it before someone else beats them to the punch (it looks like Chrome 30 is projected for August). By then, Mozilla could easily have picked up a lot of momentum with asm.js.
Problem is we'll only ever see PNaCl support in Chrome. The nature of NaCl and PNaCl is that they rely on single implementations by Google which are designed around Chrome's internals, and so they are unlikely to ever be adopted by other browser vendors, aside from perhaps Opera, since it will be Chromium-based.
Rust could target it, but not as easily as you might think. LLVM IR is not portable (for example, here's some of the code necessary to get the calling conventions right in the LLVM IR for x86-64 [1]). The runtime and scheduler would have to be ported to the Pepper API if you wanted stuff like GC, I/O, and threads (which most Rust programs will want).
Andrew Wilkins is working on a Go frontend that can also target PNaCl[1], but since it's a spare-time project and he just had a baby, it has slowed down. So if anyone wants to help him out...
I want to. I am developing a survey programming compiler. I am currently isolating the runtime (written in C++) and trying to compile it to JavaScript using Emscripten. But PNaCl could be another web delivery option.
Sounds kinda like Google's own implementation of WebCL? I guess Google is really trying to push the concept of utilizing the Chrome browser as an OS development environment instead of just a browser... seems cool.