Google Introduces Portable Native Client (techcrunch.com)
80 points by ahomescu1 on May 18, 2013 | 94 comments



I don't see the need for NaCl when we have asm.js and Emscripten. For fast, lightweight native sandboxing, we now have Docker. NaCl seemed like a great idea, but it has now been overtaken by other initiatives.


I don't see the need for asm.js and Emscripten. They seemed like a great idea, but they have now been overtaken by PNaCl.

We can all make unsubstantiated claims, and not all of us want to use the ghetto that is modern JavaScript.


I think one big difference is that if I run the GL demo in Firefox using asm.js right now, it works. If I run the same demo in Chrome... it works.

Now if I run NaCl stuff on ANYTHING other than Chrome, it does NOT work.

For me it's not about having a nice language, it's about being standardized.

Otherwise, I could just provide C binaries; they're faster than NaCl anyway.


This is by far the best argument, but how is the performance of asm.js in browsers other than Firefox?


> Now if I run NaCl stuff on ANYTHING other than Chrome, it does NOT work.

That's purely an adoption-by-browser problem.

> Otherwise, I could just provide C binaries; they're faster than NaCl anyway.

The whole point of both NaCl and asm.js was sandboxing.


> That's purely an adoption-by-browser problem.

... which is a pretty big problem, right? Especially since I don't think FF is likely to pick up NaCl unless/until hell freezes over.


I don't see how the point of asm.js is sandboxing. In fact, that's pretty much beside the point. Sandboxing in NaCl is just a necessity.


The toolchain is better for embedding existing native applications in the browser with PNaCl. There's a lot of code out there that isn't written in JavaScript, and it would be nice to apply the same sandboxing guarantees to it. Again, it's not that you can't compile C to asm.js (yes, Emscripten does work), but I think the PNaCl toolchain is just better.


Your comment can be reduced to just its final clause. Could you expand on it?


The worse tool will win.


I'd prefer to think that the most efficient tools win rather than the most advanced ones. Look at the hammer, axe, knife, and nail: they are pure utility, expressed as output divided by complexity of use.


Theoretically, PNaCl code should execute much faster than Emscripten-generated JavaScript. Of course we should do real benchmarks, YMMV and so on.


Should it? Both compile to intermediate representations which are distributed, then compiled to machine code by the browser. I see no reason PNaCl should be any faster.


Well, there are benchmarks in that talk: PNaCl is about 1.2x native speed, asm.js about 2x, single-threaded. With multiple threads PNaCl will smoke asm.js, since the latter has no threads at all.

A 2x performance difference means your phone lasts 3 hours instead of 1.5 hours when playing that game. This is a huge, huge difference. Battery-powered devices make performance extremely important.


NaCl also supports pthreads.


Asm.js doesn't have (as far as I know):

1. Good primitive data types (e.g. full-width machine integers; see the sketch below)

2. SIMD

3. Tail calls

4. Threads

5. Structs

6. Gotos

7. Memory management

Once you add these you lose the supposed advantages of asm.js in practice. You lose the light weight. You lose realistic backwards compatibility because programs using these features will run so slow on normal JS VMs that they're unusable, or, in the case of tail calls, the program will overflow the stack on a normal JS VM.
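To make point 1 concrete, here is a rough sketch (hand-written for illustration, not actual Emscripten output) of how a 64-bit add has to be emulated with pairs of 32-bit values when all you have are doubles and 32-bit coercions:

    // Add two 64-bit values given as (lo, hi) pairs of 32-bit ints.
    // Illustrative only: real compiled code would pass the result via the
    // heap or globals rather than returning an object.
    function add64(alo, ahi, blo, bhi) {
      var lo = (alo + blo) | 0;
      // Detect carry out of the low word by comparing as unsigned 32-bit values.
      var carry = ((lo >>> 0) < (alo >>> 0)) ? 1 : 0;
      var hi = (ahi + bhi + carry) | 0;
      return { lo: lo, hi: hi };
    }

Every 64-bit operation in the source expands into several JS operations like this, which is a big part of why such code crawls on a plain JS VM.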


> You lose the light weight.

No, an asm.js that supported all of these would still just be JavaScript, which is far more lightweight than PNaCl. Besides, these features dovetail with things we want for JavaScript anyway: SIMD, tail calls, and full-width machine integers are things all JavaScript programs need. By implementing them once and for all, we help both regular JavaScript and asm.js.

> You lose realistic backwards compatibility because programs using these features will run so slow on normal JS VMs that they're unusable

No, we are compiling programs that use these already, and they are not so slow as to be unusable. For example, Unreal Engine 3.


> No, an asm.js that supported all of these would still just be JavaScript, which is far more lightweight than PNaCl.

What makes asm.js "far more lightweight" than PNaCl? What does "more lightweight" even mean here, anyway?

> No, we are compiling programs that use these already, and they are not so slow as to be unusable. For example, Unreal Engine 3.

Do you mean the Epic Citadel demo, which runs fine on smartphones? So you made it run more or less smoothly on beefy x86 desktops; how much of an achievement is that? Is it that "OMG I run my game from the 80s inside the browser with HTML5 CANVAS!!!" thing again?


> What makes asm.js "far more lightweight" than PNaCl? What does "more lightweight" even mean here, anyway?

It's much simpler than LLVM and reuses the components of the JavaScript engine that already must exist in browsers.


> It's much simpler than LLVM and reuses the components of the JavaScript engine that already must exist in browsers.

But LLVM IR has complexity for a reason: you need to be able to generate efficient code from it for multiple architectures. As I mentioned elsewhere, "reusing JS components" is a very unfortunate party line, because it keeps us tied to this one JS forever and ever, and we should try to see beyond that.


Why do you want gotos? There is a small set of problems that are best solved by gotos, and even those can be solved with less error-prone methods.


Asm.js is a target language for compilers. That small set of problems that are best solved by gotos includes target languages for compilers.


The Relooper solves this problem for compilers, and it is extremely effective in practice. Besides, you can always solve the lack of goto with code duplication, with no loss in performance.
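For context, when structured loops and ifs can't be recovered, Emscripten's Relooper falls back to a label-variable pattern along these lines (hand-written sketch; doA/doB/doC and cond are hypothetical placeholders, not real compiler output):

    // Hypothetical "basic blocks", for illustration only.
    function doA() {}
    function doB() {}
    function doC() {}
    function cond() { return 0; }

    // goto-style control flow lowered to a label variable plus a switch in a loop.
    function run() {
      var label = 1;
      while (1) {
        switch (label | 0) {
          case 1: doA(); label = cond() ? 3 : 2; continue;
          case 2: doB(); label = 3; continue;
          case 3: doC(); return;
        }
      }
    }

This runs on any JS engine, but the extra dispatch on the label variable is exactly the kind of compromise being argued about here.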


Except code duplication destroys performance, because the instruction cache isn't infinite. I think we can agree that the Relooper isn't the optimal solution to this problem... it's a compromise that works pretty well, but it's a compromise nonetheless. It also means that you need to involve the LLVM toolchain anyway, just like in PNaCl. I don't know whether PNaCl or asm.js is the better solution, but what I do know is that both are compromises, and both have advantages and disadvantages.


What does it have then?


Explicit numeric types. Integers in particular.
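Concretely, asm.js marks those types with coercions rather than declarations. A minimal hand-written sketch (function bodies as they would appear inside an asm.js module):

    function addInts(x, y) {
      x = x | 0;           // the | 0 coercion declares x as a 32-bit integer
      y = y | 0;
      return (x + y) | 0;  // result coerced back to a 32-bit integer
    }

    function scale(a, b) {
      a = +a;              // the unary + declares a as a double
      b = +b;
      return +(a * b);
    }

The validator reads these coercions as type annotations, which is what lets the engine compile them down to unboxed integer and double operations.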


Oh no, please please NO.

asm.js is a nice hack and a commendable technical feat, but it's an absolutely horrible way to move forward. Do we really want to look back 10 years from now and see that we picked some crooked-n-twisted subset of JS to be the bytecode to which native code is compiled, just because "hey, it's JS and it already works"? Have we learned nothing from decades of horrible backwards-compatibility-laden platforms?

If one were to design a portable bytecode today, would they come up with asm.js? Not in a lifetime (or two). PNaCl is way, way closer to a sane design for such a bytecode. Yes, it has its problems, but these appear to be incremental and can be solved with an investment of time and effort (which Google seems willing to invest). In its _core_, PNaCl is the right approach to this problem. It uses a subset of the hugely successful and important LLVM IR to represent bytecode. IR that's way more suitable for the task than a subset of JS. And the sandboxing model PNaCl uses also makes way more sense than asm.js's use of typed arrays to represent the heap. Where is your memory model, asm.js? Do you think it's an easy problem to solve? I don't see threads with shared memory coming to asm.js in _years_, while PNaCl already has them. Say what you will, but threads are important. Games use them all over the place, and other applications too.

Remember, speed means battery life for phones and tablets (it's only a matter of time until PNaCl runs on Android; who doubts it?). So 2x native vs. 1.15x native is a huge difference.

Again, as a hacker I can see the coolness of asm.js, and kudos to Mozilla for creating it. It's a great interim approach to ensure that native code can truly run everywhere, in all browsers. But looking into the future, I hope it will die and PNaCl (or some derivative of PNaCl) will take over. Even today it lets you compile C++ code and run it at nearly native speed, with threads, on several architectures and OSes, in a secure way! Other browsers should definitely adopt it. Although I understand if Google doesn't worry much about that: Chrome is the most popular browser today, and its lead is growing. Lack of adoption by other browsers may end up being _their_ problem, not Google's.


> It uses a subset of the hugely successful and important LLVM IR to represent bytecode. IR that's way more suitable for the task than a subset of JS.

Speaking as someone who works with LLVM IR on a daily basis, I really dislike the idea of shipping compiler IR to users. Believe it or not, asm.js is actually significantly closer to the ideal bytecode that I would ship, except for surface syntax.

LLVM IR is a compiler IR: http://lists.cs.uiuc.edu/pipermail/llvmdev/2011-October/0437...

> And the sandboxing model PNaCl uses also makes way more sense than asm.js's use of typed arrays to represent the heap. Where is your memory model, asm.js? Do you think it's an easy problem to solve?

You could implement asm.js in the exact same way. There's nothing stopping you from doing that.

> I don't see threads with shared memory coming to asm.js in _years_, while PNaCl already has them.

Years? Not according to any discussions I've been privy to.

> So 2x native vs. 1.15x native is a huge difference.

I have not verified the PNaCl performance numbers, but I would be surprised if they counted compilation time. The asm.js performance benchmarks do count compilation time. So I suspect the apples-to-apples gap is actually much smaller.


>It's much simpler than LLVM and reuses the components of the JavaScript engine that already must exist in browsers.

But Google says they use a simplified subset of LLVM IR. In addition, as far as I understand, PNaCl has its own target triple which pins down things like endianness, pointer size, etc., which makes the IR much more portable.

> You could implement asm.js in the exact same way. There's nothing stopping you from doing that.

Yes, I don't argue that you can't keep evolving JS to be closer to a real bytecode; I'm just arguing that, in my opinion, this is not the right way to go. And moreover, I think it will fail. People tried to retrofit the JVM into a bytecode for C/C++ too, and where have those efforts led?

In contrast, LLVM IR /is already/ an IR for C/C++. In its full form it's a compiler IR, but making it more stable and portable seems like a much smaller task than retrofitting all the features needed for native execution onto JS.

> I have not verified the PNaCl performance numbers, but I would be surprised if they counted compilation time. The asm.js performance benchmarks do count compilation time. So I suspect the apples-to-apples gap is actually much smaller.

Compilation time matters only the first time the app is run, Google said. If you have a game or a large application, you run it more than once. Since their compilation generates a native binary, they just need to load it the next time, so there's zero compilation time. I don't know how their first-time compilation will compare to asm.js, but all subsequent runs are likely better.


I don't see how Docker fits in there. Don't get me wrong, it's a great piece of technology, but it's very platform-specific and has nothing to do with the web as a platform in its current state.


At one time I was looking at NaCl as a possible solution for lightweight server-side sandboxing.


That's an interesting use case. And you chose Docker instead?


Asm.js can't do nearly as much code optimization as PNaCl.


This seems like a strange assertion to me. Why not? You can do amazing things with static analysis and runtime optimization, and that operates on the AST / some intermediate state, not the source code. If anything, higher-level languages (i.e., JS as opposed to bytecode) lend themselves to more automatic optimization than lower-level ones, because intent is encoded better.


Asm.js is lower-level than LLVM IR. In LLVM IR the data structures and memory allocation are still visible; in asm.js it's just opaque code working on a giant heap array. For example, think about how you'd do pointer analysis on asm.js...
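To make that concrete: a field access that LLVM IR still sees as a typed struct member ends up in asm.js as raw index arithmetic on one big typed array. A hand-written sketch (the offset and names are invented for illustration):

    var buffer = new ArrayBuffer(64 * 1024);
    var HEAP32 = new Int32Array(buffer);

    // Roughly: p->count = p->count + 1;  where 'count' sits at byte offset 8.
    function bumpCount(p) {
      p = p | 0;
      HEAP32[(p + 8) >> 2] = ((HEAP32[(p + 8) >> 2] | 0) + 1) | 0;
    }

From the outside this is just integer arithmetic plus loads and stores into one array; the layout and allocation information that pointer analysis would need is gone.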


The optimizations are already done ahead of time, by Emscripten. There is no need to do them on the client, and doing so just increases compilation time for no reason.

In fact, shipping the types to the client is harmful, because it increases the download size. This is one of the ways asm.js is, IMHO, a better bytecode than LLVM IR…


> In fact, shipping the types to the client is harmful, because it increases the download size

To be fair, it's probably negligible compared to the download size impact of representing bytecode in a subset of JavaScript.


NaCl still outperforms asm.js.


As a game developer I was pretty excited about Native Client when I first heard about it in 2010, and PNaCl is definitely nice.

However, nowadays asm.js + Emscripten seems like the right direction, and Google should adopt it.


PNaCl seems like a much saner stack; turning JS into assembly never made sense to me if you want to get the most out of the hardware (with the resulting power savings and/or performance gains).


To be fair, asm.js isn't really JS. It's more like a low-level DSL. The fact that it happens to be a subset of JS is just a convenience for backwards compatibility.

At any rate, from the perspective of the developer, it doesn't really make a difference. Either way, developers are interacting with a C/C++ compiler, not directly with the browser engine.
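For anyone who hasn't looked at it, the "DSL" shape is just an outer function with a "use asm" pragma taking stdlib, foreign-function, and heap parameters. A minimal hand-written example (not tied to any particular compiler's output):

    function MiniModule(stdlib, foreign, heap) {
      "use asm";
      var HEAP32 = new stdlib.Int32Array(heap);

      function store(p, v) {
        p = p | 0;            // byte address into the heap
        v = v | 0;
        HEAP32[p >> 2] = v;   // word-aligned store
      }

      return { store: store };
    }

    // Linked and called like ordinary JavaScript (in a browser, the global
    // object serves as stdlib):
    var m = MiniModule(window, {}, new ArrayBuffer(64 * 1024));
    m.store(0, 42);

If the engine doesn't recognize "use asm", the same code simply runs as normal JavaScript, which is the backwards-compatibility story.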


I fear that the limitation of having to be a subset of JS must lead to performance trade-offs, through the lack of 64-bit integer types or single-precision floating-point / SIMD vector arithmetic, for instance.


This is not a fundamental limitation of asm.js. For example, there is already an extension introduced for compiled code that can be optionally used for improved performance: Math.imul. Similar extensions can be introduced later.
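Math.imul is a good illustration of why such extensions exist: a 32-bit multiply can't always be recovered from double arithmetic, because the intermediate product can exceed 2^53 and lose its low bits. A small sketch:

    var a = 0x12345678 | 0;
    var b = 0x9abcdef0 | 0;

    // (a * b) | 0 goes through double arithmetic, so for large operands the
    // low 32 bits can come back wrong after rounding.
    var viaDoubles = (a * b) | 0;

    // Math.imul performs a true 32-bit integer multiply.
    var exact = Math.imul(a, b);

Engines without the intrinsic need a slower fallback that splits the operands into 16-bit halves, which is the optional-performance trade-off mentioned above.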


With the trade-off that the code size is greatly increased and that performance when falling back to pure JavaScript will be dog slow. If the performance of an application is only acceptable on the subset of browsers that implement asm.js, then the portability advantage over PNaCl isn't worth that much.


The only actual numbers I've seen, from Emscripten author Alon Zakai, show gzipped, minified Emscripten JS output as comparable in size to gzipped native code:

http://mozakai.blogspot.com/2011/11/code-size-when-compiling...


> With the trade-off that the code size is greatly increased

http://mozakai.blogspot.com/2011/11/code-size-when-compiling...

> If the performance of an application is only acceptable on the subset of browsers that implement asm.js, then the portability advantage over PNaCl isn't worth that much.

The Unreal Engine demo has proven that, even for games, these limitations don't result in "dog slow" performance on browsers that don't support asm.js.


The Unreal Engine demo does pretty much all of its work on the GPU.


It's the same interface for the developer, but instead of being run directly on the CPU, it's run in a JavaScript virtual machine that interprets it or eventually compiles it to some kind of native code. That's very hard to overcome from a performance standpoint.


I don't know what the difference between running "directly on the CPU" and "compiling to some kind of native code" is. If you mean that the Web page delivers raw machine code to the browser, that describes NaCl, not PNaCl.

Regardless, it is incorrect that asm.js is first interpreted before being compiled. asm.js is compiled ahead of time exactly as PNaCl is.


This doesn't sound right. It's a subset of JavaScript; you are still shipping JavaScript to the browser... which is then parsed and JIT-compiled in said browser. That's an extra step compared with bitcode, where you just JIT it as is. Granted, it's specifically crafted to be easier to optimize, but it is still JavaScript that you are sending to the browser.

I feel like someone is trying too hard to make JavaScript into something it isn't and shouldn't be. JavaScript does its thing pretty well; why is there no room for other tools in the toolbox?


I doubt parsing and validating ASCII asm.js is much more work than parsing and validating binary LLVM bitcode.


I call Bullshit.


https://github.com/dherman/asm.js/tree/master/lib is about 1,500 lines of code.

The contents of Bitcode/Reader/ in LLVM are about 3,000 lines of code alone, and that does not include the definition of LLVM data structures, the validator, etc.


asm.js is translated into native code before it's run, exactly like PNaCl, which is also delivered in a non-executable format that has to be translated to the native architecture before it's run. The two technologies are just different standards for the same underlying idea.


Except LLVM bitcode seems much more able to expose the capabilities of the underlying CPU than asm.js. I guess we'll just have to wait and see what the benchmarks say.


PNaCl doesn't "run directly on the CPU" any more than asm.js does. They are both intermediate formats that must be compiled locally.


No it's not. asm.js, like PNaCl, uses ahead-of-time compilation.


> At any rate, from the perspective of the developer, it doesn't really make a difference.

Not true at all. Now we don't have to post-process the LLVM output into JavaScript and deal with the pain of debugging completely unreadable bitcode.


Unfortunately there are a number of downsides that will inhibit cross-browser adoption and standardisation. I would view NaCl and PNaCl less as something for the open web, and more as an attempt to position Chrome as a replacement for desktop OS / application platforms.

PPAPI and LLVM are not necessarily suitable for cross-browser adoption or standardisation. LLVM is a single implementation, and so, like SQLite, it's likely it would not be accepted. It is also not particularly architecture-independent nor language-agnostic. Low-latency polyglot JIT is an open research area on LLVM.

PPAPI is a single vendor's view of a suitable API. It focuses on a single vendor's needs. The track record isn't there.


I think in the long run we're all better off if they fight it out and see who wins on the technical merits in the end. In the meantime, just make sure the native code you're writing can target either one, and serve the right version based on browser detection.


They can't fight it out on technical merits if nobody will adopt PNaCl due to some (real or perceived) Google lockin.


I have really high hopes for PNaCl, but I'm skeptical that it will "get there". Getting other browser vendors to adopt it seems to be an uphill battle so far, and figuring out the trade-offs between security and power will probably be a sticking point as well.


But it can solve a lot of problems for iOS development if you don't want to go through the store.


Yeah when Safari supports it...


Just an interesting tidbit I'd like to point out: native client was not mentioned in the Chrome portion of the I/O keynote at all, while asm.js was:

"And in the last month alone, we’ve gotten over 2.4x speed boost running this asm.js code in V8, and there’s tons more optimization to come."

Not only that, but it turns out that the mentioned improvements had nothing to do with asm.js at all: https://twitter.com/mraleph/status/334719725617696768


We were going to show a couple of cool Native Client demos during the keynote but we had to cut them at the last minute due to time constraints.


To me, the most interesting thing about PNaCl is that it'll be the first real test of LLVM as a portable assembly bitcode, rather than just a compiler IR. There are arguments for why LLVM may not be such a good idea, but given the momentum it's been making, I can only see good things happening for LLVM if more people use it.


Right. I had always thought that LLVM bitcode was unsuitable as a portable representation because it inevitably encoded architecture-specific details.

Consider code like this:

    int lsize(void) { return sizeof(void *); }
How would this be compiled portably, so that it returns 4 or 8 as appropriate? What would the LLVM bitcode look like?


This is true for LLVM bitcode in general, but PNaCl specifies an abstract machine that defines pointers as 4 bytes long. It also specifies little-endian byte order, etc.

See http://www.chromium.org/nativeclient/pnacl/bitcode-abi#TOC-D...


The second real test. Android Renderscript uses LLVM for portable bytecode already.


Actually, emscripten does the same thing: LLVM->JS.


> PNaCl, which Google says we should pronounce as "pinnacle"

Or as I will call it, "P-salt".

As in, take it with a grain of salt that Google will support this technology 18-24 months from now after it fails to gain much traction.


If you think this kind of technology compares to Google Reader, you are just confused. WebGL would be a better comparison.


Only WebGL is a standard, whereas PNaCl is a Google-only endeavor. If it doesn't catch on, they might abandon it just as they have abandoned other things of theirs.

Whereas they cannot abandon WebGL if they want to have a standards compliant browser.


Actually, Google doesn't seem to have much of a problem with abandoning standards lately.


Perhaps Google's O3D, their 3D graphics implementation in the browser, which you haven't heard of. Why? Because when WebGL beat it, it was dropped.


At least Google released a final version of O3D that was an implementation on top of WebGL, providing a transition path. When they dropped Gears, they said:

We realize there is not yet a simple, comprehensive way to take your Gears-enabled application and move it (and your entire userbase) over to a standards-based approach. We will continue to support Gears until such a migration is more feasible...[1]

How did that turn out?

They filed a Chromium ticket[2] that was moved to an html5rocks.com ticket[3] four months later that was closed as WONTFIX a month after that.

My issue is not that they discontinue projects. My issue is how they often go about it.

[1] http://gearsblog.blogspot.com/2010/02/hello-html5.html [2] http://code.google.com/p/chromium/issues/detail?id=37180 [3] http://code.google.com/p/html5rocks/issues/detail?id=73


Concurred; thusly, it is the technology known as P Salt: the crusty residue left when drying out P.


Phosphor?


Well, in some parts of the world, "p" is slang for methamphetamine. I'm guessing here that the OP rather means urine. So, let us all boil our urine into salt!


I really wish that someone other than Google would take this up (whatever Google would need to do for that). C has of course been a great choice for performance-sensitive programs for years and it is quite a mature language with mature tools. It doesn't make sense for everyone to be forced to use JavaScript as the only option, forever, just because of convention.


Once it's actually possible to use PNaCl apps in web pages, I think it will see dramatically faster adoption. It's mostly a question at this point of whether they manage to ship it before someone else beats them to the punch (it looks like Chrome 30 is projected for August). By then Mozilla could easily have picked up a lot of momentum with asm.js.


The problem is we'll only ever see PNaCl support in Chrome. The nature of NaCl and PNaCl is that they rely on single implementations by Google which are designed around Chrome's internals, and so they are unlikely to ever be adopted by other browser vendors, aside from perhaps Opera, since it will be Chromium-based.


How much of a subset of LLVM is it? Can other LLVM frontends target it easily (e.g. Rust or http://terralang.org/)?


Rust could target it, but not as easily as you might think. LLVM IR is not portable (for example, here's some of the code necessary to get the calling conventions right in the LLVM IR for x86-64 [1]). The runtime and scheduler would have to be ported to the Pepper API if you wanted stuff like GC, I/O, and threads (which most Rust programs will want).

[1]: https://github.com/mozilla/rust/blob/master/src/librustc/mid...


Possible still seems preferable to impossible, which is the situation with Rust on asm.js (correct me if I'm wrong…)


asm.js Rust is no harder than PNaCl Rust.


Andrew Wilkins is working on a Go frontend that can also target PNaCl [1], but since it's a spare-time project and he just had a baby, it has slowed down. So if anyone wants to help him out...

[1] http://blog.awilkins.id.au/2012/12/go-in-browser-llgo-does-p...


It's like ActiveX... of the future!


This seems like a niche technology. I doubt that many C/C++ developers want to write front end code for the web.


I want to. I am developing a survey programming compiler. I am currently isolating the runtime (written in C++) and trying to compile it to JavaScript using Emscripten. But PNaCl could be another web delivery option.


Sounds kinda like Google's own implementation of WebCL? I guess Google is really trying to push the concept of utilizing the Chrome browser as an OS development environment instead of just a browser... seems cool.


WebCL runs on the GPU while PNaCl runs on the CPU.


Not at all.



