
IMO the main selling point of WASM is its predictable performance.

It's no secret that you can write JS which will be JIT-ed into extremely efficient machine code. But how to write that JS is a "secret". You would need seriously advanced hackers who can study V8's assembly output and correlate it with the JS features used. And do it all the time, whenever someone changes that code. Or maybe even unrelated code changes will change the way V8 compiles that particular snippet. JS compilation is black magic.

On the other hand, writing C is a boring, solved problem. Compiling C to wasm works. It's predictably fast. You can use it for performance-critical code and it'll probably work without any adventures into V8 internals.



Making fast JS isn't some super-secret thing. Sure, there are weird cases like `x = y > z ? y : z` being 100x faster than `x = Math.max(y, z)`, but most performance improvements come from three simple rules:

1. Create an object with a fixed number of keys and NEVER add keys, remove keys, or change the data type of a key's value.

2. Make arrays of a set length and only put ONE data type inside. If that data type is an object or array, all the objects/arrays must have the same shape.

3. Functions must be monomorphic (always called with the same number of arguments, in the same order, of the same types).

Do this and your code will be very fast. Do something else and it will get progressively slower.

Running the profiler in Chrome or Firefox is very easy and it will show you which functions are using up most of your processing time. Focusing on applying these rules to just those functions will usually get you most of the way there.
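To make those rules concrete, here's a small sketch (names are illustrative; the "hidden class" behavior described in the comments is how V8 and similar engines track object shapes):

```javascript
// Rule 1: keep object shapes stable so the engine can reuse one hidden
// class instead of transitioning between several.

// Fast: every Point has the same keys, in the same order, same types.
function makePoint(x, y) {
  return { x: x, y: y };
}

// Slow: mutating the shape after construction forces hidden-class
// transitions, and the conditional key makes the shape polymorphic.
function makeSlowPoint(x, y) {
  const p = { x: x };
  p.y = y;          // shape change: {x} -> {x, y}
  if (x > 100) {
    p.big = true;   // some Points now have a third key
  }
  return p;
}

// Rule 2: typed arrays guarantee a fixed length and a single element
// type, so the engine never has to handle holes or mixed types.
const coords = new Float64Array(1000);
```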


... Until it's not. If your code needs to be predictably fast, it can't just be fast the majority of the time; it needs to be fast all the time. It's not about painting your updates in a tenth of a second, it's about painting your video game scene in under 16 ms, every time.

There's nothing you can do to guarantee your JS isn't passed inputs that trigger pathological cases. There's no linter that can guarantee that you're writing code in a way that is the fastest it can be (even with type checking!). Asking developers to be a human linter for the sake of consistent performance is a bad developer experience no matter your skill level.

WASM simply doesn't have this problem.


> There's nothing you can do to guarantee your JS isn't passed inputs that trigger pathological cases. ... WASM simply doesn't have this problem.

why doesn't this problem also apply to WASM?


How did you find out? Just by profiling or did you read some book/article about that topic?


i.e. basically use JavaScript as if it were a statically typed language.


Kinda. As long as you guard the boundaries of the parts that need performance, you can be a lot more flexible everywhere else. You see this in libraries like React where the external API is polymorphic, but they tend to pass through to calls that are monomorphic internally so performance is better.
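A small sketch of that boundary pattern (illustrative names, not React's actual internals): the public function accepts several input shapes, normalizes once at the edge, and the hot inner function only ever sees one shape, so its call sites stay monomorphic.

```javascript
// Hot core: only ever called with a Float64Array -> monomorphic.
function sumCore(arr) {
  let total = 0;
  for (let i = 0; i < arr.length; i++) total += arr[i];
  return total;
}

// Polymorphic boundary: accepts plain arrays or typed arrays,
// coerces once, then hands off to the monomorphic core.
function sum(input) {
  const arr = input instanceof Float64Array ? input : Float64Array.from(input);
  return sumCore(arr);
}
```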

I wish TypeScript helped more here. I'd prefer if it had a performance option that disallowed or at least warned about these kinds of things.


I wish we just ditched the JS legacy and had a properly statically typed language, with dynamism as a layer on top (like e.g. "dynamic" in C#), rather than underpinning anything.

Which is what wasm will, hopefully, give us in the long run. And ensure that said PL will have to remain competitive against the new contenders, since they can always replace it.


I think the reverse option is better. Add a `use types` directive to functions or modules. Inside those modules, use a very strict, structurally-typed ML-style type system that ensures good performance. If an untyped piece of code calls a typed one, coerce primitives and throw on objects that aren't a structural match.
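Since the `use types` directive is hypothetical, here's a hand-rolled sketch of the boundary behavior described above, in plain JS (all names invented): primitives get coerced, objects must be an exact structural match or we throw.

```javascript
// Untyped code calling into a "typed" region goes through this check.
function coerceToPoint(v) {
  if (typeof v !== "object" || v === null) {
    throw new TypeError("expected a Point object");
  }
  const ok = Object.keys(v).length === 2 &&
    typeof v.x === "number" && typeof v.y === "number";
  if (!ok) throw new TypeError("not structurally a { x: number, y: number }");
  return v;
}

// The "typed" core: only ever sees well-shaped Points and numbers.
function scaled(p, factor) {
  p = coerceToPoint(p);
  factor = Number(factor); // primitive coercion at the boundary
  return { x: p.x * factor, y: p.y * factor };
}
```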


> IMO the main selling point of WASM is its predictable performance.

Not once in my entire life have I heard anyone say this

Almost always it's either about it being faster or so they can use a language that isn't JS. And both of those have dubious value, because I've seen wasm be slower, and people complain a lot about the lack of tool support. Which is why the other day I claimed very few people use it. I've seen many try it once or twice and not want to go through it again.


The other thing is that the semantics of JS force some constraints on the JIT that make it harder to optimize code aggressively. Specifically, JIT compilers for JavaScript need to implement dynamic de-optimization for when optimized code paths turn out to be wrong (because JS can do things like overwrite a method, meaning inlined calls to the method are now invalid).

Afaict it's much easier to write a high performance JIT for WASM because those cases aren't possible. And consequently, it's easier for something compiling to WASM to get high performance out.
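A quick sketch of the constraint above: perfectly legal JS can replace a method after a hot loop has likely caused the JIT to optimize (and possibly inline) calls to it, so the engine must keep deoptimization machinery around. A WASM module can never do this to its own functions.

```javascript
class Counter {
  step() { this.n = (this.n || 0) + 1; }
}

const c = new Counter();
// Hot loop: step() is a prime candidate for inlining by the JIT.
for (let i = 0; i < 100000; i++) c.step();

// Legal JS: swap the method out from under the optimized code. Any
// compiled code that inlined the old step() must now be discarded.
Counter.prototype.step = function () { this.n += 2; };
c.step();
```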


Doesn’t it depend on the source language? If you compile JS to WASM, you’d have the same problem.

I imagine Python performance isn’t too great either.

Whereas if you compiled (or translated; not sure how comparable the WASM instruction set is) x86 machine code to WASM, it'd be a walk in the park.


Sure, but that's the problem of the hosted language's compiler and not the JIT. Then the optimization pass turns into compiling the redefined method to WASM, which can then be JIT-compiled and inlined, so deoptimization isn't as bad (at least only that compilation unit has to be ditched).


I agree with this. One strong use case I've seen is number crunching. Doing complex math via WASM is fast and predictable and supports a wider variety of float / integer types.

Another use case I've toyed with is date-time. Specifically, trying to figure out whether something like the Rust chrono crate is a better fit for crunching and calculating dates than something like date-fns or Luxon. Not sure about this one yet.


With Temporal on the horizon I wouldn't bother. Even if Temporal doesn't do what you want, wrapper libraries that do will be very lightweight in the crunching regard https://tc39.es/proposal-temporal/docs/cookbook.html#arithme...


This assumes a few things though, and it highlights another point I just realized I like about WASM: (most) modern browsers have asm.js / WASM support, and that support goes back much farther than Temporal. So with Temporal we have to consider the following:

1. Browser support - it's not there yet; you'd have to polyfill. A production-level polyfill is 16 KB, is still very nascent, and, on top of that, requires support for BigInt[0]. The polyfill that TC39 put out is explicitly marked as non-production-ready[1].

2. Polyfilling - as mentioned above, we have to deal with polyfilling the API, and that isn't a clear and easy story yet. WASM support goes back farther than this.

3. Size - it's entirely possible to get WASM builds under 16 KB, and the support is better, especially for operations on strings and numbers (dates fit this category well). The only complications I haven't quite solved yet are:

A) Can I validate that a WASM build will be under 16 KB? This is crucial. I'd even accept 20 KB because of the wider browser support[2].

B) Can I fall back to asm.js if needed? (There is a slim range of browsers that support asm.js but not WASM, mostly pre-Chromium Edge[3].)

C) Is it performant compared to something like Luxon or date-fns? WASM excels at string / numerical operations, so my sneaking suspicion is yes, at least for the WASM operations themselves. The complexity will be serializing the results to a JS Date instance; Luxon & the Intl API might be most useful here.

[0]: https://github.com/fullcalendar/temporal/blob/main/packages/...

[1]: https://github.com/tc39/proposal-temporal#polyfills

[2]: https://caniuse.com/wasm

[3]: https://caniuse.com/asmjs


Yeah, if you need something ready for prod by literally next month, Temporal definitely isn't it, as the API isn't locked yet.

Don't forget WASM doesn't provide direct access to any OS time APIs (timezone info, current time, regional time change modifications) so the solution will still basically boil down to "call Date() and polyfill a better library" except now you have extra code to ferry the data back and forth to do a few string and math ops. Unless the use case is processing very large datetime datasets in one call the JS<->WASM function call overhead for all of this will probably take the majority of the execution time.
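That boundary-crossing cost can be made concrete with a minimal hand-assembled wasm module (the byte layout below is a standard "add two i32s" example; I'm assuming a runtime with the `WebAssembly` JS API). Each call crosses the JS<->WASM boundary, so one call per datum pays the overhead per element, while batching work inside the module pays it once.

```javascript
// Minimal wasm binary exporting add(i32, i32) -> i32.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0/1, i32.add, end
]);
const { add } = new WebAssembly.Instance(new WebAssembly.Module(bytes)).exports;

// One boundary crossing per element: for an op this tiny, the call
// overhead dominates the actual arithmetic.
let total = 0;
for (let i = 0; i < 1000; i++) total = add(total, 1);
```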

Not to mention, after you get all of this solved, tested, and deployed, you know that as soon as Chrome starts shipping Temporal the cool custom solution becomes 50% slower for the average user, despite all the effort, because you didn't just use something like Luxon, which would automatically update to use Temporal on release. This may just be me being lazy though :p.


Even if Temporal ships tomorrow, it's a minimum of 5 years before most applications can take advantage of it, so you're either polyfilling with feature detection or waiting it out using libraries like `date-fns` or `luxon` to fill the gap.

Strings & numbers are WASM's strong point, so if you can pack the locale information tightly in a binary format, you might actually win out in the medium term. This shouldn't be a years-long project by any means. And frankly, with the way enterprises move, you'll always have some client (at least in my business) where I need to support some modern-ish browser that may not have Temporal, so if this is more performant (we process a lot of date-time datasets, so yes, that's partly why I'm looking at this), why not?

It could also be the wrong solution. I'll find out one way or another.


Just the IANA timezone database is ~400 KB gzipped for the data alone, and it's quite the project to parse correctly. Even with that, you'll still need to ferry Date(), Intl(), and friends into the WASM module to get the current info about the user. Only then can you actually start talking about the code, which competes with a 20 KB JS polyfill that started with all of the above as precompiled native code and data.

WASM's strong point isn't necessarily "strings and numbers"; it's running large amounts of compiled code on large amounts of data: video processing, PDF readers, video games. As an example, even computing a large image of the Mandelbrot fractal (a pure math workload) and then passing back an ArrayBuffer of the pixels was faster in JavaScript until WASM SIMD + threads finally landed and JavaScript's poor parallelism finally factored in. With a function call per pixel, JavaScript is still ahead of even WASM with SIMD due to the function-call overhead.

But all that said I think it's a really cool project to try and I hope you're able to build what you're seeking. If you do be sure to post it to HN so I can check out how you managed to pull it off :).



