The Baseline Interpreter: A Faster JavaScript Interpreter in Firefox 70 (hacks.mozilla.org)
735 points by edmorley on Aug 30, 2019 | 267 comments



The Mozilla tech blogs are always a good read.

Informative and easy to digest. I can only think of one other company blogging with similar consistency/quality: Cloudflare.

Firefox performance has seen tremendous gains since their Project Quantum efforts.

It mostly feels on par with Chrome for me pretty much everywhere.

There is one glaring exception though: a single company where I have problems with FF across multiple apps - Google.

Gmail was still dog-slow the last time I tried, Youtube can send my fan into a frenzy and Docs is also regularly problematic. (on Linux)


I have the opposite experience with YouTube.

With an AMD GPU on Linux, Chrome churns, drops frames, and stutters horribly on even 1080p videos at 30fps. Firefox, meanwhile, handles 1440p60 video with no problem. Even two such videos simultaneously, on separate monitors.

Chrome became inexplicably better for parts of early 2019 (I did not note which versions), but starting 2 or 3 months ago Chrome returned to being unusable for YouTube beyond 720p videos.

Before anyone asks, this is on an AMD RX 580 with Mesa 18.0.


Hey, I'm on the chromium videostack team, would you mind filing a bug report for what you see happening with youtube? https://bugs.chromium.org/p/chromium/issues/entry


Fixing this would be like eight steps forward: https://bugs.chromium.org/p/chromium/issues/detail?id=137247

The annoying part is that hardware acceleration works on chrome OS, so we know the support is buried in there somewhere.


Google closed that as Won't Fix; as an organization they are very opinionated and steadfast about it.

I doubt they will take Linux outside their own walled gardens seriously; Google has already shown its indifference to us Linux users.


They opened it as Won't Fix, which appears to be their way of saying that they'll accept work but it has a priority of less than zero.


Well, if you're willing to investigate: using ungoogled-chromium resulted in tremendous gains on YouTube, to the point where I can speed a video up to 2x without the fan ever running amok against my ears.


I will take a look this weekend and if I can properly explain and document the issue in detail, I will definitely give you everything I can.

I should add that for an identical Linux distro-and-version, and identical browser versions, my laptop (I forget the SKU, but Intel Broadwell U-series GPU) is about equal in performance between Chrome and Firefox. That is to say, 1440p60 runs smooth on each, so they're good enough that there's no issue from which to discern a difference (which is damned impressive for a 4-year old mobile chip, imo).


Asking someone to do the work of filing a bug to improve a product that competes with a non-profit's product is in itself a bit arrogant, in my opinion. On top of that, not bothering to search for the issue that was already marked as Won't Fix (https://bugs.chromium.org/p/chromium/issues/detail?id=137247) is not the right thing to do. I would have appreciated it if you had created the issue with the details provided and shared the link here for the OP to fill in more details.


When Chrome slows down for me, it normally means I need to clear browsing history, cookies, cache, logins, etc. I have modern hardware and see the same issue on both Linux and Windows.

But using RES on Reddit and clicking "show images", you can tell Chrome isn't as fast as other browsers; Firefox and Brave are visually faster and smoother when scrolling with pics. Hell, even Firefox Focus is faster on Android. But I like Chrome's add-ons and sync with my mobile, so it's my daily driver. Normally I never notice, but a quick clean if it starts acting up fixes it for me.


Sync with my mobile and all my other devices (4 other computers) is one reason why I use Firefox: as far as I know, it's the only browser that lets me run my own sync server and offers an open-source implementation of it.

https://jeena.net/firefox-sync-15


Oh neat, when they first switched to sync that wasn't the case (the previous system did allow it over WebDAV and such), but because sync wasn't just a data dump with WebDAV it wasn't possible at launch to run your own server. I'll have to look into that again.


As far back as I remember, the code behind Sync was on GitHub, and I do mean the backend. It probably still is. But I let Mozilla host it because they encrypt it; if I lose my password, I lose my synced items.


It was there I think, but wasn't immediately possible to use a different server as they hadn't enabled or built that into the browser yet. They were still working out issues and I never bothered to check up on it because it seemed like the encryption was secure enough for me.


Nothing stressed me out more than the time I genuinely forgot my FF password for a few days!


If it's for a few days, how did you un-forget?


Hey, I remember the time when Chrome was soooo fast and we needed to clean up all the stuff to get Firefox to run adequately :)


have you double checked whether firefox is set to use your gpu or not, for video?


Neither Firefox nor Chrome under Linux is capable of using the GPU for video decoding.

Only some distribution-specific Chromium builds do.


I seem to remember that some GPU acceleration under Linux was disabled voluntarily, on both Firefox and Chrome, because they actually ran much slower than CPU, probably due to bad drivers.


It is slower not due to drivers, but due to how the browsers[1] compose the final webpage. They do that on the CPU, not on the GPU as on other operating systems, so hardware video decoding would involve moving the compressed video to the GPU, waiting for it to decode, moving the decoded video back to RAM, doing the composition, and then moving the result back to the GPU for display.

For hardware decoding to make sense, GPU composition is needed. Hopefully WebRender will bring that to Linux too.

[1] Firefox, actually; Chrome does use GPU composition. Chrome for ChromeOS even uses the same driver stack (libva + intel-vaapi-driver) as desktop Linux for video decoding; Google is just not willing to support it on the desktop. That's what some distributions enable in their Chromium builds.


GPU layer composition was available in Firefox for ages, long before WebRender. It just was not enabled on Linux by default due to fear of bad drivers. But anyone who heard about it or asked about performance on various forums knows about layers.acceleration.force-enabled in about:config.


That's GPU acceleration of rendering - specifically, for Firefox I think compositing is disabled by default on Linux. It's not the same as GPU-accelerated video decoding, which isn't implemented in either browser, except via a set of patches for Chromium.


Check about:gpu - sounds like hardware acceleration isn't working.


It seems intentionally disabled. The about:gpu page reports the following:

>Accelerated video decode is unavailable on Linux: [137247](https://bugs.chromium.org/p/chromium/issues/detail?id=137247)

>Disabled Features: accelerated_video_decode


Just video or everything?


At least on MacOS firefox plays nice with Gsuite. Youtube can be hit or miss at high framerates every now and then.


YouTube is faster on Firefox on my laptop for some reason. I get zero dropped frames on Firefox too, compared to Chrome/Chromium.

Still waiting on Firefox to support hardware-accelerated decoding of videos though.


> Still waiting on Firefox to support hardware-accelerated decoding of videos though.

Wait, Firefox doesn't do this already? Didn't know.


It does on Windows and Mac. On Linux it does not but hopefully WebRender should pave the way to enabling it (see https://bugzilla.mozilla.org/show_bug.cgi?id=1210726).


Something is still extremely funky with video on macOS Firefox. Playing MP4s on YouTube still uses 200-400% more power than on Safari, and that's with transparent window disabled. Hell, Twitch turns my 2015 MBP into a vacuum cleaner but Safari barely breaks a sweat.


Have you tried using a Nightly build of Firefox? There's recently been some work to use CoreAnimation on MacOS to reduce power consumption that hasn't made it to the Beta or Release channels yet. (See: https://bugzilla.mozilla.org/show_bug.cgi?id=1429522)


There's also [0] which investigates streamed video including YouTube specifically.

While the performance for VP9 is good on all major platforms, even including Windows on ARM64, and only somewhat degraded for high-resolution H264 videos, the results for Mac are bad across the board. It's not even possible to play back 480p videos without a lot of dropped frames.

In 15 seconds of playback, a 480p VP9 video produces around 5 dropped frames, and 720p and 1080p videos already ~45. It's even worse with 4K@60fps videos, which drop more than 400 frames even at 1.0x playback speed!

[0] https://bugzilla.mozilla.org/show_bug.cgi?id=1576107


Forgot to mention that I use GNU/Linux. My bad.


I'm fairly certain it does it already.


It says it's using the GPU, but look at the CPU time required to play a 1080p video within Firefox vs. playing the same video with mpv:

www.mpv.io : a lightweight player that, among other things, can transparently stream any URL understood by youtube-dl.

Firefox uses 20x as much CPU as mpv. For purposes of comparison, this is like Toyota selling a car that goes just as fast as your Ford, so long as you are OK with only getting 4 MPG.

There is an add-on if you prefer to right-click on a URL and open it in mpv, or with an add-on like Tridactyl you can do the same with your keyboard. mpv also lets you speed up and slow down the playback, which is nice for speakers who talk too slowly and take a while to get to the point.


I use mpv on my laptop to get more out of the battery and to stop the fan annoying me.


It uses the GPU but doesn't use it to actually decode video on linux.

https://bugzilla.mozilla.org/show_bug.cgi?id=1210726

Bug opened 4 years ago. Hundreds of millions in revenue and can't task one person for a summer to hook into the support that already exists.


> Hundreds of millions in revenue

and probably < 0.5% of it due to its linux userbase?


It's not as if they have a huge userbase these days, such that they could afford to lose any more of it.


What would you consider a huge userbase?

Firefox has 230 million monthly users, using FF for an average of over 5 hours per day.

https://data.firefox.com/dashboard/user-activity


Given that Linux has 2% desktop market share (let's be generous) and Firefox has 10% (let's be generous the same way), that would mean that Linux has potential to be 20% of Firefox users. Is that something you are going to ignore?

Many executives would sell their families for less opportunity.


The big difference is that those executives would be getting money out of those 20% customer base.

I seriously doubt that those 20% Firefox users even consider doing a $1 donation.


Firefox makes money from the install base via the search engine default setting. Fairly sure more so than from donations.


Might be; are there any official numbers?

After all, plenty of people also switch to other search engines.


Are you sure? I bet 99% just leave it configured to use google.


Linux distros tend to ship with Firefox, and Linux users are more concerned with open source. It would be strange if the percentage of Mozilla's userbase using Linux was smaller than the general percentage of PC users running Linux.

It's probably more fair to go on profit, not revenue.

100 million in revenue * 2.18% is about 2.2 million. A decrease of 10% of that is 220k. If engineers at Mozilla cost 150k a year, then 3 months of work is approx. 38k.

This would put the break-even point at losing about 2% of its Linux users by not spending the money.
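(For concreteness, the same back-of-the-envelope math as a runnable sketch - every figure is the rough assumption from the comment above, not an actual Mozilla number:)

  // Back-of-the-envelope only; all inputs are the assumptions stated above.
  const revenue = 100e6;        // assumed annual revenue, $
  const linuxShare = 0.0218;    // assumed Linux share of the user base
  const engineerYear = 150e3;   // assumed cost of one engineer per year, $

  const linuxRevenue = revenue * linuxShare;       // ~2.18M
  const projectCost = engineerYear * (3 / 12);     // ~37.5k for 3 months
  const breakEven = projectCost / linuxRevenue;    // ~0.017, i.e. roughly 2%

  console.log({ linuxRevenue, projectCost, breakEven });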


there are a lot of assumptions in your reply.

> 3 months of work

according to whom? this will almost certainly need more than a single engineer's eyes on it. and even if it is 3 months, that's 3 months they don't spend working on features for 98% of their userbase.

i want this to happen as much as anyone, but i just don't see the compelling math here. it's a chicken-egg problem (if FF could use VA API [1], then more might switch to linux).

personally, i'm happy that Panfrost recently got mainlined.

[1] https://en.wikipedia.org/wiki/Video_Acceleration_API


> Gmail was still dog-slow the last time I tried, Youtube can send my fan into a frenzy and Docs is also regularly problematic.

It seems to work fine on my system running Ubuntu 18.04 with ublock origin installed. Which OS are you using?


Anecdotally with Firefox on macOS, Gmail is by far the worst offender. Switching inboxes/labels takes several seconds in FF and is near instant in Chrome. Scrolling performance in FF is also much worse.


In my opinion, macOS is definitely FF's weakest desktop platform. I've personally experienced terrible performance, out-of-place UI for the platform (e.g. default dropdowns), and just a generally less polished experience compared to Safari, Chrome, and Chromium derivatives.

I still use it as my default browser on macOS though because it does excel in other areas.


I have zero problems with FF MacOS. Might be something in your setup?


Same for me. I tried the latest Nightly a couple days ago to test the CoreAnimation work, and after 1 hour of use my Slack tab became so unbearably sluggish that each keypress would take 1 second to render.

No extensions, no GIFs, no heavy tabs open, perhaps one or two open tabs.

No slowdowns at all after 8 hours on Slack w/ Safari or Brave. Firefox is definitely the worst browser on macOS, compared to the Windows version where it's absolutely perfect and always quick.


I know about the CoreAnimation work. Hadn't heard about this accumulated sluggishness that you describe. Have you tried safe mode? A "Refresh"?


It was an installation from scratch, no leftover files or configs. And I actually did a manual refresh in the preferences.


Interesting, I never had such problems. I do webdev and Safari always seems to be the worst. But I don't use Gmail.


Yea YouTube performance is nuts! It seems to be exacerbated when playing in full screen or @2x speed as well.

Google has been getting better though with supporting FF. They now support Hangouts on FF and Gmail is actually faster for me in FF than it is in Chrome.


On Edge the slowness increases to the point that loading a ZX Spectrum game from tape feels psychologically faster.

A mute point however now with EdgeChrome.


Not to nitpick, but it's "a moot point".

Fun fact, the word "moot" has the same origin as "meet". A "moot point" was originally "something that has to be discussed and decided at a meeting" (like a town hall or similar). Thus there was no sense in discussing it, since it was up to the meeting to decide. Also, scouts still call their congregations "a moot".


Thanks for the correction.

<small rant mode>

Nowadays I have two sources of errors when writing.

My own lack of knowledge, or incorrect knowledge, and then the stupid autocorrections that send things down the drain because I'm mostly typing across multiple languages. </small rant mode>

So whatever caused it, thanks again.


I wanted to keep using FF despite being a gmail user so much so that I switched to Thunderbird for my emails.

I guess Mozilla wins this way.


I did the same and then ditched Gmail for a paid email service + my own domain.

Thunderbird is underrated.


Same here. Gmail is only an easy way to aggregate my emails - I used Yahoo for that before, and might eventually use something else.

Back home, it's Thunderbird, and the occasional donation.


I thought Thunderbird was not under development anymore?


It's been a weird story. Here's how I remember things going:

1. Mozilla Corp. dropped Thunderbird as one of their projects, and told the community they would have to figure out how to move ahead with the project if they wanted to. News of "Thunderbird is dead" spread out after this announcement.

2. It turned out there was plenty of interest in the project among the public. Donations came in, and the project was kept alive. They established the "Thunderbird Council" as their governance body.

3. The project still needed some legal home and assistance. The Mozilla Foundation (the non-profit) offered to serve as that. The council studied several options and decided to accept this offer.

And this is how they got here: https://www.thunderbird.net/en-US/about/


They released a new version on the 27th of August[1]. It looks like that was the first release since August 2018[2].

[1]: https://www.thunderbird.net/en-US/thunderbird/68.0/releaseno...

[2]: https://www.thunderbird.net/en-US/thunderbird/60.0/releaseno...


I believe Thunderbird is now ESR. So it gets a major version then security updates for a year until the next one.


What is the meaning of ESR? Because that sounds like life support to the layman.


Nah. It means Extended Support Release. After Firefox moved to the quicker release train, larger organizations were annoyed because they like stability, in order to test stuff prior to release and make sure extensions and everything remain the same. So like Ubuntu (IIRC) and some others, they created Firefox ESR, which gets security updates and only major updates ~once a year. I think Thunderbird actually just tracks Firefox ESR.


Do you want your mail reader constantly tracking the latest web tech?

The layman hates updates.


I solve this by using Firefox for all web browsing, and Chrome for gmail/gapps/anything by google.

It sucks that we have to do that, but I can't entirely blame them for making their own products work better on their own browser.


> It sucks that we have to do that, but I can't entirely blame them for making their own products work better on their own browser.

You mean like MS for a long time did with MSIE?

Are we really back in the 90s, back to “This page works best in $one_specific_browser”?

This is not the future I was hoping for... On the flip side I can use Linux for almost anything, so i guess it’s not all bad.


> Are we really back in the 90s, back to “This page works best in $one_specific_browser”?

Do webdevs really check for Chrome AND Firefox compatibility? The devs I know consider the job done as long as their site works with Chrome.


> Do webdevs really check for Chrome AND Firefox compatibility? The devs I know consider the job done as long as their site works with Chrome.

That would technically only make them Chrome-devs.


This... is more fraught than you might guess. Really fiddly details can make things look goofy if you don't check.


Yes, because Google is a cool company, do no evil and such.


Well... consider you are back in the 90s... would you think, in general, that this is "the future you were hoping for..."?


I was considering a future of native applications, connected via network protocols, with the browser being used for hypertext documents.


Rambox was a FOSS desktop app based on Electron to house all your Chrome apps in tabs.

Rambox went proprietary, but was forked into Hamsket. Might be easier than what you're using now.

https://github.com/TheGoddessInari/hamsket


Just like you can't blame Microsoft for making ActiveX work best on Windows.


If they did it while talking about open standards, then we would.


If it's any consolation, Gmail is dog-slow on my Samsung Chromebook 3. I switched to "Basic HTML" mode to make it bearable.


Speaking of good tech blogs, Netflix has an excellent one as well.


Do they have a doc detailing the full architectural design? I feel like I'm walking into a movie during the last 15 minutes.


I suggest checking out the Dolphin Emulator tech blogs as well, they're also consistently great in quality. https://dolphin-emu.org/blog/


Gmail is really crazy slow on Firefox; it's such a deal breaker. It regularly takes 30s to load the web application, versus closer to 2-5s on Edge with the same setup (same ad blockers, etc.).


That has to be something wrong with your PC; it only takes a few seconds to load from zero, even on my 6-year-old laptop. Switching between folders is under a second, as is opening an email. Firefox (latest Ubuntu).


I suspect this is because chrome supports WebSQL while Firefox does not.


I suspect you have suspected incorrectly: https://developers.google.com/web/tools/lighthouse/audits/we...


I realize Mozilla spearheaded an effort to remove WebSQL from the standards. But Chrome still supports it, and Gmail uses it there for full-text search.

https://caniuse.com/#search=websql

https://nolanlawson.com/tag/websql/


I don't know about Gmail or Docs, but IME (also on linux) Youtube doesn't work in any browser[0]; I'd recommend http://youtube-dl.org/ and a real video player.

0: Even with remote code execution vulnerabilities enabled it shows a corporate spam clip instead of the video I'm trying to watch.


It is mostly on par with Chrome now pretty much everywhere. Browsing most websites should not trigger lag in any browser. But if we look at benchmarks: https://www.phoronix.com/scan.php?page=news_item&px=Firefox-...


Baseline interpreter is a good example of doing things that help most Web pages instead of benchmarks. One of the major things that ARES-6 and other benchmarks test is whether generators are JIT'd, for example. This doesn't really help Web pages, as it's a rarely used feature at present and even rarer in hot loops.

SpiderMonkey should get around to jitting generators, but I have a hard time blaming the team for focusing on real-world improvements first.


Pcwalton! While I agree that most synthetic benchmarks are not representative of the real world, there is a benchmark suite for the real world.

It benchmarks the most-used Alexa websites. It's named TP5 and TP6; Mozilla uses it internally (e.g. to measure progress on Stylo). Why are the real-world results (TP5/6) not public? Phoronix has never used it, and Mozilla has never publicly published the benchmarks. I guess they favor Chromium, but I've never seen all the results, only the few Mozillians posted on Bugzilla. Let the world know, publish the results, even if they favor Chromium. Anyway, the mainstream will never be affected by that public information (most people have difficulty distinguishing a search engine from the concept of a browser, let alone talking about benchmarks).


Did you look at the raptor dashboards? Let me help you: https://treeherder.mozilla.org/perf.html#/graphs?timerange=2...


Thank you! But without being able to compare it with chromium the results are meaningless to my brain.


You're free to port it to Chromium and run it yourself.


JSC's async/await is built on the generator mechanism, so a high-performance generator implementation is critical for all the web code using async functions. Not sure how SpiderMonkey is doing.


As the gp said, async/await is rarely used in hot loops, so it would probably never get jit-ed in a real webpage.


I don't agree, given that async iterators can be iterated in loops. But I'm OK if the SpiderMonkey folks think so ;)


Async is mainly useful for asynchronous operations, like network requests or file system access. If used in this context, the loop cannot be "hot"…

Maybe there are people (ab)using the async stuff to do something else, but at least it's not really common.


I'm sure SM will get around to jitting that stuff at some point. All I'm saying is that Baseline Interpreter gives users bigger wins right now.


Amended to "feels on par".


Does anyone know if those same phoronix tests have been run against the new Firefox 70 Nightly or Developer Edition apps?

I'd be curious to see how 68 versus 69 versus 70 compare.


I don't think it has yet been done. But as you probably noticed he tested with webrender too


I mean, WebRender and spidermonkey are two separate things.


YouTube freezes so damn much. Very annoying. But probably good since I watch too much damn YouTube haha.


> Gmail was still dog-slow the last time I tried

I use the HTML version instead to deal with this issue


My YouTube experience is that both play at the same frame rate, but Firefox waits a couple of seconds before I'm able to click play or when I skip to a part of the video, whereas Chrome is instant in both cases, which can become annoying.


Confluence is also pretty bad on firefox for whatever reason, so I'm pretty sure this is going to be a problem wherever chrome is the dominant browser in the dev culture there.


The whole Atlassian suite is somehow dog-slow on Firefox on Mac. It's unusable.


I still need to either use chrome on android or explicitly navigate to the mobile page to get the title text on xkcd.com ... android firefox cuts the title text off with no apparent way to read the full one :/


Just use the mobile version, works fine on android ff


Worth noting that this basically means that all of the JS engines are converging on what JavaScriptCore pioneered:

- More than just two tiers (JSC has four, and I guess Moz has four too now; I guess it's just a matter of time before V8 follows).

- Bottom tiers must include a fast interpreter that uses JIT ABI and collects types (Ignition and this Moz interpreter smells a lot like JSC's LLInt, which pioneered exactly this for JS).

It's weird that they kept the C++ interpreter. But not too weird, if the IC logic in the new interpreter is costly. In the LLInt, the IC/type logic is either a win (ICs are always a win) or neutral (value profiling and case flag profiling costs nothing in LLInt).

Also worth noting that this architecture - a JIT ABI interpreter that collects types as a bottom tier - is older than any JS engine. I learned it from HotSpot, and I guess that design was based on a Strongtalk VM.

This is the current state of the art of JSC's bottom interpreter tier FWIW: https://webkit.org/blog/9329/a-new-bytecode-format-for-javas...
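(Not code from any of the engines named above - just a toy sketch, in JS, of the idea of a bottom-tier interpreter that profiles operand types as it runs, so the higher tiers have something to specialize on:)

  // Toy bottom-tier interpreter that records operand types as it executes.
  // Conceptual only; real engines do this in hand-written assembly or C++.
  function run(bytecode) {
    const stack = [];
    const typeProfile = bytecode.map(() => new Set()); // one record per op
    for (let pc = 0; pc < bytecode.length; pc++) {
      const op = bytecode[pc];
      if (op.kind === "push") {
        stack.push(op.value);
      } else if (op.kind === "add") {
        const b = stack.pop(), a = stack.pop();
        typeProfile[pc].add(typeof a).add(typeof b);   // value profiling
        stack.push(a + b);
      }
    }
    return { result: stack.pop(), typeProfile };       // profile feeds the JIT tiers
  }

  run([{ kind: "push", value: 1 }, { kind: "push", value: 2.5 }, { kind: "add" }]);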


And also the way Java and .NET are converging, by having both AOT/JIT on the box, alongside JIT caches.

There is also the whole ART reboot, where Google made the mistake of not doing AOT the way Microsoft did for the Windows Store - compiling on the device instead of on the store's beefier cluster. So they ended up with a mix of a fast interpreter written in straight assembly, a first-tier JIT with PGO, and an AOT compiler that takes the PGO profiles and really optimizes the critical paths while the device is idle. The cycle starts over every time the application is updated or the PGO information gets invalidated.

Likewise .NET Native had a few issues with reflection, so .NET Core 3.0 is bringing mixed AOT/JIT into the mix, with further improvements planned for .NET Core 5.

And in the Java world, after a couple of decades with separate AOT (commercial) and JIT toolchains, the more widespread JDKs are also shipping both.

So everything new is old again, given the AOT/JIT experiments from Smalltalk, Eiffel, Oberon and Lisp. :)


Hi @pizlonator, the link you posted mentions that direct threading isn't needed anymore.

I would have expected the Meltdown/Spectre mitigations to penalize indirect branches, making direct threading relevant again. From that post, I suppose it isn't the case, but I'd love to know more about it.


JavaScriptCore uses indirect branches just as it did before Spectre.

Also, direct threading still means doing the same number of indirect branches as before, unless you implemented it as a tree of conditional branches. So direct threading doesn't change the Spectre situation one way or another.


Are there any solid benchmarks comparing JSC vs V8?


JSC has had a set of large metabenchmarks like JetStream2 for a while. We have been the fastest for a while but V8 is gaining, so the current advantage is significant but not earth shattering.

The last time V8 had their own benchmark, they retired it right after we beat them on it and made a post saying that benchmarks are bad because people cheat on them.

Around that time I stopped seeing google.com claim that I should switch to Chrome “because it’s faster”.

So basically JSC is fast enough to have made V8 ragequit benchmarking. Hope that answers your question!


JSC is barely ever mentioned in the v8 team. They gave up wanting to be benchmark kings in the interest of having decent performance in Android markets (low-end and high).

Comments around cheating are likely aimed at OEMS who ship Chrome derived browsers and spin up the clock frequency when they detect certain benchmarks are running. These look great in online reviews.


I've always wished I could actually benefit from JSC without dropping a couple thousand dollars on a macbook and having to put up with the world's worst keyboard, because your browser team kicks ass :)

At least you're pressuring Mozilla and Google to step up their game on runtimes!


it's not Mac exclusive. WebKitGTK uses JSC too


> ragequit

I've increased my vocabulary today.

In topic: I always wondered why chrome stopped that marketing spiel.


I found https://webkit.org/blog/8685/introducing-the-jetstream-2-ben..., with a graph at the end, but of course that is written by Webkit (JSC) developers. Then there is https://johnresig.com/blog/javascript-performance-rundown/ from 2008. https://www.railsmine.net/2017/12/browser-benchmark-safari-1... from 2017, similar results to the JetStream 2 post.


Something I'm curious about is why JS developers don't have the option to send along some of this information themselves - type information, JS bytecode, etc. - given that we often have it on hand already (TypeScript) or could integrate it into our build process (webpack). Obviously plain JS still needs to work without all that, but it could be a compelling point of optimization for large-scale apps. Perhaps JS bytecode just isn't standardized across browsers?


JS tried to add (optional) type hints in the ES4 standard that was never adopted (outside of tangential things like ActionScript 3).

It would be great if TypeScript hints could pass right along to the JITs as useful optimization factors, but it currently sounds like TC39 would prefer not to recreate the disasters of ES4 and is staying out of type hints for the foreseeable future.

(Well-typed code should prevent most JIT bailouts, at least. Typescript linters could possibly give even better "shape" warnings than they currently do, such as catching the issue the React team recently found of bailouts in the V8 engine due to "shapes" being built with "small integers" being reallocated to doubles at runtime. However, such lint warnings would probably be JIT engine specific, and maybe premature optimization in 90%+ of usages.)
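(A minimal illustration of the kind of bailout described above, assuming V8-style field representations; the object and function here are made up for illustration:)

  // The "small integer reallocated to double" hazard, simplified.
  const point = { x: 0, y: 0 };     // fields start out as small integers

  function move(p, dx, dy) {
    p.x += dx;
    p.y += dy;
  }

  move(point, 1, 1);      // hot path can specialize on integer-typed fields
  move(point, 0.5, 0.5);  // storing doubles into the same fields forces a
                          // representation change and can deopt that fast path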


Yeah it's probably not worth it to add types to JS the language, but what if you could ship standard metadata files similar to source-maps that only included type information, which browsers could leverage to speed up compilation?


In the long term, I hope that at least a compromise like the Python annotation approach could be reached (standardize the parsing, but ignore the semantics). Typescript type annotations share a surface level syntax with them in some ways, and also with ES4's type annotation attempt.

"Type maps" are an interesting idea. You could probably piggy back on/boostrap from WASM types.


Sounds good in theory, but it seems like rich territory for mismatched types in the source and map files.


I don't see why that would happen; you wouldn't be writing these by hand


Adding types would probably slow down the code, as some optimizations done by the engine wouldn't be allowed. Kind of like when you use bitwise operations for optimization, only to find out they actually slow down the code.

We could, however, add interfaces and optional types as syntactic sugar for runtime checks.


Which is kind of ironic given WebIDL.


One way to see it's more subtle than meets the eye: suppose the website defines a function F which is annotated as taking a string argument. The website itself only ever calls F with strings. But suppose I go to the website, open the console, and punch in F(1). Of course, this sort of thing could be guarded against, but the point is just that it would introduce more complexity than initially meets the eye.
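(A toy sketch of the guard being described - F, F_fastPath, and F_generic are invented names, not any engine's real mechanism:)

  // The site only ever calls F with strings, but the console (or anything else)
  // can still call F(1), so a compiled fast path needs a guard and a fallback.
  function F_fastPath(s) {            // specialized assuming a string
    return s.length;
  }
  function F_generic(v) {             // unspecialized fallback
    return String(v).length;
  }
  function F(arg) {
    return typeof arg === "string"    // the check the annotation cannot remove
      ? F_fastPath(arg)
      : F_generic(arg);               // "bailout" to the slow path
  }

  F("hello"); // fast path
  F(1);       // someone punches this into the console; the guard catches it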


Engines already have to do this - check the type of an argument before calling the compiled version of the function. What the OP is suggesting is simply prefilling the type and instruction caches with information that the build system was already able to glean.


And that would probably amount to a "bailout". I'm only talking about providing hints that can be used to skip parts of the described process, not getting rid of it altogether.


i always wondered why it wasn't possible to use a js app in "record" mode and then export any witnessed type & profiling information from the JIT and deliver it alongside the source (like a sourcemap).


That might be considered an example of profile-guided optimization: https://en.wikipedia.org/wiki/Profile-guided_optimization


yes, i was going to say PGO :)



That's not the same thing. asm.js is a precursor to WASM; WASM is a bytecode format that doesn't include garbage collection, while asm.js is a statically-typed subset of JavaScript that engines can compile ahead of time. The JS bytecode the article talks about is generated from JS and interpreted by C++.

In other words, it isn't practical to compile JS to WASM, and especially not to asm.js. The browser's JIT compilation of JS targets something completely different.


I was mostly replying to this: "JS developers don't have the option to send along some of this information themselves; type information, JS bytecode". In JS you can send some type information with asm.js hints (or TypedArrays), but yes, you don't send it directly to the C++ interpreter - you send it to the JS engine, which presumably passes it further down.
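(For reference, asm.js "hints" are ordinary JS coercions that double as type declarations - a minimal sketch of the style, not a validated asm.js module:)

  function MyAsmModule(stdlib, foreign, heap) {
    "use asm";                        // opt-in marker engines can recognize
    var HEAP32 = new stdlib.Int32Array(heap);

    function add(x, y) {
      x = x | 0;                      // "| 0" both coerces and declares int32
      y = y | 0;
      return (x + y) | 0;             // result is pinned to int32 as well
    }

    function get(i) {
      i = i | 0;
      return HEAP32[i >> 2] | 0;      // typed-array load, int32 result
    }

    return { add: add, get: get };
  }

Because every value's type is pinned by those coercions, an engine that recognizes the prologue can compile the whole module ahead of time.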


The point is, if you care about performance use asm.js. If you don’t then just use whatever standard path browsers provide and any optimization they do is gravy.


Deno[0], a project from the original author of Node, will natively run TypeScript, so I'm hoping it will have some of those sorts of optimizations.

[0]https://github.com/denoland/deno


Unless he's making his own JS engine, it won't.


It doesn't natively run typescript, it just includes a webpacked version of the typescript transpiler.


> Baseline JIT compilation is fast, but modern web applications like Google Docs or Gmail execute so much JavaScript code that we could spend quite some time in the Baseline compiler, compiling thousands of functions.

Good read. The above mentioned web apps are my biggest pain point at the moment with Firefox. I still use Gmail in Firefox, but it's much slower than running it on chromium. I just accept that, since Google makes all of that.

However, with Google Docs I actually switch over to a chromium browser, even the new Edge beta, because it is simply too slow in Firefox.

It looks like they've taken notice of that and are tackling it head on. Excellent work!


One thing that wasn't clear to me from this post is why they still need the C++ interpreter at all. I assume there is some non-obvious cost that makes it not worth it for the coldest of code, but I'm having a hard time guessing what it may be.


> One thing that wasn't clear to me from this post is why they still need the C++ interpreter at all.

A lot of code on the web is very cold (executed once or twice) and for such code the Baseline Interpreter would add some overhead (requires allocating a JitScript storing the IC data for example and we would then spend more time in IC code as well). It's possible this could be mitigated or fixed with additional work, but we need to keep the C++ interpreter anyway (not all platforms have a JIT backend and it's useful for differential testing) so it's not a priority right now.


Can you run the Baseline Interpreter with the costly parts disabled? Even if the code then ran at approximately the same speed (and cost) as the C++ interpreter, you'd save maintaining a bunch of code. I assume the cost of implementing the missing backends would be offset by the maintenance savings in the long term.


Makes perfect sense, thanks for explaining it for me.


Basically two reasons:

A lot of code is run once, so just compiling it is potentially more expensive than just interpreting it, even before running the generated code; and even when it isn't, compilation + execution still ends up slower than simply interpreting.

There are also benefits for performance. If you haven't run any of the code yet (through the interpreter), the baseline portion of the JIT has no knowledge of the code that is running, and so has to make a bunch of static guesses and (if you look at the old baseline JIT in JSC) include a lot of code for the various common cases that may happen. E.g. in the JSC case, every arithmetic op includes the entire code for both integer and floating-point arithmetic inline. That bites you in a few ways: guessing type info means you get poorly chosen branch ordering (e.g. you put integer logic in the middle of the "fast" path for something that is floating point), and you simply generate more code, which itself takes more time to produce and also results in increased icache pressure, which hurts runtime performance.

I haven't read the entire article yet, but the JSC interpreter is written in a pseudo-assembly, so it can do an on-stack replacement to switch into jitted code more or less anywhere during execution, as the wonders of assembly mean you can guarantee identical layout.
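(A toy sketch of the tiering decision being described; the thresholds are invented for illustration, not SpiderMonkey's or JSC's real numbers:)

  // Toy tiering heuristic: interpret cold code, only pay for compilation
  // once code proves warm.
  const WARMUP_FOR_BASELINE = 100;
  const WARMUP_FOR_OPTIMIZING = 10000;

  function chooseTier(callCount, hasTypeProfile) {
    if (callCount >= WARMUP_FOR_OPTIMIZING && hasTypeProfile) {
      return "optimizing JIT";   // specialize using the collected profile
    }
    if (callCount >= WARMUP_FOR_BASELINE) {
      return "baseline JIT";     // cheap compile: generic code plus ICs
    }
    return "interpreter";        // run-once/cold code: no compile cost at all
  }

  console.log(chooseTier(1, false));     // "interpreter"
  console.log(chooseTier(500, false));   // "baseline JIT"
  console.log(chooseTier(20000, true));  // "optimizing JIT"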


As mentioned in the blog post, the Baseline Interpreter uses the JitScript structure containing type and IC information.

Allocating this additional struct is a waste of memory for a function that is only executed a handful of times, without ever actually using that information for optimizing.


I'm waiting to hear about when they start to rewrite their JS interpreter in Rust. It will make things kind of interesting, especially if it becomes a capable stand-alone JS interpreter.


To be fair, I think this is an exercise in vanity and, why not say it, stupidity.

JavaScript VMs these days are fairly complex beasts with a huge amount of man-hours and expertise behind them. Also, technically this would be an inferior solution compared to what V8 did with its interpreter (and what SpiderMonkey is doing now, as presented in the article): generating pure assembly through the TurboFan backend (the same technique used by LuaJIT before with great success).

By the way, let's not forget we are talking about Mozilla here, where a couple more misguided projects could mean the abrupt end of an organization that is struggling to keep its market share in the browser wars.

There are a couple of things that could benefit from being recoded in Rust, but modern super-powered JavaScript VMs are hardly one of them.

Of course, if on the sidelines someone crafts a JS VM in Rust and after, IDK, 4 years you have a mature enough JIT VM, maybe there will be a reason to move. But Mozilla itself investing its unsustainable, limited, on-the-brink-of-extinction funds in something that will require a lot of money and in the end get you basically the same performance as the old C++ JIT is a pretty bad move.


To clarify: for the moment, there is no project to rewrite SpiderMonkey in Rust. However, any bit of SpiderMonkey that needs to be rewritten will be rewritten in Rust if possible, and new features that can be written in Rust will generally be written in Rust.

Note that not everything can be easily rewritten in Rust due to the existence of lots of C++ code using macros and templates. Getting such to interact with any other language than C++ is painful.

(I'm in the SpiderMonkey team)


I understand how this might be cool and exciting for the Rust community, and even for the engineering team, who might be more inclined to work in a Rust codebase.

I worry that every risk taken to improve Rust might have a severe cost in alienating the Firefox userbase, and with that Mozilla would get into troubled waters.

But as you have clarified, inner pieces are being replaced slowly, which is not as bad as rewriting the whole thing from scratch in Rust while leaving the current C++ codebase without any improvement.

Anyway, from the engineering perspective, kudos to you guys for being able to make the browser work with all this complication going on under the hood.

Hope it all works out despite the complications, challenges, and risks taken, as we all need good players like Mozilla to lead us to a better future, as we rely more and more on technology to improve our daily lives.


There's already a JIT for JS coded in Rust:

https://blog.mozilla.org/javascript/2017/10/20/holyjit-a-new...

The overall goal is to have Firefox recoded in Rust; why would a JavaScript interpreter be left out?

https://wiki.mozilla.org/Oxidation

That's the whole point of Oxidation.


> There's already a JIT for JS coded in Rust:

To clarify: it's a proof of concept, not a full JIT for JS. If you want to follow that project, it lives here: https://github.com/nbp/holyjit.


Of course they can do it. I just don't think it's a clever move for them.

Firefox is having a hard time competing with Chrome, and they took too long pursuing other goals; just to recall one key feature they took a long time to implement: making Firefox a multi-process browser like Chrome.

I mean, you would be ditching all this effort, rewriting it in Rust, spending key resources only to reach parity with the same browser you already had in C++, spending years and losing more market share while Chrome spends that time on optimization and features.

For the record, I think Rust is what will save Mozilla, but it must reinvent itself: do the best they can with the Firefox codebase they have, and use Rust for new projects.

Like creating cloud and backend system software, reinventing itself the way Ubuntu is doing right now.

They have to be very strategic and pragmatic right now. Two big moonshots, Firefox OS and Rust, and only one of them has paid back their time and resources.

If they want to bet on more moonshots, great, but they must do it in "blue ocean" spaces, being more deliberate about where they use Rust, where Rust can shine.

I just think that from a strategic (and even technical) point of view, they are spending precious resources while further eroding their browser market share.


I see; I think we agree with each other, we're just looking at things from different angles. Thanks for the clarification. I do want Mozilla to succeed; we need other similar orgs to come out and be willing to compete with tech giants like Google as well - maybe Apache, but they're too busy being a giant corporate project dumping ground, I suppose.


The only thing that will save Mozilla is Google coming under legal scrutiny for their privacy abuses or monopoly power.

Otherwise Mozilla lives because Google allows it and they allow it because it's mostly irrelevant and it's making itself even more irrelevant through stupidity like the MrRobot mini-scandal.


> JavaScript VMs these days are fairly complex beasts with a huge amount of man-hours and expertise behind them.

You know what else is a MASSIVE time and money sink? C/C++.

Combine two big, nasty, complex things and you get something even bigger.

----

I'm not saying to rewrite blindly. That is known to be:

> an exercise in vanity and, why not say it, stupidity.

But this must take TIME into account. Right now is not the time for a rewrite, but it's better to have it planned.

----

I'm not being naive in saying this. This is my life (as a rewriter of codebases for several years and many projects). I'm moving a medium-sized ERP project to Rust - in parts, too, focusing on data exchange first. But if we don't do it, the complexity behind it will kill us.


On the topic of rewrites, I think it depends a lot on the context. Some things might make perfect sense to rewrite, but I still think that a heavily optimized, sophisticated, and complex C++ codebase with a huge amount of man-hours behind it, like the SpiderMonkey VM, is not one of those things.

There's no clear gain in performance unless a better algorithm is being implemented, no clear gain in productivity, as C++ and Rust are both equally complex beasts, and not much gain in security if it's already implemented in "modern C++" and uses smart pointers (including in APIs) and moves correctly.

The other gains in security/safety that you might get with Rust are maybe lost if you consider that the C++ codebase has been used and tested in every possible scenario, so a lot of bugs have been corrected, and that, given this is a JIT VM, you would have to use Rust's unsafe{} in a lot of places. I still think, given the context, that it is not a smart thing to do; it would be more like a trophy for Rust, without a pragmatic and realistic approach to the matter or a proper focus on results.

Not much gain: you will end up with a worse and buggier JIT, and will have to spend years only to reach parity with the VM you already had in the first place.


That is correct, in the SHORT term. The thing with Rust is that it provides safety guarantees FOREVER.

It's like null: you can write null-safe code in any language... as long as your developers become "compilers" and by discipline make sure EVERY LINE is null-safe. But when your language does it for you, it's a problem that gets solved.

C++ demands a lot of attention to details that are unnecessary in Rust. That is where the gain comes from.


Depends, using WinUI, Qt or Gtkmm is way faster than Gtk-rs, regarding productivity.


Couldn’t you say much the same thing about Firefox as a whole?


I think so, but with the JavaScript VM I think this is clearer, more to the point.

As far as I know, Firefox's C++ web rendering engine was pretty old, coded in an old C++ style. So in this particular case a rewrite has some leverage.

The thing is, in the end, I don't know whether this part-C++, part-Rust codebase won't start to create more problems than it solves.

Should the layout and rendering engine be rewritten in C++ or Rust? They decided to go with Rust, and now they will be forced to continue replacing C++ codebases with Rust, for consistency, workforce, etc.

In the end they will fight a lot to recode the browser, and meanwhile the competition can optimize, innovate, and create more features.

Rust will gain a lot for sure, but not Firefox, not Mozilla. And with this, Mozilla will be in a place where it needs to bet everything on Rust as its only chance of survival.

End users don't care what technology goes into their browsers; they care about perception, and it's not clear that a browser will work better just because it's in Rust as compared to C++, which is already a high-performance language. (Maybe they will spend fewer resources on testing or fixing bugs, but that's pretty much it.)

Is Firefox much better now than it was before? Yes, but my point is that it could have become this better version of itself sooner if they had not used Firefox to fight Rust's crusade for relevancy.


A project is done when no one is willing to work on it anymore.

We are all employees, not owners. Decisions that look good on paper won't stop people from finding a new job if the consequences of that decision are more than you want to deal with. And won't the new person want to rewrite it anyway? Now we have a rewrite being done by a person who has no idea why all the weird code is so weird.

So the trick isn't rewrite or no rewrite, the trick is how do we make the rewrite give us things we couldn't have without it. Employee retention is important but as you say that's not enough for the board or the users.

And rewriting in phases avoids the worst aspects of rewrites, which are sometimes undertaken in bad faith (intentionally or unintentionally). Some groups I've seen seem to enjoy the fact that you get to write a lot of code without thinking too hard, and management doesn't pester you about deadlines too much for the first six months. Those are not good reasons for a rewrite, and they usually end pretty badly. But by then the developers have been at the company long enough that the duration looks good on their resume.


I've switched to Firefox 3 times in the last 2 years, and each time I've been forced back to Chrome by horrendously (2-3x worse) bad battery life on my MacBook, caused by some variant of this bug which Mozilla seems determined not to fix. https://bugzilla.mozilla.org/show_bug.cgi?id=1404042


What makes you think they're determined not to fix it? A commit that significantly improves the situation landed in nightly a week ago.

https://bugzilla.mozilla.org/show_bug.cgi?id=1429522


the fact that they've not fixed it for literally years, despite having hundreds or thousands of reports?

I'll believe they have a fix when I see it with my own eyes, which probably won't be for another 6 months because switching my browser workflow isn't something I want to do every couple of weeks to try out a new nightly with big promises.


The idea of generating an interpreter from the compiler is really neat.

What I missed in this article is why the Baseline Interpreter is faster than the C++ interpreter. The code snippet for the load zero instruction looks like what a compiler should produce for a straightforward C++ switch case for that instruction. Except that the code uses a push instruction to store the value directly on the system stack, whereas the C++ interpreter would presumably use a more general store instruction into an array (in the heap, maybe) treated as the interpreter stack.

Is that the difference, or am I missing something else?


> Is that the difference, or am I missing something else?

That's part of it. The generated interpreter should be a bit faster for simple instructions because of the reason you give (also: things like debugger breakpoints have more overhead in the C++ Interpreter).

However, the bigger speedups are because the generated interpreter can use Inline Caches like the Baseline JIT. The C++ Interpreter does not have ICs.
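(For readers unfamiliar with inline caches, a toy sketch of the idea - not SpiderMonkey's implementation; a prototype check stands in for the real shape/hidden-class check an engine would do:)

  // Toy monomorphic inline cache for one property-load site.
  function makeLoadSite(propertyName) {
    let cachedShape = null;
    let cachedGetter = null;

    return function load(obj) {
      const shape = Object.getPrototypeOf(obj);  // stand-in for a shape check
      if (shape === cachedShape) {
        return cachedGetter(obj);                // IC hit: skip the generic lookup
      }
      cachedShape = shape;                       // IC miss: do the slow lookup once,
      cachedGetter = (o) => o[propertyName];     // then cache a specialized path
      return cachedGetter(obj);
    };
  }

  const loadX = makeLoadSite("x");
  loadX({ x: 1 });  // miss: fills the cache
  loadX({ x: 2 });  // hit: same "shape", cheap path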


Ah, yes, inline caching probably explains it. Thanks!


The article called it a threaded interpreter; the answer to this question explains what that usually means and why it is faster: https://stackoverflow.com/questions/3848343/decode-and-dispa...

I found the explanation of what they're doing a little unclear though and it seems they might not be doing exactly what is described in the answer above.


The dispatch part of the code snippet looks to me like what you would also get with computed gotos, something like goto instruction_labels[++pc]. So that shouldn't be the difference; compilers can compile this well.

As for whether this is "threaded", and exactly what kind of threading it is, there is widespread confusion and abuse of terminology. https://en.wikipedia.org/wiki/Threaded_code


I'm using Firefox for pretty much everything including Google apps, but the one thing I do wish Firefox had is support for casting via Chromecast. In my experience, Chromecast support has been sorta spotty even on Chromium-based browsers like Vivaldi or Brave, forcing me to keep Chrome proper installed on my PC for when I want to cast a YouTube video onto a bigger screen. Is this bit of functionality too proprietary or entrenched enough that it cannot be ported to any non-Chromium browser?


You can use VLC to Chromecast, it is a little bit fiddly but it works most of the time.


That sounds like the opposite of what graal+truffle is doing. With truffle you write an interpreter and graal specializes it into a JIT.


Would be nice to see a benchmark comparing V8 vs. the new SpiderMonkey!


Equally interesting would be a side-by-side architecture comparison.


Awesome. I love the direction Firefox is heading now that Chrome is a pain to use.


I was perfectly fine with Chrome's usability until I started getting a "Hold Command + Q to quit" prompt on my Mac. Before then, I hadn't even considered the possibility that an application could block me from quickly and easily quitting out of it. Now I have to hold the key combo or double tap it to quit Chrome, and it is the only application I have to do that for. It's so annoying.


To Disable: Uncheck "Warn Before Quitting" on the "Chrome" menu - in between the Apple menu and "File" menu.


it doesn't help when you're testing via automated software - test runners will open up chromes, then often fail and leave them open, and the 'warn before quitting' is always defaulted to on.


Pretty sure you can choose a profile with specific settings if launching from selenium or cli.


I'm surprised that functionality isn't possible to disable, but tbh I really like it. Too many times a finger slip has turned Cmd+W into Cmd+Q and suddenly I've lost a whole pile of tab state, possibly even half-submitted forms. Yuck.


Surely that can be restored? In firefox, if you accidentally close a window, there’s a “Recently closed windows” thing in the history menu you can use that brings back all the tabs.


It can, and things are better than ever as far as preserving (most) form state, sessions, and scroll positions, but it's not perfect. Just as one example, pages opened on one network (eg at work) won't be able to reload elsewhere unless I take the extra step of connecting to the VPN. This is not the end of the world, but it's an annoying detour when I really just wanted to close that one tab.

And it's so rare that I shut down the whole browser anyway, so it makes sense to me to make it hard to do accidentally.


That is true, however this results in all your tabs getting refreshed. For e.g. ProtonMail, this means you have to login (get out your 2FA device etc) all over again.


I worked around the accidental-Command-Q problem in Firefox by setting up a custom keyboard shortcut using the paid app Keyboard Maestro (https://www.keyboardmaestro.com/). I have a macro “Confirm Command-Q to Quit” that intercepts the ⌘Q keystroke, only in Firefox, and instead shows a floating dialog titled “Really quit Firefox?”. The Cancel button in the dialog stops the macro, and the Quit button continues the macro to the next step, which is a Quit Firefox instruction.

Another possible workaround is to go to System Preferences > Keyboard > Shortcuts > App Shortcuts and create a new shortcut. You can specify that in the app Firefox, the menu item “Quit Firefox” should have the shortcut ⌥⌘Q. Then a normal ⌘Q should do nothing.


It's possible to disable as mentioned in another comment.


This is extra weird because browsers have become really good at preserving state between closing and restarting.

The real answer to potentially destructive moves is not to ask the user twice, but to let the user easily undo.


It's also to keep Chrome apps and notifications working. I use Hangouts a lot, though I generally deny notifications for anything else. For almost anything I'd keep running persistently, I tend to use an external app anyway (even if it's just an Electron wrapper).


FWIW, this behavior is trivial to disable (though I agree it should not be on by default: I actually had remembered it not being on by default). It was really important for me, as once a month I would accidentally hit cmd-Q and then sigh the sigh of the damned as I wait for what is effectively a reboot of my entire computer (as the only two things I tend to have open are a terminal and a browser).


Ironically, I love that feature and wish I could have it on my other apps, especially on Firefox.

I've fat fingered Cmd-W and Cmd-Q too many times, and while it's easy to restore, it takes a couple minutes, a lot of bandwidth, and spins the CPU to 100% for a while. Which really sucks when you're on battery.


Easy to do with Hammerspoon: https://apple.stackexchange.com/a/349766


Firefox: about:config, set browser.showQuitWarning to true


I use the add-on "Disable Ctrl-Q and Cmd-Q" because of this. To actually quit with the add-on enabled, you must click the Quit Firefox menu item.


Honestly as much as I dislike chrome the “hold cmd q” behavior has saved me many times and I kind of wish safari did it: cmd-q and cmd-w are very close to each other :)


I actually love this and was waiting for this feature for a long time. I can't count the times I quit chrome by mistake.

You can disable it though.


This bugs me big time; I wonder what the decision behind such inconvenient UX was.


How is it inconvenient? It prevents accidental quits!

I use ctrl-q as my shortcut key in tmux. I can't count how many times I accidentally sent that into the wrong window :) I consider showQuitWarning in Firefox a necessity.

IMO, no application ever should quit from one key combination without confirmation.


I'm guessing Google apps like Hangouts and notifications, to keep you from accidentally closing it completely.

On Windows it silently keeps running in the background most of the time if you have apps or notifications running.


In what way is Chrome currently "a pain to use"?


Chrome regularly tries to trick me into logging into my Google account via Chrome. If you try to disable this functionality, updates will include new dark patterns to trick you into logging in anyway. Now if I log into any Google service, Chrome will magically log into my Google account, too. When I open Chrome, I'm greeted with a new tab with a login screen for my Google account.

I don't want to have to mess with a bunch of settings just to turn off anti-user features and tracking. I am willing to mess with a bunch of settings if it enhances the functionality of the application I'm using.

Open 'about:config' in Firefox and look at all of the ways Firefox's behavior can be configured. I run my own Firefox Sync instance, because Firefox is just that customizable. Google regularly removes customization options from Chrome. I used to be able to Cast non-HTTPS resources from Chrome, then I had to enable a setting buried in its experimental features to do so. Now the feature is "enabled" in the settings, but after Chrome auto-updated a few times, the feature doesn't work at all.

I can no longer install Chrome extensions from GitHub, even though I could a few weeks ago. Google decides to take extensions off of their Chrome Web Store, and then makes it difficult to use extensions they don't approve of.

Firefox uses less memory and CPU than Chrome does. I'm on a MacBook, so anything that unnecessarily drains my battery is a pain to use.


It's at least "100% more evil" than other browsers.

Google doesn't need to plaster the world with Google Analytics if it can get most people to use a browser that phones home.

From around the time of the Windows 8 transition I used Microsoft Edge as much as possible. Firefox was at a low ebb then.

I switched back to Firefox when Microsoft announced it would use the Chromium rendering engine for Edge. At that point Firefox had improved performance a lot and I've mostly been happy with it.

The downside, however, is that many developers are choosing to only support Chrome. For instance, I worked at a company that had developed a data analysis tool with a React front end, and it didn't work with Firefox, Edge, or Safari, so I had to install Chrome for work. I don't think there was a deep technical reason for that; rather, they did not want to go through the effort of testing on other browsers. Our customers weren't clamoring for wider browser compatibility, so that was OK for the business.

From time to time I find public web pages that have problems w/ Firefox, although more frequently I find pages that don't like that I block ads at the "hosts" level. Some sites now use trackers as part of the authentication/anti-fraud process, and that can be a problem.

Believe it or not, I hardly ever log into Google. I have a Gmail account that I barely use, but when I do, I IMAP into it with eM Client. I am really done with AdSense, AdWords, Analytics, and all that. If I am working for somebody that is using Google services I will use it, but otherwise I can go a month or two w/o logging into Google.


> It's at least "100% more evil" than other browsers.

oh god, it's like identity politics but "vim vs emacs!" flavored


V8 did this a while ago.


As far as I know V8 has a generated interpreter and an optimizing JIT (Turbofan).

In Firefox we now have a generated interpreter + a Baseline JIT on top of a mostly shared code base. I think that's a pretty nice design/advantage.


Fair point!


JavaScript engines gave up on interpreters too quickly. JITs have been a huge source of security holes, and the language is so huge now that verifying the correctness of JS optimizations is extremely hard. JS was never meant to be a high-performance language. Plus, all the heroic work on exotic optimization has just resulted in induced demand. Web pages have grown to contain so much JavaScript that they're even slower than they were when JS was slow.

Browser vendors should agree to make JS slow and safe again like it used to be, forcing web developers to make their pages smaller and better for users. For the unusual cases like browser-based games, WebAssembly is ok (it's much easier to verify the correctness of a WASM compiler), and it should be behind a dialog box that says something like "This web page would like to use extra battery power to play a game, is that ok?"


> Browser vendors should agree to make JS slow and safe again

Not gonna happen.

There's one browser vendor in particular who has 2/3 of the browser market, 96% of the ad network market, 87% of mobile, and a similar lock on online office software, email, mapping/navigation, etc. They have every incentive to use their commanding position, providing both the services and the means of access to those services, to consolidate their control over the world's information resources. And, as the key way in which all these different components are implemented and interact with each other, JavaScript is their most effective means of maintaining that stranglehold.


And their own drive to make JS that good is what got them there.


> JITs have been a huge source of security holes, and the language is so huge now that verifying the correctness of JS optimizations is extremely hard.

Do you have numbers to back that up?

There certainly have been one or more security holes in JITs, but AFAICT most of the browser vulnerabilities have more to do with bad (new) APIs.

The level where a JIT operates really has nothing to do with the surface syntax of JS, so adding "syntactic sugar" features to JS should have very little impact on JITs. (I'm thinking of things like the class syntax, lexical scope for function literals, etc. Maybe there's a class of additions that I'm missing.)
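
As a rough illustration of the sugar point (this desugaring is approximate and glosses over details like enumerability and [[Construct]] checks):

    // `class` is largely surface syntax over the prototype machinery engines
    // already optimize, so the JIT tiers see much the same object shapes either way.
    class Point {
      constructor(x, y) { this.x = x; this.y = y; }
      norm() { return Math.hypot(this.x, this.y); }
    }

    // Roughly equivalent pre-ES2015 spelling:
    function PointOld(x, y) { this.x = x; this.y = y; }
    PointOld.prototype.norm = function () { return Math.hypot(this.x, this.y); };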


> Do you have numbers to back that up?

Hm, it's hard to come up with a number that shows JS optimizations are hard, but you can peruse a collection of JavaScript engine CVEs: https://github.com/tunz/js-vuln-db

Notice how many are JIT or optimization issues, or are in esoteric features like async generators or the spread operator.


That's fair and I now know more. I'm not convinced that this is the biggest issue with JS engines/browsers, but I certainly have more evidence against me :).

It's interesting how many of those are labeled OOB. Does that mean we're talking about JIT flaws that allow OOB access to memory? Is it actually tricking the JIT itself into allowing OOB access, or is it actually OOB'ing the JIT?

I wonder what the performance impact of all JIT code being forced to do bounds-checking would be...


> Is it actually tricking the JIT itself into allowing OOB access, or is it actually OOB'ing the JIT?

What's the difference between the two? Many JavaScript exploits abuse the interaction between strange features of the language to get around bounds checks (often because a length was checked but invalidated by later JavaScript executing in an unexpected way, or a bound wasn't foreseen as needing a check), leading to an out-of-bounds access. And I'm assuming many of these are heap corruptions where someone messes with a length that lets them get out of bounds.
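
As a hedged sketch of that pattern (this plain JS is harmless on a correct engine; the historical bugs came from fast paths that cached the old length or backing store across the callback, and the names here are made up for illustration):

    const arr = new Array(100).fill(0);

    const sneakyStart = {
      valueOf() {
        arr.length = 1;   // user code runs during argument coercion and shrinks the array
        return 0;
      }
    };

    // fill() reads the array length first, then coerces its start/end arguments,
    // which can call back into JS (the valueOf above) before any element is written.
    arr.fill(0x41, sneakyStart, 100);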


It's not much of a difference in terms of consequences, but always-on bounds checking could alleviate OOB'ing the JIT at least.


I am more familiar with Java where the runtime implementations have gone back and forth through various iterations, such as Jazelle and various ways to accelerate Java on ARM, the various Android implementations, etc.

What people think is the best choice of tiers to use is always evolving.

One factor against JITs is that modern chips and OSes want to set the NX (no-execute) bit on the stack and the heap, which at least forces attackers into return-oriented programming. To JIT, you have to at least partially disable that protection.


I mostly disagree, but I appreciate your opinion - it adds a valuable viewpoint that needs to be considered.

Opinions on this matter may arise from the dichotomy Martin Fowler describes between an "enabling attitude" and a "directing attitude" in software development: https://martinfowler.com/bliki/SoftwareDevelopmentAttitude.h...

You're right that web apps have become extremely JS- and framework-heavy. Just like adding lanes to a freeway increases traffic, adding JS performance has increased demand for it. But faster JS execution does translate to more headroom for developers (regardless of whether they abuse it), which enables new scenarios that wouldn't be possible otherwise.

An enabling attitude will give top developers the freedom to rise higher than ever before; whereas a directing attitude helps to improve those who would perform poorly otherwise (by preventing stupid decisions) -- but places artificial blockades in the way of the best performers.


The best performers can use WASM, I suppose.


Unfortunately it isn't as simple as saying "just use WASM for intensive apps". There are still huge hurdles to getting WASM modules to understand and interface with the page around them. Maybe one day that will change, but not in the near future.


Pandora's box has been opened, so what you say is not going to happen, ever. Get used to it: JIT-compiled JavaScript is not going away anytime soon.


So would this be one of the Futamura projections http://blog.sigfpe.com/2009/05/three-projections-of-doctor-f... ?


What's a good way to diagnose optimization/deoptimization performance issues? The Z80 emulator I use for http://8bitworkshop.com/ has some long pauses while it's spinning up. It uses a huge generated switch statement, which I'd assume is hard to optimize if type info isn't complete. (I'm replacing it with a simpler emulator which works much better, though.)


I have written a single javascript program in my life, and it was an emulator for an 8080-based machine. I used https://bluishcoder.co.nz/js8080/ for that part of the emulator, though I had to make some changes to it.

I found the emulator ran 4x faster on Firefox than on Chrome. The culprit was the main dispatch loop, a 256-entry switch statement. Chrome used a slow fall-back path because there were too many cases. The fix was to have "if (opcode < 128) { switch for first 128 cases } else { switch for other 128 cases }". It made FF a little bit slower, but greatly sped things up on Chrome.
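
In sketch form, that looks something like this (the handlers and the readWord helper are placeholders, not the real emulator code):

    function step(opcode, state) {
      if (opcode < 0x80) {
        switch (opcode) {
          case 0x00: /* NOP */ break;
          case 0x76: /* HLT */ state.halted = true; break;
          // ...the rest of the low 128 opcodes...
        }
      } else {
        switch (opcode) {
          case 0xc3: /* JMP a16 */ state.pc = state.readWord(state.pc); break;
          // ...the rest of the high 128 opcodes...
        }
      }
    }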

I also tried generating 256 functions and then dispatching to the right one based on an array of function pointers, but it wasn't any faster than the switch statement.

But that was five years ago, and I'm sure the landscape is different now.


I'm seeing a lot of people saying Gmail and YouTube are slow in Firefox. This may not sound like a good answer, but consider using a dedicated video player such as mpv for YouTube, and an email client instead of webmail.

Web browsers are some of the worst-performing software we have today. Asking them to do more than display documents and web pages never seems to go well.


I'm sorry, I do not understand from the article what the Baseline Interpreter is or does. It keeps the Baseline Compiler from having to compile so many functions by turning some sections into bytecode first?


Everything is turned into bytecode anyway. The Baseline Interpreter interprets the bytecode faster than the C++ interpreter, which allowed them to send less code to the JIT compiler.
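
A conceptual sketch of the tiering idea only (this is not SpiderMonkey's actual code; the thresholds and tier stand-ins are made up):

    function makeTieredFunction(interpretBytecode, baselineCompile, optimizingCompile) {
      let calls = 0;
      let baselineCode = null;
      let optimizedCode = null;

      return function run(...args) {
        calls++;
        if (optimizedCode) return optimizedCode(...args);
        if (calls > 1000) {                        // hot enough to pay for the optimizing JIT
          optimizedCode = optimizingCompile();
          return optimizedCode(...args);
        }
        if (calls > 10) {                          // warm: cheap per-function baseline JIT
          baselineCode = baselineCode || baselineCompile();
          return baselineCode(...args);
        }
        return interpretBytecode(...args);         // cold: stay in the interpreter
      };
    }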


I wouldn't be surprised if improvements in JavaScript execution make WebAssembly obsolete.

It already has only a slim advantage, being about 2x faster.

And JavaScript has so many advantages in terms of ergonomics. No compilation step is needed at all. And since modules are now widely supported, it is a joy to code in native JavaScript without any libraries like React & Co.

Just look at how beautifully you can dynamically load code when it is needed in modern Javascript:

    let calendar = await import('/modules/calendar.js');
    calendar.askUserForDay("Checkin Date");


WebAssembly will never become obsolete (in a technical sense, who knows what will happen in practice) as long as there are use-cases that require predictable and consistent performance. I predict these will become plentiful as more and more companies build web apps for increasingly performance-sensitive niches.

I'm not talking so much about raw speed, it's more about latency. Things such as realtime audio in the browser, non-CSS-driven animation, 3D (or any kind of realtime graphics, really), tight UIs where feedback must be very fast to be useful, etc. In lower-level languages, you must sometimes go as far as to avoid all memory allocations in one critical path. It's very hard to do so in JavaScript, especially when you take into account the prevalent coding style and functional nature of the language.
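
A small sketch of what avoiding allocations on the hot path looks like in JS (the buffer size, the gain parameter, and the function name are just illustrative):

    const FRAME_SIZE = 128;
    const input = new Float32Array(FRAME_SIZE);    // preallocated once, reused every call
    const output = new Float32Array(FRAME_SIZE);

    function processFrame(gain) {
      // No object/array literals and no closures created per call: nothing new for
      // the GC to collect on the hot path, so no surprise pauses mid-callback.
      for (let i = 0; i < FRAME_SIZE; i++) {
        output[i] = input[i] * gain;
      }
    }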

As long as you have GC and a dynamic type system which requires JIT heuristics to achieve that 2x speedup, you will always have cases of pathological latency which degrade the experience. When you measure raw speed in benchmarks, you amortize all of the jitter.

Now, whether the market is such that it will be satisfied by apps with this behavior is another story.


And then you have cases like Android, where there was a FloatMath class implemented in native code because Dalvik's floating-point math was dog slow.

After ART came into play and its JIT started getting serious optimizations, doing regular Java math was faster than going over the JNI wall, and FloatMath is now deprecated.


You are possibly right, but it would be sad if our only choices for writing programs with a UI were to write in JavaScript or compile to JavaScript. There are many languages out there; WebAssembly would allow them to work without the massive pain of cross-compilation.

I guess I am just an idealist screaming about how packet switched networks are unreliable and we should all use circuit switched networks.


If you want to use a different language, why would you care about the compile target?

Compiling is done by the compiler, so to the developer it is the same whether it compiles to JavaScript or WebAssembly.

In the end, I don't think writing code for the web in languages other than JavaScript will take off, simply because JavaScript will always evolve to fit this specific environment and will therefore always be the best choice, while other languages will evolve to be the best fit for their own niches.


Cross-compilation always comes at a performance cost. Moreover, it is another compilation target your compiler needs to support. When that compilation target is a high-level language, supporting it is harder. This means JavaScript as a compilation target is less likely to be added.


Because it usually ends up with you debugging generated JavaScript at some point.


The use case for WASM is basically doing CPU/memory-intensive work in a worker process. In order to use WASM for UI you need lots of glue and duct tape, and the end result will be something using a 2D canvas.
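
A minimal sketch of that worker-side shape, assuming a hypothetical heavy.wasm that exports a crunch() function taking and returning a number:

    // worker.js
    let wasmExports = null;

    WebAssembly.instantiateStreaming(fetch('/heavy.wasm'))
      .then(({ instance }) => { wasmExports = instance.exports; });

    self.onmessage = (event) => {
      if (!wasmExports) return;                  // still loading
      // Plain numbers cross the worker boundary cheaply; richer data needs glue.
      self.postMessage(wasmExports.crunch(event.data));
    };

From the main thread you'd spawn it with new Worker('worker.js') and postMessage the inputs; anything UI-related stays in regular JS/DOM land, which is where the glue and duct tape come in.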


I love those elemental animal names of their projects...


Had no idea their interpreter was C++ under the hood. Excellent article from the Mozilla team, as always.


I can't help but think that regardless of what different browser vendors do, there's no competition with V8. Like any language, there is a standard library, or environment, around it. V8 is to JavaScript what CPython is to Python.

At least in Python you can use other implementations and get a very similar environment. But if you want to use Mozilla's SpiderMonkey without Firefox, it's a far worse experience. I'd argue that V8 is far less of a lock-in by comparison.

Given all that, why are we still creating entirely separate engines that are made differently, yet do the same thing?


Diversity and multiple implementations are essential for the web IMO.

V8/Blink/Chromium are not independent community projects, but firmly in the hands of Google. Chromium being the only viable implementation would put too much control in the hands of a single company (regardless of which company that is, Firefox being the only implementation would be just as bad).

It's redundant effort, but it also enforces consensus building and exchange of ideas.

E.g., Google would have happily stuck with PNaCl, but (AFAIK) Mozilla pretty much forced their hand, with the result being WebAssembly, a much better design.


Also a much slower one: on PDFTron's benchmark, WASM is half as fast as PNaCl for me: https://www.pdftron.com/benchmarks/pnacl-vs-wasm/


Their very own blog post goes into plenty of detail about this: https://www.pdftron.com/blog/wasm/wasm-vs-pnacl/

It's nothing fundamental about WASM; in fact they state that the actual computations are slightly faster.


If that diversity doesn't lead to better results, then it isn't necessary. I wouldn't consider the oligarchs of JS engines to be diversity, nor inherently innovative. In the past there has no doubt been innovation, but it has significantly stagnated.

WASM was not what was promised. Mozilla and the other vendors just translated the WASM bytecode to JS bytecode. All it did was skip a few steps. Yet the actual requirement for performance, SIMD, was ignored in its proposal. It has been a significant under-delivery overall.


> If that diversity doesn't lead to better results then it isn't necessary.

It does, so it’s necessary. Performance is not the only result we care about.


If they were innovating, then SpiderMonkey would be comparable to or better than V8. It isn't. It's effectively proprietary to Firefox. I'd call that the opposite of diversity.


This meme of "why are we building more than exactly one thing for a certain purpose" needs to die. Alternatives need to exist; they are beneficial for innovation and cross-pollination of ideas, resilience, finding alternative approaches to problems, and, last but certainly not least, not ceding control of everything to a single company. For inspiration for this idea, see: natural selection and evolution.


> Given all that, why are we still creating entirely separate engines that are made differently, yet do the same.

To name one reason: would we have asm.js/Wasm instead of (P)NaCl without Mozilla's work on asm.js optimizations in SpiderMonkey?


About a decade ago I needed an embedded scripting language in a C++ project and chose JS. Embedding SpiderMonkey was a matter of copying all files in the `js` subfolder of the Firefox sources into my project, and calling into it was peanuts. The classes were well named, easy to use, and easy to learn.

Did this somehow change?


Nope.

JSC is even better in that regard: it’s API- and ABI-stable, so you can just link to the[1] system install and use it [2].

[1] On Linux there are multiple (although technically it could be made to have a single lib for qt, gtk, wx, ...).

[2] ok, actually using the C API is very very clunky :-/


There are often multiple installs of SpiderMonkey too (IIRC there was recently an effort in FreeBSD Ports to consolidate everything onto the latest couple of versions).

The most (in)famous consumer of that is polkit :) but also GJS


The problem for JSC, I suspect, is how you manage gtk/qt/wx bridges without needing all three, or alternatively creating dependency hell due to the bindings directly interacting with internal (e.g. totally unstable) interfaces and structs.

E.g. you’d ideally have

* libjavascriptcore - the actual engine, runtime, and C API, etc.

* libjavascriptcore-qt (only the bindings, it would link the root jsc lib)

* libjavascriptcore-gtk (same)

* etc

The problem is that because they talk directly to internal interfaces they need to update in lockstep.

On Mac the only frameworks that do that are the core webkit frameworks. Nothing else on the system can talk to the internals (Mac and iOS have fairly comprehensive support for distinct internal/project/public APIs). But the system webkit, webcore, jsc, etc all have to update and build in lockstep.

In an ideal world all the alternate language bindings would be built on top of the stable C API, but alas (as I said before) the C API is fairly clunky and also out of date with respect to modern JS features. Also, there are fun performance things a given binding can achieve if it doesn’t need to go through a layer of ABI-stability limitations.


Why would JSC on its own depend on the UI toolkit? SpiderMonkey doesn't.


It doesn’t, but the major WebKit ports all have bridging APIs to make interfacing from a <gtk, qt, wx, CoreFoundation, Cocoa> app cleaner, and potentially lower cost.

I recall there being a desire to make it easier for bindings to be done entirely through the API, but as said elsewhere the API is somewhat clunky. Of course any level of abstraction adds cost: for example, by being tied to the innards of JSC, the various bridges are able to directly bludgeon the tag bits in JSString (the raw JSC string type) so there’s zero copying.

It also theoretically means you can do automatic object bridging (see the objc bindings).

So it’s not “JSC has to have UI bindings” as much as “JSC can be built with bridging APIs for major embedding frameworks”.

There is a trade-off to be made, and long term I’m sure everyone on the JSC team at Apple would rather they could pull the bridges out of the core build, but API design is very hard when you have to think about long-term support, coupled with continued support for the existing APIs.


Uhhh, what? JSC beats V8 across a wide variety of code, and has a stable ABI so you don’t have to have N different copies of the entire implementation across every app.


Because monocultures are bad.


Is linux bad? Is git bad? Your argument needs a little more development.


Linux is only a monoculture on the server. Server-side applications will sometimes use Linux-specific features, but generally they are written to be ported to other POSIX systems with little work.

Of course the git monoculture is bad. Have you seen how many people complain about git on Twitter?


linux isn't a monoculture at all

assuming you know about Windows and OS X and are just talking about servers, there's still the BSDs and Solaris and AIX. a good deal of software is written to just assume a reasonably POSIX-compliant environment, precisely because linux is not the only server OS.

unix-like operating systems are something of a monoculture and that IS very bad, because OS design is basically stuck in 1973

>Is git bad?

yes and anyone who says otherwise is numb to the pain


Yeah, because Google will surely implement and fast-track new privacy measures in V8, instead of tying up any initiative in that regard in a lot of red tape...


It's a JavaScript interpreter. What privacy features did you have in mind?


For example, closing subtle interpreter patterns or behavior differences that may be used for fingerprinting...
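
One hedged example of what such a difference can look like (the exact message strings are engine-specific and not guaranteed):

    let engineHint = '';
    try {
      null.someProperty;              // TypeError in every engine...
    } catch (e) {
      engineHint = e.message;         // ...but the wording differs between engines
    }
    // A page could fold signals like engineHint into a fingerprint.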


I guess that could be an issue after all the non-subtle ways to fingerprint are fixed.


You mean the ones Google is fighting to keep, while trying to convince everyone that allowing more tracking would incentivize bad actors to drop fingerprinting?


Where did you learn that? My understanding is that they like cookies and don't like fingerprinting.


You just repeated that dude's last comment. I think you need to read it again.



