Dynamic languages tend to gain more from JIT than from AOT due to such issues.

On the other hand, have a look at Dylan, as it might inspire you:

http://opendylan.org/



For the method lookup, other than for methods that are dynamically generated with names not known at compile time, the only additional gain you'll get from JIT is by going to full-on inline caches. But vtables get you most of the speedup without the hassle of inline caches and tracing, and they don't prevent adding tracing and inline caching down the line.
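
For concreteness, a minimal sketch of what that vtable dispatch looks like, in C; every name here (RObject, RClass, FOO_SLOT) is invented for illustration:

    /* Invented runtime structures, for illustration only. */
    typedef struct RObject RObject;
    typedef RObject *(*Method)(RObject *self, int argc, RObject **argv);

    typedef struct RClass {
        struct RClass *superclass;
        Method *vtable;   /* one slot per selector known at compile time */
    } RClass;

    struct RObject { RClass *klass; };

    #define FOO_SLOT 3    /* slot the AOT compiler assigned to #foo */

    /* obj.foo(...) compiles down to two loads and an indirect call;
       no hash lookup, no cache, no guards. */
    RObject *call_foo(RObject *obj, int argc, RObject **argv) {
        return obj->klass->vtable[FOO_SLOT](obj, argc, argv);
    }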


With JITs you get devirtualization as well, so no need for vtables.

Something that's possible in AOT as well, to a certain extent, but it requires a mix of profile-guided optimization coupled with whole-program analysis.

Which has issues with dll/so boundaries anyway, as those calls cannot be optimized away the way they can be in a JIT.


> With JITs you get devirtualization as well, so no need for vtables.

That's what I referred to with "inline caches". The problem is that for Ruby you need fully polymorphic inline caches, with guards all over the place, because unless you do tons of analysis upfront you will have problems knowing whether the world has totally changed on you after any method call, and almost anything is a method call. (Call into code you haven't verified can't possibly call "eval", and you might find that adding two integers afterwards does not in fact add them, but returns a string, changes global variables, and what-not.)

The upshot is that compared to vtables, you're not actually saving all that much. E.g., take "1 + 2 - 3". You could inline Fixnum#+ (and could reasonably do so with an AOT compiler too). But you need to add a type guard before the inlined fragment to verify that Fixnum#+ is still the Fixnum#+ you inlined, which at a minimum costs you a comparison and a branch, or you need to record every call site with inlined code and be prepared to overwrite it with fixups if the implementation changes.

And if Fixnum#+ has been overridden, or the Fixnum#+ implementation has method calls, chances are you will need another guard before the "-" too, because you might not even know for sure whether the object returned from "1 + 2" will be a Fixnum, so you might find that the inlined method suddenly is for the wrong class.
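
To make the cost concrete, here's roughly what the guarded inline of Fixnum#+ compiles to; a sketch in C where every name (fixnum_class, full_dispatch, the version counter) is invented:

    /* Invented runtime types and helpers, for illustration only. */
    typedef struct RClass RClass;
    typedef struct RObject { RClass *klass; long fixval; } RObject;

    extern RClass *fixnum_class;
    extern long fixnum_plus_version;    /* bumped whenever Fixnum#+ changes */
    extern RObject *make_fixnum(long v);
    extern RObject *full_dispatch(RObject *recv, const char *name, RObject *arg);

    #define COMPILED_VERSION 1   /* Fixnum#+'s version when this was compiled */

    /* The "1 + 2" part of "1 + 2 - 3". */
    RObject *add(RObject *a, RObject *b) {
        /* Guards before the inlined body: both operands are Fixnums, and
           Fixnum#+ is still the definition we inlined. A comparison and a
           branch each, paid on every addition. */
        if (a->klass == fixnum_class && b->klass == fixnum_class &&
            fixnum_plus_version == COMPILED_VERSION)
            return make_fixnum(a->fixval + b->fixval);   /* fast path */
        return full_dispatch(a, "+", b);                 /* world changed */
    }

The alternative mentioned above, recording call sites and patching them, trades those per-call guards for fixup bookkeeping on redefinition.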

I'm planning on benchmarking inline caching for my compiler against vtables, but absent evidence to the contrary I'm expecting that there will be a very substantial number of cases where the complexity isn't worth it, or where inline caches might even turn out to be slower.

> Something that's possible in AOT as well, to a certain extent, but it requires a mix of profile-guided optimization coupled with whole-program analysis.

It does if you want to do everything upfront, but with a mostly-AOT compiler you can pull things into inline caches relatively easily, with just a little bit of extra information and a few guards thrown in to do some basic tracing.


I implemented a handful of simple dynamic languages years ago, and something I was interested in trying, but never did, was taking advantage of the MMU to replace guard clauses.

For example, map the few pages holding the vtables/method dictionaries read-only. When something like `def` or `define_method` comes along, catch the segfault (which in this case would actually mean "segmentation fault" instead of "I fucked up") and rewrite all the JIT blocks or method caches that depend on that method table. Once everything has settled, generally after startup, when the vtables tend to stay stable, the overhead seems like it'd be negligible.
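
A rough sketch of the mechanism in C, using POSIX mprotect/sigaction (invalidate_dependents and the page bookkeeping are hypothetical):

    #include <signal.h>
    #include <stdlib.h>
    #include <sys/mman.h>

    static void *vtable_pages;   /* page-aligned region holding all vtables */
    static size_t vtable_size;

    /* Hypothetical: patch every JIT block / method cache that depends on
       the table that was written to. */
    extern void invalidate_dependents(void *fault_addr);

    static void on_segv(int sig, siginfo_t *info, void *ctx) {
        char *addr = (char *)info->si_addr;
        if (addr < (char *)vtable_pages ||
            addr >= (char *)vtable_pages + vtable_size)
            abort();   /* a real segfault, i.e. "I fucked up" */
        invalidate_dependents(addr);
        /* Unprotect so the write from `def`/`define_method` retries and
           succeeds; re-protect at the next safe point once things settle. */
        mprotect(vtable_pages, vtable_size, PROT_READ | PROT_WRITE);
    }

    void protect_vtables(void) {
        struct sigaction sa = {0};
        sa.sa_sigaction = on_segv;
        sa.sa_flags = SA_SIGINFO;
        sigaction(SIGSEGV, &sa, NULL);
        mprotect(vtable_pages, vtable_size, PROT_READ);  /* writes now trap */
    }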


Catching the vtable updates and propagating them downwards is pretty simple: you "just" need every class to know which classes inherit from it. There's an implementation of dynamic runtime updates of dispatch tables for Oberon, of all languages (though that version sidesteps the "sparse vtables" issue by splitting the vtables into interfaces and adding one extra level of indirection).
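
The bookkeeping can be pretty small; a sketch with invented structures, where each class keeps its direct subclasses and a redefinition is pushed down until it hits an override:

    typedef void *(*Method)(void);   /* placeholder signature */

    typedef struct ClassNode {
        Method *vtable;
        struct ClassNode **subclasses;   /* direct subclasses only */
        int nsubclasses;
        unsigned char *owns_slot;        /* 1 if this class defines the slot
                                            itself rather than inheriting it */
    } ClassNode;

    void propagate(ClassNode *cls, int slot, Method m) {
        cls->vtable[slot] = m;
        for (int i = 0; i < cls->nsubclasses; i++) {
            ClassNode *sub = cls->subclasses[i];
            if (!sub->owns_slot[slot])   /* stop where a subclass overrides */
                propagate(sub, slot, m);
        }
    }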

The tricky bit is if you have gone as far as inlining the method.


(Disclaimer: I know nothing about Ruby, but I know some things about JIT compilers)

Another way to handle this is to assume that Fixnum#+ hasn't changed when compiling a method that uses it (maybe add a check at method entry); but when it does get redefined, you "deoptimize" the methods that you compiled while holding that assumption.
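
A sketch of that dependency tracking, in C with invented names; each compiled method registers against the assumptions it relied on, and a redefinition sweeps the list:

    /* Invented bookkeeping for assumption-based deoptimization. */
    typedef struct CompiledMethod {
        void *code;
        int valid;                      /* cleared on deopt; the entry check
                                           falls back to the slow path */
        struct CompiledMethod *next;    /* others sharing this assumption */
    } CompiledMethod;

    /* One list per assumption, e.g. "Fixnum#+ is unmodified". */
    static CompiledMethod *fixnum_plus_dependents;

    void assume_fixnum_plus(CompiledMethod *m) {
        m->next = fixnum_plus_dependents;     /* registered at compile time */
        fixnum_plus_dependents = m;
    }

    /* Called when Fixnum#+ is redefined. */
    void deopt_fixnum_plus(void) {
        for (CompiledMethod *m = fixnum_plus_dependents; m; m = m->next)
            m->valid = 0;
        fixnum_plus_dependents = NULL;
    }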


That's what this referred to:

> or you need to record every call site with inlined code and be prepared to overwrite it with fixups if the implementation changes.


Interesting post. You have really spent some time looking into it.

Compilers were one of the main focuses of my CS degree, so I am really into this type of discussion.

Good luck with the project.


Thanks.

It's really fascinating, and what fascinates me in particular about Ruby is exactly that once you start looking into it, there are new problems around every corner, and trying to make it "as ahead-of-time as possible" makes it even trickier. I absolutely agree with you that there are parts that are much easier to do if you JIT, though, and I'll have to go there anyway to handle "eval".


Yep, I keep jumping between "love JIT" and "love AOT" in terms of implementations.

Currently my feeling is that most languages could benefit from both.

A JIT-like environment for live coding and portable deployment.

And an AOT one for certain types of deployment where thin runtimes are desired.



