Live Objects All the Way Down: Removing the Barriers Between Apps and VMs (programming-journal.org)
89 points by mpweiher on Jan 7, 2024 | 22 comments



These are the traditional advantages of Smalltalk-like systems, and yes, this system is based on Smalltalk. There's a 2014 paper about Bee Smalltalk [1], and there's a GitHub repo [2]. There are no real docs, and the source has minimal comments.

Section 5.3.3 talks about some drawbacks, and there are more in "8. Conclusions" and "9. Future Work."

> Today, Bee is far from finished, yet we know all required functionalities can be implemented. Furthermore, the resulting code is fully object oriented and can take advantage of all the benefits that a high-level environment brings. The main remaining question to be answered is what is the maximum performance to expect from the system. We believe that the answer to that question will be highly positive, and that we will be able to unravel the mystery very soon.

...

> Current implementation of Bee is more than 10x slower than the hosted environment [...]

So, interesting work in progress, maybe check back later?

[1] http://esug.org/data/ESUG2014/IWST/Papers/iwst2014_Design%20...

[2] https://github.com/aucerna/bee-dmr


> a 2014 paper about Bee Smalltalk

Here's the pdf for this 2023 paper:

https://arxiv.org/pdf/2312.16973v1.pdf


That's pretty interesting. It's not as aggressive as Bee sounds, but the Espresso JVM is somewhat similar in concept. It's a full-blown JVM written in Java with all the mod cons, which can either be compiled ahead of time down to memory-efficient native code, giving something similar to a JVM written in C++, or run as a regular Java application on top of another JVM. In the latter mode it obviously doesn't achieve top-tier performance, but the advantage is that you can easily hack on it using all the regular Java tools, including hotswapping via the debugger.

When run like this, the bytecode interpreter, runtime system and JIT compiler are all regular Java that can be debugged, edited, explored in the IDE, recompiled quickly and so on. Only the GC is provided by the host system. If you compile it to native code, the GC is also written in Java (with some special conventions to allow for convenient direct memory access).
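
For reference, running guest code on Espresso from a host JVM goes through the GraalVM polyglot API. This is a rough sketch from memory of the Espresso docs (the "java" language id and the allowAllAccess requirement are as I recall them), so treat the details as illustrative rather than exact:

    import org.graalvm.polyglot.Context;
    import org.graalvm.polyglot.Value;

    public class EspressoDemo {
        public static void main(String[] args) {
            // "java" is the Truffle language id that Espresso registers.
            try (Context context = Context.newBuilder("java")
                    .allowAllAccess(true) // Espresso needs broad host access
                    .build()) {
                // Look up a guest-side class and call a static method on it.
                Value math = context.getBindings("java").getMember("java.lang.Math");
                System.out.println(math.invokeMember("sqrt", 4.0).asDouble()); // 2.0
            }
        }
    }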

What's most interesting is that Espresso isn't a direct translation of what a classical C++ VM would look like. It's built on the Truffle framework, so the code is extremely high level compared to traditional VM code. Details like how exactly transitions between interpreted and compiled code happen, how pointer maps are communicated to the GC, and so on are all abstracted away. You don't even have to invoke the JIT compiler manually; that's done for you too. The only code Espresso really needs is the code that defines the semantics of Java bytecode and associated tooling like the JDWP debugger protocol.

https://github.com/oracle/graal/tree/master/espresso
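
To give a flavour of how high level that is, here's a minimal toy operation expressed as a Truffle root node. This is not Espresso's code, just an illustration, and it assumes a recent Truffle where RootNode.getCallTarget() is available:

    import com.oracle.truffle.api.CallTarget;
    import com.oracle.truffle.api.frame.VirtualFrame;
    import com.oracle.truffle.api.nodes.RootNode;

    // A toy "add two ints" operation. On GraalVM, Truffle decides when to
    // JIT-compile it; there is no manual interpreter/compiled-code transition
    // handling and no pointer-map bookkeeping for the GC anywhere in this code.
    final class AddRootNode extends RootNode {
        AddRootNode() {
            super(null); // no TruffleLanguage needed for a standalone toy node
        }

        @Override
        public Object execute(VirtualFrame frame) {
            Object[] args = frame.getArguments();
            return (Integer) args[0] + (Integer) args[1];
        }
    }

    class TruffleDemo {
        public static void main(String[] args) {
            CallTarget target = new AddRootNode().getCallTarget();
            System.out.println(target.call(2, 40)); // prints 42
        }
    }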

This design makes it easy to experiment with new VM features that would be too difficult or expensive to implement otherwise. For example, it implements full hotswap capability that lets you arbitrarily redefine code and data on the fly. Espresso can also fully self-host recursively without limit, meaning you can achieve something like what's described in the paper by running Espresso on top of Espresso.


This seems similar to the technique popularised by Jai (jblow) and adopted by Zig -- making the compiler programmable in the host language, at compile-time.

Here it seems this is happening at run-time by using an interpreted language.

Nevertheless, the reasoning they give is similar to the case for making compile time programmable: the distinction between compiler and program is somewhat artificial, and it severely hobbles the programmer's ability to introspect their program (etc.).

Given how Jai/Zig/etc. work, it now seems crazy that compiler authors have gone to extreme lengths to provide really-bad compile-time languages (called "type systems") that do 5% of what's useful with incomprehensible syntax, rather than just make the compiler itself programmable.

This technique makes much of programming language design seem to be fumbling around in the dark.


Type systems aren't programs because sensible type systems are decidable. They're very specifically not a programming language but a constraint language.

https://3fx.ch/typing-is-hard.html ; more people should understand Hindley-Milner.

Now, there's also a case for compile-time programmability, but that is a feature that needs to be used carefully, because it can make the question of what code is actually running very opaque, and it can arbitrarily increase compile times. (C++, the bad bits.)

> making compilation a program that you write wherein you can fully introspect your program

I'm experimenting with C# source generators at the moment, and wondering if there are any breakthrough features we could add this way by making it fully generalizable. (They have the limitation that they are additive-only: you can't mutate existing source.)


>They have the limitation that they are additive-only, you can't mutate existing source

.NET has System.CodeDom, which can load existing source code and mutate it.


> really-bad compile-time languages (called "type systems") that do 5% of what's useful with incomprehensible syntax, rather than just make the compiler itself programmable.

Because some things aren't knowable at compile time: the value I enter at runtime and everything derived from it, but also the state of the environment, and so on.

Pretending they are the same is a weak point of Zig. By making `const` a property determined by the compiler, you can accidentally introduce breaking changes just by changing a function's contents.

https://typesanitizer.com/blog/zig-generics.html


I wasn't talking just about Zig's implementation of comptime; I was talking about collapsing the distinction between compiler and program -- if done well, the problems mentioned in that article need not occur.

I think the key feature here is making compilation a program that you write, wherein you can fully introspect your program. Worries about IDE support fade away, as the full state of the compiler is available -- if some annotations on the code are required to guide IDEs, so be it.

I wasn't saying that type systems should be replaced with compile-time programs, only that they are compile-time programs, and often really bad ones. (My claim: programmers shouldn't have only the type system available to program the compiler.)

Better that the type system be extremely simple and let the programmer transform their program as they wish.


Sure. But I think my point still applies.

The separation of compile time vs. runtime is as useful as the separation between pure and impure functions. Pretending they are the same will lose you information.

Sounds to me like you just want Lisp or Smalltalk (self-modification galore, no difference between compile time and runtime, small syntax).


I think you are making more of a distinction between optimization and runtime, not compilation. Compilation is about taking a lot of arbitrary user input (in the form of URLs and CMake files and code edits) and executing that. Then optimization occurs, which is a lot of pure transforms. Then runtime occurs, which is when the program takes a lot of user input again. But step one usually is in a different language than step three, which is extra effort to learn. And step two doesn't really need to be visible to users; since it is pure, there are no side effects for the user to see.


That wasn't my intention if it came out that way.

Compilation is something that happens (generally) once and returns an executable artifact.

Runtime is execution of said artifact, and can happen many times, with many different environment setups.

Compile-time functions and types capture stuff that can be known during compilation.


runtime_result = f(static state, dynamic state)


Recently, "Design Principles Behind Smalltalk (1981)", https://news.ycombinator.com/item?id=38821506


Maybe there's something new.


Smalltalk and JavaScript? https://github.com/codefrau/SqueakJS


Pharo has this [0].

[0] https://pharo.org/features


Perhaps OpenSmalltalkVM only has some of this.


This looks like a Smalltalk implementation with a lot of fancy nonsense terms to make it sound like something new.

I've never heard the term "Live Metacircular Runtimes (LMRs)", but they seem to think they are improving on "classic metacircular approaches".


It's a Smalltalk implementation written in Smalltalk, designed to survive having its internals modified at runtime; that's the "live" part. The paper shows off conventionally tricky bits to modify, like the garbage collector and the compiler, and describes some performance regressions they found and then fixed using the liveness of the system.


I did indeed tilt my head to one side like a confused dog when I read that - this helped: https://en.m.wikipedia.org/wiki/Meta-circular_evaluator



WebComponents are also nicely alive. Wish we saw some solidarity from the other programming-in-an-alive-world / language-of-the-gods camps.



