
It is rather "odd".

Personally, I'd deal with the low-hanging fruit first. For example: there are plenty of cases where an optimizing compiler could statically (read: at compile time) compute the lifespan of an object and thus bypass the GC entirely. (Or, for a related example: recognize that a thread-safe object can only ever be accessed by one thread, and as such can be optimized in ways that would otherwise break update order.) And yet most compilers don't seem to do this sort of optimization, or do so in an extremely limited fashion.
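To make the second case concrete, here's a hedged sketch (the `greet` method and class name are made up for illustration): `StringBuffer` has synchronized methods, but if the compiler can prove the buffer never leaves one thread, the locking is pure overhead and can be elided.

```java
// Hypothetical sketch: StringBuffer's methods are synchronized, but
// sb never escapes greet(), so a compiler that proves it is confined
// to a single thread can drop the locking entirely (lock elision).
public class LockElisionDemo {
    static String greet(String name) {
        StringBuffer sb = new StringBuffer(); // thread-safe, but confined
        sb.append("Hello, ").append(name);    // locks here are removable
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(greet("world")); // prints "Hello, world"
    }
}
```

HotSpot does some of this via its escape analysis, but as the comment above notes, only in a fairly limited, intraprocedural way.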

In other words: deal with the slowness of garbage collection after you've reduced the amount of garbage to be collected.




This does happen partially with escape analysis within a function: objects/primitives can be allocated on the stack instead. I totally agree this is an area that could be explored in more depth.
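A minimal escape-analysis candidate looks like this (names are illustrative): the object is never stored in a field, returned, or passed anywhere, so its lifetime is bounded by the enclosing call and it could live on the stack, bypassing the GC entirely.

```java
// Sketch of a non-escaping allocation: p's lifetime is provably
// bounded by distance(), so a compiler could stack-allocate it
// (or scalar-replace it with two doubles) instead of heap-allocating.
public class EscapeDemo {
    static final class Point {
        final double x, y;
        Point(double x, double y) { this.x = x; this.y = y; }
    }

    static double distance(double x, double y) {
        Point p = new Point(x, y); // never escapes this method
        return Math.hypot(p.x, p.y);
    }

    public static void main(String[] args) {
        System.out.println(distance(3, 4)); // prints 5.0
    }
}
```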


Not just stack allocation. Pool allocation too, especially when one has many threads. And inserting destructor calls directly into the code, etc, etc. There are a lot of potential optimizations here.
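The pooling idea can be sketched by hand (this is illustrative, not something any JIT emits today): instead of allocating a scratch buffer on every call and feeding it to the GC, each thread reuses one from a per-thread pool.

```java
import java.util.ArrayDeque;

// Hand-written per-thread pool, standing in for what a compiler
// could in principle insert automatically: buffers are reused
// within a thread instead of becoming garbage on every call.
public class PoolDemo {
    static final ThreadLocal<ArrayDeque<StringBuilder>> POOL =
        ThreadLocal.withInitial(ArrayDeque::new);

    static String render(int n) {
        ArrayDeque<StringBuilder> pool = POOL.get();
        StringBuilder sb = pool.isEmpty() ? new StringBuilder() : pool.pop();
        try {
            sb.setLength(0);          // reset the reused buffer
            for (int i = 0; i < n; i++) sb.append(i).append(',');
            return sb.toString();
        } finally {
            pool.push(sb);            // return it to this thread's pool
        }
    }

    public static void main(String[] args) {
        System.out.println(render(3)); // prints "0,1,2,"
    }
}
```

Because the pool is thread-local, no synchronization is needed, which is exactly why this pays off most "when one has many threads".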

Take Java. One of the concepts of Java was that you didn't need to worry about the stack / heap distinction - the JVM handles where to put things. But in practice, all this ends up doing is making (almost) everything end up on the heap. And then people wonder why it's slower and more memory-hungry.
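A concrete instance of the "(almost) everything on the heap" problem is autoboxing in collections: a `List<Integer>` turns every element into its own heap object, while an `int[]` is one flat allocation the GC barely has to look at.

```java
import java.util.ArrayList;
import java.util.List;

// Boxing in action: the list below allocates ~1,000 Integer objects
// on the heap (plus the backing array); the int[] is one allocation
// holding the same values inline.
public class BoxingDemo {
    public static void main(String[] args) {
        List<Integer> boxed = new ArrayList<>();
        for (int i = 0; i < 1_000; i++) boxed.add(i); // one object per element
        int[] flat = new int[1_000];                  // one allocation total
        for (int i = 0; i < flat.length; i++) flat[i] = i;
        System.out.println(boxed.get(999) == flat[999]); // prints "true"
    }
}
```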

(Also: I wish compilers would be smarter about function boundaries. Have functions be source-level constructs that get turned into a single global control-flow graph, which the compiler then inspects to determine where function boundaries "make sense". With manual overrides, of course.)





