Hacker News

>His stance on compiler optimizations is another example: only add optimization passes if they improve the compiler's self-compilation time.

What an elegant metric! Condensing a multivariate optimisation between compiler execution speed and compiler codebase complexity into a single self-contained meta-metric is (aptly) pleasingly simple.

I'd be interested to know how the self-build times of other compilers have changed by release (obviously pretty safe to say, generally increasing).



Hmm, but what if the compiler doesn't use the optimized constructs, e.g. floating point optimizations targeting numerical algorithms?


Life was different in the '80s. Oberon targeted the NS32000, which didn't have a floating-point unit, let alone most of the other modern niceties that could lead to a large difference between the CPU features used by the compiler itself and those used by other programs written with it.

That said, even if the exact heuristic Wirth used is no longer tenable, there's still a lot of wisdom in the pragmatic way of thinking that inspired it.


Speaking of that, if you were ever curious how computers do floating point math, I think the first Oberon book explains it in a couple of pages. It’s very succinct and, for me, one of the clearest explanations I’ve found.


Rewrite the compiler to use an LLM for compilation. I'm only half joking! The biggest remaining technical problem is context length, which severely limits the input size right now. Also the humongous model size required.


Simple fix: use floating-point indexes for all your tries. Or switch to base π, or increment every counter by e.


That’s not a simple fix in this context. Try making it without slowing down the compiler.

You could try to game the system by pairing a change that slows down compilation with another that compensates for it, but I think code reviewers of the time wouldn't have accepted that.


probably use a fortran compiler for that instead of oberon


His stance should be adopted by all language authors and designers, but apparently it isn't. The older generation of programming-language gurus like Wirth and Hoare were religiously focused on simplicity, hence they never took compilation time for granted, unlike the authors of most popular modern languages. C++, Scala, Julia and Rust are all behemoths in terms of language-design complexity and hence have very slow compilation times. Languages like Go and D are a breath of fresh air with their lightning-fast compilation, thanks to the inherent simplicity of their designs. This is to be expected, since Go is essentially a modern version of Modula and Oberon, and D was designed by a former aircraft engineer, a field where simplicity is mandatory, not optional.


You cannot add a loop skew optimization to the compiler before the compiler needs one. And it never will need one, because it is the loop skew optimization itself (it requires matrix operations) that would need a loop skew optimization.

In short, the compiler is not an ideal representative of the user programs it needs to optimize.


Perhaps Wirth would say that compilers are _close enough_ to user programs to be a decent enough representation in most cases. And of course he was sensible enough to also recognize that there are special cases, like matrix operations, where it might be wirthwhile.

EDIT: typo in the last word but I'm leaving it in for obvious reasons.


Wirth ran an OS research lab. For that, the compiler likely is a fairly typical workload.

But yes, it wouldn’t work well in a general context. For example, auto-vectorization likely doesn’t speed up a compiler much at all, while adding the code to detect where it’s possible will slow it down.

So, that feature never can be added.

On the other hand, that may lead to better designs. If, instead, you add language features that make it easier for programmers to write explicitly vectorized code, they would have to write more code, but they would also have to guess less about whether their code actually ends up vectorized.


perhaps you could write the compiler using the data structures used by co-dfns (which i still don't understand) so that vectorization would speed it up, auto- or otherwise



