
From my subjective perspective, Zig, of all the languages, is the one that feels most like the modern successor of C.

I think it even does a better job than C++, which shares some of C's syntax.

The problem, I guess, is complexity and productivity. Zig is a hit here, while Rust is as much of a miss as C++.

The question Zig was trying to answer is: can we have a language as simple, free, and powerful as C, but with a more modern approach?

Rust, meanwhile, in its crusade to be seen as a C++ competitor, was not aiming for simplicity and user ergonomics.

So my feeling is that Zig is much more compelling as a C substitute than any other language out there, and that's probably why your friend likes it more than the others.


There is one major problem with -- or, rather, cost to -- zero-cost abstractions: they almost always introduce accidental complexity into the programming language originating in the technical inner workings of the compiler and how it generates code (we can say they do a bad job abstracting the compiler), and more than that, they almost always make this accidental complexity viral.

There is a theoretical reason for that. The optimal performance offered by those constructs is almost always a form of specialization, AKA partial evaluation: something that is known statically by the programmer to be true is communicated to the compiler so that it can exploit that knowledge to generate optimal machine code. But that static knowledge percolates through the call stack, especially if the compiler wants to verify — usually through some form of type-checking — that the assertion about that knowledge is, indeed, true. If it is not verified, the compiler can generate incorrect code.

Here is an example from C++ (a contrived one):

Suppose we want to write a subroutine that computes a value based on two arguments:

    enum kind { left, right };
 
    int foo(kind k, int x) { return k == left ? do_left(x) : do_right(x); }
And here are some use-sites:

    int bar(kind k) { return foo(k, random_int()); }
    int baz() { return foo(left, random_int()); }
    int qux() { return foo(random_kind(), random_int()); }
    
The branch on the kind in foo will represent some runtime cost that we deem to be too expensive. To make that “zero cost”, we require the kind to be known statically (and we assume that, indeed, this will be known statically in many callsites). In this contrived example, the compiler will likely inline foo into the caller and eliminate the branch when the caller is baz, and maybe in bar, too, if it is inlined into its caller, but let’s assume the case is more complicated, or we don’t trust the compiler, or that foo is in a shared library, or that foo is a virtual method, so we specialize with a "zero-cost abstraction":

    template<kind k> int foo(int x) { return k == left ? do_left(x) : do_right(x); }
This would immediately require us to change all callsites. In the case of baz we will call foo<left>; in qux we will need to introduce the runtime branch ourselves; and in bar, we will need to propagate the zero-cost abstraction up the stack by changing the signature to template<kind k> int bar(), which employs the type system to enforce the zero-costness.
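Concretely, the callsites become something like this (a sketch continuing the example above):

    int baz() { return foo<left>(random_int()); }
    int qux() {
        return random_kind() == left ? foo<left>(random_int())
                                     : foo<right>(random_int());
    }
    template<kind k> int bar() { return foo<k>(random_int()); }
And bar's callers now face the exact same choice bar did, and so on up the stack.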

You see this pattern appear everywhere with these zero-cost abstractions (e.g. async/await, although in that case the virality isn't strictly necessary: all subroutines are compiled to state machines anyway, as that is essential to the operation of the callstack — otherwise returning to a caller would not be possible — so a compiler could implement await without changing signatures, but that requires it to know exactly how the callstack is implemented on each particular platform, and that increases implementation costs).
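To make the async/await virality concrete, here's a minimal C++20 coroutine sketch (the Task type here is hypothetical scaffolding, just enough machinery to compile; real code would use a library type). Note that bar's algorithm doesn't change at all, yet its signature must:

    #include <coroutine>
    #include <exception>
    #include <iostream>

    struct Task {
        struct promise_type {
            int value = 0;
            Task get_return_object() {
                return Task{std::coroutine_handle<promise_type>::from_promise(*this)};
            }
            std::suspend_never initial_suspend() noexcept { return {}; }  // run eagerly
            std::suspend_always final_suspend() noexcept { return {}; }   // keep the frame so the result can be read
            void return_value(int v) { value = v; }
            void unhandled_exception() { std::terminate(); }
        };

        explicit Task(std::coroutine_handle<promise_type> h) : h(h) {}
        Task(Task&& o) noexcept : h(o.h) { o.h = nullptr; }
        Task(const Task&) = delete;
        ~Task() { if (h) h.destroy(); }

        // The coroutine already ran to completion, so awaiting just reads the result.
        bool await_ready() const noexcept { return true; }
        void await_suspend(std::coroutine_handle<>) const noexcept {}
        int await_resume() const noexcept { return h.promise().value; }

        int get() const { return h.promise().value; }

        std::coroutine_handle<promise_type> h;
    };

    Task leaf() { co_return 42; }

    Task bar() {                  // was: int bar() -- the async-ness is viral
        int x = co_await leaf();
        co_return x + 1;
    }

    int main() { std::cout << bar().get() << '\n'; }  // prints 43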

So a technical decision related to machine-code optimization now becomes part of the code, and in a very intrusive manner, even though the abstraction — the algorithm in foo — has not changed. This is the very definition of accidental complexity. Making that change at all use sites, all to support a local optimization, is especially painful in a large codebase; it's impossible when foo, or bar, is part of a public API, as it's a breaking change. Even APIs become infected with accidental complexity, all thanks to zero-cost abstractions!

What is the alternative? JIT compilation! But it has its own tradeoffs... A JIT can perform much more aggressive specialization for several reasons: 1. it can specialize speculatively and deoptimize if it was wrong; 2. it can specialize across shared-library calls, as shared libraries are compiled only to the intermediate representation, prior to JITting; and 3. it relies on a size-limited dynamic code cache, which prevents the code-size explosion we'd get if we tried to specialize aggressively AOT; when the code cache fills, it can decide to deoptimize low-priority routines. The speculative optimizations performed by a JIT address the theoretical issue with specialization: a JIT can perform a specialization even if it cannot decisively prove that the information is, indeed, known statically (this is automatic partial evaluation).

A JIT will, therefore, automatically specialize on a per-use-site basis: where possible, it will elide the branch; where it can't, it will keep it. It will even speculate: if at one use site (after inlining) it has so far only encountered `left`, it will elide the branch, and will deoptimize if later proven wrong. (It may need to introduce a guard, which, in this contrived example, would cost as much as the branch it eliminates, but in more complex cases it would be a win; also, there are ways to introduce cost-free guards — e.g. by introducing reads from special addresses that cause a segmentation fault, which is then caught, if the guard trips; OpenJDK's HotSpot does this for some kinds of guards.)
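In C++ terms, the code a JIT emits for bar after such speculation is morally equivalent to this hand-written sketch (deoptimize() is a hypothetical stand-in for the runtime's bail-out path):

    #include <cstdlib>

    enum kind { left, right };
    int random_int();   // helpers from the example above
    int do_left(int);

    [[noreturn]] void deoptimize() { std::abort(); }  // stand-in: really falls back to generic code and recompiles

    // Specialized for the only kind observed so far at this use site: left.
    int bar_jitted(kind k) {
        int x = random_int();
        if (k != left) deoptimize();  // the guard; can be made nearly free with a trapping read
        return do_left(x);            // foo inlined, the branch on kind is gone
    }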

For this reason, JITs also solve the trait problem on a per-use-site basis. A callsite that in practice only ever encounters a particular implementation — a monomorphic callsite — would become cost-free (by devirtualization and inlining), and those that don’t — megamorphic callsites — won’t.
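The same idea for virtual dispatch, again as a hand-written C++ sketch of what the JIT effectively does (deoptimize() is hypothetical as before):

    #include <cstdlib>
    #include <typeinfo>

    struct Shape  { virtual ~Shape() = default; virtual double area() const = 0; };
    struct Circle : Shape {
        double r = 1.0;
        double area() const override { return 3.14159 * r * r; }
    };

    [[noreturn]] void deoptimize() { std::abort(); }  // stand-in for the runtime's bail-out

    // A monomorphic callsite that has only ever seen Circle: the virtual
    // dispatch becomes a cheap type guard plus the inlined body.
    double area_jitted(const Shape& s) {
        if (typeid(s) != typeid(Circle)) deoptimize();
        return static_cast<const Circle&>(s).area();  // devirtualized, inlinable
    }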

So a JIT can give us the same “cost-freedom” without changing the abstraction and without introducing accidental complexity. It therefore allows for more general abstractions that hide, rather than expose, accidental complexity. JITs have many other benefits, such as allowing runtime debugging/tracing "at full speed", but those are for a separate discussion.

Of course, a JIT comes with its own costs. For one, those automatic optimizations, while more effective than those possible with AOT compilation, are not deterministic — we cannot be sure that the JIT would actually perform them. It adds a warmup time, which can be significant for short-lived programs. It adds RAM overhead by making the runtime more complicated. Finally, it consumes more energy.

There's a similar tradeoff between tracing GC and precise tracking of ownership and lifetime (in cases where ownership and lifetime are not statically determined, Rust falls back to reference-counting GC, which is generally less effective than tracing), but this comment is already too long.

All of these make JITs less suitable for domains that require absolute control and better determinism (you won't get perfect determinism with Rust on most platforms either, due to kernel/CPU effects, and not if you rely on its refcounting GC, which is, indeed, more deterministic than tracing GC, but not entirely), or that are designed to run in RAM- and/or energy-constrained environments -- the precise domains that Rust targets. But in all other domains, I think that cost-free abstractions are a pretty big disadvantage compared to a good JIT -- maybe not every cost-free abstraction (some tracking of ownership/lifetime can be helpful even when there is a good tracing GC), but certainly the philosophy of always striving for them. A JIT replaces the zero-cost abstraction philosophy with a zero-cost use philosophy: where the use of a very general abstraction can be made cost-free, it (usually) will be (or close to it); where it can't, it won't -- but the programmer doesn't need to select a different abstraction for some technical, "accidental" reason. That JITs efficiently compile a much wider variety of languages than AOT compilers do also demonstrates how well they abstract the compiler (or, rather, how well the languages that employ them do).

So we have two radically different philosophies, both very well suited to their respective problem domains, and neither generally superior to the other. Moreover, it's usually easy to tell which of these domains an application belongs to. That's why I like both C++/Rust and Java.

