Moving Beyond Type Systems (vhyrro.github.io)
77 points by flexagoon on May 31, 2024 | 61 comments


> 4. External resources (files, I/O): this would be a very difficult topic if not for our simple question. Only writing to a file or outputting to an I/O stream is considered an effect, as only the process of writing mutates external state. Reading data creates new state, but does not modify it, therefore reading data of various sorts is not an effect. Because of this, printing is effectful, but reading from stdin is not.

Lost me at this part. Of course reading is a side-effect. A function which reads from a file/console cannot be pure. Purity implies referential transparency - a function given the same arguments will always return the same result.

If we consider the example given:

    pub fn read_guess() -> int {
        return io.read_int("Take a guess (0-100): ");
    }
We should take `read_int` to be effectful because it advances the read position in the console's input buffer. If it didn't, it would always read from the same position, so even if the user took a second guess, the first guess would be read again the second time `read_int` is called. And since `read_int` is effectful, so too is `read_guess`.
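This is also precisely why Haskell types console reads as `IO` actions rather than pure functions - a minimal sketch of the violation:

    -- Same call, same (zero) arguments, two different results once the
    -- user types different things; as a pure function this would break
    -- referential transparency, so Haskell confines it to IO.
    main :: IO ()
    main = do
      a <- readLn :: IO Int   -- user enters 10
      b <- readLn :: IO Int   -- same call; user enters 20
      print (a == b)          -- False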

> 1. Well, creating a variable is creating new state, but it’s not changing the state, therefore creating a variable is not an effect, but changing it is.

Creating a local variable isn't a side effect, but heap-allocating one is. Sure enough, if we also free the allocation before leaving scope, we can avoid propagating that effect, but if a function returns a value that contains anything it allocated, then the function becomes effectful.


Later on the author makes the point that everything changes some register deep in the guts of the CPU, so by definition everything is always effect-full; but since we invented programming languages to get away from writing assembly, we shouldn't consider that an effect when we're not writing assembly. We invented many programming languages to get away from writing allocations and pointers into buffers, too.


It’s usually implied that “effect” pertains to observable effects. Anything that doesn’t affect referential transparency gets handwaved away. A memoized function might use a private stateful cache, but as long as it doesn’t affect determinism, you can keep its existence secret.
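A minimal Haskell sketch of that: the lazily built list below is a genuine cache, but callers can't observe it, so determinism is preserved.

    -- Memoized Fibonacci: fibs is private state (a cache filled on
    -- demand), yet memoFib is deterministic -- same input, same output.
    memoFib :: Int -> Integer
    memoFib = (fibs !!)
      where
        fibs = map fib [0 ..]
        fib 0 = 0
        fib 1 = 1
        fib n = memoFib (n - 1) + memoFib (n - 2)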


A thought struck me. Security struggles with timing leaks.

Such a programming language would need something like a timing effect to control timing leaks.

In such a programming language a memoized function is effectful on time.


If you print to stdout but nobody is around to read it, does it count as an effect?


Perfectly reasonable question.

Basically, you can consider it a NOP if there's no observable side-effect.

Obviously, stdout has a particular purpose and you wouldn't just disable the side-effect for one use case, but you could imagine having some pseudo console for the purpose of emitting debug messages, which is considered a non-effect in your language (provided you can't read from it and reify the text back into values).

In .NET, for example, you can use `Debug.Write` and `Trace.Write` for emitting debug messages, and calls to them are erased from builds where the corresponding DEBUG or TRACE symbol isn't defined. You can mark any void-returning method with the `[Conditional("DEBUG")]` attribute and have calls to it erased when the DEBUG symbol is not defined, as in release builds.

IMO, this should be standard in all languages - in languages with effects even more so, because we don't want to pass around an effect if our only use for it is debug or trace messages.


Haskell also has something like this in Debug.Trace.trace: even though it's _technically_ impure (it writes to stderr), it's typed as a pure function because it's just for debugging (internally it uses an escape hatch, `unsafePerformIO`, to hide the IO).
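For reference, `trace` has type `String -> a -> a`, so it can be dropped into any pure expression:

    import Debug.Trace (trace)

    -- Typed as pure, but prints its message to stderr as a side effect
    -- whenever the wrapped value is forced.
    addDebug :: Int -> Int -> Int
    addDebug x y = trace ("adding " ++ show x ++ " + " ++ show y) (x + y)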


Yeah, but you're losing one of the things that you get from pure functional languages, which is determinism. The same function run multiple times may return different results.


The author has an entire paragraph dedicated to explaining the difference between "pure" in an effect system vs "pure" in a functional language. It might help to read the rest of the article.


I think the author’s choice of function to demonstrate purity made this harder to grok as a reader. Asking the reader to “…exclude the I/O interactions…” when considering functional purity makes the analogy much harder to follow.

My interpretation of the thesis of that paragraph is that localized mutation does not violate referential transparency, but getting there required some generous reading. By the end of the section, we're given an _example_ of an effectfully pure function, but no standalone definition.

Based on that, I agree that this is a weak point in the overall piece.


Seems like you could solve that by endlessly buffering stdin? Then you'd have to keep passing higher and higher offsets to read from, and old offsets would simply return their original, old values.


It doesn't solve the referential transparency problem. To be referentially transparent, the buffer would need to be fixed in size and have all of its data input prior to being used the first time. Essentially, it would be akin to passing a read-only file as a command line argument to `main`.


While preloading would obviously solve pretty much everything and make all execution trivially deterministic, how is forcing all input-reading to provide an offset not referentially transparent? Particularly when all input is done this way - e.g. checking the current time is I/O too, and would need to pass last-result values around to maintain its "state" as well.


The problem is if you pass an offset which the user has not yet written to: you're either going to have to abort or return an error of some kind. If the user inputs to this location later, and you read from it again, you've broken referential transparency, because you called the same function with the same arguments twice and got a different result.


Why would you have to error, or retain the offset? Block until that input exists, and/or return a higher offset to use for the next call - perhaps add the length of the error message, so you essentially can't guess and have to show the error has been checked. It doesn't have to match the underlying bytes; it's just a progress marker.


What happens if you do:

    x = read_guess();
    y = io.read_int("what's your name: ");
    z = read_guess();

How would your system ensure x and z are the same?


You don't have that API. You offer

    x = read_guess(0)
    y = read_name("what's your name: ", len(x))
    z = read_guess(len(x)+len(y))
If you read(0) twice, you get the same value both times.


But now every function that reads from stdin needs to pass around an offset of where to read from, which is very unergonomic. It also isn't really pure, since the offset is now state that changes whenever a function is called.

    fn do_stuff(offset: int) -> (string, int) {
        x = read_guess(offset)
        y = read_name("what's your name: ", offset + len(x))
        z = read_guess(offset + len(x) + len(y))
        new_offset = offset + len(x) + len(y) + len(z)
        return ("name is: {y}, guess is: {x}", new_offset)
    }


Ergonomics isn't really the point of a hypothetical extreme system. And no, it stays pure: you can compute the offset as a result of all inputs to the system, like I did in my example - just keep passing a "previous state" value around, and compute and return the next state. Like any other functional system already does 99% of the time.
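A sketch of that shape in Haskell, with the State monad doing the threading; `readIntAt`/`readStrAt` are hypothetical fixed-offset primitives (left undefined here), not real APIs:

    import Control.Monad.State (State, state)

    type Offset = Int

    -- Hypothetical primitives: reading at an offset blocks until input
    -- exists there, then always yields the same value for that offset.
    readIntAt :: Offset -> (Int, Offset)
    readIntAt = undefined  -- assumed for illustration

    readStrAt :: Offset -> (String, Offset)
    readStrAt = undefined  -- assumed for illustration

    -- State Offset hides the bookkeeping of passing the offset along:
    readGuess :: State Offset Int
    readGuess = state readIntAt

    readName :: State Offset String
    readName = state readStrAt

    doStuff :: State Offset (Int, String, Int)
    doStuff = do
      x <- readGuess   -- first guess
      y <- readName    -- name
      z <- readGuess   -- second guess
      return (x, y, z)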


I've been thinking a lot about effects systems recently. A few months ago I implemented an effects-system-based interpreter in Haskell for an embedded DSL prototype for a project at work. In our case, the effects we were concerned with were all some variation of reading data, and the effects-based approach allowed us to statically ensure that data would be available, and let us perform some clever optimizations to reduce the overhead of large IO operations.

We've since rewritten the system and, while we still support an optional effects-based style in our DSL, it's not being used very heavily. In practice, our users found the ergonomics of the effects system quite challenging, and it introduced some type inference challenges. The biggest problem was that our use-cases ended up with functions that would have hundreds of effects, which was fairly unwieldy, and the type errors were difficult to deal with. Since we've introduced the updated version, most users prefer to use our newer features that allow them to write more traditional code, even though it means they don't get composable effects.

On the other side of the experience, I've come to believe that effects systems are a good idea, but when adding them to an existing language it's probably best to make them an opt-in feature that can be used to constrain specific small parts of a program, rather than something applied globally. I also think we need a bit more research into the ergonomics before they'll appeal to a lot of users. That said, the guarantees and optimization opportunities ours gave us were really nice, and were quite difficult to achieve without building on top of the effects system (our new system is about an order of magnitude more code, for example).


In Erlang, messages can bring state to a gen_server, and the accumulations of state are sliced very small and spread out through the system.

The problem with imperative languages is the accumulation of global shared state. Worst case, the interactions between those states grow factorially. Best case, they grow logarithmically, but practically you will have to constantly fight against them trending toward n^(1/2) instead, which is not sustainable.

No, I think effects need homes, and letting them talk to each other should be a formal introduction (friction), not the unending piles of expediencies you see in companies that will soon collapse under their own weight.


How did you learn how to implement an effects system? Are there resources you can share that taught you the fundamentals?


Personally I didn’t do a lot of specific research when I started. I’ve read through the implementations of a few of the ones out on hackage, and a few papers, so the ideas were in the back of my mind. Here are a few papers that might be useful (sorry I don’t have links, I’m just looking through my research directory and copying title names):

- Stitch: The Sound Type-Indexed Type Checker (Functional Pearl) by Richard Eisenberg

- A Criterion for Kan Extensions of Lax Monoidal Functors by Tobias Fritz and Paolo Perrone

- Effect systems revisited—control-flow algebra and semantics by Alan Mycroft, Dominic Orchard, and Tomas Petricek

- Kleisli arrows of outrageous fortune by Conor McBride

- Parametric Effect Monads and Semantics of Effect Systems by Shin-ya Katsumata

- Unifying graded and parameterised monads by Dominic Orchard and Philip Wadler

For a much gentler introduction to some of the basic bits - like using GADTs to accumulate effect annotations at the type level and building recursive type class instances to traverse them - I'd recommend chapter 15 of my own book, Effective Haskell. From the examples in that chapter and the papers I listed, I think you'd have everything you need to have a go at it.


Thanks for the detailed response! I'll check it out.


Just curious what you think of abilities in Unison? I've yet to use Unison in anger, but from what I've peeked at, abilities look really elegant to me.


I like what I’ve seen of unison and I think it has some great ideas, but I haven’t had a chance to dive into it deeply enough to have a stronger opinion than that unfortunately.


What is the actual problem that this effect-oriented approach addresses? The effects of methods called by other methods will be concealed, so at the call site you'll have no idea that the high-level API `do_stuff` is going to use `io`. That is, unless the effect decoration "infects" the caller like `async`, in which case every complex API method will be decorated with a huge number of effects.

Simply put, what does this buy us?


Effect systems have always felt to me like the functional programming community re-inventing dependency injection, but at the type level.

At the term level, you could just let your function take a function as a parameter, `read_file: IO -> String`, vs. annotating it with some kind of type-level "filesystem effect". And the former is a lot more flexible over time.
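A sketch of that term-level style in Haskell - the "handle pattern", where the capability is an ordinary record of functions (`FileReader` here is hypothetical, not a library type):

    -- The capability is just an argument; swapping implementations is
    -- plain value passing, with no type-level effect machinery.
    newtype FileReader = FileReader { runRead :: FilePath -> IO String }

    countLines :: FileReader -> FilePath -> IO Int
    countLines fr path = length . lines <$> runRead fr path

    realReader, fakeReader :: FileReader
    realReader = FileReader readFile                     -- the real filesystem
    fakeReader = FileReader (\_ -> pure "stub\nlines\n") -- a test double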


This is an interesting approach, but it has some issues with more advanced effects. For example, if you have a function that uses mutable state, you would need to have the state both as an input and an output (for the original and the updated state): `update_mut: State -> String -> State`. This isn't very ergonomic and is probably worse than just `update_mut: String -> State ()`. If you commit to just having the parameters act as "markers" that don't actually relate to the implementation, then you can get away with `update_mut: State -> String -> ()`, but then you lose one of the big benefits of effect systems, which is that you can change the way effects work and have multiple different implementations.


> Effect systems have always felt to me like the functional programming community re-inventing dependency injection

Yeah, so, in python, I've been playing around with "structured dependency injection", basically DI + ideas from structured concurrency (i.e., dependency injection bound to well-defined lexical scopes). And whenever I read about effects systems, it definitely feels like they're talking about, if not the same thing, at least something in the same nuclear family of things.


> What is the actual problem that this effect-oriented approach addresses?

Yes. The blog post is hazy on the actual problem.

Effect containment is an attempt to limit spooky action at a distance. The idea is to keep what happens way over there from negatively affecting what happens here. Most difficult bugs come from that. It helps to keep this in mind when propounding language theories.

Globals and mutable references are the usual legit mechanisms for effects at a distance. Less legit mechanisms are pointer manipulation in C and messing with the dictionaries of distant objects in Python. Non-legit mechanisms are subscripting out of range and dangling pointers.

The trend is away from modifying mutable data. There's the pure functional approach, but then you have to do too many gyrations to have an effect on anything. Someone proposed single-assignment languages back in the 1980s, where each variable could be assigned to only once. That was considered silly at the time, but that's where we mostly ended up. In Rust, you write "let" far more often than you assign to a mutable variable. C++ has "auto". Everybody does it that way now, most of the time. It's roughly equivalent to a functional form, but things have names and there's a place to put dumping, logging, and profiling code. That's mostly where things have settled down. It's not a bad place to be.

There's a whole other area of type theory that revolves around generics. This starts at "complicated" and ends at "incomprehensible". The lesson in this area, from LISP macros to C++ templates to the more exotic languages, is that it's easy to create unreadable code.


Containing effects _in general_ is the natural next step after containing unsafe code. Memory safety issues are just one category of side effect that can have unpredictable effects on your code -- any time your code interacts with the "outside world" there's an opportunity for unpredictable behavior, and the more you can quarantine it, the easier it is to reason about the remaining code.


The difference is that a compiler knows all about the memory model of the language it's compiling. So it can enforce rules that guarantee you handle memory safely.

But it knows nothing about your other effects. The most it can do is force you to annotate that an effect is happening. It can't help you verify correct behavior of that effect.


The compiler knew next to nothing about the memory model before we started annotating lifetimes. Annotating effects is a potentially reasonable next step.

(I say "potentially", because even just annotating lifetimes is already considered a DX nightmare by a lot of people)


> The compiler knew next to nothing about the memory model before we started annotating lifetimes.

Compilers for garbage-collected languages have always had to reason about the memory model. Compilers for languages with obligatory bounds checks have always had to reason about the memory model. Compilers for multi-threaded languages have always had to reason about the memory model. Even "good old C" compilers had to reason about the memory model when it came to volatile variables. Rust didn't invent the concept of a memory model, and Rust's lifetimes are only a small part of its memory model.


The obvious next step is to give the compiler the information it needs. The effect types could contain constraints similar to how Rust traits work.


Unless I completely misunderstood the post, a function that calls another function with effects will indeed be "infected". That is, unless the effect ends up being local (such as passing a mutable reference to a local variable).


> The effects of methods that are called by other methods will be concealed, so at the calling site, you'll have no idea that the high-level API `do_stuff` is going to utilize `io`.

The point is to make the caller aware of any side-effects so that the callee can't conceal them.

> That is, unless the effect decoration "infects" the caller like `async`, in which case every complex API method will be decorated with a huge number of effects.

Effects are infectious, precisely because if `foo` has effects, which must be known to `bar` which calls `foo`, then `baz` which calls `bar` must also be aware of the effects of `foo`, else they would be hidden.

> Simply put, what does this buy us?

For one, it makes it explicit what can and cannot be done by some code, as a means to prevent obvious programmer errors. You might consider it analogous to a static versus dynamic type system. In the dynamic system we can call `(foo 123)`, even though `foo` might expect strings rather than integers - and we get a failure at runtime. In a static typing system, we can catch this error much earlier - at compile time. `foo(s : String)` makes it clear what foo expects.

Effects attempt to augment functions not only with the type of value they expect, but also with the capabilities that the function has. If a function is pure, it's not capable of going into your filesystem and deleting data, for instance.

While the effects are infective, the languages encourage you to write as much as you can using pure functions and only use effects where strictly necessary. Essentially, they follow the principle of least privilege[1].

There are ways to avoid infecting a program with the `IO` effect where only "some" IO is needed. For instance, in Clean, which uses uniqueness types instead of effects, a function which writes to a particular file can be given the privilege for just this file, eg `write_foo : *File String -> *File`. That would prevent `write_foo` from opening up another file and writing to it. The file itself must be opened using a `*Filesystem` type, which is provided by `*World` - the primordial source of uniqueness passed into the program's entry point.

The other advantage referential transparency gives us is optimization. If a function is called multiple times with the same arguments, it always produces the same result. If we can detect at compile time that a function is called more than once with the same arguments, we can cache the result of the first call, and replace the second call with the cached value. This same optimization can't be done with unknown side-effects because those side-effects may be desirable - we don't want the compiler to attempt to remove them.
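As a sketch of the kind of rewrite that licenses (with `square` standing in for any pure function):

    square :: Int -> Int
    square x = x * x

    -- Both definitions are observably identical when square is pure,
    -- so the compiler is free to perform the caching itself.
    twice, twiceCached :: Int
    twice       = square 10 + square 10
    twiceCached = let r = square 10 in r + r  -- square evaluated once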

Clean's uniqueness types give us another opportunity for optimization. A uniqueness type is guaranteed by the compiler to not be aliased, which means that we can mutate a value in place whilst retaining referential transparency, though only in a single-threaded manner.

In regards to threading, threads are also side-effects. If we call into some library function written by someone else, we don't want their functions to start spawning threads and potentially mutating values we give it, else it could cause any number of race conditions. If any functions which may spawn a thread have a big red flag on them, we can avert these problems earlier, rather than finding out in production and spending a lot of time debugging.

Another advantage that purity provides (but not uniqueness) is the ability to reuse parts of data structures for multiple values. For example, if we have a linked list `l`, we can write both `x = cons foo l` and `y = cons bar l`, to obtain two new lists which both have `l` as their tail, but they both refer to the same `l` in memory. It's safe to do this because we can't mutate this tail - we wouldn't want a mutation of the list `x` to also mutate `y`. In a language which doesn't prevent arbitrary side-effects, we would need to make a copy of `l` when creating `x` and `y` to prevent this from occurring. Purely functional data structures have many uses - if we want to keep a versioned record of states (like git) for example. Would strongly recommend reading Okasaki's Purely Functional Data Structures[2] for more insight.
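The cons example in Haskell, where this sharing happens by default:

    -- x and y are distinct lists sharing the same tail l in memory;
    -- safe only because l can never be mutated out from under them.
    l :: [Int]
    l = [1, 2, 3]

    x, y :: [Int]
    x = 0  : l   -- cons foo l
    y = 42 : l   -- cons bar l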

[1]:https://en.wikipedia.org/wiki/Principle_of_least_privilege

[2]:https://www.cs.tufts.edu/~nr/cs257/archive/chris-okasaki/dis...


> If we can detect at compile time that a function is called more than once with the same arguments

Just to emphasize, effect systems can let the compiler chase down whole chains of functions calling functions and libraries and so on and know that at no point is there going to be anything non deterministic happening. Something you can't do, btw, if you follow the OP's advice and allow file reads.


Effect systems and capability systems both generally try to solve very similar issues: getting a handle on the side effects of deeply nested and/or 3rd-party code.


Only local effects (basically only mutation) are concealed.


I can't help but feel this is running head first into statements versus expressions.

Specifically the part that views effects as growing in size and that being contrary to desired behavior. Strikes me as worrying about similar concerns.

Seems more that it is the eager evaluation of each line of code that is a problem. Nothing wrong with growing the footprint of concern on code. The problem is how to annotate the concern.

Consider, adding 'logger.whatever(...)' is likely not a concern to the program. Being able to annotate the logger as not an effect to check makes sense. Of course, all edge cases matter. Are the arguments eagerly evaluated? What if the logger doesn't even use them, due to level?

With lisp, the magic wasn't only that you can treat code as data, but also that you could define code that ran at different times. And you could largely do that in "user space."


The article starts off with a motivating example of effects getting "bigger" when composed, but I don't see where it does anything to contain / control that expansion (other than "proto", which does full inference).

Seems like complex functions could end up with effects lists of unwieldy size, similar to how Elm programs end up with a giant "Msg" type.


This was the problem we saw in practice building an effects based system at work. It had a lot of nice properties, and type inference worked better than you might expect, but there was no way around the types with 100+ effects in the type signature, and people really disliked it.

There are some things you can do to make it more ergonomic. One mistake we made was not focusing on ergonomics early on, and that led to people having some pretty sour experiences. It's something you really have to pay a lot of attention to.


Can you not combine effects into unions and/or intersections resulting in more abstract effects?


In real-world languages with effects, like F* or Koka, you can.


The "getting bigger" actually happens in reverse. `main` has god-mode privileges, and it can selectively grant smaller or equivalent privileges to the functions it calls, which in turn, grant smaller or equivalent privileges to the functions they call. A function can't call another with greater privilege than it has itself. Ideally we should aim to grant only the necessary privileges to functions that need them, with the leaf functions in the call graph being pure functions - those which have no privilege to do anything but compute (or never return, which can be considered a side-effect). We might also want total pure functions, which are guaranteed to return (they only use primitive recursion), though there's no guarantee that they'll return in any reasonable time.

Haskell is a pretty lousy example because it bundles everything up into one big `IO` privilege, which gets passed around everywhere, and monads aren't that great. Monad transformers are terrible at scaling.


> monads aren't that great. Monad transformers are terrible at scaling.

I've dabbled in Haskell, but making anything more than toys became infuriating when dealing with deep stacks of monad transformers: the inscrutable error messages where maybe you just need another `lift` to fix it, or maybe you need to rearrange the whole stack - IDK WTF FML LOL.

I'm now poking around Unison, though without any ideas so far on what to write with it. The Abilities system looks refreshingly simple, and there's even a fair explanation somewhere on the site of how they're at least as powerful as Monads.


I personally think these effect systems are trying to solve the wrong problem. What we really want is capabilities[1], and the means of enforcing them, which I suspect is best done with linear types and uniqueness types. Granule[2] is certainly of interest as it combines these two typing disciplines. Austral[3] is another worth looking into; though uniqueness and referential transparency are not among its goals, it does provide capabilities which are encapsulated in linear types, which serve a similar purpose to uniqueness types in guaranteeing unforgeability.

[1]:https://en.wikipedia.org/wiki/Capability-based_security

[2]:https://granule-project.github.io/

[3]:https://austral-lang.org/


I think the solution to that is to have only a few builtin effects. This is the approach taken by the Verse language:

https://dev.epicgames.com/documentation/en-us/uefn/specifier...

It's a compromise between the granularity of correctness checks and readability, but IMHO it's still a big improvement over current type systems.


In the OP example, this is considered bad:

  let x: string = "hello world";
  x = 32;  #type error
However, a well-designed strong type system should be able to compose types to allow this if desired by the coder (e.g. to read in a column of numbers from a CSV):

  my $forty-two = 42 but 'forty two';
  say $forty-two+33;    # OUTPUT: «75␤» 
  say $forty-two.^name; # OUTPUT: «Int+{<anon|1>}␤» 
  say $forty-two.Str;   # OUTPUT: «forty two␤» 
Calling ^name shows that the variable is an Int with an anonymous object mixed in. However, that object is of type Str, so the variable, through the mixin, is endowed with a method with that name, which is what we use in the last line.

https://docs.raku.org/routine/but


It's bad because of the explicit signature. If the signature had been `string | int` then it's a type system feature.


Good point - I'm just a little reluctant to swallow that a strong type system is intended to stop us accidentally assigning a number to a string var. Raku is an interesting case, since the strong types in raku (aka perl6) were crafted to preserve the typical (untyped) perl case of

  my $x = "1"; 
  print $x+1;   #2
  print $x.1;   #'11'  ('.' is perl for string concatenate)
so in raku, untyped, that's

  my $x = "1"; 
  say $x+1;   #2
  say $x~1;   #'11'  ('~' is raku for string concatenate)
OK, but raku can be strongly typed too - so how do you graduate to that without throwing out the first draft?

    my IntStr $x = <1>;   # '<>' is raku for an Allomorph literal
  say $x+1;   #2
  say $x~1;   #'11'
just reach for the IntStr type (it's built in and has a literal syntax sugar)

The main idea of gradual typing is to allow code to be evolved by adding types over time, without necessarily knowing all the types you are going to settle on before you start.


Yeah, we can consider `string` and `int` to both be subtypes of `string | int`, and `string & int` to be a subtype of both `string` and `int`.

There are some problems with doing so though, such as inferring the types, but this can be mostly mitigated by giving types a polarity, as Dolan introduced in Algebraic Subtyping[1].

This can be a bit hard to wrap your head around at first because given a function `f : a -> b`, the parameter `a` has negative polarity and return type `b` has positive polarity, but in the function body, the polarities are reversed, and the argument becomes positive, with the result value being negative. Local variables are ambipolar as they can be both introduced and eliminated in the function body.

It becomes even more of a problem when you have types of kind `* -> *` or higher where the type argument can be covariant or contravariant. As far as I'm aware type inference for handling these in combination with algebraic subtyping is still an open problem.

[1]:https://www.cs.tufts.edu/~nr/cs257/archive/stephen-dolan/the...


Out of curiosity, is there any point to a loop that doesn't produce some effect in its body? Obviously I mean traditional imperative looping constructs, not map functions that return a value.


One example is to cause a fairly predictable delay in embedded microcontrollers for bit-banging[1] digital signals. If we know the clock frequency and the cycle cost of a certain instruction (say, add or nop), we can specify how many times to execute the instruction to get the desired delay time. Very inefficient, but very low-level processors like PICs have limited capabilities for interrupt-based timing, and their low-power requirements make it viable to waste cycles.

    /* busy-wait: spin for roughly desired_delay iterations to burn cycles */
    inline void delay (int desired_delay) {
        int i = 0;
        while (i < desired_delay)
            i++;
    }
While "delay" is obviously an effect, it's not an effect in the PLT meaning of the word, since it causes no state changes to the program, and effect systems don't concern themselves with compute cost. The signal emission happens outside of the delay loop - usually inside some enclosing loop which obviously does have side-effects.

[1]:https://en.wikipedia.org/wiki/Bit_banging


Your loop has the effect of mutating the counter. It's not an effect that leaks out of the function, but it is an effect of the loop, which was what your parent was asking about, I think.


No. In fact, the C++ standard, for example, says such a loop can be optimized away by the compiler, even if the syntax suggests it is an infinite loop.


> No.

This isn't quite correct, as demonstrated by the example of C++, because the act of turning a function that never halts (i.e. a function that is guaranteed to diverge) into a function that might halt is an observable effect. Rust actually relies on the fact that infinite loops are infinite, and they had to submit patches to LLVM to make it possible for languages to opt out of C++-style semantics: https://github.com/rust-lang/rust/issues/28728


C++ is pretty unique in considering infinite loops without side effects undefined behavior. It's allowed in C[1] as well as pretty much all mainstream programming languages from the low level to the high level (for good reason). Furthermore, there's an active proposal to match C by removing this undefined behavior in the cases C does[2].

Even in C++, it's uncommon for compilers to exploit it (especially in the trivial case), probably because, even if it's undefined, such behavior is generally pretty surprising and unexpected (and programmers do write infinite loops on purpose!).

Having non-effect producing infinite loops is useful (and sometimes is the only safe way of performing an operation), especially in some low level code. Another commenter pointed out the idea of delays in microcontrollers[3] but I want to provide an example from operating systems.

Take for example a panic function. At the end of it, you usually have an infinite loop of some sort, partly because you can't really safely do much, and because panics generally represent a completely unrecoverable state from the operating system's point of view. A trivial

    for (;;) {}
...is useful.

[1]: For constant expressions.

[2]: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p28...

[3]: https://news.ycombinator.com/item?id=40542860


This post needs to justify itself with a solid, real example of how this additional effect system provides value.



