Yeah, in that case I think the link to boats's work obscures the point a bit.
I take what might be a slightly different read of matklad's point; I don't think Rust has much compromised its vision in terms of which broad features to support, but it has on a couple of occasions chosen to ship something that wasn't perfect, because being useful requires bounding the time spent iterating on a design.
So Rust 1.0 shipped without async, even though it was known to be needed for some of Rust's core use cases, because it was too far from being ready and it wouldn't do to wait forever. Once that decision was made, it had implications for how async could work; in particular, really doing it right requires linear types, but this wasn't appreciated when Rust 1.0 shipped and it's not a backwards-compatible change, so by 2018 it was off the table. The choice was: do async in a way that works with the existing design decisions, at the cost of some elegance, or don't do it at all. The former choice is not just more "industrial"; I would argue that it coheres better, because waiting for multiple events at the same time is a core feature that a language for foundational software has to have, and the combinator-based approach that people were using in 2018 cohered poorly with the rest of the language (e.g., it required unnecessary heap allocations). So this wasn't really a compromise to coherence.
(This also happened on a lesser scale when async/await first shipped—e.g., specific "async" syntax instead of a more general coroutine feature—because of eagerness to ship something that year. boats has claimed that this was a matter of existential survival for the language; I'm not sure I agree. But while async/await is a bit less conceptually pure than fully general coroutines, I don't believe that any of today's common complaints about async are downstream of the decision at that time to try to ship something quickly; there don't seem to have been a lot of obvious mistakes from then.)
(My understanding is that const has a similar story but I'm less familiar with the design space there, because people haven't exhaustively chronicled its history like they've done with async, perhaps because it's not as heatedly controversial.)
> in particular, really doing it right requires linear types, but this wasn't appreciated when Rust 1.0 shipped and it's not a backwards-compatible change, so by 2018 it was off the table.
It was pretty much off the table well before that, because a usable implementation of linear types requires being able to ensure the absence of panics. (A panic must unwind the stack, which amounts to automatically running drop implementations.) The two issues are quite closely linked, and hard to address in isolation.
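To make the link concrete, here's a minimal sketch (my own toy example, with a made-up MustFinish type) of why unwinding is the problem: with the default panic=unwind, even a value that a linear type system would require you to consume explicitly gets its Drop run implicitly as the stack unwinds.

    struct MustFinish;

    impl Drop for MustFinish {
        fn drop(&mut self) {
            // This runs during unwinding too, which is exactly the implicit
            // destruction a linear type is supposed to rule out.
            eprintln!("MustFinish was dropped implicitly");
        }
    }

    fn main() {
        let _token = MustFinish;
        // With the default panic=unwind, this panic unwinds the stack and the
        // drop() above runs automatically; with panic=abort it would not.
        panic!("something went wrong");
    }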
I think an interesting wrinkle is that you might also want “semi-linear types”: types that are nominally linear but can be dropped as an unwinding backstop.
For instance, if you’re dealing with database transactions you probably want to make it explicit whether you commit or roll back, but on panic you can likely allow the transaction to be cleaned up (rolled back) automatically.
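To sketch what that might look like in today's Rust (a toy approximation with a made-up Transaction type; real linearity would make forgetting to finish it a compile error rather than a lint): #[must_use] plus by-value commit/rollback methods stand in for the explicit part, and Drop is the unwinding backstop.

    #[must_use = "call commit() or rollback() explicitly"]
    struct Transaction {
        finished: bool,
    }

    impl Transaction {
        fn begin() -> Self {
            Transaction { finished: false }
        }

        fn commit(mut self) {
            // ... send COMMIT to the database ...
            self.finished = true;
        }

        fn rollback(mut self) {
            // ... send ROLLBACK to the database ...
            self.finished = true;
        }
    }

    impl Drop for Transaction {
        fn drop(&mut self) {
            if !self.finished {
                // Only reached if the value is dropped without an explicit
                // finish, e.g. while unwinding after a panic: roll back here.
            }
        }
    }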
Most Rust ORMs and query builders expose a transaction API that takes a closure and runs it inside the transaction, rolling back on unwind or (in most cases) if it's not explicitly committed. This is the most common idiom in Rust for dealing with situations where you want to pass extra data to or from a cleanup routine. Unfortunately, for the async use case in particular it happens to be unsound: https://tmandry.gitlab.io/blog/posts/2023-03-01-scoped-tasks...
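For reference, that idiom looks roughly like this (a rough sketch with made-up Connection/Transaction types, not any particular crate's actual API): the success and error paths finish the transaction explicitly, and the guard's Drop rolls back if the closure panics.

    struct Connection;
    struct Transaction;

    impl Connection {
        fn begin(&mut self) -> Transaction {
            // ... send BEGIN to the database ...
            Transaction
        }
    }

    impl Transaction {
        fn commit(self) {
            // ... send COMMIT ...
            std::mem::forget(self); // skip the rollback in Drop
        }

        fn rollback(self) {
            // ... send ROLLBACK ...
            std::mem::forget(self);
        }
    }

    impl Drop for Transaction {
        fn drop(&mut self) {
            // Backstop: runs only if neither commit nor rollback was called,
            // e.g. while unwinding after a panic inside the closure.
        }
    }

    fn with_transaction<T, E>(
        conn: &mut Connection,
        f: impl FnOnce(&mut Transaction) -> Result<T, E>,
    ) -> Result<T, E> {
        let mut tx = conn.begin();
        match f(&mut tx) {
            Ok(value) => {
                tx.commit();
                Ok(value)
            }
            Err(err) => {
                tx.rollback();
                Err(err)
            }
        }
    }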
This is one of many things that could have been done to solve the unwinding-through-linear-types problem, if it were still possible to make backwards-incompatible changes to the language.
Yes, but unlike most of the proposed solutions to this problem, this one was (1) seriously considered prior to the release of Rust 1.0, and (2) wouldn't have caused major changes to the way most people write Rust programs in practice. i.e. Rust without panic catching in safe code is still essentially Rust.
I think we are using different meanings of the term "cohere" and I am not sure how to reconcile them. I agree that Rust with async is a more useful language. I don't think being useful implies anything about how coherent a language is (I would point to bash and perl as examples of useful languages with very little coherence). "Coherence" to me means that all the features fit together tightly and are designed with each other in mind, and I don't think that's the case for async and const in Rust—simply because they aren't finished being designed.
Your point on coherence is similar to the perspective of an ex-C++ maintainer. This video came out a decade ago (https://www.youtube.com/watch?v=KAWA1DuvCnQ&t=2530s), and I feel his lesson went unheeded. It's relevant to the bigger and more dangerous concept of Conceptual Integrity by Fred Brooks.
I think so, yes. If you remove any of the things in the “core” I mention in the post, the language hangs together much worse even though it’s smaller; enums without pattern matching is a simple example.
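(As a throwaway illustration of that point, my own example rather than anything from the post: the enum below is only pleasant to use because match both distinguishes the cases and binds their payloads in one construct.)

    enum Shape {
        Circle { radius: f64 },
        Rect { width: f64, height: f64 },
    }

    fn area(shape: &Shape) -> f64 {
        // Pattern matching checks the tag and binds the payload at once;
        // without it you'd be back to manual tag fields and accessors.
        match shape {
            Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
            Shape::Rect { width, height } => width * height,
        }
    }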
I’m not just saying that I want to go back to the “good old days”; I really do think that those parts of Rust were designed as a coherent whole, in the same way that Uiua is designed as a coherent whole.