
It really is, but I still favour "unsexy" manual poll/select code with a lot of if/elseing if it means not having to deal with async.

I fully acknowledge that I'm an "old school" system dev who's coming from the C world and not the JS world, so I probably have a certain bias because of that, but I genuinely can't understand how anybody could look at the mess that's Rust's async and think that it was a good design for a language that already had the reputation of being very complicated to write.

I tried to get it, I really did, but my god what a massive mess that is. And it contaminates everything it touches, too. I really love Rust and I do most of my coding in it these days, but every time I encounter async-heavy Rust code my jaw clenches and my vision blurs.

At least my clunky select "runtime" code can be safely contained in a couple functions while the rest of the code remains blissfully unaware of the magic going on under the hood.

Dear people coming from the JS world: give system threads and channels a try. I swear that a lot of the time it's vastly simpler and more elegant. There are very, very few practical problems where async is clearly superior (although plenty where it's arguably superior).




> but I genuinely can't understand how anybody could look at the mess that's Rust's async and think that it was a good design for a language that already had the reputation of being very complicated to write.

Rust adopted the stackless coroutine model for async tasks based on its constraints, such as having a minimal runtime by default, not requiring heap allocations left and right, and being amenable to aggressive optimizations such as inlining. The function coloring problem ("contamination") is an unfortunate consequence. The Rust devs are currently working on an effects system to fix this. Missing features such as standard async traits, async functions in traits, and executor-agnosticism are also valid complaints. Considering Rust's strict backwards compatibility guarantee, some of these will take a long time.
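
To make the coloring point concrete, a minimal sketch (assuming the `futures` crate for a lightweight executor): an `async fn` compiles to a small state machine that just returns a future, and a synchronous caller can't `.await` it without going through some executor.

    use futures::executor::block_on;

    async fn fetch() -> u32 {
        42 // "stackless": the whole fn compiles to a small state machine
    }

    fn sync_caller() -> u32 {
        // fetch().await       // error: `await` is only allowed inside async
        block_on(fetch())      // workaround: drive the future on an executor
    }

    fn main() {
        println!("{}", sync_caller());
    }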

I like to think of Rust's "async story" as a good analogue to Rust's "story" in general. The Rust devs work hard to deliver backwards compatible, efficient, performant features at the cost of programmer comfort (ballooning complexity, edge cases that don't compile, etc.) and compile time, mainly. Of course, they try to resolve the regressions too, but there's only so much that can be done after the fact. Those are just the tradeoffs the Rust language embodies, and at this point I don't expect anything more or less. I like Rust too, but there are many reasons others may not. The still-developing ecosystem is a prominent one.


I read comments like this and feel like I’m living in some weird parallel universe. The vast majority of Rust I write day in and day out for my job is in an async context. It has some rough edges, but it’s not particularly painful and is often pleasant enough. Certainly better than promises in JS. I have also used system threads, channels, etc., and indeed there are some places where we communicate between long running async tasks with channels, which is nice, and some very simple CLI apps and stuff where we just use system threads rather than pulling in tokio and all that.

Anyway, while I have some issues with async around future composition and closures, I see people with the kind of super strong reaction here and just feel like I must not be seeing something. To me, it solves the job well, is comprehensible and relatively easy to work with, and remains performant at scale without too much fiddling.


Honestly, this is me too. The only thing I’d like to also see is OTP-like supervisors and Trio-like nurseries. They each have their use and they’re totally user land concerns.


> It really is, but I still favour "unsexy" manual poll/select code with a lot of if/elseing if it means not having to deal with async.

> I fully acknowledge that I'm an "old school" system dev who's coming from the C world and not the JS world, so I probably have a certain bias because of that, but I genuinely can't understand how anybody could look at the mess that's Rust's async and think that it was a good design for a language that already had the reputation of being very complicated to write.

I'm in the same "old school" system dev category as you, and I think that modern languages have gone off the deep end, and I complained about async specifically in a recent comment on HN: https://news.ycombinator.com/item?id=37342711

> At least my clunky select "runtime" code can be safely contained in a couple functions while the rest of the code remains blissfully unaware of the magic going on under the hood.

And we could have had that for async as well, if languages were designed by in-the-trenches industry developers and not the "I think Haskell and OCaml are great for readability" academic crowd.

With async in particular, the most common implementation is to color the functions by qualifying the specific function as async, which IMO is exactly the wrong way to do it.

The correct way would be for the caller to mark a specific call as async.

IOW, which of the following is clearer to the reader at the point where `foo` is called?

Option 1: color the function

      async function foo () {
         // ...
      }
      ...
      let promise = foo ();
      let bar = await promise;

Option 2: schedule any function

      function foo () {
         // ...
      }

      let sched_id = schedule foo ();

      ...

      let bar = await sched_id;

Option 1 results in compilation errors for code in the call-stack that isn't async, results in needing two different functions (a wrapper for sync execution), and means that async only works for that specific function. Option 2 is more like how humans think - schedule this for later execution, when I'm done with my current job I'll wait for you if you haven't finished.


Isn't mixing async and sync code like this a recipe for deadlocks?

What if your example code is holding onto a thread that foo() is waiting to use?

Said another way, explain how you solved the problems of just synchronously waiting for async. If that just worked then we wouldn't need to proliferate the async/await through the stack.


> Said another way, explain how you solved the problems of just synchronously waiting for async.

Why? It isn't solved for async functions, is it? Just because the async is propagated up the call-stack doesn't mean that the call can't deadlock, does it?

Deadlocks aren't solved for a purely synchronous callstack either - A grabbing a resource, then calling B which calls C which calls A ...

Deadlocks are potentially there whether or not you mix sync/async. All that colored functions will get you is the ability to ignore the deadlock because that entire call-stack is stuck.

> If that just worked then we wouldn't need to proliferate the async/await through the stack.

It's why I called it a leaky abstraction.


Yes actually it is solved. If you stick to async then it cannot deadlock (in this way) because you yield execution to await.


> Yes actually it is solved. If you stick to async then it cannot deadlock (in this way) because you yield execution to await.

Maybe I'm misunderstanding what you are saying. I use the word "_implementation_type_" below to mean "either implemented as option 1 or option 2 from my post above."

With current asynchronous implementations (like JS, Rust, etc), any time you use `await` or similar, that statement may never return due to a deadlock in the callstack (A is awaiting B which is awaiting C which is awaiting A).

And if you never `await`, then deadlocking is irrelevant to the _implementation_type_ anyway.

So I am trying to understand what you mean by "it cannot deadlock in this way" - in what way do you mean? async functions can accidentally await on each other without knowing it, which is the deadlock I am talking about.

I think I might understand better if you gave me an example call-chain that, in option 1, sidesteps the deadlock, and in option 2, deadlocks.


I'm referring to the situation where a synchronous wait consumes the thread pool, preventing any further work.

A is synchronously waiting on B, which is awaiting C, which could complete but never gets scheduled because A is holding onto the only thread. It's a very common situation when you mix sync and async while working in a single-threaded context, like UI programming with async. Of course it can also cause starvation and deadlock in a multithreaded context as well, but the single thread makes the pitfall obvious.
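
A minimal Rust sketch of that situation, assuming a tokio current-thread runtime plus the `futures` crate (the setup here is just illustrative):

    use tokio::sync::oneshot;

    // Single-threaded runtime: one thread has to poll every task.
    #[tokio::main(flavor = "current_thread")]
    async fn main() {
        let (tx, rx) = oneshot::channel::<u32>();

        // "C": a task that could finish immediately once polled...
        tokio::spawn(async move {
            let _ = tx.send(42);
        });

        // "A": ...but we block the only runtime thread waiting on it, so the
        // spawned task never gets scheduled and this call never returns.
        let answer = futures::executor::block_on(rx);

        // Using `rx.await` here instead would yield to the runtime, the
        // spawned task would run, and everything would complete.
        println!("{:?}", answer);
    }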


That's an implementation problem, not a problem with the concept of asynchronous execution, and it's specifically a problem in only one popular implementation: Javascript in the browser without web-workers.

That's specifically why I called it a Leaky Abstraction in my first post on this: too many people are confusing a particular implementation of asynchronous function calls with the concept of asynchronous function calls.

I'm complaining about how the mainstream languages have implemented async function calls, and how poorly they have done so. Pointing out problems with their implementation doesn't make me rethink my position.


I don't see how it can be an implementation detail when fundamentally you must yield execution when the programmer has asked to retain execution.

Besides Javascript, it's also a common problem in C# when you force synchronous execution of an async Task. I'm fairly sure it's a problem in any language that would allow an async call to wait for a thread that could be waiting for it.

I really can't imagine how your proposed syntax could work unless the synchronous calls could be pre-empted, in which case, why even have async/await at all?

But I look forward to your implementation.


> I don't see how it can be an implementation detail when fundamentally you must yield execution when the programmer has asked to retain execution.

It's an implementation issue, because "running on only a single thread" is an artificial constraint imposed by the implementation. There is nothing in the concept of async functions, coroutines, etc that has the constraint "must run on the same thread as the sync waiting call".

An "abstraction" isn't really one when it requires knowledge of a particular implementation. Async in JS, Rust, C#, etc all require that the programmer knows how many threads are running at a given time (namely, you need to know that there is only one thread).

> But I look forward to your implementation.

Thank you :-)[1]. I actually am working (when I get the time, here and there) on a language for grug-brained developers like myself.

One implementation of "async without colored functions" I am considering is simply executing all async calls for a particular host thread on a separate dedicated thread that only ever schedules async functions for that host thread. This sidesteps your issue and makes colored functions pointless.

This is one possible way to sidestep the specific example deadlock you brought up. There's probably more.

[1] I'm working on a charitable interpretation of your words, i.e. you really would look forward to an implementation that sidesteps the issues I am whining about.


That sounds interesting indeed.

I think the major disconnect is that I'm mostly familiar with UI and game programming. In these async discussions I see a lot of disregard for the use cases that async C# and JavaScript were built around. These languages have complex thread contexts so it's possible to run continuations on a UI thread or a specific native thread with a bound GL context that can communicate with the GPU.

I suppose supporting this use case is an implementation detail but I would suggest you dig into the challenge. I feel like this is a major friction point with using Go more widely, for example.


> and not the "I think Haskell and Ocaml is great readability" academic crowd.

Actually, Rust could still learn a lot from these languages. In Haskell, one declares the call site as async, rather than the function. OCaml 5 effect handlers would be an especially good fit for Rust and solve the "colouration" problem.


That’s how Haskell async works. You mark the call as async, not the function itself.


I think Rust’s async stuff is a little half baked now but I have hope that it will be improved as time goes on.

In the mean time it is a little annoying to use, but I don’t mind designing against it by default. I feel less architecturally constrained if more syntactically constrained.


I'm curious what things you consider to be half-baked about Rust async.

I've used Rust async extensively for years, and I consider it to be the cleanest and most well designed async system out of any language (and yes, I have used many languages besides Rust).


Async traits come to mind immediately, generally needing more capability to existentially quantify Future types without penalty. Async function types are a mess to write out. More control over heap allocations in async/await futures (we currently have to Box/Pin more often than necessary). Async drop. Better cancellation. Async iteration.


> Async traits come to mind immediately,

I agree that being able to use `async` inside of traits would be very useful, and hopefully we will get it soon.

> generally needing more capability to existentially quantify Future types without penalty

Could you clarify what you mean by that? Both `impl Future` and `dyn Future` exist, do they not work for your use case?

> Async function types are a mess to write out.

Are you talking about this?

    fn foo() -> impl Future<Output = u32>

Or this?

    async fn foo() -> u32

> More control over heap allocations in async/await futures (we currently have to Box/Pin more often than necessary).

I'm curious about your code that needs to extensively Box. In my experience Boxing is normally just done 1 time when spawning the Future.

> Async drop.

That would be useful, but I wouldn't call the lack of it "half-baked", since no other mainstream language has it either. It's just a nice-to-have.

> Better cancellation.

What do you mean by that? All Futures/Streams/etc. support cancellation out of the box, it's just automatic with all Futures/Streams.

If you want really explicit control you can use something like `abortable`, which gives you an AbortHandle, and then you can call `handle.abort()`

Rust has some of the best cancellation support out of any async language I've used.
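
For instance, a rough sketch with `abortable` (assuming tokio plus the `futures` crate; the endless loop task is just illustrative):

    use futures::future::{abortable, Aborted};
    use std::time::Duration;

    #[tokio::main]
    async fn main() {
        let (task, handle) = abortable(async {
            loop {
                tokio::time::sleep(Duration::from_secs(1)).await;
                println!("still running");
            }
        });
        let join = tokio::spawn(task);

        // Cancel it from anywhere that holds the handle.
        handle.abort();

        // The aborted future resolves to Err(Aborted).
        assert!(matches!(join.await.unwrap(), Err(Aborted)));
    }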

> Async iteration.

Nicer syntax for Streams would be cool, but the combinators do a good job already, and StreamExt already has a similar API as Iterator.


Re: existential quantification and async function types

It'd be very nice to be able to use `impl` in more locations, representing a type which need not be known to the user but is constant. This is a common occurrence and may let us write code like `fn foo(f: impl Fn() -> impl Future)` or maybe even eventually syntax sugar like `fn foo(f: impl async Fn())` which would be ideal.

Re: Boxing

I find that a common technique needed to make abstractions around futures work is Box::pin-ing things regularly. This isn't always an issue, but it's frequent enough that it's annoying. Moreover, it's not strictly necessary given knowledge of the future type; it's again more of a matter of Rust's minimal existential types.
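
For example, a sketch of the usual type-erasure workaround (the `Client`/`fetch` names are made up for illustration):

    use futures::future::{BoxFuture, FutureExt};

    struct Client;

    impl Client {
        // Without async fn in traits / richer `impl Trait`, returning or
        // storing "some future" often means a pinned, boxed trait object.
        fn fetch(&self, id: u32) -> BoxFuture<'static, String> {
            async move { format!("item {id}") }.boxed()
        }
    }

    fn main() {
        let fut = Client.fetch(7);
        println!("{}", futures::executor::block_on(fut));
    }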

Re: async drop and cancellation.

It's not always possible to have good guarantees about the cleanup of resources in async contexts. You can use abort, but that will just cause the next yield point to not return and then the Drops to run. So now you're reliant on Drops working. I usually build in a "kind" shutdown with a timer before aborting in light of this.

C# has a version of this with their CancellationTokens. They're possible to get wrong and it's easy to fail to cancel promptly, but by convention it's also easy to pass a cancellation request and let tasks do resource cleanup before dying.
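
As a rough sketch of that "kind shutdown, then hard abort" pattern in tokio terms (the watch channel, timeouts, and worker body are all illustrative):

    use std::time::Duration;
    use tokio::{sync::watch, time::timeout};

    async fn worker(mut shutdown: watch::Receiver<bool>) {
        loop {
            tokio::select! {
                _ = shutdown.changed() => {
                    // graceful cleanup: flush buffers, close sockets, ...
                    return;
                }
                _ = tokio::time::sleep(Duration::from_millis(100)) => {
                    // normal work tick
                }
            }
        }
    }

    #[tokio::main]
    async fn main() {
        let (tx, rx) = watch::channel(false);
        let mut handle = tokio::spawn(worker(rx));

        // Ask for a "kind" shutdown first...
        let _ = tx.send(true);

        // ...and only hard-abort if the task doesn't wind down in time.
        if timeout(Duration::from_secs(5), &mut handle).await.is_err() {
            handle.abort();
        }
    }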

Re: Async iteration

Nicer syntax is definitely the thing. Futures without async/await also could just be done with combinators, but at the same time it wasn't popular or easy until the syntax was in place. I think there's a lot of leverage in getting good syntax and exploring the space of streams more fully.


> That would be useful, but I wouldn't call the lack of it "half-baked", since no other mainstream language has it either. It's just a nice-to-have.

Golang supports running asynchronous code in defers, as did Zig when it still had async.

Async-drop gets upgraded from a nice-to-have into an efficiency concern as the current scheme of "finish your cancellation in Drop" doesn't support borrowed memory in completion-based APIs like Windows IOCP, Linux io_uring, etc. You have to resort to managed/owned memory to make it work in safe Rust which adds unnecessary inefficiency. The other alternatives are blocking in Drop or some language feature to statically guarantee a Future isn't cancelled once started/initially polled.


> Golang supports running asynchronous code in defers, similar with Zig when it still had async.

So does Rust. You can run async code inside `drop`.


To run async code in Drop in Rust, you need to use block_on() since you can't natively await there (unlike in Go). This is the "blocking in Drop" mentioned above, and it can result in deadlocks if the async logic is waiting on the runtime to advance while the block_on() is preventing the runtime thread from advancing. Something like `async fn drop(&mut self)` is one way to avoid this, if Rust supported it.


You need to `block_on` only if you need to block on async code. But you don't need to block in order to run async code. You can spawn async code without blocking just fine, and there is no risk of deadlocks.
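
For example, a sketch of spawning (rather than blocking on) cleanup from Drop, assuming a tokio runtime is still alive when the value is dropped (the `Session` type is made up):

    use tokio::io::AsyncWriteExt;
    use tokio::runtime::Handle;

    struct Session {
        conn: Option<tokio::net::TcpStream>,
    }

    impl Drop for Session {
        fn drop(&mut self) {
            if let Some(mut conn) = self.conn.take() {
                // Fire-and-forget: Drop returns immediately and the runtime
                // finishes the cleanup later. The future has to own `conn`,
                // since it may outlive this drop call.
                Handle::current().spawn(async move {
                    let _ = conn.shutdown().await;
                });
            }
        }
    }

    #[tokio::main]
    async fn main() {
        // Dropped inside the runtime, so Handle::current() is available.
        let _session = Session { conn: None };
    }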


1) That's no longer "running async code in Drop", as it's spawned/detached and can semantically run outside the Drop. This distinction is important for something like `select`, which assumes all cancellation finishes in Drop.

2) This doesn't address the efficiency concern of using borrowed memory in the Future. You have to either reference-count or own the memory used by the Future for the "spawn in Drop" scheme to work for cleanup.

3) Even if you define an explicit/custom async destructor, Rust doesn't have a way to syntactically defer its execution like Go and Zig do, so you'd end up having to call it on all exit points, which is error-prone like C (it would result in a panic instead of a leak/UB, but that can be equally undesirable).

4) Is there anywhere one can read up on the work being done for async Drop in Rust? I was only able to find this official link, but it seems to still have some unanswered questions: https://rust-lang.github.io/async-fundamentals-initiative/ro...


Now you lose determinism in tear down though.


Ok, in a very, very rare case (so far never happened to me) when I really need to await an async operation in the destructor, I just define an additional async destructor, call it explicitly and await it. Maybe it's not very elegant but gets the job done and is quite readable as well.

And this is basically what you have to do in Go anyway - you need to explicitly use defer if you want code to run on destruction, with the caveat that in Go nothing stops you from forgetting to call it, whereas in Rust I can at least have a deterministic guard that panics if I forget to call the explicit destructor before the object goes out of scope.
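
A sketch of that explicit-destructor-plus-guard pattern (the `Connection` type and `close` method are made up for illustration):

    struct Connection {
        closed: bool,
    }

    impl Connection {
        // The explicit "async destructor": the owner calls and awaits it.
        async fn close(mut self) {
            // flush buffers, send a goodbye frame, etc.
            self.closed = true;
        }
    }

    impl Drop for Connection {
        fn drop(&mut self) {
            // Deterministic guard: blow up if someone forgot close().await.
            assert!(self.closed, "Connection dropped without close().await");
        }
    }

    #[tokio::main]
    async fn main() {
        let conn = Connection { closed: false };
        conn.close().await; // forgetting this line panics in Drop
    }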

BTW async drop is being worked on in Rust, so in the future this minor annoyance will be gone


Yes I am aware of async drop proposals. And the point is not to handle a single value being dropped but to facilitate invariants during an abrupt tear down. Today, when I am writing a task which needs tear down I need to hand it a way to signal a “nice” shutdown, wait some time, and then hard abort it.


Actually, this "old school" approach is more readable even for folks who have never worked in the low-level C world. At least everything is in front of your eyes and you can follow the logic. Unless code leveraging async is very well-structured, it requires too much brain-power to process and understand.



