
In the sense that green threads are easier, sure.

But green threads were not and are not the right solution for Rust, so it's kind of beside the point. Async Rust is difficult, but it will eventually be possible to use Async Rust inside the Linux kernel, which is something you can't do with the Go approach.




Rust Futures are essentially green threads, except much lighter-weight, much faster, and implemented in user space instead of being built into the language.

Basically, Rust Futures are what Go wishes it could have. Rust made the right choice in waiting and spending the time to design async right.


You're overstating your case. Rust's async tasks (based on stackless coroutines) and Go's goroutines (based on stackful coroutines) have important differences. Rust's design introduces function coloring (tentative solution in progress) but is much better suited to the bare-metal scene that C and C++ are famous for. Go's design has more overhead but, by virtue of not having colored functions, is simpler for programmers to write code for. Most things in computer science/programming involve tradeoffs. Also, Rust's async/await is built into the language; it's not a library implementation of stackless coroutines.
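
To make the coloring point concrete, here's a minimal Rust sketch (names are illustrative; it assumes the futures crate just to get a block_on):

    // An async fn is "colored": calling it only builds a Future
    // (a compiler-generated state machine); nothing runs until it
    // is awaited or handed to an executor.
    async fn fetch_value() -> u32 {
        42
    }

    fn main() {
        // A plain (non-async) fn can't .await, so it has to drive the
        // future with an executor. block_on comes from the futures crate.
        let v = futures::executor::block_on(fetch_value());
        println!("{v}");
    }

A goroutine call, by contrast, looks like any other function call to the caller; the runtime handles the suspension, which is exactly the overhead being traded for.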


> Go's design has more overhead but, by virtue of not having colored functions, is simpler for programmers to write code for.

Colored functions are a debatable problem at best. I consider them a feature, not a bug: they make reasoning about programs easier at the expense of writing extra async/await keywords, which is a very minor annoyance.

On the other hand, Go's need to use channels for trivial and common tasks like communicating the result of an async task, together with the lack of RAII and of proper cleanup signaling in channels (you can very easily deadlock if nothing is attached to the other end of the channel), plus the absence of compile-time race detection, all make writing concurrent code harder.
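
For contrast, a small sketch (assuming the tokio runtime) of getting a task's result back in Rust without wiring up a channel:

    // Assumes tokio with the "full" (or at least rt + macros) features.
    #[tokio::main]
    async fn main() {
        // The task's result comes back through the JoinHandle;
        // there is no receiving end to forget about.
        let handle = tokio::spawn(async { 2 + 2 });

        // Awaiting the handle yields Result<T, JoinError>.
        match handle.await {
            Ok(n) => println!("result = {n}"),
            Err(e) => eprintln!("task failed: {e}"),
        }
    }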


ok lol


I think they are referring to channels, which came with the tagline "share memory by communicating."


Rust has had channels since before Go was even publicly announced. Remember that Rust, like Go, was inspired by Pike's earlier language Limbo, which uses CSP. https://en.wikipedia.org/wiki/Limbo_(programming_language)


Rust has had blocking OS-thread channels in the standard library since forever, and async channels for 5 years.

Rust has changed a lot in the past 5 years; people just haven't noticed, so they assume that Rust is still the outdated language they remember.
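
For reference, the standard-library channel mentioned above, which has been there since Rust 1.0:

    use std::sync::mpsc;
    use std::thread;

    fn main() {
        // std::sync::mpsc: a blocking, thread-to-thread channel.
        let (tx, rx) = mpsc::channel();

        thread::spawn(move || {
            tx.send("hello from another thread").unwrap();
        });

        // recv() blocks until a value arrives or every sender is dropped.
        println!("{}", rx.recv().unwrap());
    }

Async channels live in the ecosystem rather than std (e.g. tokio::sync::mpsc or the futures crate's channels).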


yes


We need a way to bridge the gap. Having a runtime may not be suitable for all apps, but it can easily get you to 95%+ of the achievable concurrency performance. The async compile-to-state-machine model is only necessary for the last 5%. Most userland apps rarely need to maximize concurrency efficiency: they need concurrency, yes, but 95% of the achievable performance is more than sufficient.


I really don’t buy this argument that only some small “special” fraction of apps “actually” need async, and that the rest of us “plebs” should be relegated to blocking.

Async is just hard. That’s it. It’s fundamentally difficult.

In my experience, language implementations of async fall along two axes: clarity and control. C# is straightforward enough (having cribbed its async design off functional languages), but I find it scores low on the “clarity” scale and moderate-high on control, because you could control it, but it wasn’t always clear.

JS is moderate-high clarity, low control: easy to understand, because all the knobs are set for you. Before it got async/await sugar, I’d have said it was low clarity, because I’ve seen the promise/callback hell people wrote when given rope.

Python is the bottom of the barrel for both clarity and control. It genuinely has to have the most awful and confusing async design I’ve ever seen.

I personally find Rust scores high in both clarity and control. Playing with the Glommio executor, however, was what really solidified my understanding of how async works.


I learned concurrency and parallelism by confronting blocking behavior: waiting on a network or filesystem request stops the world, so we need a new execution context to keep things moving.

What I realized, eventually, is that blocking is a beautiful thing. Embrace the thread of execution going to sleep, as another thread may now execute on the (single core at the time) CPU.

Now you have an organization problem, how to distribute threads across different tasks, some sequential, some parallel, some blocking, some nonblocking. Thread-per-request? Thread-per-connection?

And now a management problem. Spawning threads. Killing threads. Thread pools. Multithreaded logging. Exceptions and error handling.

Totally manageable in mild cases, and big wins in throughput, but scaling limits will present themselves.

I confront many of these tradeoffs in a fun little exercise I call "Miner Mover", implemented in Ruby using many different concurrency primitives here: https://github.com/rickhull/miner_mover
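
A minimal thread-per-task sketch of that blocking model, in Rust rather than Ruby (illustrative only, not taken from the linked repo):

    use std::thread;
    use std::time::Duration;

    // Simulate blocking work: while this thread sleeps, the OS
    // scheduler runs other threads on the CPU.
    fn blocking_work(id: usize) -> String {
        thread::sleep(Duration::from_millis(100));
        format!("task {id} done")
    }

    fn main() {
        // Thread-per-task: spawn one OS thread per unit of work.
        let handles: Vec<_> = (0..4)
            .map(|id| thread::spawn(move || blocking_work(id)))
            .collect();

        // join() blocks the main thread until each worker finishes.
        for h in handles {
            println!("{}", h.join().unwrap());
        }
    }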


Maybe "add a runtime that switches execution contexts on behalf of the user" and "force the programmer to reimplement everything" are not the only options.


in the sense that sharing memory by communicating is the right approach



