Hacker News

This is confusing. The piece devotes more words to one concern than to any other: that Rust doesn't have a select/poll abstraction, and that idiomatic network code simply allocates a task per socket. But that's true of Golang as well; not just true but distinctively true, one of the first things you notice writing Go programs.


Golang allocates a _goroutine_ per socket. Those goroutines are actually scheduled using epoll (https://golang.org/src/runtime/netpoll_epoll.go). Goroutines are _very_ light (8KB stack size) and don't incur nearly the same cost as a thread in a "context" switch. While for some (honestly very small) subset of problems they are still too large, most people will never need to worry about them.

I have production systems that run several hundred thousand goroutines at once with very little overhead.


Wait, the only way to do concurrent network programming in Rust is to allocate an OS thread to every connection? That doesn't sound right.


Rust Core team member here. Out of the box, the Rust standard library only provides very basic networking support [1]. If you want to process requests in parallel, then yes, you need to spawn an OS thread per request. We went with this model because we wanted the standard library to be portable across all our supported platforms, and there isn't really a great standard for doing portable IO.

Instead, we're layering platform-specific code in external libraries. At the raw C-level bindings, we have libc [2] and winapi [3]. For higher level APIs, we've got Mio [4] which abstracts the system APIs, and now Tokio [5], which uses futures to simplify async operations.

We're just working our way up the stack. All of these are being developed by members of the various Rust teams, so they're as "official" as everything else we're doing.

[1]: https://doc.rust-lang.org/std/net/
[2]: https://crates.io/crates/libc
[3]: https://crates.io/crates/winapi
[4]: https://github.com/carllerche/mio
[5]: https://tokio.rs/


And it's not right. Rust programs can use epoll directly or through abstractions like mio and tokio.

Thread-per-connection is the simplest way to do concurrent networking in Rust using only libstd, but it's not the approach most Rust programmers would actually use (at least, not for a production server handling a large number of connections).


IIRC that's the only option in the standard library, but event loop and coroutine implementations are available from the crate ecosystem.


No, there's tokio, but it is not in the standard library.


> While for some (honestly very small) subset of problems they are still too large, most people will never need to worry about them.

This is also true about OS threads!


    Goroutines are _very_ light (8KB stack size)
2kb, actually.


That's curious, as I've always considered 2 pages (1 unallocated) the absolute lower bound for a thread stack size (1 page of actual, allocated memory for stack, 1 page unallocated as a guard page to detect overflow).

If a goroutine calls into C, and the C code overflows or otherwise writes ‼Fun‼ onto the stack¹ … can it clobber an adjacent stack of another goroutine or other memory?


C code runs on a larger special stack (that's part of the reason calls to C are slow). I don't know whether or not guard pages are added.


There are no added guard pages.


While this confused me a bit too, there's definitely an epoll implementation in the golang syscall package, which I'm guessing ESR was planning on using instead of net? Rust doesn't have such a thing in its std lib, although tokio seems perfect for this.

Edit: Does "But that's true of Golang as well" refer to the one thread/socket model or not having epoll? I may have misinterpreted what you were saying...


It would be extremely janky to write a Golang network server coded directly to select/poll. You would lose most of the networking libraries for any transactions handled directly that way.

(I know it's doable; I did it for a portscanner, where I needed fine-grained timer control. But I had to forego the Go networking stack to do it.)



Au contraire: Go has a built-in 'select' keyword that lets you wait on a set of arbitrary (blocking) channel operations. Start up blocking I/O in goroutines, and select on channels that give you the results.


Channels are not network sockets; select cannot be used to wait on a set of network sockets.

(Early versions of Go had a "network channel" concept, but that was removed from the language: https://softwareengineering.stackexchange.com/questions/1540...)


select{} has nothing to do with select(), epoll etc. It works on channels only, which are not implemented using those syscalls.


A goroutine is a "task" though. And typically you spin up a goroutine for your select, and hang your current goroutine on its completion.

Seems pretty similar to me.


For networking, it's using epoll() under the hood; it depends on what exact non-blocking operation you're using, though.

See this very very old HN thread:

https://news.ycombinator.com/item?id=3565703

(Hopefully the standard library has improved since then.)


That's not access to the raw system's select/epoll.

It just happens to offer a similar model (and have a select operation).



