Tokio Console (tokio.rs)
788 points by hasheddan on Dec 17, 2021 | 118 comments



Praise, 2 suggestions and a question.

Color coding OOM (order-of-magnitude) is smart, but the colors chosen seem quite hard to distinguish. Consider picking different default colors; also consider adding a redundant representation of scale, like an integer representing OOM scale.
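One way to add such a redundant scale indicator (a hypothetical sketch, not anything tokio-console actually does) is to compute the decimal order of magnitude of a duration's nanosecond count and print that integer alongside the color:

```rust
/// Decimal order of magnitude of a duration in nanoseconds
/// (0 for 1-9 ns, 3 for microseconds, 6 for milliseconds, 9 for
/// seconds, ...). Hypothetical helper, not part of tokio-console.
fn oom(nanos: u128) -> u32 {
    // log10 by repeated division avoids floating-point edge cases
    let mut n = nanos.max(1);
    let mut order = 0;
    while n >= 10 {
        n /= 10;
        order += 1;
    }
    order
}
```

An `oom` of 9 or more then reads as "seconds or longer" regardless of how the terminal renders the color.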

It is strange that a thoughtful design around OOM would also choose to keep so many digits. If the goal is to summarize, then please throw those extra digits away. (e.g. 10.3992s specifies the time down to fractions of a millisecond, but those extra digits are not meaningful at the seconds time scale.)

What is this about the runtime polling tasks? Does that happen in Tokio? Why? I am only familiar with 2 async runtimes (node, vert.x) and I was under the impression that neither of these "poll their threads" in any meaningful way. Threads (or processes or fibers or whatever you want to call them in this context) never initiate action on their own. They are ALWAYS waiting for something to happen to them, and that includes timeouts, which would be triggered by passing a thread a clock tick. The runtime's job is to centrally manage resources that must persist between thread behavior, so at most it is going to be polling external resources, and not its own threads.

Or maybe I don't understand what polling means here? I would interpret it as meaning "keep checking a well-known place for changes, and then do something if a change is found". But since async threads can't initiate any change on their own, this is nonsensical.


(primary author of the console here) you're right that there are too many digits of precision right now...the reason for that is that it's actually _not_ a thoughtful design at all, though I appreciate you saying that it is; I just picked an arbitrary number when I was writing the format string and didn't really think about it. We probably don't want to display that much precision --- for smaller units, we probably don't want any fractional digits, for larger units like seconds, we probably want two digits of precision maximum.
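A rough sketch of the rule described above (an assumption about the desired behavior, not the console's actual formatting code): pick the largest fitting unit, drop fractional digits below one second, and keep two fractional digits for seconds and up:

```rust
// Hypothetical formatter implementing "no fractional digits for
// smaller units, two digits of precision for seconds". Durations of
// a minute or more are left in seconds for brevity.
fn fmt_duration(nanos: u64) -> String {
    match nanos {
        n if n < 1_000 => format!("{}ns", n),
        n if n < 1_000_000 => format!("{}µs", n / 1_000),
        n if n < 1_000_000_000 => format!("{}ms", n / 1_000_000),
        n => format!("{:.2}s", n as f64 / 1e9),
    }
}
```

With this rule the 10.3992s from the parent comment displays as 10.40s.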

Regarding the color scheme, glad you like the idea. Because it's a terminal application, the choice of the colors was constrained a bit by the ANSI 256 color palette (https://www.ditig.com/256-colors-cheat-sheet); I wanted it to be obviously a gradient, so I just picked colors that were immediately adjacent to each other in the ANSI palette. It might be better to pick colors that are one step apart from each other, instead, so they're more distinguishable visually...but there's kind of a balancing act between distinguishability and having a clear gradient. We'll keep playing with it!


I'm a little bit of a color freak. Allow me to leave some suggestions :)

- Picking from the 256-color palette will likely give you colors with different brightness. This may hurt readability of darker colors on a dark background, and may make some colors stand out unintentionally. Consider using something like HSLuv [1] to pick colors with the same lightness, then converting to the closest Xterm color [2].

- To make it obvious there is a gradient, I'd pick one lightness (assuming HSLuv) and one saturation (I usually stick to 100%), then pick a distance in hue for each step. For example, if I expect to see a maximum of 7 steps on the screen at once, one way is to start at 0, then 30, then 60, etc. You may choose to go over 180, but keep in mind 360 will be the same as 0, so maybe stop at 240. Note how by picking adjacent colors from the table you are still picking a distance, but the distance is too small, so it's hard to see.

- You may want to choose a different starting point than 0, and maybe different direction for the steps, depending on whether you want the colors to "mean" anything. For example red is commonly associated with warning, so you can arrange to have the top of the range aligned with red. Or arrange to avoid the red region if you don't want that association.

[1] https://www.hsluv.org/

[2] https://codegolf.stackexchange.com/q/156918 <- I'm sure there are more readable ways but can't find them in a quick search
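The hue-stepping suggestion above can be sketched as follows. For brevity this uses plain HSL rather than HSLuv (HSLuv's conversion is more involved and is perceptually uniform, so treat this as a rough stand-in), with fixed saturation and lightness and 30° of hue per step:

```rust
// Standard HSL-to-RGB conversion: h in degrees [0, 360),
// s and l in [0, 1]. Returns 8-bit RGB.
fn hsl_to_rgb(h: f64, s: f64, l: f64) -> (u8, u8, u8) {
    let c = (1.0 - (2.0 * l - 1.0).abs()) * s; // chroma
    let hp = h / 60.0;
    let x = c * (1.0 - (hp % 2.0 - 1.0).abs());
    let (r1, g1, b1) = match hp as u32 {
        0 => (c, x, 0.0),
        1 => (x, c, 0.0),
        2 => (0.0, c, x),
        3 => (0.0, x, c),
        4 => (x, 0.0, c),
        _ => (c, 0.0, x),
    };
    let m = l - c / 2.0;
    let to8 = |v: f64| ((v + m) * 255.0).round() as u8;
    (to8(r1), to8(g1), to8(b1))
}

// The gradient: same saturation and lightness, 30° of hue per step,
// so each step is visibly distinct while still reading as a ramp.
fn gradient(steps: u32) -> Vec<(u8, u8, u8)> {
    (0..steps)
        .map(|i| hsl_to_rgb((i * 30) as f64, 1.0, 0.5))
        .collect()
}
```

Each RGB triple could then be snapped to the nearest Xterm color for 256-color terminals, or emitted directly where truecolor is available.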


> we probably want two digits of precision maximum

I think it depends a lot on jitter in the system. Sigfigs are one kind of error bar, one where I often have to haul my coworkers or myself out of trying to read things into the data that aren't there.

There are times where a 10ms change in a 2 second response actually matter to me, because that's half a percent and not all improvements which are easy are also straightforward. Sometimes you're scrambling for 3% here and 2.2% there. But if the noise in the system is ±50ms then people declaring that they've shaved 15ms off of response time are likely deluding themselves and then deluding the rest of us.

I know how to do some of these things by hand, I'm not sure how you automate them, or in the case of a dashboard, typeset them.


> We probably don't want to display that much precision --- for smaller units, we probably don't want any fractional digits, for larger units like seconds, we probably want two digits of precision maximum.

I really like the way Haskell's Criterion library formats numbers. It always displays four digits total and selects the SI prefix appropriately. I've ported the algorithm to C here, feel free to use it as an inspiration: https://gist.github.com/pkkm/629a66d47ecd16aa89e8b67ba5abd77...
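The gist above is in C; a rough Rust rendition of the same idea (four significant digits, auto-selected SI prefix) might look like the following. This is my own sketch, simplified to just the prefixes a duration display needs, not the Criterion algorithm verbatim:

```rust
// Format a duration in seconds with four significant digits and an
// auto-selected SI prefix. Hypothetical sketch, not library code.
fn si_format(secs: f64) -> String {
    let prefixes = [(1.0, "s"), (1e-3, "ms"), (1e-6, "µs"), (1e-9, "ns")];
    for &(scale, unit) in &prefixes {
        if secs >= scale {
            let v = secs / scale;
            // choose the fractional width so the total is 4 significant digits
            let digits: usize = if v >= 1000.0 {
                0
            } else if v >= 100.0 {
                1
            } else if v >= 10.0 {
                2
            } else {
                3
            };
            return format!("{:.*}{}", digits, v, unit);
        }
    }
    format!("{:.3}ns", secs / 1e-9)
}
```

This keeps columns visually stable: every value is the same width apart from the unit suffix.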


Quite a lot of terminal emulators support 24-bit TrueColor escape codes, so it might be worth using that along with a color space designed for visual intensity correlation (there's a lot of study on this, e.g. for heat maps).


Yeah, currently, the console knows how to detect TrueColor, but in this case, I just used the ANSI 256 palette rather than picking a better one when TrueColor is available...we should probably fix that!

Side note, it turns out that detecting whether a terminal supports 24-bit color is surprisingly fraught. There are a couple of env variables that may be set...but not every terminal emulator will set them. And then you can use `tput`...but the terminal may not have correct data in the tput database. So that was fun to learn about!


Consider going for one of the Colorcet[0] maps if you detect you can display them. They are really useful to have a neutral view of the subject. I suggest log-scale before applying the colormap for order-of-magnitude visualization. There's a Rust crate for these (I forgot the name).

[0]: https://colorcet.holoviz.org/


> the choice of the colors was constrained a bit by the ANSI 256 color palette (https://www.ditig.com/256-colors-cheat-sheet);

Note that many terminals support 24-bit color (truecolor) these days.


I'll answer the polling question. The Tokio runtime (and async rust in general) works a bit differently than other async runtimes like node. With node, callbacks are provided and executed when an OS event is received. With Tokio, there are no callbacks. Instead, async logic is organized in terms of tasks (kind of like async green threads). When the task is blocked on external events, it goes into a waiting state. When external OS events are received, the task is scheduled by the runtime and eventually the runtime "polls" it. Because the poll happens in response to an OS event, most times, the poll results in the task making progress. Sometimes there are false positive polls.

This page goes into a bit more depth and shows an example of how one would implement a (very simple) runtime/executor: https://tokio.rs/tokio/tutorial/async
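To make "the runtime polls it" concrete, here is a grossly simplified toy executor of my own, a hypothetical sketch rather than how Tokio works internally: it busy-spins with a no-op waker, where a real runtime sleeps until a waker fires and only then reschedules the task.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A waker that does nothing, which is fine for an executor that
// busy-polls anyway.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker { raw() }
    fn noop(_: *const ()) {}
    fn raw() -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    // Safety: every vtable function is a no-op, so the waker contract
    // is trivially satisfied.
    unsafe { Waker::from_raw(raw()) }
}

// "The runtime polls the task": keep calling poll until the future
// completes. A real runtime parks until woken instead of spinning.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut);
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

// A hand-written future that reports Pending a few times before
// completing, standing in for "waiting on an external event".
struct CountDown(u32);

impl Future for CountDown {
    type Output = u32;
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<u32> {
        if self.0 == 0 {
            Poll::Ready(42)
        } else {
            self.0 -= 1;
            cx.waker().wake_by_ref(); // request another poll
            Poll::Pending
        }
    }
}
```

`block_on(CountDown(3))` polls the future four times: three polls return Pending and the fourth returns Ready, which is the "task making progress" the parent comment describes.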


I read your link earlier today, and have been thinking about it; I particularly like the pedagogy of rebuilding it "in the small" with your MiniTokio example.

I don't know Rust. If I had to guess, this means that you've reused Rust's threads for tasks, such that they may not be done computing when a resource is available? In any event, I want to circle back to the OP, and note that runtime visualizations like this are awesome, and conversations like this are why. I personally don't think anyone spends enough time dwelling in their runtime(s), and certainly no async runtime has good visualizations, so it's pretty cool to me that tokio-console is taking the lead here. I've been bullish on Rust for 5 years; maybe it's time to try it out for real.


The purpose of async is mostly to avoid OS threads, and Rust decided not to go down the route of implementing user-space threads.

Instead, for async, Rust implemented the ability to basically encode a function's stack frame and instruction pointer into a "normal" (but opaque) struct. What an async runtime like Tokio does is (through a few levels of useful indirection that I won't talk about) store a list of these structs, and decide when it's a good idea to "call" one of them. When called, the structs either return a final value, or return a value saying "call me again later", in which case the runtime presumably puts it back into its list of structs and calls it again sometime later.

Figuring out when to call it is left up to the runtime, but the useful ones will do things like record what operation it's waiting for and call it when that operation is ready.


> rust decided not to go down the route of implementing user space threads

Rust had (optional) user-space threads a long time ago, but that was removed in the pre-1.0 days as it added a lot of complexity and had some unavoidable performance loss even when opting for native threads (it forced dynamic dispatch on anything related to threading or I/O). There was a lot of discussion here but eventually it was declared that the OS thread scheduler was in fact perfectly capable of handling large numbers of threads and that virtual memory mapping meant the stack space allocation for each thread wasn’t a big deal and so green threads were removed.


These feel like co-routines.


Yep, basically equivalent to stackless coroutines.


I sometimes wonder what is the fundamental distinction between a callback-based API and this wakeup-based task API. I guess the main difference is that in a callback you generally provide the result as an argument whereas with a wakeup-based API you just wake the task and it has to look for the result in some stored state somewhere.

But ultimately both of them take the "rest of the computation"/continuation and store it somewhere (i.e. on some sleeping task/callback list) to be awakened/invoked later.


It is fairly subtle and mostly an implementation detail. In Rust, the concept of "polling tasks" was very exposed before the async/await keywords were introduced, so the lingo kind of stuck. There is an argument that we should move away from that lingo now that it is mostly hidden as an implementation detail, but we haven't yet.


@tijsvd mentioned that the callback model usually requires more allocation, which Rust is eager to avoid. I'll add that the wakeup/polling model plays much more nicely with Rust's ownership and borrowing rules. Callbacks usually need to hold pointers to the objects that they capture. In a GC'd language, this usually isn't a big deal, other than sometimes causing some surprising leaks. But in Rust, where the compiler wants to keep track of how long pointers live and which objects are aliased, it gets real ugly real fast. The wakeup/polling model sidesteps this nicely, because no one besides the task itself holds any pointers to the objects that the task owns.


A waker-based API can fall back to waking all paused tasks in a background process to recover from lost events (epoll overflow or such), while a callback-based API can't "just" do so without (allocation?) cost on the happy path.

Their inherent resilience to spurious wakeups is quite useful in that regard. They also work with exotic FDs, as long as those can still be registered with epoll. For example, pidfd can be polled for readability (despite any read(2) call failing with EINVAL), triggering when the corresponding process has terminated.

I guess the benefit is that at least on Linux pre-io_uring, the async syscall way of doing things was via poll/select/epoll to notice when an fd unblocked, followed by waking whatever coroutine/state machine was interested in that event. It composes quite well.


The callback is really hard to implement without allocating memory for each wakeup. The poll mechanism can simply leave the task in place. I suspect the poll approach is also easier to generate code for.


Generally the way async Rust works is that you have a Future trait with a poll method, and if you call it, the future will attempt to make progress if it can. E.g. if the future is a timer, it will check the time, complete if the deadline has passed, and otherwise return Pending.

However, async Rust includes an additional concept: wakers. When your runtime (Tokio) calls poll on a future, it gives the future a waker object, and when the future is ready to continue work, something needs to call wake on the waker. Once this happens, Tokio will poll the task again soon, and Tokio won't poll tasks that have not been woken.

For example, for timers, Tokio includes a timer wheel (a sort of priority queue) with all the registered timers, and the timer wheel calls wake on the appropriate waker whenever a timer expires. Similarly with a message passing channel, the sender calls wake on the receiver's waker when a message is sent.
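That waker flow can be illustrated with a sketch. Two loud caveats: this spawns a thread per timer where Tokio's timer wheel uses one shared mechanism, and the minimal park-based executor below is a hypothetical stand-in for the runtime, not how Tokio is built:

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, Wake, Waker};
use std::thread;
use std::time::Duration;

// A waker that unparks the executor thread when wake() is called.
struct ThreadWaker(thread::Thread);
impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}

// Minimal executor: poll, and if the future is Pending, sleep until
// the waker fires (park/unpark never loses a wakeup).
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut);
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(out) => return out,
            Poll::Pending => thread::park(),
        }
    }
}

// A timer future: on first poll it stores its waker and spawns a
// thread that calls wake() when the deadline passes.
struct Timer {
    shared: Arc<Mutex<(bool, Option<Waker>)>>, // (done, waker)
    started: bool,
    dur: Duration,
}

impl Timer {
    fn new(dur: Duration) -> Self {
        Timer { shared: Arc::new(Mutex::new((false, None))), started: false, dur }
    }
}

impl Future for Timer {
    type Output = ();
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        let mut state = self.shared.lock().unwrap();
        if state.0 {
            return Poll::Ready(());
        }
        // Register the *current* waker on every poll.
        state.1 = Some(cx.waker().clone());
        drop(state);
        if !self.started {
            self.started = true;
            let shared = Arc::clone(&self.shared);
            let dur = self.dur;
            thread::spawn(move || {
                thread::sleep(dur);
                let mut state = shared.lock().unwrap();
                state.0 = true;
                if let Some(w) = state.1.take() {
                    w.wake(); // tell the runtime to poll the task again
                }
            });
        }
        Poll::Pending
    }
}
```

The second poll, triggered by the wake, finds the `done` flag set and returns Ready, which is the "poll in response to an event usually makes progress" pattern from upthread.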


Also, thanks for the thought re: digit precision. I am tracking it here: https://github.com/tokio-rs/console/issues/224


Hi, I'm one of the main authors of `tokio-console`, so if folks have any questions, I'm happy to answer them!


Have you delved at all into the potential for a true "Rust REPL"? This is something I have wanted for some time: instead of recompiling and re-running everything every time, it would compile just the new line and execute it in the memory context of the already-running Rust program. I'm just not enough of a low-level Rust hacker to get it working, but imagine the web frameworks and things that could exist with a good REPL for rapid debugging and prototyping in Rust.

I actually spent a whole summer trying to do this in Crystal and I was very nearly successful, however a few low level limitations got in my way at the end of the day. In Crystal it is actually possible to do this kind of REPL if you have a perfect ability to deep marshal/copy any object, and I almost, almost got that working here: https://github.com/sam0x17/marshal


I think a low-effort, high-value step in the usability direction of a REPL is cargo-script.

I've written some on it [0] and there was a recent reddit thread discussing it [1]

[0] https://epage.github.io/blog/2021/09/learning-rust/

[1] https://www.reddit.com/r/rust/comments/rddokp/media_most_up_...


Something like that would definitely be useful! It's not really in scope for this project, which is intended as a telemetry and diagnostics tool, but I can imagine a Rust REPL being useful. Of course, in order to do that, you'd need to implement a general-purpose Rust interpreter, which seems like a fairly large amount of work.

In practice, I personally just use REPLs mostly for quickly testing out a small expression or something...and honestly, I usually just use the Rust playground (https://play.rust-lang.org/) for this. Small examples compile fast enough in the playground that it's kind of a REPL-like experience for testing stuff out semi-interactively...but it's not the same as connecting to a running application and running new code inside of that application. That's something that seems very difficult to add to Rust, a compiled, statically-linked language with limited support for hot reloading...



Can tokio-console be used on Rust async programs that don't use the tokio runtime (assuming such arises on desktop)?


Yeah, it is decoupled from Tokio. Tokio emits instrumentation via the `tracing` crate. Tokio Console just listens to the tracing events. Any runtime that emits the same events can be used with Tokio Console. This is the tracking issue: https://github.com/tokio-rs/console/issues/130


`valuable`[1] was initially written to support tracing, but I see this 0.1 release doesn't use it. Is valuable seen as more of an add-on to this approach, rather than core to it?

[1]: https://tokio.rs/blog/2021-05-valuable


The current release of `tracing` includes the predecessor of valuable (https://docs.rs/tracing-core/latest/tracing_core/span/struct...). Valuable extracts this functionality and improves on it, but hasn't quite made its way back into tracing yet. I expect that it will be included in upcoming releases (I know eliza has been poking me to release valuable and get it in tracing, I probably should get on that!)


Yeah, the goal is for `valuable` to replace `tracing`'s (currently much more limited) `Value` trait entirely, when we release `tracing` 0.2. Before making a breaking change, though, we want to release opt-in support for `valuable` in the current v0.1.x `tracing` ecosystem, so people can start trying it out and we can figure out if there's anything missing.

You can follow the progress of that here: https://github.com/tokio-rs/tracing/pull/1608

I believe it's currently just waiting for a crates.io release of `valuable`!


This looks great! I wonder how hard it would be to add support for this to debuggers like lldb. I believe they currently lack a way to inspect rust objects.


I stopped following development of `valuable` since I was too busy with non-rust-stuff, but weren't there still some big API-shaping questions open for it?


Not too many now. What it really needs is real usage to find any potential limitations.


What's the intended workflow? Would I run a console on all of my Rust services, and then when debugging some prod issue connect to it? Or would I flip a switch? Or is it more for CLIs?

Curious to hear, in general, how it's been used.


This first release is geared primarily toward local debugging. That said, it is designed to be able to enable/disable instrumentation at runtime, and it will be able to support connecting to a process in production, enabling the instrumentation, and debugging there.

Right now, we wanted to get the first release out and start getting people using it and collect feedback to help prioritize future development.


Got it, thanks. A follow up - is this work funded in any way?


Tokio is a non-profit, community-supported project, although many of us work on it as part of our day jobs. If you want to support Tokio development, you can contribute to it on GitHub Sponsors (https://github.com/sponsors/tokio-rs) and on OpenCollective (https://opencollective.com/tokio).

I've also recently started accepting donations on my personal GitHub Sponsors page (https://github.com/sponsors/hawkw) if you're interested in supporting my open-source work in particular.


Perfect! Thanks.


Is there any question we should ask? About some challenge or interesting discovery/aspect that most of us wouldn't even realize?


Could you please go get hired by Apple and implement this for Swift? Pretty please!


The Rust ecosystem is on fire!

This is such amazing tooling. There's so much best in class engineering going on with this language, Tokio/async runtimes, graphics, web libraries, etc.


Thank you, that's really nice to hear!


Can't wait to try this out! I always wanted the ability to see the tasks that are currently running or waiting.

In the screenshots I can see that tasks have descriptive names. Does anybody know how to set the name for a task? tokio::spawn doesn't take a name parameter. Does it require `tracing`?


Looks like they have added a builder for tokio::task[1] that will allow you to set a name. It's unstable at the moment so you would need to set the tokio_unstable cfg flag.

[1]: https://github.com/tokio-rs/tokio/blob/master/tokio/src/task...


Exactly, this is the first release of Tokio Console. We will keep adding functionality over time, which will help inform the APIs in Tokio to better provide the necessary instrumentation.

I expect the new APIs like the task builder will stabilize (no longer require the `tokio_unstable` flag) over the course of 2022.


Very interesting! I can't help but wonder why these technologies (the framework, and accompanying observability) aren't an OS-level feature; wondering if at this point we need an OS at all, and why not ship a minimal kernel + the binary of an application built w/ this to run directly on the VM.


They're not OS-level because there isn't wide consensus on the right way to do things yet.

Rust, C#, and JS all have similar concepts of async, but they're all slightly different. None of them would be trivial to adapt to other systems languages like C and C++ - Rust requires compiler support to take apart async functions and put them back together as a state machine, and the others lean on their GC (and also compiler support, IIRC). I think there is a proposal to add coroutines in the new C++ standard, but I'm not sure how it would be done in the kernel.

And sometimes I see people saying, "async is very bad, just use coroutines." Having only used Lua coroutines, I don't understand what the big difference is supposed to be.

But mostly, these runtimes don't need OS-level support. Async is sort of a way to do concurrency without a kernel-level context switch for every task switch, right? If it's working so well in-process, why involve the OS at all?

> wondering if at this point we need an OS at all

Depends what you mean by OS. If you deploy in a container, of course your OS shares its kernel with the host. But for some user stories, (glares at Android) "OS" means all the software, including a Blink-based web browser, a plethora of GUI programs, and other things that I would rather call a "desktop environment" than an OS.


> Rust requires compiler support to take apart async functions and put them back together as state machine, and the others lean on their GC

C# actually does the same thing, though it indeed still relies on GC.


It's been done, check out https://www.nerves-project.org/ for an example of running a minimal kernel + Elixir (BEAM VM) on devices. As I understand it, it's powered by Buildroot and should be possible to do for other VMs/languages. Don't think I've seen anything like this for desktop OS's though!


In Linux there's work being done on a language-agnostic lightweight threading model, User-Mode Concurrency Groups or UMCG: https://lwn.net/Articles/863386/

One could imagine a similar, runtime-independent console for UMCG. Note, however, that the programming model for such a runtime would be much more similar to 1:1 threading (i.e. blocking I/O with threads) than async/await.


Unikernels are a thing, e.g. https://mirage.io. AFAICT uptake has been ... slow. You give up a lot to erase most or all of your OS, and only the most performance-sensitive applications realize a benefit.


That's basically what we did with https://github.com/auxoncorp/ferros: bundle Rust programs together as tasks to run atop the formally verified seL4 microkernel.


Arguably, they are an OS-level feature, in the form of OS threads and your favourite task manager; but it is more expensive to run a very large number of OS threads, compared to async tasks or green threads.

Per Dan Ingalls [1], an operating system is a collection of things that don't fit into a language. There shouldn't be one.

[1] Design Principles Behind Smalltalk <https://www.cs.virginia.edu/~evans/cs655/readings/smalltalk....>


What about people who want to use several programs on one computer, possibly even written in different languages? This idea is ridiculous unless you think everyone shares your dream of running everything in a smalltalk image.


I don't think everyone shares _my_ dreams, which mostly don't involve Smalltalk images, but this seems quite feasible. And, from experience, making languages interoperate is a hassle, so "written in different languages" isn't all great.

In the particular case of Smalltalk, the operational(?) semantics of which are specified by a virtual machine, one can just compile targeting the virtual machine. c.f compiling Smalltalk and Java to the Self virtual machine, or all the compilers targeting the JVM nowadays.


Yes, truly: is there something like this for Linux threads and processes?


The Linux Perf tool is pretty powerful. I've barely just scratched the surface myself.

https://perf.wiki.kernel.org/index.php/Main_Page

As a side note, Go has pretty good profiling tools. You can see a trace of goroutine execution, why a goroutine got scheduled out, and when GC kicks in.

https://about.sourcegraph.com/go/an-introduction-to-go-tool-...


A lot of the Tokio console UI was inspired by `htop`, which provides a pretty similar overview of processes and threads. It doesn't really have the same ability to inspect things like `pthread_mutex` and timerfds etc in the same way that the Tokio console can inspect the state of `tokio::sync::Mutex` and `tokio::time::Sleep`, though; although I wonder if something like that could be possible with eBPF...


> wondering if at this point we need an OS at all

People have written unikernels, but they're not that interesting anymore when you can trivially constrain Linux to a single core, keep the remaining cores for your app (no context switching), and keep all of the Linux-y goodness for admin and debugging (SSH, gdb, etc.). All of the performance, none of the admin and deployment headaches.

Basically, unless you truly can't afford that extra core, unikernels are all downside at this point.


With languages with rich runtimes that is exactly the point.

Actually, in a way, POSIX is the missing C runtime that wasn't made part of ISO C.


Very cool. Reminds me of Erlang Observer.


This looks great, thank you for the hard work!

Have you considered also exposing this information in an interactive web interface? Using a zoomable timeline view (https://www.tensorflow.org/tensorboard/tensorboard_profiling...), both for after-the-fact analysis (taking a fixed N-second trace and then inspecting it) and for interactive visualization (an automatically scrolling timeline with the option to pause and scrub).


Yes, we've designed the overall architecture of the system to be modular so that the telemetry can be consumed by a number of different UIs --- we'd love to see someone write web interfaces and/or native GUIs for the console data. I have basically no web development experience whatsoever, though, so I went with the terminal app, because not having to learn JavaScript first made it a lot easier to get started :)

We're also thinking about factoring out the Tokio Console command-line application's internal data model and client code into its own library (https://github.com/tokio-rs/console/issues/227) to make it easier to build other UIs on top of that.


We would love a web view, but we don't have any design ability or much experience with building web apps, so we stuck with a terminal UI.


This is very exciting because it feels like one of the first language features centered on runtime debugging.

Most languages don't seem to do a lot for runtime debugging. Being able to `gdb` and step through a local binary is a far cry from detailed metrics/visualizations. We end up resorting to stuff like Honeycomb, but I'm waiting for the day of a programming language built from the ground up for runtime debugging.

Anyways, this feels like an important step in the right direction and I'm excited to try this out


Go also has pretty good out-of-the-box profiling (pprof[0]) and third-party runtime debugging (delve[1]) that can be used both remotely and locally.

These tools also have decent editor integration and can be used hand in hand:

https://blog.jetbrains.com/go/2019/04/03/profiling-go-applic...

https://blog.jetbrains.com/go/2020/03/03/how-to-find-gorouti...

[0] https://github.com/google/pprof

[1] https://github.com/go-delve/delve


I've been waiting years for some cooperation between languages and editors to achieve things Bret Victor wanted years ago, like "Just change this constant while the program is running", where you can have very tight iteration loops for things that don't need a full recompile.

I mean, Visual Studio has some kind of hot reloading, but I'm not going to use VS; I want mainstream support for this fast iteration and deep runtime debugging.


gdb has allowed you to change runtime variable values for the couple of decades that I've been using it.

e.g. "set var i=10" ?

Or are you talking about something else?


It's very limited, and doesn't work for things like constants that might be compiled straight into the code. By contrast, a true hot-reload can swap out the actual code at runtime.


The Tokio ecosystem is tremendously important for people considering Rust for professional networking stuff. It not only has great libs/features, but will be well maintained for years to come - this is my firm belief, at least, and of course what you need for professional projects (as opposed to personal ones).


Is there something like this for Golang? I've inherited a golang project at work and it's really not fun to debug compared to Python.


Go has pprof (https://github.com/google/pprof), which I've heard good things about --- and, the pprof data model was one of the influences I looked at when designing the Tokio console's wire format. But, I'm not sure if pprof has any similar UIs to the one we've implemented for the Tokio console; and I haven't actually used it all that much.


pprof + runtime tracing spans is probably the best equivalent in Go-land, yeah. I have yet to see any library actually use those, but that may just be me being unlucky.


Already mentioned: https://pkg.go.dev/net/http/pprof

I've just seen that Goland (the IDE) has some nice integration:

https://blog.jetbrains.com/go/2020/03/03/how-to-find-gorouti...


Is there even something like this for Python?

I've used Tracy for C++, which does similar tracing on OS-level threads, but I don't know much about Python.


That's just lovely. I'm a huge fan of programs exposing deep runtime state via normal (not debug mode only) instrumentation.

Java has historically been amazing in this area, it's great to see other languages stepping up as well.

Does anyone know of a similar visualizer for coroutines/threads for async Python?


Thanks for making this! It looks super useful.

Will I be able to use the console or part of it with the rest of the Rust async ecosystem (for example, async-std or futures-rs), or is it Tokio-specific?


It does not have a hard dependency on Tokio. Any runtime can use tracing to emit the necessary instrumentation to work with Tokio Console.


Thank you so much for building the things in the Tokio ecosystem including Console.

So much great stuff!


This looks great, though it's a shame it only has a TUI mode. The interface is pretty much begging for a proper GUI. I guess we're still a ways away from having a good de facto Rust GUI toolkit.


Would love a real GUI, but we have no design talent, so we stuck with something simpler.


Question for y'all using Async/Await: Are you coding web servers/backends, and/or do you have a background in that? Testing a (transparent!) theory.


I'm using it for hardware libraries that are mostly I/O bound. We're not C10K, we're barely C10, but it's easier than managing threads manually for me.


Interesting! I use rust mainly for embedded. There's a framework called Embassy that uses Async. Mainly built by one guy who's pretty sharp. I prefer to use interrupts, DMA, timers etc directly instead of Async, but it seems that a large number of Rust programmers consider Async to be fundamental. I wasn't sure if that was just the web devs or what, but here are 2 counter-examples.


This made me laugh, but Im in the same boat haha


At my day job (https://buoyant.io/), we're using it to write a reverse proxy/load balancer --- one of the use-cases where you really, absolutely do need asynchronous concurrency. :)



Oh I've been waiting for this for sooooo long. Thank you tokio team. I think this is going to make my life a lot easier in the debugging sphere.


Is there something like this for Go?


Having programmed in both Rust and Go: I mostly didn't need what this does in Go because my problem was usually solved by panic'ing, which prints a stack trace of each go routine, allowing me to get enough of the way through figuring out my current problem (usually something was stuck waiting or sending). In tokio, there's no such print-stack behavior, so it's much harder to get a snapshot of what's going on. (I'm relatively new to tokio and Rust, so there's perhaps a handy "print all the tokio tasks and their stack traces" method, but I haven't come across it yet.)

If folks use this console thing for perf reasons and not debug reasons, then yeah, maybe cool to have in Go.



this is great. thank you.


Is there anything comparable to this with boost/asio?


Anyone know what the font is in those screenshots?


It's https://typeof.net/Iosevka/ (I took the screenshots).


Has anybody done this for Python?


trio has a similar utility I believe


very cool, reminds me a bit of JMX, which I miss a lot from my Java days.


This is awesome


All your HN is belong to rustaceans.


Is this the beginning of a new way of deploying applications?


First time I laughed reading a readme for such a brandifious project.


What does brandifious mean? A google search doesn't return anything.


Thanks for asking. In short: pure nonsense with the intention to attract derision. Google doesn't know it? :-) Because I made it up. It is synonymous with grandiose without being pretentious, and was originally meant to be used in some product marketing context. Are you Sir Ious?


I think a submission title regarding something specifically about Rust development should contain the word Rust.


I think most people know Tokio is Rust (I'm not a Rust dev)


It's the German word for Tokyo. There is little reason for a non-Rust dev to know this other meaning.

I hate these ambiguous HN vaguebait headlines and you should too.


There is little reason for a non-German to know this other meaning.


Actually, that's how Tokyo is spelled in many languages besides German.


It used to be spelled Tokio in English-speaking countries before the 1950s.

https://english.stackexchange.com/questions/207014/why-was-t...


If an HN headline says something like "best practices for Phoenix", nobody expects it to refer to the city of Phoenix, Arizona, USA. It's more likely Elixir's Phoenix, the web framework. Or it could be one of a ton of other software projects called Phoenix.

this is not a problem.


I'm not sure about that. I don't use Rust, but I don't see how it could be relevant, because it's real-time; what you want is to dump things and analyze them later. If you have to sit in front of the console to see what's happening, it's not very useful. Is it currently possible to dump the trace and analyze it later with that console (like Go does, or Java with JFR)?

The idea is great; it's the way the information is consumed that I think misses the point.


It's helpful if you're, for example, running unit or integration tests and see stuck threads.


Often you can find a misbehaving process that somehow got into a weird state, one that you weren't tracing beforehand because you weren't expecting trouble and didn't want the overhead, and in those cases it can be tremendously useful to be able to attach some inspection tool to figure out what's going on so you can fix the underlying bug.


I only read the first paragraph but it sounds like it addresses this?

> It gives you a live, easy-to-navigate view into the program's tasks and resources, summarizing both their current status and their historical behavior.


Dumping to a file for later analysis is on the roadmap.



