
The context switch for threads remains very expensive. You may have 4,000 threads on a system, but those belong to lots of different processes, each spinning up its own threads. It's still more efficient to have one thread per core for a single computational problem, or at most one per hardware thread (often two per core now with SMT). You can test this with something like rayon or GNU parallel by using more threads than you have cores: it won't go faster, and past a certain point it goes slower.
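A minimal sketch of the one-thread-per-core approach, using only the standard library rather than rayon (the function name and workload are made up for illustration):

```rust
use std::thread;

// Split a CPU-bound sum across one thread per hardware thread, using
// available_parallelism() to pick the count. Spawning far more threads
// than this buys nothing and eventually costs context switches.
fn parallel_sum(data: &[u64]) -> u64 {
    let n_threads = thread::available_parallelism()
        .map(|n| n.get())
        .unwrap_or(1);
    let chunk_size = (data.len() + n_threads - 1) / n_threads;
    thread::scope(|s| {
        let handles: Vec<_> = data
            .chunks(chunk_size.max(1))
            .map(|c| s.spawn(move || c.iter().sum::<u64>()))
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).sum()
    })
}

fn main() {
    let data: Vec<u64> = (1..=10_000).collect();
    println!("{}", parallel_sum(&data)); // 50005000
}
```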

The async case is suited to situations where you're blocking on things like network requests. In that case the thread would be doing nothing, so we want to hand the work off to some other task that is runnable. Green threads let you do that without a kernel context switch.
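A hypothetical sketch of the shape this gives you: instead of one OS thread per request, a small fixed pool of workers pulls tasks off a shared queue. An async runtime's tasks follow the same pattern, except tasks also yield at await points instead of blocking a worker. The names and the "work" (doubling a number as a stand-in for a request) are made up for illustration:

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

// Run n_tasks fake "requests" on a fixed pool of n_workers threads.
fn run_pool(n_tasks: u32, n_workers: usize) -> u32 {
    let (task_tx, task_rx) = mpsc::channel::<u32>();
    let task_rx = Arc::new(Mutex::new(task_rx));
    let (done_tx, done_rx) = mpsc::channel::<u32>();

    let workers: Vec<_> = (0..n_workers)
        .map(|_| {
            let task_rx = Arc::clone(&task_rx);
            let done_tx = done_tx.clone();
            thread::spawn(move || loop {
                // Take the next task; stop when the queue is closed.
                let task = match task_rx.lock().unwrap().recv() {
                    Ok(t) => t,
                    Err(_) => break,
                };
                done_tx.send(task * 2).unwrap(); // stand-in for real work
            })
        })
        .collect();
    drop(done_tx);

    for i in 0..n_tasks {
        task_tx.send(i).unwrap();
    }
    drop(task_tx); // close the queue so the workers exit

    for w in workers {
        w.join().unwrap();
    }
    done_rx.iter().sum()
}

fn main() {
    // 100 "requests" handled by just 4 threads.
    println!("{}", run_pool(100, 4)); // 9900
}
```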




> The context switch for threads remains very expensive

It got even more expensive in recent years after all the speculative-execution vulnerabilities in CPUs: with mitigations enabled, the kernel now runs additional logic on every context switch.


Since that time, context switching changed from an O(log(n)) operation to an O(1) one.

I have no doubt that having a thread per core and managing the data with only non-blocking operations is much faster. But I'm pretty sure current machines can manage a thousand or so threads that are blocked almost the entire time just fine.
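That claim is easy to sanity-check: spawn a thousand threads that spend essentially all their time blocked (sleeping here, as a stand-in for waiting on I/O) and see that they all complete without trouble, since the kernel only schedules a thread when it becomes runnable:

```rust
use std::thread;
use std::time::Duration;

// Spawn n threads that are blocked almost the entire time,
// and return how many completed successfully.
fn spawn_blocked(n: usize) -> usize {
    let handles: Vec<_> = (0..n)
        .map(|_| {
            thread::spawn(|| {
                thread::sleep(Duration::from_millis(20)); // blocked, not running
            })
        })
        .collect();
    handles
        .into_iter()
        .map(|h| h.join())
        .filter(|r| r.is_ok())
        .count()
}

fn main() {
    println!("{} threads completed", spawn_blocked(1000)); // 1000
}
```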


> Since that time, context switching changed from an O(log(n)) operation to an O(1) one.

I'm not sure how that's relevant here. If, for example, a context switch takes 1 ms and I do it 1,000 times a second, I'm using 1,000 ms of CPU time versus not doing it at all. So if you want to use big-O notation in this context, it should be O(n), where n is the number of context switches: you're not comparing algorithms for switching between threads, you're comparing doing the context switch versus not doing it at all.
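To make that arithmetic concrete (using the deliberately exaggerated 1 ms figure from above): the overhead that matters is switches per second times cost per switch, which scales linearly with the number of switches regardless of how fast each individual switch is.

```rust
// Total CPU time burned on context switches per second, in microseconds.
fn switch_overhead_us(switches_per_sec: u64, cost_per_switch_us: u64) -> u64 {
    switches_per_sec * cost_per_switch_us
}

fn main() {
    // 1,000 switches/s at a (hypothetical) 1 ms each burns a full
    // CPU-second every second: an entire core doing nothing useful.
    let us = switch_overhead_us(1_000, 1_000);
    println!("{us} microseconds of CPU per second"); // 1000000
}
```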



