I learned concurrency and parallelism by confronting blocking behavior: waiting on a network or filesystem request stops the world, so we need a new execution context to keep things moving.
What I realized, eventually, is that blocking is a beautiful thing. Embrace the thread of execution going to sleep, as another thread may now execute on the (single core at the time) CPU.
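Here's a minimal sketch of that idea in Ruby (the thread names are illustrative, not from Miner Mover): one thread "blocks" on a sleep standing in for slow I/O, and while it's parked, the other thread gets the CPU.

```ruby
results = []

io_thread = Thread.new do
  sleep 0.1             # simulate a blocking network/filesystem call
  results << :io_done   # runs only after the "I/O" completes
end

cpu_thread = Thread.new do
  results << :cpu_done  # runs while io_thread is asleep
end

[cpu_thread, io_thread].each(&:join)
results  # => [:cpu_done, :io_done]
```

Even under the GVL, blocking calls release the lock, so the sleeping thread costs (almost) nothing while the other one makes progress.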
Now you have an organization problem: how to distribute threads across different tasks, some sequential, some parallel, some blocking, some nonblocking. Thread-per-request? Thread-per-connection?
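Thread-per-connection, for example, might look like this sketch (a toy echo server; the names and the two-connection limit are just for illustration). Each accepted socket gets its own thread, so a slow client only blocks its own handler:

```ruby
require 'socket'

server = TCPServer.new('127.0.0.1', 0)  # port 0: let the OS pick a free port
port = server.addr[1]

acceptor = Thread.new do
  2.times do
    client = server.accept
    Thread.new(client) do |sock|   # thread-per-connection
      line = sock.gets
      sock.write("echo: #{line}")
      sock.close
    end
  end
end

replies = 2.times.map do |i|
  s = TCPSocket.new('127.0.0.1', port)
  s.puts("hello #{i}")
  reply = s.gets
  s.close
  reply
end

acceptor.join
```

Thread-per-request is the same shape one level up: the unit of work handed to a thread is a request rather than a whole connection's lifetime.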
And now a management problem. Spawning threads. Killing threads. Thread pools. Multithreaded logging. Exceptions and error handling.
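A thread pool plus per-job error handling addresses a chunk of that management problem. A hedged sketch using Ruby's thread-safe Queue (pool size and the :done sentinel are my choices, not a prescribed pattern):

```ruby
POOL_SIZE = 4
jobs    = Queue.new   # Thread::Queue is thread-safe out of the box
results = Queue.new

workers = POOL_SIZE.times.map do
  Thread.new do
    while (job = jobs.pop) != :done
      begin
        results << job.call
      rescue => e
        results << e   # surface the error instead of silently killing the worker
      end
    end
  end
end

10.times { |i| jobs << -> { i * i } }
POOL_SIZE.times { jobs << :done }    # one sentinel per worker to shut down
workers.each(&:join)

squares = []
squares << results.pop until results.empty?
squares.sort  # => [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

The rescue inside the worker loop is the important part: an unhandled exception would otherwise terminate the thread and quietly shrink the pool.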
Totally manageable in mild cases, with big wins in throughput, but scaling limits will present themselves.
I confront many of these tradeoffs in a fun little exercise I call "Miner Mover", implemented in Ruby using many different concurrency primitives here: https://github.com/rickhull/miner_mover