> If you want to serve many requests that do significant CPU work in parallel
For what definition of "many"? If you have 16 cores, that is 16 requests being processed in parallel. Go hasn't even got out of bed by the time it is handling 16 connections. When Go is being used for its intended purpose - handling thousands of concurrent connections - you need to be a lot smarter than just running on all the cores you have.
I'm not saying you wouldn't use all cores to maximize utilization - what I am saying is that if you are scaling beyond one core, then you would already be thinking about scaling across multiple servers behind a load balancer - i.e. you are doing ops work on a production-scale cluster. And if you are doing production ops work, you are going to be tuning all the processes under your control in detail, not just making rash assumptions about how they may or may not utilize multiple cores.
The point I was arguing here is that someone who said "Back when I was learning Go" and who understands the "nature of the claims of the language" is not going to be running into these sorts of scalability issues, and would not be bitterly disappointed to learn of a default threading value of 1. Unless they had a misconstrued understanding of what Go's concurrency support was all about, and incorrectly assumed that it was primarily about parallel computation, rather than its true purpose of having many thousands of lightweight goroutines handling lots of network I/O.