It’s a common strategy for small tasks where the overhead of dispatching each task greatly exceeds the cost of the computation itself. It also improves L1/L2 cache hit rates, since each core works through a contiguous region of memory.
E.g. you have 100M rows and want to cluster them (naively) by a distance function. Each call to dist(arr[i], arr[j]) is crazy fast; the problem is just that there are so many of them. Dispatching them one at a time from a single queue to multiple cores is slower than running everything on one core. The best approach is to assign the work ahead of time to n cores and let each one crunch its own share.
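A minimal sketch of that pre-assignment, assuming a made-up dist function and a flat Vec<f32> standing in for the rows: the data is split into one contiguous chunk per worker before any thread starts, so there is no per-item dispatch at all.

```rust
use std::thread;

// Hypothetical stand-in for the pairwise distance in the example.
fn dist(a: f32, b: f32) -> f32 {
    (a - b).abs()
}

fn main() {
    // Flat Vec<f32> standing in for the rows (scaled down here).
    let rows: Vec<f32> = (0..1_000_000).map(|i| (i % 997) as f32).collect();

    let n_workers = thread::available_parallelism()
        .map(|n| n.get())
        .unwrap_or(4);
    let chunk_len = (rows.len() + n_workers - 1) / n_workers;

    // Static assignment: each worker gets one contiguous slice up front, so
    // there is no dispatch overhead per item and each core streams through a
    // region of memory it keeps hot in its own L1/L2.
    let total: f32 = thread::scope(|s| {
        let handles: Vec<_> = rows
            .chunks(chunk_len)
            .map(|chunk| {
                s.spawn(move || {
                    // Sum of distances between neighbouring rows in this chunk.
                    chunk.windows(2).map(|w| dist(w[0], w[1])).sum::<f32>()
                })
            })
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).sum()
    });

    println!("sum of neighbour distances: {total}");
}
```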
It has always been a bad idea to dispatch that naively, splitting the work evenly across exactly as many threads as you have cores. What if a couple of cores are busy with something else, and you end up waiting almost twice as long as necessary for the slowest thread to finish? I don't know how much software does that, but most of it could easily be fixed to dispatch, say, half a million rows at a time and get better performance on all machines.
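One way to get that behaviour, sketched under the same assumptions as above (hypothetical process function, flat Vec<f32>): instead of pre-splitting evenly per core, workers pull fixed-size chunks off a shared counter, so a thread that gets preempted simply claims fewer chunks and the idle cores absorb the rest.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::thread;

// Hypothetical per-row work; stands in for whatever the real computation is.
fn process(row: f32) -> f32 {
    row.sqrt()
}

fn main() {
    let rows: Vec<f32> = (0..10_000_000).map(|i| i as f32).collect();
    const CHUNK: usize = 500_000; // "half a million rows at a time"
    let next = AtomicUsize::new(0);
    let n_workers = thread::available_parallelism()
        .map(|n| n.get())
        .unwrap_or(4);

    let total: f32 = thread::scope(|s| {
        let handles: Vec<_> = (0..n_workers)
            .map(|_| {
                s.spawn(|| {
                    let mut local = 0.0f32;
                    // Each worker repeatedly claims the next chunk. A thread
                    // stuck behind other load just ends up taking fewer
                    // chunks; nobody waits on it at the end.
                    loop {
                        let start = next.fetch_add(CHUNK, Ordering::Relaxed);
                        if start >= rows.len() {
                            break;
                        }
                        let end = (start + CHUNK).min(rows.len());
                        local += rows[start..end].iter().map(|&r| process(r)).sum::<f32>();
                    }
                    local
                })
            })
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).sum()
    });

    println!("total: {total}");
}
```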
Also, on current CPUs hyperthreading comes into play, so it would launch 28 threads, which would probably still work out pretty well overall.
If you don't pin them to cores, the OS is still free to assign threads to cores as it pleases. Assuming the scheduler is somewhat fair, threads will progress at roughly the same rate.
Blindly dividing work units across cores sounds like a terrible strategy for a general program that's sharing those cores with who-knows-what.