Hi aaron_m04, I have posted a correction to the post and given you credit. Please let me know if I have made any more mistakes; I don't want to mislead any more readers. Thanks very much for your time!
Yes, you can use multiple processes to mitigate some of these problems, but you will create others. For example, you now have to serialize all your objects through a message pipeline, which is expensive, or use shared memory, which also requires copy operations unless you want to write your code in a very low-level way; at that point you might as well use a different language like C. And in an existing system, moving everything to a different process can be a cumbersome experience.
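To make that trade-off concrete, here's a minimal sketch (the function names and sizes are just illustrative): passing an array to a child process pays the pickling/copy cost, while multiprocessing.shared_memory avoids the per-call copy but leaves you managing a raw buffer by hand.

```python
import numpy as np
from multiprocessing import Process
from multiprocessing.shared_memory import SharedMemory

def via_pickle(arr):
    # arr was pickled and copied into this child process.
    print("pickled copy sum:", arr.sum())

def via_shared(name, shape, dtype):
    # Attach to the parent's buffer directly: no per-call serialization,
    # but you now manage raw memory layout and lifetime yourself.
    shm = SharedMemory(name=name)
    view = np.ndarray(shape, dtype=dtype, buffer=shm.buf)
    print("shared view sum:", view.sum())
    shm.close()

if __name__ == "__main__":
    data = np.arange(5_000_000, dtype=np.float64)

    # Route 1: the whole array goes through pickle into the child process.
    p1 = Process(target=via_pickle, args=(data,))
    p1.start(); p1.join()

    # Route 2: one explicit copy into a shared block, then zero-copy access.
    shm = SharedMemory(create=True, size=data.nbytes)
    np.ndarray(data.shape, dtype=data.dtype, buffer=shm.buf)[:] = data
    p2 = Process(target=via_shared, args=(shm.name, data.shape, data.dtype))
    p2.start(); p2.join()

    shm.close()
    shm.unlink()
```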
What you really want is multicore support with a garbage collector that is concurrent and a runtime that doesn't have a global lock. Sadly, few environments support this. Erlang comes to mind, but it doesn't support structural sharing. OCaml is working on multicore support [1]. Go, Haskell and Java seem like your best choices for GC plus multicore.
Why? With CPython there are plenty of cases where parallelism is possible with threads because the GIL is released (for instance, most I/O operations and many C-based number-crunching operations), making threads and locks useful.
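A minimal sketch of the I/O case (the URLs are placeholders): each urlopen call blocks in OS/C code with the GIL released, so a handful of plain threads overlap their waits.

```python
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URLS = [
    "https://example.com/a",  # placeholder URLs
    "https://example.com/b",
    "https://example.com/c",
]

def fetch(url):
    # urlopen blocks in OS/C code with the GIL released, so the other
    # threads keep making progress while this one waits on the network.
    with urlopen(url, timeout=10) as resp:
        return url, len(resp.read())

with ThreadPoolExecutor(max_workers=len(URLS)) as pool:
    for url, size in pool.map(fetch, URLS):
        print(url, size, "bytes")
```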
If you don't want to use C (I don't blame you if you don't), you could use Cython or Numba. And certainly in my number-crunching code, the amount of code that actually crunches numbers, and thus could really benefit from being written in a GIL-releasing way, is quite small.
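For the Numba route, a sketch of what that small hot section might look like (the function and sizes are made up): nogil=True lets the compiled loop drop the GIL, so ordinary threads can run it in parallel.

```python
import numpy as np
from numba import njit
from concurrent.futures import ThreadPoolExecutor

@njit(nogil=True)
def crunch(xs):
    # Compiled to machine code; nogil=True means the loop runs without
    # holding the GIL, so several threads can execute it truly in parallel.
    total = 0.0
    for x in xs:
        total += x * x
    return total

chunks = np.array_split(np.random.rand(8_000_000), 4)

with ThreadPoolExecutor(max_workers=4) as pool:
    print(sum(pool.map(crunch, chunks)))
```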
No, you can still use pure Python with multiple threads and get concurrency just fine. The global lock doesn't stop you from doing that! C extensions being able to release the lock is just a bonus on top.
Lots of problems are merely I/O-related, which can be solved just fine with Python threads. As for number crunching (let's assume that means CPU-intensive tasks), you can always still resort to multiprocessing, which can also be combined with multithreading; I fail to see why such paradigms cannot be combined.
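One way to combine them, as a rough sketch (the tasks and the file path are placeholders): a process pool takes the CPU-bound work past the GIL while a thread pool handles the blocking I/O in the same program.

```python
import hashlib
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def cpu_task(seed: bytes) -> str:
    # CPU-bound: run in a separate process, so the GIL is irrelevant.
    data = seed
    for _ in range(200_000):
        data = hashlib.sha256(data).digest()
    return data.hex()

def io_task(path: str) -> int:
    # I/O-bound: a plain thread is enough, since the GIL is released
    # while the read blocks.
    with open(path, "rb") as f:
        return len(f.read())

if __name__ == "__main__":
    with ProcessPoolExecutor() as procs, ThreadPoolExecutor() as threads:
        cpu_jobs = [procs.submit(cpu_task, b"seed%d" % i) for i in range(4)]
        io_jobs = [threads.submit(io_task, "/etc/hostname")]  # placeholder path
        for job in cpu_jobs + io_jobs:
            print(job.result())
```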
You don't need to run multiple processes: Python already has shared-memory threads (that's what the article is about), and these work just fine concurrently, even with the global lock. The big lock stops you from doing some things, but not many.
> What you really want is multicore support with a garbage collector that is concurrent and a runtime that doesn't have a global lock. Sadly, few environments support this.
Too bad, because sometimes you want to isolate things.