This depends very strongly on the nature of the simulation. Lots of simulations are handled just fine by modelling your system as arrays. Consider, for example, nonlinear waves. Your simulation loop typically consists of FFT -> k-domain operator -> IFFT -> x-domain operator, repeated for i = 0...T/dt time steps.
There's no reason whatsoever not to do this in Python: the FFT is just FFTW/FFTPACK under the hood, and the x/k-domain operators are NumPy ufuncs (all written in C/Fortran).
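To make that loop concrete, here is a minimal sketch of a split-step scheme for a 1-D nonlinear Schrödinger equation. The equation, grid size, and step sizes are my own illustrative choices, not anything specific from the comment above:

```python
import numpy as np

# Split-step sketch for i u_t = -u_xx + |u|^2 u, purely to illustrate
# the FFT -> k-domain operator -> IFFT -> x-domain operator loop.
N, L, dt, T = 1024, 50.0, 1e-3, 1.0               # grid points, domain length, time step, end time
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)        # angular wavenumbers in np.fft's layout

u = np.exp(-x**2) * (1.0 + 0j)                    # some smooth complex initial condition

linear_phase = np.exp(-1j * k**2 * dt)            # k-domain operator: exact linear propagator
for _ in range(int(T / dt)):
    u = np.fft.ifft(linear_phase * np.fft.fft(u)) # FFT -> apply k-domain operator -> IFFT
    u *= np.exp(-1j * np.abs(u)**2 * dt)          # x-domain operator (nonlinear phase), plain ufunc arithmetic
```

All the heavy lifting is inside np.fft and the ufunc expressions, so the Python overhead per time step is negligible.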
On the other hand, for particle simulations, you need more complicated logic/data structures to handle multipole methods, so the python array model might not work so well.
Yes, it seems that there are some types of simulations that could be structured as numpy operations steered by a little Python code. But can this kind of code be run effectively on large multi-processor machines?
Array operations tend to be highly parallelizable by nature. NumPy operations can certainly be parallelized, and even distributed. Take a look at Blaze.
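Blaze aside, the basic idea that makes array code easy to parallelize is just "chunk the array, run the same kernel on each chunk, reassemble". Here's a rough sketch using nothing but NumPy and the standard library; the function names and worker count are illustrative, and this is not Blaze's API:

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def expensive_kernel(chunk):
    # Stand-in for some per-element work expressed as NumPy ufuncs.
    return np.sin(chunk) ** 2 + np.cos(chunk) ** 2

def parallel_map(arr, n_workers=4):
    # Split into independent chunks, process them in worker processes, reassemble.
    chunks = np.array_split(arr, n_workers)
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        return np.concatenate(list(pool.map(expensive_kernel, chunks)))

if __name__ == "__main__":
    data = np.random.rand(1_000_000)
    print(parallel_map(data)[:5])
```

Libraries that distribute array work do essentially this, plus smarter scheduling and avoiding the copy/pickle overhead you pay with plain process pools.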
That looks interesting. I know that numpy calculations should be straightforwardly parallelizable; my question, out of curiosity, was whether they were in practice.
Note also that a common bottleneck for array processing is memory bandwidth. Multithreading something that is memory bound will not get you much speed.
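As a back-of-envelope illustration (the bandwidth figure below is an assumed round number, not a measurement): an elementwise c = a + b on float64 arrays does one flop per element but moves 24 bytes, so memory traffic, not compute, sets the ceiling.

```python
# Rough roofline estimate for elementwise c = a + b on float64 arrays.
bytes_per_element = 3 * 8      # read a, read b, write c
flops_per_element = 1
bandwidth = 20e9               # assumed sustained memory bandwidth, bytes/s
ceiling = bandwidth / bytes_per_element * flops_per_element
print(f"memory-bound ceiling: {ceiling / 1e9:.2f} GFLOP/s")  # ~0.83 GFLOP/s
```

A single modern core can compute far more than that, so throwing more threads at the same memory bus gains little.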
There are tons of optimisations and new representations that could be experimented with for arrays. While NumPy is already reasonably fast, I'm convinced you could get much faster by extending it (either within the project or outside it). String/object arrays are also nowhere near as useful as they could be.