
Aha, anyone remember psyco from the python 2.x era?

https://psyco.sourceforge.net/psycoguide/node8.html

p.s. The psyco guys then went in another direction: pypy.

p.p.s. There's also a pypy-based decorator, but it limits the parameters to basic types. Sadly I forgot the GitHub repo.
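For anyone who never used psyco: the API was tiny. From memory (and the linked guide), typical usage looked roughly like this - a sketch of a Python 2-era library, so treat the details accordingly:

    import psyco

    # JIT-compile everything reachable (the blunt instrument):
    psyco.full()

    # ...or specialize just a hot function:
    def hot_loop(data):
        total = 0
        for x in data:
            total += x * x
        return total

    psyco.bind(hot_loop)  # compiles specialized versions per observed type signature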



Yes! I used psyco in production for a while, and the transition to psyco resulted in some interesting learning...

I had written a C extension to speed up an is-point-in-polygon function that was called multiple times during every mouse move in a Python-based graphical application (the pure Python version of the function resulted in too much lag on early 2000s laptops). When psyco came out, I tried moving the function back to Python to see how close its speed came to the C extension. I was shocked to see that psyco was significantly faster.

How could it be faster? In the C extension, I had specified everything as doubles, because I called it with doubles in some places. It turned out the vast majority of the calls were working with ints. The C extension, as written, had to cast those ints to doubles and then do everything in floating point, even though none of the calculations would have had fractional parts. Psyco did specialization: it produced a version of the function for every type signature it was called with, so it had an all-int version and an all-double version. Psyco's all-int version was much faster than the all-double version I'd written in C, and it was what was being called 95% of the time.
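For context, an is-point-in-polygon test of this kind is usually the classic ray-casting loop. This is a reconstruction for illustration, not the original code (the function name and argument layout are assumptions), written for Python 3:

    def point_in_polygon(x, y, poly):
        # Classic ray-casting test; poly is a list of (x, y) vertex tuples.
        # A specializing JIT like psyco could emit one machine-code version
        # for all-int calls and another for all-float calls.
        inside = False
        j = len(poly) - 1
        for i in range(len(poly)):
            xi, yi = poly[i]
            xj, yj = poly[j]
            if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
                inside = not inside
            j = i
        return inside

    print(point_in_polygon(2, 3, [(0, 0), (10, 0), (10, 10), (0, 10)]))  # True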

If I'd spent enough time profiling, I could have made two C functions and split my calls between them. But psyco discovered this for me. As an experiment, I tried making two versions of the C function. Unsurprisingly, that was faster than psyco. I still shipped the psyco version, as it was more than fast enough and much simpler to maintain.

My conclusion... JITs have more information to use for optimization than ahead-of-time compilers do (e.g., runtime data types, runtime execution environment), so they have the potential to produce faster code in some cases if they exploit that added information through techniques like specialization.


It was very good. But there was a win only if you could avoid the overhead of function calls, which are the slowest thing in Python - an order of magnitude more than anything else (well, apart from exception throwing, which is even slower, but rare). In my case, the speedup in calculations was lost to the slowdown from function calls, so I ended up grouping and jamming most of the calculations into one-big-func(TM), and then that got psyco-assembly-ized.
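A minimal sketch of that trade-off (the helper names are hypothetical): the same arithmetic pays Python's call overhead three times per element in the first form and zero times in the second:

    # Before: three tiny helpers, three Python-level calls per element.
    def _scale(v):  return v * 10
    def _offset(v): return v + 3
    def _clamp(v):  return v if v < 255 else 255

    def process_slow(values):
        return [_clamp(_offset(_scale(v))) for v in values]

    # After: identical arithmetic inlined into one function body.
    def process_fast(values):
        out = []
        for v in values:
            v = v * 10 + 3
            out.append(v if v < 255 else 255)
        return out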

btw, function calls are still the slowest thing. somedict.get(x) is almost 2x slower than (x in somedict and somedict[x]). In my attempt last year at optimizing a transit-protocol lib [0], bundling/copying a few one-line calls into one 5-line func was the biggest win - that, and not doing some things at all.

[0] https://github.com/svilendobrev/transit-python3/blob/master/...
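The dict lookup claim is easy to check with timeit; exact numbers vary by interpreter version, and note the two forms aren't semantically identical (a missing key yields False rather than None):

    import timeit

    d = {i: i for i in range(1000)}

    # Method call: attribute lookup plus call overhead on every access.
    t_get = timeit.timeit("d.get(500)", globals={"d": d}, number=1_000_000)

    # Operator form: no Python-level call; missing keys give False, not None.
    t_in = timeit.timeit("500 in d and d[500]", globals={"d": d}, number=1_000_000)

    print(f"d.get(x):        {t_get:.3f}s")
    print(f"x in d and d[x]: {t_in:.3f}s")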



Yes



