PyInstrument – A statistical Python profiler that focuses on the slow parts (github.com/joerick)
81 points by galacticdessert on Oct 20, 2020 | hide | past | favorite | 22 comments


Very cool, py-spy[1] has been an invaluable tool in my development process since jvns blogged[2] about it. The power of being able to visualize where your code is spending its time is so obvious and I'm glad people are building tools to make that easier.

As a quick compare and contrast between py-spy and pyinstrument it looks like py-spy has the advantage of being able to attach to an already running process which is super useful when your program is stuck and you don't know why. I haven't used pyinstrument yet but I do like the fact that it can do its flame graph in the console, sometimes I find saving down an svg file and opening up the browser a bit arduous. Excited to give it a try.

[1] https://github.com/benfred/py-spy

[2] https://jvns.ca/blog/2018/09/08/an-awesome-new-python-profil...
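For anyone who hasn't tried it, attaching py-spy to a live process is a one-liner (commands per its README; assumes py-spy is installed, the PID is a placeholder, and you have ptrace permission on the target):

```shell
# Live, top-like view of where a running process is spending time
py-spy top --pid 12345

# Record a flame graph from a live process into an SVG
py-spy record -o profile.svg --pid 12345

# Dump the current call stack of every thread (handy for hung processes)
py-spy dump --pid 12345
```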


Another relatively new addition to the Python statistical sampler space is Austin[1], which has a lot of features in common with py-spy. I haven't made a direct comparison between the two yet.

[1] https://github.com/P403n1x87/austin


You can use https://github.com/4rtzel/tfg for an interactive console flamegraph. It supports opening pyspy files.


I see a lot of tracing and sampling profilers for Python, but are there any profilers for manual instrumentation (goes by a bunch of names, frame profiling, performance telemetry, APM etc.)?


As an example, https://github.com/score-p/scorep_binding_python#user-region... is the support in the HPC system https://score-p.org (though it focuses on parallel programs). http://apps.fz-juelich.de/scalasca/releases/cube/4.5/docs/gu... is the associated profile analysis tool.


py-spy also seems very interesting, thanks for linking it. I always found cProfile quite difficult to work with, and usually reverted to line_profiler or some sort of UI for cprof files. The main added benefit of pyinstrument to me is the high signal-to-noise ratio: it's immediately clear what is taking the most time, while retaining the option to dive deeper.

I am also curious to try out the on-demand profiling integration with Flask; seems like a cool thing to have running in the background for my side projects.


py-spy is absolutely invaluable for the attach-to-process feature. also, nothing beats the adrenaline rush of attaching to a prod process and leaving it in a paused state after collecting a profile (true story! SIGCONT is your friend).


> The standard Python profilers profile and cProfile show you a big list of functions, ordered by the time spent in each function. This is great, but it can be difficult to interpret why those functions are getting called. It's more helpful to know why those functions are called, and which parts of user code were involved.

Note that you can use something like gprof2dot to convert pstats dump from cProfile to a visual callgraph: https://github.com/jrfonseca/gprof2dot#python-cprofile-forme...

Not saying that solution's better than pyinstrument — I haven't used this one before, so I'll have to evaluate. Also, the lower overhead is undeniable.
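For completeness, the pipeline is roughly this (assumes gprof2dot and Graphviz are installed; the script name is a placeholder):

```shell
# Dump cProfile stats to a file, then render a call graph SVG
python -m cProfile -o output.pstats myscript.py
gprof2dot -f pstats output.pstats | dot -Tsvg -o callgraph.svg
```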

---

Edit: Another thing I noticed in "How is it different to profile or cProfile?":

> 'Wall-clock' time (not CPU time)

> Pyinstrument records duration using 'wall-clock' time. ...

Seems misleading, as cProfile uses time.perf_counter unless you supply your own timer, and time.perf_counter does measure wall-clock time. See:

https://github.com/python/cpython/blob/ec42789e6e14f6b6ac135...

https://docs.python.org/3/library/time.html#time.perf_counte...
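A quick way to convince yourself that perf_counter is wall-clock: time a sleep, which burns almost no CPU but plenty of wall time.

```python
import time

start_wall = time.perf_counter()   # the timer cProfile defaults to, per the linked source
start_cpu = time.process_time()    # CPU time, for contrast
time.sleep(0.2)                    # sleeping: wall time passes, CPU stays mostly idle
wall = time.perf_counter() - start_wall
cpu = time.process_time() - start_cpu
print(f"wall: {wall:.3f}s  cpu: {cpu:.3f}s")  # wall ~0.2s, cpu ~0s
```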


I've been using pyinstrument for a while.

The biggest problem with the standard profiler is that the reported times are not split by code path. For example, if you have two parts of your code that call the same library function, and you want to know which path is the slow path...you can't. The time reported for each line is a sum of all times/paths it was called. Worse, the visualization tools don't hint that this is the case, so you end up with very incorrect plots. Pyinstrument will give you the time, by path. Super useful, and a huge time saver!
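A tiny stdlib-only repro of that aggregation problem: cProfile reports one summed row for the shared function, so you can't tell from the default view which caller dominated (pstats' print_callers shows immediate callers, but not full paths the way a tree profiler does).

```python
import cProfile
import io
import pstats
import time

def shared():
    time.sleep(0.02)   # stand-in for a slow library call

def fast_path():
    shared()           # calls it once

def slow_path():
    for _ in range(5): # calls it five times -- the real culprit
        shared()

profiler = cProfile.Profile()
profiler.enable()
fast_path()
slow_path()
profiler.disable()

out = io.StringIO()
pstats.Stats(profiler, stream=out).print_stats("shared")
report = out.getvalue()
print(report)  # one row for shared(): 6 calls, times summed across both paths
```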


pyprof2calltree is much better.


The title for this post has a typo - it should be "profileR", not "profile" (er, without the capitalization on the R :) )

I could guess from context, but thought it might be good to point out.

Source: From the repo: "Pyinstrument is a Python profiler"

(Feel free to delete this comment after fixing the typo, or not :) )


I'm a big fan of pyinstrument. Many of the newer profilers, (e.g. py-spy) attach to a process externally via SYS_PTRACE and though that seems great in many ways, it is very much a no-go when you're running code on an HPC cluster and you don't have root access.


On an HPC system, why wouldn't you use the usual HPC performance tools? They will probably give you better analysis and let you profile/trace the libraries that are actually providing the performance. At least Score-P, Extrae, and TAU have Python support, with sophisticated viewers that support call trees and inclusive/exclusive views. When I looked (maybe a couple of years ago) I couldn't find anything in the Python ecosystem that does that, though there may be now.


ptrace is quite slow; py-spy instead reads the target's memory directly and reconstructs the stack frames itself, which is much lower overhead for the profiled app.

From their GitHub README:

  py-spy works by directly reading the memory of the python program using the process_vm_readv system call on Linux, the vm_read call on OSX or the ReadProcessMemory call on Windows.


I like how easy this is to wire up to a Django API; it took me a few seconds to get this hooked up on my local machine.

This gives nicer summarization/presentation than Django Debug Toolbar's profiler, so seems like a good one to have in the toolbox.
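If anyone else wants to try it, the Django hookup (per pyinstrument's README at the time, so treat this as a sketch) is just a middleware entry:

```python
# settings.py (fragment) -- assumes pyinstrument is installed
MIDDLEWARE = [
    # ... your existing middleware ...
    "pyinstrument.middleware.ProfilerMiddleware",
]
# With the middleware enabled, append ?profile to any URL to get the
# profiler output for that request instead of the normal response.
```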


Does this also track time spent in cython modules and function calls through ctypes?


How does it compare with yappi[1]?

[1] https://github.com/sumerc/yappi


> Shows you why your code is slow!

Because you wrote it in Python.

Seriously, Python is probably the slowest mainstream language of all. If you’re building something where performance matters, you should be using a different language.


> Because you wrote it in Python.

Sure, so if I want to shave startup/a slow action from 200ms to 100ms in a non-performance-critical tool, I shouldn't use a profiler, I should rewrite the whole damn thing in Go?

Can we stop these low-information, canned responses already?


> If you’re building something where performance matters, you should be using a different language.

And the Python ecosystem understands this, with those performance-critical bits being implemented in C/Fortran/whatever, not pure Python.


_Ruby enters the room_


So you don't make bad performance choices in C?



