
Doesn't that mean you have to somehow know when it's been running too long and then yield control back? What if the slow operation is atomic, like multiplying some huge numbers?



Well, the example given is decoding JSON. If that's happening in a long loop, you can yield once per iteration and be safe. Not all problems break apart that neatly, but in those cases what chance did the server have of not timing out anyway, you know?

Note that once per iteration might be too often, but you can just measure how long a typical iteration takes, compare that to how soon you want to be able to preempt the task, and yield at the right interval.
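
Roughly something like this, where handle() is a stand-in for whatever per-item work you're doing and the 5 ms budget is just an example:

    import asyncio
    import time

    async def process(items, budget_s=0.005):
        # Yield to the event loop whenever we've held it for ~budget_s.
        last_yield = time.monotonic()
        for item in items:
            handle(item)  # hypothetical per-item work, e.g. decoding one record
            if time.monotonic() - last_yield >= budget_s:
                await asyncio.sleep(0)  # give other tasks a turn
                last_yield = time.monotonic()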


Seems like abstractions will bite you there: most code just does some variant of cool_library.unmarshall(request), and those libraries won't have the same yielding mechanism you do.


Abstractions are meant to be broken! One could probably work around this by adding new functions to cool_library, or modifying existing ones, with code copy-pasted from the library but with some asyncio.sleep(0) calls spliced in at strategic places :). For legacy projects, cheating like this may make more sense than rewriting the whole project in a saner tech stack.
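
A sketch of what that kind of cheat could look like, with decode_chunk() standing in for whatever cool_library actually does internally:

    import asyncio

    async def unmarshall_cooperatively(chunks):
        # Same shape as the hypothetical library's decode loop,
        # but with a yield point spliced in.
        result = []
        for chunk in chunks:
            result.append(decode_chunk(chunk))  # hypothetical library internals
            await asyncio.sleep(0)  # the spliced-in yield
        return result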


Before web workers, this was how you did things in the browser to avoid the "page is not responding" popup on computationally expensive operations: break each big operation into many small operations and step to the next phase using setTimeout(.., 1).
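
The same step-and-reschedule pattern translates to asyncio too; here's a rough sketch using loop.call_later as the setTimeout stand-in (do_work() and the chunk size are made up):

    import asyncio

    def step(loop, items, i=0, chunk=1000):
        # Do one bounded slice of the big operation...
        for item in items[i:i + chunk]:
            do_work(item)  # hypothetical per-item work
        if i + chunk < len(items):
            # ...then schedule the next slice, like setTimeout(step, 1).
            loop.call_later(0.001, step, loop, items, i + chunk, chunk)

Kick it off from a running event loop with loop.call_soon(step, loop, items).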



