
I love the idea, but it feels like just an idea at this point. I'd rather read about them releasing their 'compile-time' analyzer and publishing measurements of how much startup time it actually saves.

In our codebase, we have pretty strict developer-enforced rules about not doing I/O at the module level, usually through the use of simple "Lazy" wrappers for module-level objects. I'd be curious to know what other approaches people have taken with Python here.
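A minimal sketch of what such a "Lazy" wrapper might look like (the `Lazy` class and its `get` method here are hypothetical, not from any particular codebase):

```python
from typing import Callable, Generic, TypeVar

T = TypeVar("T")


class Lazy(Generic[T]):
    """Defer a side effect (e.g. module-level I/O) until first use."""

    def __init__(self, factory: Callable[[], T]) -> None:
        self._factory = factory
        self._initialized = False
        self._value: T | None = None

    def get(self) -> T:
        # Run the factory exactly once, on first access, then cache the result.
        if not self._initialized:
            self._value = self._factory()
            self._initialized = True
        return self._value  # type: ignore[return-value]
```

With this, a module-level `config = Lazy(lambda: json.load(open("config.json")))` no longer touches disk at import time; the file is only read when `config.get()` is first called.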



It is an interesting approach, though I feel like this could introduce some nasty unintended consequences given how dynamic and introspective Python can be (admittedly I haven't studied this particular implementation).

I've always treated this a bit like single-underscore private functions/methods: follow a convention that produces code that's easy to reason about, even if it's not strictly enforced by the language/compiler. In practice this means separating out the modules that mutate global state and placing the majority of the logic in "strict" modules that only declare a bunch of "pure" classes/routines. The "non-strict" code is then just a thin layer of wiring gluing everything together; for instance, my Celery task files tend to be very thin.


Well, we also heavily use static typing, so you end up with something like

my_db_conn: Lazy[DbConn] = Lazy(lambda: make_db_conn(...))

and MyPy will tell you if you're doing something silly when you try to use it.
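Concretely, a self-contained sketch of how that typed wrapper plays with MyPy (the `Lazy`, `DbConn`, and `make_db_conn` names here are hypothetical stand-ins, assuming a generic wrapper like the one described above):

```python
from typing import Callable, Generic, TypeVar

T = TypeVar("T")


class Lazy(Generic[T]):
    def __init__(self, factory: Callable[[], T]) -> None:
        self._factory = factory
        self._cached: T | None = None

    def get(self) -> T:
        # Build the value on first access; reuse it afterwards.
        if self._cached is None:
            self._cached = self._factory()
        return self._cached


class DbConn:
    def query(self, sql: str) -> str:
        return f"ran: {sql}"


def make_db_conn() -> DbConn:
    return DbConn()


my_db_conn: Lazy[DbConn] = Lazy(make_db_conn)

# MyPy infers my_db_conn.get() as DbConn, so something silly like
# `my_db_conn.get() + 1` is flagged as a type error at check time.
```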

EDIT: After typing up this response and submitting I realize you were talking about their strict approach rather than ours. whoops :)



