Using Mypy in Production (crmarsh.com)
116 points by charliermarsh on Aug 22, 2022 | hide | past | favorite | 129 comments


As someone who owns multiple label makers and organizes my utensils by type when I put them into the dishwasher I really like Mypy. Cool writeup, good points. I think a lot of people (ML scientist people especially) who haven't been forced to use a type checker don't realize the productivity benefits of having a type checker running on your whole codebase - they only see the annoying slow-down of having to do things a particular way and having the type checker bug you for inconspicuous errors.

Example: One pattern (which I don't think a lot of people are familiar with?) that I started adopting recently is the use of `Literal` for type-checking strings. For example, instead of something like

(on closer reading I realized this was in the blog post as well, but I suspect maybe some ML people will have seen this specific case before)

  class ActivationType(enum.Enum):
    sigmoid = "sigmoid"
    tanh = "tanh"

  def get_activation(key: str | ActivationType) -> nn.Module:
    if isinstance(key, str):
      key = ActivationType[key]
    if key == ActivationType.sigmoid:
      return nn.Sigmoid()
    if key == ActivationType.tanh:
      return nn.Tanh()
    raise KeyError(key)
you can do something like this instead:

  from typing import Literal

  ActivationType = Literal["sigmoid", "tanh"]

  def get_activation(key: ActivationType) -> nn.Module:
    if key == "sigmoid":
      return nn.Sigmoid()
    if key == "tanh":
      return nn.Tanh()
    raise KeyError(key)
The advantage is that you can do something like

  act = get_activation("tahn")
and Mypy will show an error for your typo (instead of having to run your code and eventually hit the `KeyError`). So if you're just trying to quickly implement an idea, you don't have to kill brain cells searching for typos.

Of course, doesn't make a difference if your coworkers all use Vim and Emacs with no extensions...


With Literal you can also do this:

    @dataclass
    class Message:
        event: Literal['message']
        msg: str
    
    @dataclass
    class File:
        event: Literal['file']
        url: str
    
    typedload.load(data, File | Message)
Where data is something like `{'event': 'message', 'msg': 'bla'}`.

After the load, you can trust that your objects are well formed and let mypy do its thing.

So types can be useful at runtime as well. Otherwise the alternative would be to use raw dictionary, which mypy can't check.

pydantic does a similar thing but it uses its own different typing, so it needs a mypy plugin or it flags everything as wrong.
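To illustrate why the `Literal` discriminant matters to the checker, here's a minimal sketch without typedload or pydantic (classes as in the example above; the `describe` function is invented):

```python
from __future__ import annotations

from dataclasses import dataclass
from typing import Literal

@dataclass
class Message:
    event: Literal['message']
    msg: str

@dataclass
class File:
    event: Literal['file']
    url: str

def describe(item: Message | File) -> str:
    # mypy narrows the union by comparing the Literal `event` field
    if item.event == 'message':
        return item.msg  # item is known to be Message here
    return item.url      # and File here
```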


For those who aren't familiar, typedload is a third party library: https://pypi.org/project/typedload/


You can also use TypedDict if you would normally use a raw dictionary but want the type checking.


Yes, but you still need a module like typedload to do the runtime checking.

TypedDict performs no checking by itself at runtime.

    class A(TypedDict):
        a: int


    A(d=32)
    # Returns {'d': 32}
    typedload.load({'d': 32}, A)
    # TypedloadValueError: Value does not contain fields: {'a'} which are necessary for type A


Unlike fixed records, dictionary access is hard for a JIT like PyPy to optimise.


> raise KeyError(key)

This would be flagged as unreachable I believe.

mypy/pyright also supports exhaustive checking with unions

  def get_activation(key: Literal['sigmoid', 'tanh']) -> nn.Module:
    match key:
      case 'sigmoid': return nn.Sigmoid()
      case 'tanh': return nn.Tanh()
There are limits to mypy/pyright's exhaustiveness checking: it will fail if the union is in CNF, so you will need to convert it to DNF [1].

[1]: https://en.wikipedia.org/wiki/Disjunctive_normal_form


It seems to me that the problem with the first version is that it's stringly typed with `key: str | ActivationType`. There would be no issue if `key` was required to be an `ActivationType` with `key: ActivationType`, as you've done in the second version.

Code like the first version usually comes from someone trying to make the API more convenient, but perhaps a better way, more in line with static type checking, would be to have a separate `activation_from_str(...)` function.

For a similar situation, I was thinking about doing something along these lines (somewhat Rust inspired), although the duplication of variant names in the from_str method isn't ideal:

    import enum
    import typing

    # stubs so this example is self-contained & runnable
    class Sigmoid: pass
    class Tanh: pass

    class ActivationType(enum.Enum):
        sigmoid = Sigmoid
        tanh = Tanh

    class Activation:

        @classmethod
        def new(cls, variant: ActivationType):
            return variant.value()

        @classmethod
        def from_str(cls, variant: typing.Literal["sigmoid", "tanh"]):
            return cls.new(ActivationType[variant])

    activation1 = Activation.new(ActivationType.sigmoid)
    activation2 = Activation.from_str("tanh")

Now, adding a new activation type just means adding it as a variant to ActivationType and adding a string to the literal list in from_str. Duplicating the name in from_str isn't great, but IMO it's better than having to add a new branch for each variant and the external API is nice too.


Just ran into an interesting issue with Literal

    >>> l = Literal["foo", "bar", "baz"]
    >>> "foo" in l
    TypeError: typing.Literal['foo', 'bar', 'baz'] is not a generic class
    >>>    
    >>> isinstance("foo", l)
    TypeError: Subscripted generics cannot be used with class and instance checks
So Literal is a Schroedinger's generic.
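If you do need the values at runtime, `typing.get_args` unwraps a `Literal` into a plain tuple (a small sketch; the alias name is made up):

```python
from typing import Literal, get_args

Flavor = Literal["foo", "bar", "baz"]

# get_args returns the literal values as an ordinary tuple,
# so membership tests work at runtime
allowed = get_args(Flavor)
print("foo" in allowed)  # True
```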


Typing constructs in Python are meant as annotations; they are not for run-time use. They essentially get erased (not exactly, as they are available for introspection, but they aren't your typical code).


No, I understand that (although things like @runtime_checkable and NamedTuple muddle things a bit). My point is that, based on the error messages, Literal both is and isn't a generic type.


I love your username. The militants turn, startled.


I can't articulate specifically why but using typing in Python just feels like so much pain compared to other languages that have "opt-in" nominal typing syntax (PHP et al.).

The workflow feels frustrating, the ecosystem seems diverse and has no clear "blessed" path, and I'm still confused about what is bundled by Python and what I need to pull in from an external source. I REALLY want to use mypy but by the time I've figured out how to pull it all together I probably could have finished the program I'm working on.

The relevant factor here might be the size of Python programs I typically work on, somewhere between a few hundred lines and a few thousand.

I'm glad other people are having success because hopefully that'll smooth the pavement for the next time I circle around and try to add some meaningful types to my Python programs.


I think something that might help you understand a little better is this: Python includes everything you need to write type hints, but it doesn't include any type checking functionality. They're making a dynamic language where you can write types if you want, but they don't really do anything. To check your types, you need to use a third-party typechecker like Mypy.


I appreciate your response but I'm actually a pretty experienced Python programmer. I get it conceptually, I've even used it in practice just to get a feel for it, I just found the whole experience _painful_!


The gods intend that you use those annotations with a checker and some IDE that supports autocompletions and highlighting based on those.

They basically lure you into annotating types all over the place so that their products can work better.

When you work in an editor that doesn’t support those fancy things, you’ll feel exhausted pretty soon.


There are a few reasons why it sucks so much:

1. Most of the Python community doesn't really get the importance of static typing. As a result tons of dependencies completely lack typing, or are written in a dynamic way that is incompatible with static typing.

2. You have to go to some significant effort to set up static type checking in CI, and because of the above issue getting 0 errors is much harder than in e.g. Typescript.

3. The semantics of type hints are not defined so there are multiple incompatible interpretations. Better hope your dependencies use the same type checker as you!

4. MyPy is way better than nothing but it's actually pretty buggy and absolutely full of unsoundness and legacy hacks (not that surprising given its age).

I would strongly recommend using Pyright if you make the unfortunate mistake of starting a new project in Python. It is much better than MyPy and the author is some kind of bug fixing robot.


Personally, I like type hints at any program size. I do understand the benefit isn’t as obvious in small programs. But the value skyrockets when you have a large codebase that you can’t keep in your head.

All that said, Mypy is far worse than TypeScript. I think Mypy would be so much better if it improved support around dict inference. It shouldn’t just say “I dunno, it’s a dict of strings?” but rather infer an anonymous TypedDict
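Until then, you can spell out the shape yourself with a `TypedDict` (a sketch; the `Config` name and fields are invented for illustration):

```python
from typing import TypedDict

class Config(TypedDict):
    host: str
    port: int

def connect(cfg: Config) -> str:
    # mypy checks both key names and value types on access,
    # instead of treating cfg as an opaque dict of strings
    return f"{cfg['host']}:{cfg['port']}"
```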


MyPy is good, and works as expected and similar to how typescript and flow works for javascript.

The problem with Python is that most code is shit. Sorry, but "fighting the compiler" because you want to return different types based on some flag isn't the compiler being annoying, it's the compiler exposing bad code.

It wasn't until sometime before this summer that typing for Django actually started to work somewhat. Previously, all the stubs did was keep mypy from complaining; only now can it actually catch some errors.

Getting a python project typed is a world of pain, and in the end you can't really trust it anyways because of all the hacks making the type-checker happy.


> MyPy is good, and works as expected and similar to how typescript and flow works for javascript.

Last time I used MyPy (1.5-2 years ago?) it wasn't nearly as full-featured or ergonomic as TypeScript. Has that changed recently? I remember running into lots of cases that it didn't support that were easy in TS, although it was long enough ago that I don't have specific examples.


It certainly still feels half baked. Ironically the best type checker imo, pyright, is written in typescript.


I love me some python, it's my favorite and most-versed language by miles, but typescript definitely gets the W when it comes to its type system. Idk exactly why - both js and python "bolted on" their type systems (syntactic superset), both are highly dynamic, yet python really struggles with typing systems and JITing. I think python is simply "more dynamic" than js. Js has a much simpler data model.


> I think python is simply "more dynamic" than js. Js has a much simpler data model.

I don't think it has much to do with that. There are some major differences between TypeScript and typed Python that explain why TS is so successful and typed Python is not:

1. TypeScript is its own programming language with its own syntax. Even though it's a superset of JavaScript, this still matters. The only special, type-related syntax Python supports is type annotations; defining and otherwise using types is done via Python's normal syntax, which is not very ergonomic or intuitive.

2. Microsoft invested a ton of engineering effort into TypeScript. The Python ecosystem simply has not received that large of an investment in static typing.

3. The Python programming language does not dictate how type checking should be done, so engineering effort is scattered across several projects like MyPy and Pyright. By contrast, TypeScript has essentially "won" that battle on the JavaScript side. (I know Flow exists, but I don't know anyone who uses it for new projects today—everything is done with TS.)


> I can't articulate specifically why but using typing in Python just feels like so much pain compared to other languages that have "opt-in" nominal typing syntax (PHP et al.).

Cognitive Load.


Using type checking in Python is always the wrong thing to do. You need to learn to use Microservices where you split up a large code base into seperate 3k lines Microservice chunks of code that interact over well documented APIs.

That's just how effective Python is done.


I don't think you're understanding their use case, they aren't building some Flask service, they're an ML drug discovery company supporting research scientists and as such they're more or less building a framework. There's a lot of code reuse and most large orgs building similar frameworks have adopted similar code quality standards - think PyTorch, Fairseq, Pytorch Lightning, Huggingface - which is why tools like MyPy / PyLint / Black exist. For people who have used those libraries there is a night-and-day difference between the anything-goes ones and the ones that do linting and unit testing.


In my personal experience codebases using MyPy tend to be considerably worse than codebases that don't.

By using a static type checker on a dynamically typed language, you have admitted you don't know what you are doing right off the bat. This means the software engineers in the project are very bad and therefore the code quality overall will also be bad.

Tools like MyPy exist to make Python appear to be more like Java to help Java developers, who aren't willing to learn how to code in a different programming paradigm.

That code will always be far worse than Python code written using the Python development paradigms.


> This means the software engineers in the project are very bad and therefore the code quality overall will also be bad.

Conversely, I do think you're very bad and don't have much experience with large python codebases.

For example the suggestion to split and use microservices makes no sense. Microservices are even harder to refactor.


What are some examples of popular python libraries that use this ideal python development paradigm?


Most of the Python libraries that are not locked to specific versions of Python, which is often the case with a lot of the badly written ML libraries.


Actually now I'm thinking you might be trolling.


You forgot to end with /s


I've seen Python projects using Monorepo MyPy and Python projects using Microservices.

Python Microservices style, when it comes to code quality, maintainability, etc., wipes the floor with Monorepo MyPy style projects. It's not even close.


And I've seen the exact opposite. Messy, poorly/under/incorrectly-documented buggy microservices which barf on corner cases, painful to refactor, no way to verify correctness without tons of unit and integration tests. Conversely, I've seen huge heavily-typed mono repos which are a breeze to operate on, wrap my head around, jump to definitions, automatically refactor, and actually run with confidence.

So do our anecdotes cancel out?


No, because you can always replace bad microservices wholesale. It can't be "painful to refactor", because you literally just delete the code and rewrite it.

If you have introduced static typing, you then have to start doing refactoring and verifying correctness. "heavily-typed mono repos which is a breeze to operate on" I highly doubt such a thing exists, more likely you are used to a certain level of bad code and don't understand it can be better.


> I highly doubt such a thing exists, more likely you are used to a certain level of bad code and don't understand it can be better.

Ohhh trust me, I know bad code. And I know good code. My unit of measurement is how fried I feel at the end of the day. Dynamic loosey-goosey python? Brain fried, constant debugging, little confidence in deploys. Static types, pydantic, pycharm, mypy, DI? I'm in the zone all day.

I don't think you are the arbiter of all code, so I'm not sure what grounds you have to tell me what my taste in code is.

Additionally, you would be better served by writing comments with less presumption in them. It makes the discourse more adversarial than it needs to be.


That's horrific. The crazy thing is I think you might be serious and not trolling. It's like something out of a Dilbert comic.

Typing is good, anyplace you can get the computer to check more of your work - the better. There are practical limits and trade-offs as always, but some typing is better than none.

Microservices are usually a terrible idea. I know they're popular, but I've only had bad experiences with them. I strongly recommend against microservices in most situations.


I don't think one size fits all. There are plenty of successful python projects that don't match this. E.g., every python library (numpy as microservices doesn't make sense). Plenty of large successful Django apps exist. I am using typing effectively in solo dev flask webapp that doesn't need to be split into microservices.

Sorry if your post was sarcastic and it went over my head.


I don't think Numpy is mostly written in Python. It's C.

So using type hints to annotate your flask api, etc. to generate docs is great. No problem.

The issue is that there have been a lot of Java developers switching to Python, and they want to pretend that Python is a statically typed language (and use tools like MyPy) because it's more familiar to them.

Python is not a statically typed language and treating it as one leads to terrible results. Especially in large projects.

The issue of course is that the Java developers have no point of reference on what a successful large Python project looks like. So they think they are doing great with MyPy, when if you compare what they are outputting to a proper large Python project, it's pretty clear they are doing terribly.

When you use Python properly you write self contained microservices. Typing information exists but only on the external interfaces, e.g. the API. You also don't share business logic code between the self contained microservices, because that massively decreases the maintainability of the overall system. You show your Python code is correct by using Unit Testing and Mocking.

Basically, there is a way to do large Python projects and MyPy (and other static type checking tools) have no place in that story. It only exists to support people in making bad decisions on their codebase.


Numpy has a large amount of python code. C is definitely used in places, but the majority of numpy is python. Not all operations need C implementations, especially as many can be built on top of other python functions which do wrap C. Tensorflow/pytorch similarly have a large amount of python code. I mostly work on developing a library that builds on tensorflow and is like 50K lines of code. Dividing that library into services makes little sense. One of the recent new typing features is mostly devoted to the numerical ecosystem in python (variadic generics to do template-like types for matrices). I don't want to use a dynamically typed language, but python is the clear leader in the ML ecosystem. A lot of my department does work that builds on ML research/libraries and those are mostly found in python.

If other languages had comparable ML ecosystems then a different language may have been chosen. But at the moment there's no competitor anywhere close. One lazy metric: I would estimate 90%+ of research papers at ML conferences are in python.


"Dividing that library into services makes little sense." No, but dividing the library into smaller sub-libraries, that only interact with each other over well-documented interfaces, does make sense. It's called encapsulation.

If at some point you decide the way one of those sub-libraries works is wrong, could be done better, etc. then you can write a new sub-library that just provides the same public interface.

One of the reasons why Python is so successful is its use of dynamic typing. Obviously, if you lose static typing, something else needs to take its place; in Python's case, stronger encapsulation.


> If at some point you decide the way one of those sub-libraries works is wrong, could be done better, etc. then you can write a new sub-library that just provides the same public interface.

That's...that's literally how type systems and interfaces work. You do know Python supports behavioral subtyping (Protocol), right?

It sounds like you certainly have had some bad experiences with poorly-written typed python. But that speaks more to those maintainers not knowing how to actually use types effectively, vs a shortcoming of static typing in python. Python's type system has plenty of shortcomings, but has plenty of escape hatches as well.


The size of the enclosed section is much bigger. A large section of code has a small, well-defined interface.

What you get with typing and interfaces is that all the code has an interface. Small sections of code have a large badly-defined interface.

The main reason people are using typing in Python is to support large Monorepos. Because everything is typed in them, all the code in the repository depends on all the other code in a spaghetti dependency graph.

This results in the following issues:

* Small code changes lead to hour long unit test runs since most of the unit tests need to be rerun for every change.

* It's impossible to update the Python version since all the Python code needs to be updated at the same time.

* Long check-in times for changes running MyPy over 100k lines of code.

* Maintainability issues since code can't be updated without knock on effects all over the codebase.

Okay, so what are the benefits of using types in Python:

* On average MyPy type checking will catch 1 bug per developer per year, which wouldn't have been caught normally.

* It makes people who previously coded in statically typed languages feel more familar with the code.

Basically, if you look at the Pros vs Cons, you should ditch the typing checking. It's a net negative to the codebase.


> Small code changes lead to hour long unit test runs since most of the unit tests need to be rerun for every change.

You're writing unit tests wrong if they are taking that long. Unit tests should be quick.

> It's impossible to update the Python version since all the Python code needs to be updated at the same time.

I've done it, so, objectively not impossible.

> Long check-in times for changes running MyPy over 100k lines of code.

TFA addresses this. It can be greatly sped up with use of caching in CI.

> Maintainability issues since code can't be updated without knock on effects all over the codebase.

That's exactly why static types are better - automatic refactors.

> On average MyPy type checking will catch 1 bug per developer per year, which wouldn't have been caught normally.

I catch several per day, simply from IDE highlighting, in real time, and fix them immediately, which only works because of the type system. (maybe it's wrong to call these bugs at this point, it's more like proto-bugs which never even get checked in cause there is immediate feedback)

> It makes people who previously coded in statically typed languages feel more familar with the code.

I came from a fully-dynamic-everything python world, and types were a breath of fresh air.

You've clearly been burned by (a) bad codebase(s), but I think you are drawing all the wrong conclusions. All the benefits of narrow interfaces, easy refactors, high maintainability, can be had with static typed python. I will admit it's much more challenging to be really competent at it than it ought to be, but this has been improving rapidly.


Encapsulation is a concept that is orthogonal to whether or not you use typing annotations in a language. You're conflating two completely different things and wrongly assuming that typed code is ipso facto poorly encapsulated. As most of your arguments stem from this premise, I find them very weak.


The enraged responses are pretty funny given that this is literally just what the BEAM vm is conceptually.

Trying to bolt a static type system onto a dynamic object oriented language negates any benefit you get from using a dynamic language in the first place (that is to say, changing things as they run).

I have no idea why people try to enforce the programming paradigm of Java on Python. if you want a huge, static program... write it in a static language, don't write it in Python.


If I have a function that returns something in a list, why not declare it as such? The pain in Python typing comes more from bolting a largely (but not entirely) nominal type system on a structurally typed language. That can be addressed by Protocols.


This is sarcasm, right?


> I typically show candidates a snippet that uses typing.Protocol as part of a broader technical discussion, and I can’t recall any candidates having seen that specific construct before

I think the `typing.Protocol` [1] (aka "structural subtyping" or "static duck typing") does not get enough spotlight! This is one of the keys to migrate a very pythonic codebase to type hints, and allows to avoid infinite type hints shenanigans all over the place. Of course, MyPy supports this feature natively [2].

[1] https://docs.python.org/3/library/typing.html [2] https://mypy.readthedocs.io/en/stable/protocols.html
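A minimal sketch of what `typing.Protocol` buys you (class and method names here are invented): the concrete class never declares the protocol, yet a checker accepts it structurally.

```python
from typing import Protocol

class SupportsArea(Protocol):
    def area(self) -> float: ...

class Square:
    def __init__(self, side: float) -> None:
        self.side = side

    def area(self) -> float:
        return self.side * self.side

def total_area(shapes: list[SupportsArea]) -> float:
    # Square never mentions SupportsArea; matching the method
    # signature is enough for mypy to accept it here
    return sum(s.area() for s in shapes)
```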


I had looked into protocols the other day but found no advantages over ABCs. In fact, failing to implement an ABC's interface is a 'compile-time' error, which is impossible to miss. Using protocols and mypy would be a soft-fail in comparison, where code runs but can fall flat at runtime still, if mypy is simply ignored/forgotten (it's optional).

I guess implementing multiple protocols (interfaces in other languages like C#) is possible and more awkward using ABCs?
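One middle ground, for what it's worth: `@runtime_checkable` lets a Protocol participate in `isinstance` checks, though it only verifies that the methods exist, not their signatures. A sketch with invented names:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Closeable(Protocol):
    def close(self) -> None: ...

class Conn:
    def close(self) -> None:
        pass

# isinstance works only because of @runtime_checkable,
# and it checks method presence, not signatures
print(isinstance(Conn(), Closeable))    # True
print(isinstance(object(), Closeable))  # False
```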


There's some info in the PEP for protocols about their advantages versus other approaches:

https://peps.python.org/pep-0544/#rationale-and-goals

https://peps.python.org/pep-0544/#existing-approaches-to-str...

As to mypy being optional, it's easy enough to make it required via git hooks and/or CI. If your process doesn't make this easy for mypy or any other tool (e.g. formatting), I'd wager there are more fundamental issues.


> Mypy catches bugs

100% yes. It’s much better than the examples would lead you to believe. Mypy catches stuff like:

    from typing import Optional

    def f(arg: Optional[Object]):  # Object stands in for any class
        arg.method()  # type error: arg might be None
        if arg is not None:
            arg.method()  # ok

It’s half the goodness of what you’d get from a well-typed language like Haskell or Rust but with the ability to say “trust me on this” and disable type checks for a line or two. Honestly I wish go tooling checked for nil pointer use as well as mypy unwraps optional values. Every time I add type hints and use mypy I find bugs. I would never make the case that type hints are better than a strongly typed language (especially with pattern matching), but it’s a great balance when writing python.


This 'flow typing' / TypeGuard approach is really awesome and probably the best part about checkers like TypeScript/mypy. I wish that other languages would follow suit so that I can incrementally narrow the types in my program through conditionals.



In fairness, Dart took the fully-sound approach and while it is clearly better, it does make dealing with class member variables a massive pain for similar reasons.

Also clearly a bit of unsoundness is way better than just no type annotations at all, which is the alternative.


Interesting. It looks like it just doesn't allow narrowing types of class members and globals, sidestepping this whole issue. I like the Dart approach here. Users of the language can always bind a local variable to the thing you want a narrower type for; it's not that big of a hassle.


It is quite a hassle when every method looks like

  void foo() {
    var local_a = this.a;
    var local_b = this.b;
    ...
    this.a = local_a;
    this.b = local_b;
  }
But I'd say it's still worth it for the soundness. And I have confidence they'll come up with some improvement. Maybe a way to opt out of setters/getters.


Nice, that's a cool example. But that's not fundamental to the concept of flow typing, it's just a mypy limitation.



Yes, it seems that the issue is essentially one of binding the type to a scope but then mutating within that scope. The type system could just "look" for those mutations, or the language could add mutability to the type system.


> The type system could just "look" for those mutations

That's a very big "just". Typescript and mypy don't look through function call boundaries, doing so would open a lot of problems, I believe.

> or the language could add mutability to the type system.

I agree there, the core issue is that the call to `foo` leaks an object where the member is rebindable through assignment, but the "flow typing" assumes that doesn't happen. Ideally such a call should be a barrier for flow typing, unless there was a way to specify functions that don't modify their parameter this way (readonly parameters?).

A similar problem but with a global instead:

https://www.typescriptlang.org/play?#code/DYUwLgBAlhBcEDsCuB...

This should be handled differently. Every function should be assumed to modify globals, unless they can be marked to be pure.


Yeah, that is definitely a big "just" - it's basically whole program type inference, which I expect to be very messy in Python, if not intractable.

> unless there was a way to specify functions that don't modify their parameter this way (readonly parameters?).

Right, so in a language with typed mutability (ie: Rust) or total immutability flow typing would be a lot easier to implement without these footguns. Python is such an insane language, I'm not surprised to see that you can subvert the type checker like this.


Pain point with type annotations: not being able to reuse "shapes" of data, e.g. struct-like fields such as TypedDict, NamedTuple, dataclasses.dataclass, and soon **kwargs (PEP 692 [1]) via TypedDict.

Right now, there isn't a way to load up a JSON / YAML / TOML file into a dictionary, narrow it via a `TypeGuard`, and pass it into a TypedDict / NamedTuple / dataclass.

dataclasses.asdict() or dataclasses.astuple() return naïve / untyped tuples and dicts. Also the factory functions will not work with TypedDict or NamedTuple, respectively, even if you duplicate the fields by hand [2].

Standard library doesn't have runtime validation (e.g. pydantic [3]). If I make a typed NamedTuple/TypedDict/dataclass with `apples: int`, nothing is raised in runtime when a string is passed.
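Absent stdlib validation, the usual workaround is a hand-written `__post_init__` check (a sketch; pydantic automates this and more):

```python
from dataclasses import dataclass

@dataclass
class Basket:
    apples: int

    def __post_init__(self) -> None:
        # dataclasses never validate annotations at runtime,
        # so the check must be spelled out by hand
        if not isinstance(self.apples, int):
            raise TypeError(
                f"apples must be int, got {type(self.apples).__name__}"
            )
```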

Other issues you may run into using mypy:

- pytest fixtures are hard. It's repetitious needing to re-annotate them every test.

- Django is hard. PEP 681 [4] may not be a saving grace either [5]. Projects like django-stubs don't give you completions, it'd be a dream to see reverse relations in django models.

- Some projects out there have very odd packaging and metaprogramming that make typing and completions impossible: faker, FactoryBoy.

[1] https://peps.python.org/pep-0692/ [2] https://github.com/python/typeshed/issues/8580 [3] https://github.com/pydantic/pydantic [4] https://peps.python.org/pep-0681/ [5] https://github.com/microsoft/pyright/blob/8a1932b/specs/data...


> pytest fixtures are hard. It's repetitious needing to re-annotate them every test.

Yeah, this drives me nuts as well. But Pycharm has recently started inferring types and jump-to-declaration works, so I believe obtaining that information must be possible.

Celery is another library in the group like django/pytest in that everything about it is suuuper dynamic and *kwargs-y. It drives me up a wall that I can't readily decouple task functions from their interfaces in a typesafe way. Also I don't know how to decouple the tasks from a Celery app with a pre-defined broker uri without resorting to config files/env vars - that approach simply does not unit test well.


Yes, anything that dynamic is super confusing. I honestly usually don't bother type checking unit tests; there's definitely some value, but it's really hard to justify on existing codebases. One thing I love about type hints (as opposed to compiler-enforced checks) is that you can opt out like that.


The "shapes of data" thing resonates a lot especially coming from TypeScript.

The runtime validation stuff has been interesting to watch. Do you use Pydantic? It's very popular but I have a hard time getting over its willingness to cast / coerce implicitly (if I mark a field as an int, and pass in a str, I want an error -- is that weird of me?).


I haven't used pydantic yet. I'm conservative when adding (non-dev) dependencies since I am maintaining library packages. For other projects, it's a possibility.

> if I mark a field as an int, and pass in a str, I want an error -- is that weird of me

That is perfectly fine. The question is why TypedDict, NamedTuple, and dataclass don't raise during construction; since they don't, we have to play it safe and do isinstance checking, because we can't trust the fields' types at runtime.

I suppose the idea of the `typing` module [1] is to be unobtrusive: not to be involved in runtime checks.

What I want is something that does what pydantic does, in the standard library. I think it's a sensible request, and it would be indispensable for anyone wrangling data.

[1] https://docs.python.org/3/library/typing.html


You can type the field as StrictInt etc. to stop this implicit type conversion.

https://pydantic-docs.helpmanual.io/usage/types/#strict-type...
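A sketch using the strict types from those docs (pydantic v1 API; `StrictInt` also exists in v2):

```python
from pydantic import BaseModel, StrictInt, ValidationError

class Basket(BaseModel):
    apples: StrictInt  # rejects "1" instead of coercing it to 1

Basket(apples=1)  # fine
try:
    Basket(apples="1")
except ValidationError:
    print("string rejected")
```

With plain `int` instead of `StrictInt`, the `"1"` would be silently coerced, which is the behavior objected to upthread.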


You should look into the 'dacite' library! It solved most of our load-json-into-typed-dataclass woes.


I have yet to get a moderately-complex Django project working with django-stubs. It seems to break with basic idioms like custom QuerySets, and I immediately run into new bugs every time I try it. Usually I give up and use dialled-down mypy settings.


https://pypi.org/project/jsonschema-typed-v2/

It's not without caveats, but I started experimenting with this and it is quite useful.


For a while I used both mypy and pyright on my team's codebase. After about half a year I eventually dropped mypy. I think type checking is valuable; it's just that most of the errors mypy detected pyright also caught, and using newer type features often led to mypy false positives. I had trouble justifying both when I could require my teammates to install pyright. Advanced type features tend to run into more bugs, and while both are well maintained, pyright's maintenance is magical; I don't know of any other open source project that fixes bugs as fast (most bugs are fixed in under a week). The thing that eventually forced the decision was a flaky (cache-dependent) mypy crash using ParamSpecs half a year ago. At the time ParamSpec support was still in progress, and there's a good chance that specific issue has since been fixed.

The main awkwardness of pyright is that it's a Node library, and most Python devs I work with don't interact much with Node. But my team has a bash script that installs all our dependencies, including Node (via nvm) as needed, which mostly works. One benefit is that you can use pyright as an LSP, which is very convenient in VSCode.

Edit: 3rd-party libraries lacking types is probably the biggest issue. As my codebase is mostly typed itself, I've started gradually writing type stubs for the library APIs we use. Writing stubs for even a small percentage of what we use helps, but there's still a ton to add given the codebase was started without types.


Woah, Pyright is written in Node? And it has its own [parser](https://github.com/microsoft/pyright/blob/b74be3b2cb2c5d35b9...)? That's really interesting. I wonder how it compares to Mypy on speed.

> The main thing that eventually forced decision was a flaky (depends on cache) mypy crash using paramspecs half a year ago. At time paramspec support was still in progress and there’s a good chance that specific issue is fixed.

I actually think I ran into this exact issue (ParamSpec-related, only fails when reading from cache, etc.), which led me to pin a Mypy development version for a while and was fixed recently.


> I wonder how it compares to Mypy on speed.

I use pyright at my current job, and used mypy at my previous job. Pyright has been better in almost every respect in my experience: it's been more robust, and it performs a lot better than mypy at type-checking a fairly large monorepo.


In my experience, the biggest issue with not using mypy, is the number of mypy plugins you lose access to. It's not uncommon for projects (or their ecosystems) that rely heavily on metaprogramming to provide mypy plugins to shore up the gaps in the type system; e.g. Django, strawberry, pydantic.

But other than that, I generally prefer pyright day-to-day because the experience using VSCode is far better.


I am moving all my open source projects to `mypy --strict`. Here's the diff of adding basic / --strict mypy types:

libvcs: https://github.com/vcs-python/libvcs/pull/362/files, https://github.com/vcs-python/libvcs/pull/390/files

libtmux: https://github.com/tmux-python/libtmux/pull/382/files, https://github.com/tmux-python/libtmux/pull/383/files

unihan-etl: https://github.com/cihai/unihan-etl/pull/255/files, https://github.com/cihai/unihan-etl/pull/257/files

Perks:

- code completions (through annotating)

- typings can be used downstream (since the above are all now typed python libraries)

- maintainability, bug finding; easy to wire into CI and run locally

Long term, I'm unsure of the return on investment. I do promise to report back if I find it's not worth the effort.


The more I think about it, the more I think that the controversy around static type annotations in Python boils down to this:

  Improved readability
This is very subjective, and is particularly sensitive in a language like Python which (rightly) has such a strong historical emphasis on readability above almost anything else.

My personal opinion is that static type annotations are extremely detrimental to readability. They add jarring line noise that makes reading Python much less like reading English. Hitting a type annotation when reading Python forces my brain into little backtracking loops which hugely diminishes my ability to form a mental model of the code from a quick read.

I wonder if people who come to Python from other languages (that are already statically typed) are accustomed to the poor comprehension introduced by types, and so don't experience this drawback.


> Hitting a type annotation when reading Python forces my brain into little backtracking loops which hugely diminishes my ability to form a mental model of the code from a quick read.

This makes me think you've never dealt with untyped Python code in a large production repo.

The frustration seeps in the hundredth time you encounter an untyped `get_user_ids(...)` written by a co-worker (or yourself in the past), and wonder "Hmm, is this going to return a list of ints? Strings? UUIDs? A generator of those things?" and then you have to dig into the function to find out what it's actually going to give you, and then each function inside of that will have the same issues, and pretty soon you're building a mental model of a call stack to discern types anyway, you'll be happy to embrace the type system of python.

"Readability" doesn't matter if you're unable to simultaneously quickly build a mental model of what is actually happening in the code. Sure, `def get_user_ids(...)` is super readable in that I quickly understand "this is going to give me some user ids", but that's useless when you consider the immediate next step of "what format are they in, so I know what I can do with them afterwards". `def get_user_ids(...) -> Optional[Iterable[int]]` is infinitely more useful, because I know that I can't use the function unchecked inside a list comprehension, because it might return `None`. And if it doesn't return `None`, I know I have, for example, integer user ids which I can compare with `==`, vs a custom type which might not implement `__eq__`.


> This makes me think you've never dealt with untyped Python code in a large production repo.

I run an agency and we maintain dozens of large, mature production projects (up to ~100k lines of code each).

I do understand the arguments and examples in the rest of your post, and intuitively they make sense, but practically I have almost never had an experience like the one you describe in 20+ years of working on Python codebases.

> you have to dig into the function to find out what it's actually going to give you

This is exactly what I do, and it's just fine. I find it more difficult to build a mental model of what's happening in type-annotated code.

Like I said, it's subjective. That's why I find "types improve readability" arguments in posts such as this to be problematic. They are stated as basic axioms with no supporting data. It's easier to read for you.


> I run an agency and we maintain dozens of large, mature production projects (up to ~100k lines of code each).

100k is not small, but also not particular large; even for one single app with highly interweaved code. My companies main codebase is around 800k lines and I consider it as middle sized.

But lines of code are not the problem. The length of your code paths is it. Understanding the flow of one or two functions, ok, not that hard of a problem. Having your data flowing through some dozen functions, big problem. Complexity kills any understanding at some point. Or you need to invest unnecessary much time for it.


> It's easier to read for you.

It would be easier for me to read if everyone wrote their code in French. Is that a good enough reason to make the whole company switch?

There's subjectivity, and then there is "I'm not frustrated by having to dig for the return type of every function I call, so no one should using typing in their function signatures because I don't like to look at it".

> I run an agency ... up to ~100k lines of code each ... practically I have almost never had an experience like the one you describe in 20+ years of working on Python codebases.

You're either a genius who is able to maintain a huge graph of functions calls and typing in your mind, across multiple repos and many years, or your codebase is an absolute nightmare to work on (it could also be both, and you're just able to deal with it). If I interviewed at a company in 2022 and they told me they have multiple 100k+ line repos of untyped Python, I would run away as fast as possible, as would the coworkers that I value the most. The ones that wouldn't run away are the ones that think types are inconvenient and unit tests are annoying to write, which really aren't the people I enjoy working with.

Edit: if you want evidence, I pulled some examples from your profile.

I've never worked with Django, and let's pretend I'm onboarding at your company and need to get into your `django_dbq` repo.

You have a `Job.get_queue_depths()` method here: https://github.com/dabapps/django-db-queue/blob/23f8ebe80b66.... What does this return? Okay... it builds this `annotation_dicts` object from the objects in a `Job`, then filters them, gets the queue names, sorts them, then annotates them (?). `annotation_dicts` also seems to start with a `JobManager`, and I can't tell how/if it ends up actually being a `dict` of something, so I'm not sure if the name is right. Then we return a dict of name:queue_depth, okay good. I guess the keys to the dict are strings?... Wrong, they're `models.CharField`, and I only can find that because it happens to be in the same file/class.

If the function were typed and mypy enforced, I would have `def get_queue_depths() -> Dict[models.CharField, int]:` (assuming the value is an `int`; I also can't verify that `.annotate(Count(...))` returns an `int`...). This would have saved me all of the above, which is basically just doing static analysis in my head, adding no value to the business.


This is what comment blocks are for. Putting the return type in the function's docstring should be a requirement during linting, and ensuring the correct type is used should be part of PR review. Easy to find, easy to read, no static type checker required.


> Putting the return type in the function's docstring should be a requirement during linting, and ensuring the correct type is used should be part of PR review.

You're suggesting pushing the types from the function signature + docstring into only the docstring, so that we can offload the static type-checking from mypy to the reviewers of the PR? Why?

This is at best exactly as good as mypy (100% accurate and repeatable) and at worst (and most likely) will lead to human reviewers making mistakes/not catching edge cases/forgetting to update docstrings.


If I interviewed a Python developer who said anything close to what your post contains, I would veto them on the spot.


Ensuring consistency and long term accuracy of the docstring is more effort and more verbose than type annotations.


I came from Python to Scala and I'm not sure I'd move back in a hurry. I like type annotations, as they help me understand exactly what is being passed between functions; sometimes that's not clear from the code.

I think "readability" is subjective so I just thought I'd throw my own opinion in there.


Of course it takes longer to read and comprehend the method signature when there are types in it, since it makes the method signature longer. But by the time I make it to the end of the signature, before I've even taken a look at the function body, the types have already given me important information (assuming this is a codebase where MyPy types are enforced) that will make reading the code much easier, because I don't have the mental overhead of needing to infer the types of half a dozen arguments and the return type while reading the code. And often, the type signature is the information I was looking for anyway, and as long as I trust the function to be implemented correctly, I can skip reading the body entirely.

For instance, if I see a function signature like "def parse_int(x: str) -> int", I can probably guess what the function will do. It might be parsing the entire string as an integer, or it might be parsing the first integer it can find within that string, but that type signature by itself eliminates a lot of weird things that could plausibly fall within the purview of a function named "parse_int". It can't return an iterable over all integers parsed from the string; it can't conditionally return an int if there's a single integer and a list of ints if there's more than one; it only accepts a decoded string (i.e. not bytes or any other string representation); it can't take a list of strings and return the first integer it finds; it can't return a string representation of the parsed integer; it can't return None if parsing fails (meaning it will likely throw an exception instead); and so on. Thus, even if I need to read the function body to see exactly what it's doing, I go into it thinking "in what specific way is this function parsing a single integer from a single string?" rather than "I wonder what this function does".
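As a sketch, here is one implementation consistent with that signature; which of the allowed behaviors it picks is exactly the remaining question the signature can't answer:

```python
import re

def parse_int(x: str) -> int:
    # one possibility: parse the first integer found in the string,
    # raising (rather than returning None) when there is none
    m = re.search(r"-?\d+", x)
    if m is None:
        raise ValueError(f"no integer in {x!r}")
    return int(m.group())

print(parse_int("abc42def"))  # 42
```

The signature already rules out `None` returns, bytes inputs, and list-of-results variants; reading the body only has to resolve the narrow question of *which* integer gets parsed.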


I've seen libraries whose functions only accept `**kwargs`, so you need to open the documentation (or read the body of the function) to be able to use them.

If you know how many parameters it's easier.

If parameters have names that you can understand it's even easier.

If parameters even let you know what kind of value they want it's also easier.

When I read python I'm not reading a novel. I want to know how to use a function, and possibly I want to know that without reading the function.

So it might make 1 line harder to read, but it lets me skip reading 50.


I want to develop Python at 80 characters per line, inside vim. I enjoy the elegance and simplicity and frankly the whole culture around it.

Libraries, ecosystems and actual business reasons that matter aside, if I cannot have that, I would rather use something like Kotlin instead.

PHP does it better. Types are an add-on, but integrated into the interpreter. Also it has never been pretty in the first place, so nothing was lost.


I've been using Python since 2008 and the type annotations are the thing that broke me. I don't want to configure another tool on every project just to get (incomplete) type checking. Why do I have to pick a type checker? Just check my types.

`typing.Protocol` is a poorly designed `Interface`. `abc` is a band-aid over missing `abstract` class/method syntax.

Things that should be part of the language are left to libraries. Just add interfaces, enums, and abstract classes/methods to the language.

I'm helping a new developer learn Python and having to explain all of the hoops I've been jumping through for the past 15 years is embarrassing. Making things "simpler" is making things more complex.

My current project is going to be my last Python project. I'm tired of add-ons and hacks, I want a complete language.


> Making things "simpler" is making things more complex.

This hits the nail on the head for me with Python. The standard libraries take a very “batteries included” approach but the language constructs don’t, and that means people use the flexibility of the language to do things their own way. I find that really increases my mental workload when reading other people’s software (how have they done this) and when writing my own (which way should I do this).


Well said.


Really like Mypy. I have coded many Python microservices with different frameworks, but my minimum core set is:

  - Black (formatting)
  - Isort (import order)
  - MyPy (typing)
  - Pylint (linting)
Edit: s/unit test/linting


Same. How do you use pylint for unit testing? I only use it in my IDE.


Sorry, I meant linting. Edited


Huge fan of Mypy but you lost me at LOC worship.

Why do we do this? More LOC is somehow attributed to more features? That simply isn't true. More LOC means one of two things: one, what you are trying to do is pushing the language's limits; or two, you don't understand the domain.

Most large monorepos I have seen fall into the latter category, while few reside in the former. Game engines, mature enterprise software, and a few others are legitimately large (probably OP's codebase too), but we seem to revel in the fact that we have so much code to grok. Sometimes conciseness is better than cleverness.

Back to mypy. I’d throw in black as well. Combining black, flake8, mypy, on precommit has loads of advantages. It’s completely opinionated but I find it helps me write better Python code.


I don't see it as "LOC-worship". It's about the most objective measure of codebase complexity/scope that could exist. It doesn't equate to features, but it certainly correlates with them; more business logic means more code.

> Sometimes conciseness is better than cleverness.

Aren't concise and clever on the same side of the balance? Cleverness and abstraction are how you cut down on LOC.

Abstraction has its own cost, so while you can trade LOC for cleverness/abstraction, there's limits/costs to that as well.

Also also, python/mypy still lacks support for higher-kinded types, which really puts a soft-ceiling on the amount of abstraction you can type-safely utilize.

Aaaanyways, I think the whole original point is, "we have a large codebase, and mypy helps keep the wheels on at speed".

+1 on black and precommit. That has done wonders for codebase consistency. I'd run flake8 too but the maintainer is weirdly opposed to supporting pyproject.toml for config.


> I'd run flake8 too but the maintainer is weirdly opposed to supporting pyproject.toml for config.

I haven't used it in anger yet, but FlakeHeaven (https://github.com/flakeheaven/flakeheaven) looks like a nice fork that includes pyproject.toml support.


Concise and clever aren't necessarily related. Concise means conveying your objective as clearly and as small as you can. Clever is Quake III's Q_rsqrt (often attributed to John Carmack): code that is small, that works, but that no one understands why.

LOC should be looked at as a negative at a certain point. Software Engineers spend more time reading code than writing code. The more LOC you have, the more an engineer needs to read to get work done. LOC is not a good measurement of quality software nor is it a measurement of complexity of the problem, only the complexity of the code.


I actually agree with this. And I like your second point around LOC as a yellow-flag around poor domain understanding.

I tried to acknowledge the deficiencies in LOC in the second paragraph... but it felt like a useful shorthand for conveying a sense of scale and complexity (at least to an order-of-magnitude) in a domain-agnostic way.

Outside of Mypy, we use: black, isort, flake8, docformatter, and autoflake (to remove unused imports) -- all on pre-commit + enforced on CI. I'd like to see Black cannibalize more of that toolchain :)


Yeah, understandable. It's Python so things can easily get verbose. I've known too many engineers that see LOC as a measurement of complexity when those things aren't directly related. Complexity is usually unwrapped into a simplified model of things (human nature) but often there's 2x more plumbing code to cobble it back into complex mode to work with it. Not saying it's the case in your codebase but it's been what I've seen in large Python codebases. A complex domain is broken down into simple domain models then wrapped in services that add that complexity back because the original domain model was simplified from a business perspective already.

Complexity, to me, comes from obtuse concepts, like the business trying to cast too wide a net. Microservices help condense that vision into reusable bits, but they get a bad rap because of the context switching (which really means your domains aren't well defined).

Keep using those tools though, they will save you so much time down the road when you enforce test coverage and fully lean in on that autoflake.


Mypy is very useful on big projects, and does catch bugs regularly in my code.

The ergonomics improved a lot and it's now usable, so the cost/benefit ratio is worth it today.

But barely.

Even assuming you use the latest Python version (lots of projects can't), you still have to import tons of things you use all the time, like Iterable, Self, and Callable.

Then you have to deal with the poor Protocol solution for duck typing, aggressive defaults, mypy slowness (before 9.13 it was terrible; after, it's just bad, and dmypy quickly becomes mandatory), and a surprisingly high number of bugs (such frustrating time wasters). Add to that false positives, low support from some popular libs, and incompatible type checker implementations, and you get a very much meh experience. Very far from the usual awesomeness of Python.

If you are unlucky and have to use anaconda, mypy dependencies make it extra fun to include.

Still, I'm glad it exists. It's still very useful. But thank god hints are optional.


I totally agree that importing typing classes is one of the worst parts of the system. I can't wait to move everything to 3.10+


I really really wanted to like mypy but my experience with mypy and Django has been very poor - it is slow, type inference is not good and most of the errors are false positives. Perhaps I’m spoiled by Typescript or django-stubs is just not quite mature enough.


This article mentions the woes of circular imports. I thought Mypy let you work around that by wrapping the offending imports in `if False:`.

e.g. a.py:

  import c

  if False:
      import b

  class X:
      def x(self):
          # type: () -> b.Y
          from b import something_that_returns_y
          return something_that_returns_y(self)

b.py:

  from a import X

  class Y:
      pass

  def something_that_returns_y(x: X) -> Y:
      return Y()

per https://github.com/asottile/flake8-typing-imports


Yeah that's correct! It's a valid workaround, and we end up doing that occasionally. (In Python 3.5 and newer it's `if TYPE_CHECKING` instead of `if False`.)
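A sketch of the modern spelling of that workaround (module names as in the example above; calling `x()` still requires a real `b` module):

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # seen only by the type checker; never executed, so no circular import
    import b

class X:
    def x(self) -> "b.Y":  # string annotation, resolved only by the checker
        from b import something_that_returns_y  # deferred runtime import
        return something_that_returns_y(self)
```

`TYPE_CHECKING` is `False` at runtime, so the import never runs, while mypy treats the branch as taken and sees the annotation.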


that's pretty horrible

a few more things like that and it will start to look like machine generated javascript


As the article mentions, the biggest problem by far with using mypy and the Python static typing ecosystem generally is the lack of third party support, even for big or new projects.

Python's benefits are as much about the libraries available as they are the language itself, and unfortunately it's kind of lacking right now especially compared to e.g. TypeScript support in npm.

Waiting for it to get better only goes so far; there isn't yet a cultural expectation around publishing typesheds for everything.


Would be interesting to seem them try pyright on their codebase. IME pyright is faster, catches more potential bugs, and doesn't require its own custom plugin system (which seems to be a major burden on other libraries).


> My unsubstantiated guess is that this is one of the most comprehensively-typed Python codebases out there for its size.

Not important, but FAANG companies have several orders of magnitude more strictly-typed Python than this.


Mypy is so so helpful and I can’t imagine Python without it. (I mean, I can, I did, it just sucked.)


Think they’re using a monorepo?


Basically, they screwed up by using a monorepo with Python and have decided to try and badly paper over it by using a type checker.

For reference, don't do either of those things in Python.


What do you feel makes Python less suited to monorepos than any other language? I've worked in Python-heavy monorepos and they seemed about like any other kind.

And there is nothing wrong with using a type checker for your Python type annotations.


Monorepos require static typing as a first class citizen of the language.

"And there is nothing wrong with using a type checker for your Python type annotations."

There is a lot wrong with that. Python is a dynamically typed language. If you are using a static type checker on a dynamically typed language, you are not taking advantage of the dynamic nature of the language.

You are basically not writing Python code at all. This means you get all the disadvantages of using a dynamically typed language with none of the benefits.

In summary, you have a made a very poor engineering decision.


This is a terrible take. Using type annotations doesn’t mean Python is no longer a dynamically typed language


This guy is full of terrible takes. Like his claim that typing catches "one bug a year".


If you enforce them, then you can't do duck typing; this means adding lots of boilerplate code to your code base, increasing the number of bugs, since bugs are proportional to the total lines of code in the project.

Typing related bugs that aren't caught in unit testing are very rare.


Any additional bugs per line of code (and there would be fewer in a strictly typed language anyway) are worth the structure you get from a typed language.

I disliked JavaScript and love Python. TypeScript has replaced Python as my labor-of-love language for now, but a Python typed in a similar manner would be great.


Typing doesn't remove many bugs from Python code (probably about 1 bug per year). It's statistically insignificant.

JavaScript is a bad language and adding stuff to it makes it better. The same isn't true for Python.


> If you enforce them then you can't do duck typing, this means adding lots of boilerplate code to your code base, increasing the amount of bugs, since bugs are proportional to the total number of lines of code in the project.

You can still do duck typing. Python's Protocol and TypedDict do that for objects and dicts, respectively. Python also has sum types. I've added type annotations to thousands of lines of legacy code and rarely have I had to change any logic to accommodate.
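A minimal sketch of structurally typed duck typing (names invented):

```python
from typing import Protocol

class Quacker(Protocol):
    def quack(self) -> str: ...

class Truck:                 # no inheritance from Quacker anywhere
    def quack(self) -> str:
        return "HONK"

def provoke(duck: Quacker) -> str:
    return duck.quack()

print(provoke(Truck()))  # HONK: accepted because Truck quacks
```

Mypy checks `provoke(Truck())` by shape, not by ancestry, which is exactly duck typing, just verified before runtime.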

> Typing related bugs that aren't caught in unit testing are very rare.

Not true at all. Unless you're writing unit tests that cover every possible code path, your unit tests will miss stuff. And I'd argue that 100% test coverage is a Herculean effort to write and maintain, and isn't worth it. Static analysis gives you tons of checking for free. The biggest benefit is forcing you to properly handle function arguments that could be None. Are you always checking that values aren't None before using them?


Static analysis using static typing gives you almost nothing at all in practice.

It searches for a very narrow and rare set of bugs, which hardly ever happen.

Typing really exists to help a compiler, it's not a code correctness tool in the way you envision.


It absolutely is a code correctness tool. If I have a function that should only accept a string, then static analysis will flag every place that passes anything else. As long as you validate your inputs (i.e. request validation), the type annotations are representative of the runtime types.


> If you enforce them then you can't do duck typing,

This is a little misleading as mypy and pyright both support structural typing.


> You are basically not writing Python code at all.

I really dislike this take. I believe Python is about rapid iteration, getting more functionality from less code, and enjoying the process of programming. You know what I enjoy? Easily knowing what features (methods, attributes, keys) an object has. You know what I really don't enjoy? Having to dig through reams of source code to figure out what attributes I can use, or what **kwargs a function takes.

Nothing about duck typing prohibits saying `def func(duck: Duck)`, because maybe it's a French duck and has .coin() instead of .quack().

Also also, how is static typing completely incompatible with dynamic features? Usually there exists an abstraction with static types that actually captures the behavior you want. It's rare that you want to be able to handle literally anything. Ask yourself: "could this function receive absolutely, truly, anything, or is `Any` just a shortcut for not finding the right types?"


You might dislike the take, but it's correct. If I write a side-effect free Python program in Haskell style, that isn't a "real" Python program. Using a static type checker is a similar type of change to the code style.

"Having to dig through reams of source code to figure out what attributes I can use, or what *kwargs a function takes."

Figuring out the attributes and what kwargs a function takes is the role of your intelligent IDE and docstrings. You think you need static typing for this because you haven't spent the time to learn how Python does these things.

Duck typing says a duck doesn't have a type, not even Duck; anything that quacks is a duck, even a 115-ton truck.


> You think you need to static typing for this since you haven't spent the time to learn how Python does these things.

I've been writing python for over 15 years and use Pycharm extensively. Please don't make assumptions about other users' experiences without evidence.

IDEs absolutely do not infer what fields **kwargs takes, let alone what types those fields are. I'm not talking named fields, I mean literally

    def func(*args, **kwargs): ...

Afaict, no IDE and no type system can help you there, in python, save for some really arcane type inference magic on every single call site.

> duck doesn't have a type, not even Duck

It absolutely does, if Duck is a protocol. If you have some object, and you call .quack(), it implements the Quacker interface, even without inheriting BaseDuck. You can still statically type it no problem, and it'll take a Truck no problem, so long as Truck.quack() is defined. Almost everything has a type, it just takes some effort to uncover it.


That's an IDE problem; with sufficiently powerful code inspection, the fields can still be found. It is not the developer's job to manually add that type of information to their source code.

Ahh so if a method on an object dynamically adds in the quack() method at runtime, does the object spontaneously change type?


> Ahh so if a method on an object dynamically adds in the quack() method at runtime, does the object spontaneously change type?

Yes, actually.

This is an antipattern in the vast majority of use cases. There are far better patterns out there than just sticking a method on an object, and I detest codebases that do this sort of thing. It's almost always someone who uses the phrase "pythonic" unironically way too much, writing these sorts of things, and a nightmare to debug.

There's less problem with doing something like

    def quackerizer(foo: Any) -> Quacker:
        foo.quack = lambda: "quack"  # e.g. attach the method at runtime
        return foo

Still kinda odd, but at least you are doing a formal cast.

> That's an IDE problem, with sufficently powerful code inspection, the fields can still be found.

Ok, name one. PyCharm and VSCode don't currently do it afaik. Maybe some VSCode plugin; I haven't used VSCode much.



