I see people conflate strong/weak and static/dynamic quite often. Python is strong[1]/dynamic, with optional static typing through annotations and a type checker (mypy, pyright, etc).
Perhaps the easiest way to add static types to data is with pydantic. Here's an example of using pydantic to type-check data provided via an external YAML configuration file: https://news.ycombinator.com/item?id=41508243
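Not the linked example, but a minimal sketch of the idea (the field names and settings.yaml are made up for illustration):

    # settings.yaml is validated against the Config model; pydantic
    # raises a ValidationError if fields are missing or the wrong type.
    import yaml
    from pydantic import BaseModel, ValidationError

    class Config(BaseModel):
        host: str
        port: int
        debug: bool = False

    with open("settings.yaml") as f:
        raw = yaml.safe_load(f)

    try:
        config = Config(**raw)
    except ValidationError as exc:
        print(exc)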
[1] strong/weak are not strictly defined, as compared to dynamic/static, but Python is absolutely on the strong end of the scale. You'll get a runtime TypeError if you try to add a number to a string, for example, compared to, say, JavaScript, which will happily produce a typically meaningless "wat?"-style result.
In some significant ways, it's not strong at all. It's stronger than JavaScript, but it's difficult not to be. Python is a duck-typing language for the most part.
Duck typing is an aspect of it being dynamically typed, not whether it is strong/weak. But strong/weak is not formally defined, so if duck typing disqualifies it for you, so be it.
For example, you will find functions where the runtime value of parameters will change the return type (e.g. you get a list of things instead of one thing). So unless we want to throw out huge amounts of Python libraries (and the libraries are absolutely the best thing Python has going for it), we have to accept that it's not a very good statically typed language experience.
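To illustrate the pattern (load and its flag are hypothetical, not from any particular library): describing this statically takes @overload plus Literal, which many libraries never bother with.

    from typing import Literal, overload

    @overload
    def load(path: str, many: Literal[True]) -> list[str]: ...
    @overload
    def load(path: str, many: Literal[False] = ...) -> str: ...

    def load(path: str, many: bool = False) -> str | list[str]:
        # the runtime *value* of `many` decides what the caller gets back
        with open(path) as f:
            text = f.read()
        return text.splitlines() if many else text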
The JS community, on the other hand, has adopted TypeScript very widely. JS libraries are often designed with typing in mind, so despite the language being weakly typed, the static typing experience is actually very good.
I don't disagree. However, often when I use a library, I use it within a small function that I control, which I can then type again. Of course, if libraries change e.g. the type they return over time (which they shouldn't, also according to Rich), you often only notice if you have a test (which you should have anyway).
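Roughly like this, as a sketch (load_servers is a made-up name; yaml.safe_load is typed as returning Any, so it stands in for any untyped library call):

    import yaml

    def load_servers(path: str) -> list[str]:
        with open(path) as f:
            data = yaml.safe_load(f)  # untyped boundary
        # validate the shape once here, so the rest of the code stays typed
        if not isinstance(data, list) or not all(isinstance(s, str) for s in data):
            raise TypeError(f"{path} must contain a list of strings")
        return data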
Moreover, for many libraries there are types-* stub packages that add types to their interface, and more and more libraries ship types to begin with.
Anyway, just wanted to share that, for me at least, in practice it's not as bad as you make it sound if you follow some good processes.
YMMV. I have over two decades of experience with Python and about a decade with JS though it's all backend work. I use both in my day job, but write in Python more frequently. I've found the transition to Python static typing much more seamless and easier to adopt than TS.
Amusingly, I can't recall any time where I've had to deal with differently typed return values in Python, but just recently had to fix some legacy JS code that was doing exactly that (a function that returned null, a scalar, or an array depending upon how many values it got back from a SQL query).
> For example, you will find functions where the runtime value of parameters will change the return type (e.g. you get a list of things instead of one thing).
I have long argued that such interfaces are doing it wrong. That's what "Special cases aren't special enough to break the rules." in the Zen is supposed to warn about, to my understanding.
Defining an operation between two different types is not at all the same thing as enabling implicit conversions. Notice for example that "1" * 2 gives "11", and not "2" nor 2. Interpreting multiplication of a string by an integer as "repeat the string that many times" doesn't require any kind of conversion (the integer is simply a counter for a repeated concatenation process). Interpreting addition as "append the base-10 representation of the integer" certainly does. (Consider: why base 10?)
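For anyone who wants to see it concretely:

    print("1" * 2)   # 11 - repetition, no conversion anywhere
    try:
        "1" + 2      # no implicit str/int conversion
    except TypeError as exc:
        print(exc)   # can only concatenate str (not "int") to str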
You have a point that strong vs weak typing is not a binary and that different languages can enable a varying amount of implicit conversions in whatever context (not to mention reinterpretation of the underlying memory). But from ~20 years of experience, Python's type system is nothing like JavaScript's - and it's definitely helpful to those who understand it and don't fight against it.
In my experience it's typically people from languages like Haskell that can't see the difference.
> that's just operator overloading and it exists in many statically typed languages too
My point is that Python's "typing" guarantees allow a caller to call a function with the wrong type, and get back a wrong answer and/or silently lose data.
Strong typing is pointless if the language is unable to actually prevent common footguns, like passing in the incorrect type.
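A sketch of the kind of footgun I mean (total_length is made up):

    def total_length(words: list[str]) -> int:
        return sum(len(w) for w in words)

    print(total_length(["spam", "eggs"]))  # 8, as intended
    print(total_length("spam"))            # 4 - a str is iterable too, so the
                                           # wrong type "works" and silently
                                           # returns a per-character count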
I'm moving more and more to the opinion that arguing about the spectrum of strong <-> weak typing is stupid, because type utility is on the spectrum of static <-> dynamic, with dynamic being full of footguns.