Python versions 3.11, 3.12 and now 3.13 have contained far fewer additions to the language than earlier 3.x versions. Instead, the newest releases have focused on implementation improvements - and in 3.13, the new REPL, experimental JIT, and GIL-free options all sound great!
The language itself is (more than) complex enough already - I hope this focus on implementation quality continues.
"
Python now uses a new interactive shell by default, based on code from the PyPy project. When the user starts the REPL from an interactive terminal, the following new features are now supported:
Multiline editing with history preservation.
Direct support for REPL-specific commands like help, exit, and quit, without the need to call them as functions.
Prompts and tracebacks with color enabled by default.
Interactive help browsing using F1 with a separate command history.
History browsing using F2 that skips output as well as the >>> and ... prompts.
“Paste mode” with F3 that makes pasting larger blocks of code easier (press F3 again to return to the regular prompt).
"
Sounds cool. Definitely need the history feature, for the few times I can't run IPython.
Presumably this also means readline (GPL) is no longer required to have any line editing beyond what a canonical-mode terminal does by itself. It seems like there is code to support libedit (BSD), but I've never managed to make Python's build system detect it.
I have managed to build Python with libedit instead of readline, but it was a custom build.
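For anyone else trying this, a quick way to check which line-editing backend a given build actually picked up (a heuristic: the libedit-based module mentions "libedit" in its docstring; newer Pythons also expose a backend attribute, which is worth verifying on your version):

import readline

# Prefer the explicit attribute if this Python exposes it; otherwise fall back
# to the docstring heuristic used by various tools to detect libedit builds.
backend = getattr(readline, "backend", None)
if backend is None:
    backend = "editline" if "libedit" in (readline.__doc__ or "") else "readline"
print(backend)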
If your assumption is correct, then I'm anxiously waiting for the default Python executable in Ubuntu, for example, to be licensed under a non-copyleft license. Then one would be able to build proprietary-licensed executables via PyInstaller much more easily.
I'd love to see a revamp of the import system. It is a continuous source of pain points when I write Python: circular imports all over unless I structure my program explicitly with that in mind, and `sys.path` hacks just to import from a parent directory.
The biggest problem with Python imports is that the resolution of non-relative module names always prioritizes files next to your script - the script's directory is put at the front of sys.path - even when the import happens inside stdlib. This means that, for any `foo` that is a module name in stdlib, having foo.py in your code can break arbitrary modules in stdlib. For example, this breaks:
# bisect.py
...
# main.py
import random
with:
Traceback (most recent call last):
  File ".../main.py", line 1, in <module>
    import random
  File "/usr/lib/python3.12/random.py", line 62, in <module>
    from bisect import bisect as _bisect
ImportError: cannot import name 'bisect' from 'bisect' (.../bisect.py)
This is very frustrating because the Python stdlib is still very large, and so many meaningful names are effectively reserved. People are aware of things like "sys" or "json", but did you know that, for example, "wave", "cmd", and "grp" are also standard modules?
Worse yet, these errors are not consistent. You might be inadvertently reusing a stdlib module name without even realizing it, just because none of the stdlib (or third-party) modules that you import have it in their import graphs. Then you move to a new version of Python or of some of your dependencies, and suddenly it breaks because they have added an import somewhere.
But even if you are careful about checking every single module name against the list of standard modules, a new Python version can still break you by introducing a new stdlib module that happens to clash with one of yours. For example, Python 3.9 added "graphlib", which is a fairly generic name.
I agree, it's unreasonable to expect devs to know the whole standard library. The VSCode extension Pylance does give a warning when this happens. I thought linters might also check this; the one I use doesn't, but maybe the issue[0] I just created will lead to it being implemented.
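As a stopgap, a minimal self-check against the running interpreter's own list is possible: sys.stdlib_module_names (available since Python 3.10) is a frozenset of the stdlib's top-level names. A rough sketch, assuming it is run from the project root:

import sys
from pathlib import Path

# Collect top-level module and package names in the current directory
# and compare them against the stdlib names known to this interpreter.
project_root = Path(".")
local_names = {p.stem for p in project_root.glob("*.py")}
local_names |= {p.name for p in project_root.iterdir() if (p / "__init__.py").is_file()}

clashes = local_names & set(sys.stdlib_module_names)
if clashes:
    print("Local modules shadowing the stdlib:", sorted(clashes))

(Separately, Python 3.11's -P flag / PYTHONSAFEPATH stops the script's directory from being prepended to sys.path, which avoids the script-next-to-module shadowing in the first place - though it doesn't help against future name clashes either.)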
But the problem remains, because these warnings - whether they come from linters or Python itself - can only warn you about existing stdlib modules. I'm not aware of any way to guard against conflicts with any future new stdlib modules being added.
It is a problem because stdlib does not use relative imports for other stdlib modules, and neither do most third-party packages, which then breaks you regardless of what you do in your code.
It was ultimately rejected due to issues with how it would need to change the dict object.
IMO all the rejection reasons could be overcome with a more focused approach and implementation, but I don't know if there is anyone wishing to give it another go.
Getting a cyclic import error is not a bug, it's a feature alerting you that your code structure is like spaghetti and you should refactor it to break the cycles.
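For what it's worth, the usual ways to break a cycle are to pull the shared bits out into a third module, or to defer one of the imports to call time. A minimal sketch with hypothetical modules orders.py and customers.py:

# orders.py
import customers  # safe at import time: customers.py defers its import of orders

def orders_for_customer(customer_id):
    return []  # placeholder for a real lookup

# customers.py
def customer_report(customer_id):
    # Importing here, at call time, breaks the import-time cycle.
    from orders import orders_for_customer
    return orders_for_customer(customer_id)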
The last couple of years have also seen a stringent approach to deprecations: if something is marked as deprecated, it WILL be removed in a minor release sooner rather than later.
Yep. They’ve primarily (entirely?) involved removing ancient libraries from stdlib, usually with links to maintained 3rd party libraries. People who can’t/won’t upgrade to newer Pythons, perhaps because their old system that uses those old modules can’t run a newer one, aren’t affected. People using newer Pythons can replace those modules.
There may be a person in the world panicking that they need to be on Python 3.13 and also need to parse Amiga IFF files, but it seems unlikely.
I mean the stdlib is open source too, so you could always vendor deprecated stdlib modules. Most of them haven't changed in eons either so the lack of official support probably doesn't change much.
Using Python on and off since version 1.6, I always like to point out that the language + standard library is quite complex, even more when taking into account all the variations across versions.
Agreed. I still haven’t really started using the ‘match’ statement and structural pattern matching (which I would love to use) since I still have to support Python 3.8 and 3.9. I was getting tired of thinking, “gee this new feature will be nice to use in 4 years, if I remember to…”
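For reference, the kind of thing that has to wait until the 3.8/3.9 support window closes - a small structural pattern matching sketch (3.10+ only):

def describe(point):
    # match/case destructures the value and binds names in one step.
    match point:
        case (0, 0):
            return "origin"
        case (x, 0) | (0, x):
            return f"on an axis, {x} away from the origin"
        case (x, y):
            return f"at ({x}, {y})"
        case _:
            return "not a 2D point"

print(describe((0, 3)))  # "on an axis, 3 away from the origin"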
But they've worked very hard at shielding most users from that complexity. And the end result - making multithreading a truly viable alternative to multiprocessing for typical use cases - will open up many opportunities for Python users to simplify their software designs.
I suppose only time will tell if that effort succeeds. But the intent is promising.
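The kind of simplification being hoped for, roughly: CPU-bound work that today gets farmed out to processes could, on a free-threaded 3.13 build, run on plain threads and share data in place. A sketch under that assumption - on a regular GIL build the threads below would not actually run the CPU-bound parts in parallel:

from concurrent.futures import ThreadPoolExecutor

def count_primes(lo, hi):
    # Deliberately CPU-bound: naive trial-division primality test.
    def is_prime(n):
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))
    return sum(1 for n in range(lo, hi) if is_prime(n))

chunks = [(i, i + 100_000) for i in range(2, 800_002, 100_000)]
with ThreadPoolExecutor(max_workers=8) as pool:
    total = sum(pool.map(lambda bounds: count_primes(*bounds), chunks))
print(total)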
No. Python is orders of magnitude slower than even C# or Java. It’s doing hash table lookups per variable access. I would write a separate program to do the number crunching.
Everyone must now pay the mental cost of multithreading for the chance that you might want to optimize something.
> It’s doing hash table lookups per variable access.
That hasn't been true for many variable accesses for a very long time. LOAD_FAST, LOAD_CONST, and (sometimes) LOAD_DEREF provide references to variables via pointer offset + chasing, often with caches in front to reduce struct instantiations as well. No hashing is performed. Those access mechanisms account for the vast majority (in my experience; feel free to check by "dis"ing code yourself) of Python code that isn't using locals()/globals()/eval()/exec() tricks. The remaining small minority I've seen is doing weird rebinding/shadowing stuff with e.g. closures and prebound exception captures.
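For anyone who wants to check, something like this shows it (exact opcode names vary between CPython versions, but locals come out as LOAD_FAST rather than any dict lookup):

import dis

SCALE = 3  # module-level global

def f(x):
    y = x * SCALE  # x and y are locals: LOAD_FAST / STORE_FAST index into a fixed array
    return y + 1   # SCALE is a global: LOAD_GLOBAL, a dict lookup (cached on recent CPythons)

dis.dis(f)  # prints the bytecode so you can see the opcode used for each access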
So too for object field accesses; slotted classes significantly improve field lookup cost, though unlike LOAD_FAST users have to explicitly opt into slotting.
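For reference, the opt-in looks like this - declaring __slots__ removes the per-instance __dict__ and stores the listed attributes at fixed offsets:

class Point:
    __slots__ = ("x", "y")  # fixed attribute layout, no per-instance __dict__

    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)
print(p.x, p.y)
# p.z = 3             # AttributeError: 'Point' object has no attribute 'z'
# print(p.__dict__)   # AttributeError: slotted instances have no __dict__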
Don't get me wrong, there are some pretty ordinary behaviors that Python regrettably makes much slower than they need to be (per-binding method refcounting comes to mind, though I hear that's going to be improved). But the old saw of "everything is a dict in Python, even variable lookups use hashing!" has been incorrect for years.
Thanks for the correction and technical detail. I’m not saying this is bad, it’s just the nature of this kind of dynamic language. Productivity over performance.
> Everyone must now pay the mental cost of multithreading for the chance that you might want to optimize something.
I'm assuming that by "everyone" you mean everyone who works on the Python implementation's C code? Because I don't see how that makes sense if you mean Python programmers in general. As far as I know, things will stay the same if your program is single-threaded or uses multiprocessing/asyncio. The changes only affect programs that start threads, in which case you need to take care of synchronization anyway.
Python doesn't do hash table lookups for local variable access. This only applies to globals and attributes of Python classes that don't use __slots__.
The mental cost of multithreading is there regardless, because the GIL is usually at the wrong granularity for data consistency. That is, it ensures that e.g. adding or deleting a single element of a dict happens atomically, but more often than not you have a sequence of operations like that which needs to be locked as a whole. In practice, in any scenario where your data is shared across threads, the only sane thing is to use explicit locks already.
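A minimal sketch of that point, with a hypothetical shared cache: each individual dict operation is atomic under the GIL, but the check-then-insert sequence is not, so an explicit lock is needed with or without a GIL.

import threading

cache = {}
cache_lock = threading.Lock()

def expensive_compute(key):
    return key * 2  # stand-in for real work

def get_or_compute(key):
    # Without the lock, two threads can both see the key as missing and
    # both run the computation: each dict operation is atomic, but the
    # sequence "check, compute, insert" is not.
    with cache_lock:
        if key not in cache:
            cache[key] = expensive_compute(key)
        return cache[key]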
My situation:
1. The whole system is dedicated to running my one program,
2. I want to use multithreading to share large amounts of state between workers because that's appropriate to my specific use case, and
3. A 2-8x speedup without having to re-write parts of the code in another language would be fan-freaking-tastic.
In other words: I know what I'm doing, I've been doing this since the 90s, and I can imagine this improvement unlocking a whole lot of use cases that were previously unviable.
Sounds like a lot of speculation on your end. We don't have much evidence about how much this will affect anything, because until just now it hasn't been possible to get that information.
> ditto. So that’s not relevant.
Then I'm genuinely surprised you've never once stumbled across one of the many, many use cases where multithreaded CPU-intensive code would be a nice, obvious solution to a problem. You seem to think these are hypothetical and my experience has been that these are very real.
This issue is discussed extensively in “The Art of Unix Programming”, if we want to play the authority-and-experience game.
> multithreaded CPU-intensive code would be a nice, obvious solution to a problem
Processes are well supported in Python - see the sketch below. But if you're maxing out your CPU cores with the right algorithm, then Python was probably the wrong tool.
> my experience has been that these are very real.
When you're used to working one way, it may seem impossible to frame the problem differently. Just to remind you: this is a NEW feature in Python. JavaScript, Perl, and Bash also do not support multithreading, for similar reasons.
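On the "processes are well supported" point above, a minimal sketch of the existing route for CPU-bound work - separate processes, no GIL contention, but also no cheap shared state:

from concurrent.futures import ProcessPoolExecutor

def crunch(n):
    # Stand-in for CPU-bound work; runs in a worker process.
    return sum(i * i for i in range(n))

if __name__ == "__main__":  # guard required for the spawn start method
    with ProcessPoolExecutor() as pool:
        print(sum(pool.map(crunch, [1_000_000] * 8)))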
One school of design says if you can think of a use case, add that feature. Another tries to maintain invariants of a system.
If you're in a primarily Python coding house, your argument won't mean anything: when you bring up that they'd have to rewrite millions of lines of code in C# or Java, you might as well be asking them to liquidate the company and start fresh.
“Make things as simple as possible, but no simpler.” I, for one, am glad they'll be letting us use modern CPUs much more easily instead of the language being designed around 1998 CPUs.