
My concern, and IMO what should be the overwhelming concern of the maintainers, is not the code that is being written, or the code that will be written, but all the code that has been written, and will never be touched again. A break like this will force lots of python users to avoid upgrading to 3.17, jettison packages they may want to keep using, or deal with the hassle of patching unmaintained dependencies on their own.

For those Python users for whom writing python is the core of their work, that might be fine. For all the other users for whom python is a foreign, incidental, but indispensable part of their work (scientists, analysts, ...) the choice is untenable. While python can and should strive to be a more 'serious', 'professional' language, it _must_ have respect and empathy for the latter camp. Elevating something that should be a linter rule to a language change ain't that.



Exactly this. I totally agree. It's incredible to think that some people still run python 2 scripts; something unpalatable to the point of being nauseating for a day-to-day python programmer, but totally understandable in the context of incidental usage by a scientist dealing with legacy systems.

If these things start happening to python 3 on a larger scale, might as well throw in the towel and go for python 4.


I read "python 2" and a part of my soul cried in utf-8.


Programming languages are too obsessed with unicode in my opinion:

String operations should have bytes and utf-{8,16} versions. The string value would have is_valid_utf_{8,16} flags, and operations should unset them if they end up breaking the format (eg str[i] = 0xff would always mark the string as not unicode; str[i] = 0x00 would check whether the flag was set and, if so, check whether the assignment broke a codepoint, unsetting the flag if it did).
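For illustration, roughly what that could look like. This is a made-up sketch, not a real API: the class and attribute names are invented, and the re-check is a whole-buffer scan rather than the local check around the assignment index that a real implementation would want.

  class FlaggedString:
      """Hypothetical byte string that tracks whether it holds valid UTF-8."""

      def __init__(self, data: bytes):
          self._buf = bytearray(data)
          self._recheck()

      def _recheck(self) -> None:
          try:
              self._buf.decode("utf-8")
              self.is_valid_utf8 = True
          except UnicodeDecodeError:
              self.is_valid_utf8 = False

      def __setitem__(self, i: int, value: int) -> None:
          self._buf[i] = value
          # Per the proposal, mutations can only clear the flag, never set
          # it, so we only re-verify while it is still set.
          if self.is_valid_utf8:
              self._recheck()

  s = FlaggedString("héllo".encode("utf-8"))
  s[1] = 0xFF
  print(s.is_valid_utf8)  # False: the assignment broke the é codepoint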


There are zero reasons to not go full UTF-8, everywhere, all the time. Strings should not be allowed to be built directly from bytes, but only from converting them into explicitly UTF-8 (or, for API specific needs, other encodings should you want to enter the fun world of UCS-2 for some reason), which can and should be a cheap wrapper to minimize costs.

Bytes are not strings. Bytes can represent strings, numbers, pointers, absolutely anything.


That would only work if you never deal with legacy (pre-UTF-8) data, or if you are American and all your legacy data is in ASCII.

If you are actually dealing with legacy data, you want your programs to not care about encodings at all.

File names are sequences of bytes... if one isn't valid UTF-8, why would the program even care? A C program from 1980 can print "Cannot open file x<ff><ff>.txt" without knowing what encoding it is, so why can't Python do this today without lots of hoops? Sure, the terminal might not show it, but the user will likely redirect errors to a file anyway and use the right tools to view it.

File contents are sequences of bytes. Every national encoding keeps the 0-31 range as control characters, and most of them (except systems like Shift-JIS) keep the lower half as well. Which means a program should be able to parse an .ini file which says "name=<ff><ff>", store <ff><ff> into a string variable, and later print "Parsing section <ff><ff>" to stdout - all without ever knowing which encoding this is.

There are very few operations which _really_ care about encoding (text rendering, case-insensitive matching, column alignment) and they need careful handling and full-blown unicode libraries anyway, with megabytes of data tables. Forcing all other strings to be UTF-8 just for the sake of it is counterproductive and makes national language support much harder.
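To be fair, Python does have escape hatches for exactly this kind of pass-through handling; you just have to opt in. A minimal sketch using the real surrogateescape error handler and bytes paths:

  import os
  import sys

  # Round-trip a filename that is not valid UTF-8 without ever really
  # decoding it: surrogateescape smuggles the raw bytes through str.
  raw_name = b"x\xff\xff.txt"
  as_str = raw_name.decode("utf-8", errors="surrogateescape")
  assert as_str.encode("utf-8", errors="surrogateescape") == raw_name

  # Report the undecodable name on stderr; the terminal may mangle it,
  # but a redirected log file keeps the original bytes.
  sys.stderr.buffer.write(b"Cannot open file " + raw_name + b"\n")

  # Or skip str entirely: os functions accept and return bytes paths.
  for entry in os.listdir(b"."):
      pass  # entry is bytes; no encoding assumptions are made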


I agree that there needs to be UTF-8-specific functionality, but not everything `is` UTF-8 - for example filenames and filepaths. A JSON document should be UTF-8 encoded, but JSON strings should be able to encode arbitrary bytes as "\x00...\xff". Since they can already contain garbage UTF-16, we would not lose much.
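The "garbage UTF-16" part is already observable today: Python's json module round-trips lone surrogates, even though they are not valid Unicode text.

  import json

  blob = json.dumps("\ud800")          # '"\\ud800"', no error raised
  print(json.loads(blob) == "\ud800")  # True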


Does your language use any form of accented letters?


Not sure Python 3 did anything about the number of circular permutations of encode and decode one needs to fiddle with until that damn CSV is pulled in correctly.
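The usual workaround hasn't changed much either: try a few likely encodings in order and take the first one that decodes cleanly. The candidate list below is only a guess for illustration; adjust it to whatever your data sources actually emit.

  import csv

  def read_messy_csv(path):
      # latin-1 accepts any byte sequence, so in practice the loop always
      # succeeds on the last candidate (possibly producing mojibake).
      for encoding in ("utf-8-sig", "cp1252", "latin-1"):
          try:
              with open(path, newline="", encoding=encoding) as f:
                  return list(csv.reader(f)), encoding
          except UnicodeDecodeError:
              continue
      raise ValueError(f"could not decode {path!r}")  # defensive fallback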


There are places that don't operate in anything but ASCII. Or even anything but 0123456789,\n\r.


Are you sure it didn't cry in `bytes()`?


Still loads of Python 2 scripts floating around at my employer and my prior 2 employers (6 years total)


I guess it did not happen, but I liked the idea of keeping Python 2 as a supported but forever-frozen language.

Sort of like if C were still C90 and compilers mostly had no extensions.


The Python maintainers have switched from considering backward compatibility useful but costly to considering it actively harmful; there was a campaign a few years ago to convince library maintainers to stop making their code Python-2-compatible:

> The Python 3 statement was drawn up around 2016. Projects pledged to require Python 3 by 2020, giving other projects confidence that they could plan a similar transition, and allowing downstream users to figure out their options without a nasty surprise. We didn’t force people to move to Python 3, but if they wanted to stick with Python 2, they would stop getting new versions of our projects.

(https://python3statement.github.io/)

I know this sounds like a joke, and you probably think you're misreading, but no. Projects pledged to require Python 3 by 2020. They made a promise to break backward compatibility. Not just a few minor projects, either; TensorFlow, Spark, IPython, Pandas, NumPy, SymPy, Hypothesis, etc.

Since that happened, everyone who considers backward-compatibility good (if costly), rather than evil, has abandoned Python.

The "other users for whom python is an foreign, incidental, but indispensable part of their work (scientists, analysts, ...)" would have to fork Python, but it's probably too late for that; they can hardly hope to fork TensorFlow, Pandas, etc., as well.


1. That python 3 statement was not drawn up by "the Python maintainers". It was drawn up by downstream library owners.

2. To the extent you object to changes in the core language, the python maintainers do have a backwards compatibility statement and prominent timelines for deprecation. You may disagree with these, but they are public.

3. At the time it was written, the python 3 statement proposed dropping support for a version of python with known security problems and no plans for security updates. It seems like your argument is with the python 2 to python 3 transition, which feels like a conversation we've had here before.


Anyone can of course apply security fixes to Python 2, because it's open-source.

My objection is not to library owners dropping support for Python 2, which is a perfectly reasonable choice for them to make—backward compatibility can be costly, after all, and the benefits may not be worth it. My objection is to library owners pledging to drop support for Python 2, because that entails that they think backward compatibility is itself harmful. To me, that's pants-on-head crazy thinking, like not wanting to wear last season's sweater, or not wanting to use JSON because it's too old.

Observably, since this happened, the Python maintainers have been very active at breaking backward compatibility. (And there's substantial overlap between Python maintainers and major Python library maintainers, which I suspect explains the motivation.) I think this is probably due to people who don't think backward compatibility is actually evil (the aforementioned "all the other users for whom python is a foreign, incidental, but indispensable part of their work") fleeing Python for ecosystems like Node, Golang, and Rust. This eliminates the constituency for maintaining backward compatibility.

I do think the botched 2→3 transition was probably the wellspring of this dysfunction, but I don't think that in itself it was necessarily a bad idea, just executed badly.

As a result of this mess, it's usually easy for me to run Lisp code from 40 years ago, C code from 30 years ago, or Perl or JS code from 20 years ago, but so difficult to run most Python code from 5 years ago as to be impractical.


It seems to be a gross exaggeration to say most Python code from 5 years ago doesn't work on current Python versions. Python 3 was mainstream a decade ago, and almost all code written for Python 3.3 or 3.4 still works on Python 3.13. Maybe some libraries have had breaking changes, but at least for common libraries like Numpy, Scipy, and Matplotlib, most code from a decade ago still works fine.


There's plenty of Python 2 code from 5 years ago, and virtually none of it works on current Python versions. A decade ago virtually all Python code was Python 2 code; in 02014 Python 3 was almost unusable. Perhaps what you mean is that most individual lines of Python code using Numpy and Scipy from ten years ago work fine in current Python versions, but very few complete programs or even library modules do.


They made a new major version, and a bump like that strongly signals breaking changes; arguably that's the entire point of bumping the version. What's the problem? I think it's bold of you to rag on volunteers for a supposedly botched upgrade, but whatever; I don't know anyone writing Python 2 today.


Plenty of people are writing Python 3 today, too, but because things like this proposal seem reasonable, and things like removing cgitb and asyncore are actually happening, most of them will regret it eventually. Sometimes volunteers screw up, and sometimes they have dysfunctional social dynamics that hurt people.


You're aware that you're several years late to this argument, right?


I agree, it's too late for Python. But it's not too late for people who think the work they're doing has serious intellectual content of lasting value to choose a different language today, so that their work doesn't become unusable five years from now, and it's not too late to keep the same thing from happening to other programming-language ecosystems.


If you actually read the discussion of this PEP, you can see many Python maintainers strongly opposing it on backwards-compatibility grounds: https://discuss.python.org/t/pep-760-no-more-bare-excepts/67...

In fact, the consensus was so strongly against it that it has already been withdrawn.


The people who you describe as valuing backwards compatibility are exclusively downstream consumers of others' work. They prefer infinite free labour by others over doing any maintenance themselves. This is of course a perfectly rational but unsupportable position.


The great thing about free software is that they have the legal right to do that maintenance, and it's actually not very difficult to do. The Python 3 Statement was an attempt to use public shaming to disincentivize them from doing it. Astoundingly, it seems to have been successful.


Regarding open source, that labor is expensive, rare, and at any given time insufficient to the need. In this case it also has value only in the context of the majority continuing to do it. What is being signalled here is a lack of desire to keep doing it, followed by nobody opting to volunteer for the mission.

Basically you've been a guest long enough, and now the towels, entertainment, and snacks are going away. If this appears blunt, it is pretty obvious why bluntness is required: anything else asks for a decade of free support, given inch by inch, at the expense of more laudable goals.


No, what is being signalled here is the intent to publicly shame anyone who continued to maintain backward compatibility. Given that public humiliation is among most people's greatest fears, in retrospect, it shouldn't be surprising that nobody volunteered for the mission. But it did surprise me.

Your comment would be correct if we were discussing a lack of continued support for Python 2, or even a public announcement of a planned cessation of such support. But we're discussing a public promise to break support for Python 2, in the form of a petition seeking more signatories. Although the difference may be too subtle for you to have noticed, it's an entirely different animal. It's like the difference between hotels that don't promise you a room that allows smoking, and hotels that promise you a room that doesn't allow smoking. The second case is a promise to keep your room free of the nauseating stench of Python 2.

But what user would want that? Why would you prefer a language or a library that promises to break backward compatibility? What's the benefit to you of the language making your code cease to function every year or two? Job security, perhaps?


They would want that because the cost of constantly saying no to users with attitudes that range from grateful to entitled is non-zero. Support requests for 16 year old software should come with a support contract and a check.

I suspect anyone willing to pay enough could get support for whatever they please and with enthusiasm.


No, it's obvious why Python maintainers would want to drop backward compatibility. What I don't understand is why users would want it. I thought that was pretty clear in my comment; I'm not sure how you managed to misinterpret it to be saying I didn't understand something that's obvious.


[flagged]


At this point you have seriously transgressed the boundaries of civility; having been informed that you had completely misinterpreted my previous comment, the least you could do is to apologize. Instead you are responding with sarcastic remarks apparently predicated on the same misinterpretation you've just been corrected on. You've exhausted the presumption of good faith. Probably even your initial comment was merely trolling; the exaggerated wording full of absolutes and stereotypes should have clued me in.


[flagged]


I know you really, really want me to think that I was owed new Python 2 versions of other people's libraries, but I'm not going to. I've already explained several times that what I think is bad is not people dropping backward compatibility with Python 2, but demonstrating the intent to publicly shame anyone who continued to maintain backward compatibility.

All of your comments are arguing against a position I've never held as if it were my position, well after you have no reasonable excuse for that error. That's dishonest and offensive.



Most active developers opt into this behaviour via linter rules[0]. I see only the downsides of building this into the language.

[0] https://docs.astral.sh/ruff/rules/bare-except/


This, in a huge way. Especially given all the code that already exists.


Major changes that break backward compatibility would require a major version bump. Fine for Python 4, I would say.


Sure, I guess? But we've just barely stopped talking about Python 3, and that was released 13-16 years ago[1]. Is _this_ change worth another decade of thrashing the ecosystem? Is __any__ change?

[1]: depending on if we count 3.0 vs 3.2 when it was actually kinda usable.


Again, python does not use semantic versioning. 3.12 and 3.13 are different major versions. The deprecation policy is documented and public. https://peps.python.org/pep-0387/.


TIL.

In the doc you linked, they reference "major" and "minor" versions. So they claim to have some concept of version numbers having different significance... Why don't they adhere to semantic versioning if they dress their version numbers like that?

At least Linux just admits their X.Y scheme means nothing.


Why, it just should be opt-in:

  from __future__ import no_bare_except
...and enjoy.


Python should have the ability to set per-module flags for these kinds of incompatibilities.

I guess __future__ does something similar, but I am thinking of something like declaring how options should be set for your module, with only the code in that module affected. (It would also be nice to be able to set constraints on what options your dependencies can enable.)

I guess that for stdlib behaviour this is impossible (if dict changed its key ordering, it might be too hard to dispatch based on the originating file), but for syntax it should be perfectly possible.
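For what it's worth, that is exactly how __future__ imports behave today: the flag is scoped to the single file that contains it, so a hypothetical no_bare_except flag like the one proposed above would presumably work the same way. A real, existing example:

  # mod_a.py -- only this file gets the changed behaviour;
  # modules that import mod_a are untouched.
  from __future__ import annotations  # PEP 563: annotations become lazy strings

  def greet(user: User) -> str:   # this forward reference works only
      return f"hello {user.name}" # because of the import above

  class User:
      def __init__(self, name: str):
          self.name = name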


Honestly 2->3 has been a HUGE mess, and python should be learning from these sorts of mistakes.


s/except:/except Exception:/g


Not the same: should be BaseException.

I guess this highlights op’s issue quite well.
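Concretely, this is the difference the sed one-liner glosses over: a bare except: is equivalent to except BaseException:, which also catches KeyboardInterrupt and SystemExit, while except Exception: lets those propagate.

  try:
      raise KeyboardInterrupt
  except Exception:
      print("not reached: KeyboardInterrupt is not an Exception")
  except BaseException:
      print("caught here, exactly as a bare except: would")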


Disagree; I’m so disappointed in companies that do sprint-type development refusing to use Python. It works well with the “Silicon Valley startup ecosystem”.

That being said, as far as workplace differences go, I’d say Java shops would be the ideal: slower, with fewer long-term problems, but so much more initial investment.


As an SV startup with a Python monolith: yes, it's very common for startups, but it generally gets ejected because of the lack of strict typing and speed. We are replacing ours with Go, Node and .Net.


Python offers typing with static compiling. .Net doesn’t really match with startup culture.

I’m on the fence about Go, but maybe that’s my preference for having classes.

But yeah, in the general case, if I were an investor I’d be more careful with purely Python-based startups.


> Python offers typing with static compiling.

Python doesn't enforce types and as far as I know has no plans to.
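Annotations are only hints; nothing in the interpreter checks them, which is easy to demonstrate:

  def add(x: int, y: int) -> int:
      return x + y

  print(add("a", "b"))  # prints "ab"; only mypy/pyright would complain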

> .Net doesn’t really match with startup culture.

Who the hell cares? If it's the best tool for the job, use it. Anything else is unprofessional as hell.


Tell that to the people who downvote me, which seems unprofessional as hell.

If .Net is more time-consuming to learn and it’s more difficult to find employees for, why would I use it? It makes sense if you are in an area with a lot of Windows people, but that’s not the case anywhere other than Texas.

And typing can be enforced. Admittedly not as nicely as in Go, since you have to rely on external tools, but it's workable.

People like their curly brackets though. Just not as helpful when dealing with system problems.


.Net came from a group we acquired who, yes, deployed things on Windows. However, their code now runs on .Net Core, in Linux containers on Kubernetes. It's very performant as well; my only gripe is startup JIT. .Net does great in startup culture if you are not chasing trends and want code that works.


Hmm, usually the application start latency is very good. Significant improvements have been made to ensure that Tier-0 compiles fast. A base ASP.NET Core template takes about 120 ms to start on my machine, as tested with .NET 8 and Hyperfine (I modified it to start the server with await app.RunAsync, then raise a CancellationToken on it after 10 ms, which prints an error message about the fact to the console and exits).

There is a good chance something else might be going on in one of the dependencies, or perhaps some other infra package a team maintains, that slows this down. Sometimes teams publish SDK images by accident that have to be pulled over the network if they got evicted from the node cache, or try to use self-contained instead of a runtime image + plain application - I know at least two cases where this was causing worse-than-desired deployment speed on GKE (arguably GKE is as much at fault here, but that's another topic).


It's very likely it's some library but at this point, I'm over caring. It's 20 seconds, everyone can cope with deployment rollout in Kubernetes taking 3 minutes.



