
TL;DR:

The important stuff that makes a good case for Python 3:

- Addition of "yield from" allows easier programming with async I/O à la Node.js (using 'await'); see the sketch after this list.

- Standardized annotations of function arguments and return values can help in the future with type checking, optimization, etc. (example after this list).

Even more important stuff

- Unicode can be used in identifiers. You can now use Kanji characters in your function names, to annoy your coworkers and win the International Obfuscated Python Code Contest.

Other stuff

- Minor stuff that alone is definitely no reason to switch to Python 3.
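
To make the async point concrete, here is a minimal sketch using the standard asyncio module (Python 3.5+ syntax; the fetch/main names and the delays are just illustrative):

    import asyncio

    async def fetch(name, delay):
        # asyncio.sleep stands in for a real non-blocking I/O call
        await asyncio.sleep(delay)
        return name + " done"

    async def main():
        # both "requests" run concurrently on a single thread
        print(await asyncio.gather(fetch("a", 0.2), fetch("b", 0.1)))

    asyncio.run(main())  # asyncio.run() is Python 3.7+

Before 3.5, the same coroutines were written with @asyncio.coroutine and 'yield from'; 'await' is essentially the blessed syntax for that pattern.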
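
And the annotation syntax (PEP 3107, later built on by PEP 484; 'scale' is a made-up example):

    def scale(vector: list, factor: float) -> list:
        return [x * factor for x in vector]

    # The interpreter stores annotations but does not enforce them;
    # external tools (e.g. mypy) use them for type checking.
    print(scale.__annotations__)
    # {'vector': <class 'list'>, 'factor': <class 'float'>, 'return': <class 'list'>}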



Of those, only the async I/O stuff seems compelling. But compelling it is, at least as used in Curio. It feels like this is still shaking out, with the standard library and Trio (?) as alternatives, but it looks really cool.


If you work with international users, full Unicode support is very compelling.


Why do your users care about symbols internal to your code?


They don't; they care about `str` being unicode and developers not having to do additional work to support unicode strings.


Strings can be unicode in Python 2; you start a unicode string literal with the u prefix, e.g. u"...".

Python 2 has full unicode support, while Python 3 supports only unicode strings.
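
For anyone who hasn't used it, the Python 2 distinction looks like this (assuming a UTF-8 source file):

    # Python 2
    s = "café"     # type str: raw bytes ('é' is two bytes in UTF-8, so len(s) == 5)
    u = u"café"    # type unicode: code points, len(u) == 4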


I was thinking through my response to this, and realized that I would just be repeating what I already said.

Is there any reason to require extra work to support unicode strings?


I've come into large Python 2 projects which were started with non-unicode strings (because the initial developers didn't think about it). At some point a user with non-English characters invariably signs up and then shortly complains. It has been significant work to (1) convert everything that should be converted to unicode and (2) re-train the developers to use the unicode syntax.
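
For what it's worth, the usual shape of that conversion work is to decode at the input boundary and encode at the output boundary; a Python 2 sketch (file name and encoding are illustrative):

    with open("names.txt", "rb") as f:
        for line in f:
            name = line.decode("utf-8")                   # bytes -> unicode coming in
            print(name.strip().upper().encode("utf-8"))   # unicode -> bytes going out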


Python 3 has, more or less, just renamed unicode() to str() and str() to bytes(). unicode() support was already complete in Python 2. The rename is not a user-facing feature.


String literals are unicode by default, which they were not before.
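
A quick interpreter comparison (the u"" prefix was re-accepted in Python 3.3 as a no-op, for compatibility):

    # Python 2
    >>> type("x"), type(u"x"), type(b"x")
    (<type 'str'>, <type 'unicode'>, <type 'str'>)
    # Python 3
    >>> type("x"), type(u"x"), type(b"x")
    (<class 'str'>, <class 'str'>, <class 'bytes'>)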


True. It is a nice thing for scientific code, though. Often in a field, symbols like α have a known meaning by convention, and being able to write them as α rather than alpha can really make longer formulas more readable.
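
For example (PEP 3131 allows non-ASCII letters in identifiers; the Gaussian here is just a stand-in formula):

    from math import exp, pi, sqrt

    # μ and σ are ordinary identifiers in Python 3
    def gaussian(x, μ=0.0, σ=1.0):
        return exp(-(x - μ) ** 2 / (2 * σ ** 2)) / (σ * sqrt(2 * pi))

    print(gaussian(0.0))  # 0.3989... == 1 / sqrt(2π)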


If you do open source development, your users are your testers and your future developers.



