The try block method would be considered bad in most languages, and I hope it is considered bad in Python as well. Using exception handling as part of a normal flow of control is bad style, bad taste, and bad for performance.
Essentially, in CPython, you're already paying for an exception check on most (if not all) of the opcodes, so you might as well use it. Statements are very rich in Python, so generally, to get the best CPython performance, the name of the game is to get as much out of each one as you can and minimize the total number of them. (I have to specify CPython because PyPy can be much more complicated.)
Certainly nowadays that would be considered bad style, but back in 1.5 it wouldn't have been so bad. Also, when 1.5 was out, 200MHz was still a pretty decent machine! The little things could matter quite a lot.
It's not considered quite that bad. In many languages, using exceptions for non-exceptional flow control is bad style. Python is a bit more ambivalent about this, for instance take a look at the iterator protocol which uses an exception to signal the iterator is done.
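You can see that protocol at work by driving an iterator by hand; the StopIteration the for loop normally catches for you is just an ordinary exception:

```python
it = iter([1, 2])
print(next(it))  # 1
print(next(it))  # 2
try:
    next(it)
except StopIteration:
    # a for loop catches exactly this to know the iterator is exhausted
    print("done")
```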
Yep, the fundamental termination mechanism seems to be the same. Years and years ago I got into some silly irc nerdgument with a python expert (I think one of the twisted people) about the ugliness of this design and for a moment I thought I got to triumphantly yell 'Told you so!' a decade later. Alas, not the case.
I take it you prefer the explicit test for the end of iteration?
As a historical note, the StopIteration form grew out of the earlier iteration protocol, which called __getitem__ with successive integers until there was an IndexError. That may explain a bias towards an exception-based approach.
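That older protocol still works as a fallback today, so it's easy to demonstrate (the class here is just for illustration): an object with __getitem__ but no __iter__ gets iterated by calling __getitem__ with 0, 1, 2, ... until IndexError.

```python
class Squares:
    # Old-style sequence protocol: no __iter__, only __getitem__.
    # Iteration calls this with successive integers until IndexError.
    def __getitem__(self, index):
        if index >= 3:
            raise IndexError
        return index * index

print(list(Squares()))  # [0, 1, 4]
```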
I probably do, although in practice, given how well python comprehensions/iterators/generators have turned out (and how composable they are), getting all worked up about some implementation detail now seems a bit churlish and pointless.
There was a paper at the 1997 Python conference on this topic titled "Standard Class Exceptions in Python". A copy is at https://web.archive.org/web/20030610173145/http://barry.wars... . It evaluated the performance of try/except vs. has_key and concluded:
> This indicates that the has_key() idiom is usually the best one to choose, both because it is usually faster than the exception idiom, and because its costs are less variable.
The take-home lesson is that actually raising an exception in Python 1.5 was about 10x more expensive than a function call, but the try/except block when there is no exception was not expensive.
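If you want to reproduce that comparison on a modern interpreter, here is a rough timeit sketch of the two idioms (exact numbers will vary by machine and Python version; the function names are just illustrative):

```python
import timeit

d = {'spam': 1}

def lookup_check(key):
    # has_key-style idiom (modern spelling: `in`)
    if key in d:
        return d[key]
    return None

def lookup_except(key):
    # try/except idiom: cheap when the key is present,
    # much more expensive when the exception actually fires
    try:
        return d[key]
    except KeyError:
        return None

for key in ('spam', 'missing'):
    for fn in (lookup_check, lookup_except):
        t = timeit.timeit(lambda: fn(key), number=100_000)
        print(f"{fn.__name__}({key!r}): {t:.4f}s")
```

The interesting row is the 'missing' one: that's where the exception is raised and the cost difference shows up.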
Interesting. My information was not only out of date; it was also wrong. The dangers of cargo-culting, although in my case, more theoretical than real, as I never had performance-sensitive python code in production.
Instead of checking the type of some_string, or checking whether it has a split method. The reason is that it's more straightforward, and it instantly tells the reader this function is meant to handle strings and will split them. If it gets a non-string for some reason, oh well, it'll still handle it.
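A minimal sketch of the idiom being described (function and variable names are just illustrative):

```python
def word_count(some_string):
    # EAFP / duck typing: just call .split() and let a missing method
    # raise AttributeError; anything string-like with a .split() method
    # still works without any explicit type check.
    return len(some_string.split())

print(word_count("try blocks in python"))  # 4
```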
You would check for values in a circumstance like this:
    def dict_breaker(some_dict):
        if 'items' in some_dict:
            return parse_some_items(some_dict['items'])
These versions may be more efficient (only one hash and one interaction with the hashmap, though this probably won't be visible with integers or short strings), and they don't suffer from race conditions in concurrent code where the hashmap is modified between the check and the use (yes, this can occur even with the GIL).
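For comparison, a try/except version of the same function (parse_some_items is stubbed out here just to make the sketch runnable):

```python
def parse_some_items(items):
    # stand-in for the real parser from the parent comment
    return len(items)

def dict_breaker(some_dict):
    # EAFP version: a single hash lookup, and no window between a
    # membership check and the access for another thread to delete the key
    try:
        return parse_some_items(some_dict['items'])
    except KeyError:
        pass
```

One caveat with this exact shape: the except clause also swallows any KeyError raised inside parse_some_items itself. Binding the value inside the try (`items = some_dict['items']`) and calling the parser after it avoids that.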
Is there a reason you chose to `pass` instead of the more explicit `return None`? The former seems like it would be less idiomatic since its return value is not explicitly stated.
EDIT: I'm glad to see the Python documentation addresses this: https://docs.python.org/2/faq/design.html#how-fast-are-excep...