> The author argues that reducing the size of a function of "a few dozen lines" likely won't improve the readability of the code. That's not even defensible! I would fail any (professional) code review for having that many lines of code in a function.
I would argue that such absolutist rules are harmful. Yeah, there certainly are times when smaller functions are better. But there are also times when splitting a function into smaller ones makes it 'harder' for me to read, since with each 'abstraction' you're losing context/details that might be very relevant to the code that follows. I would rather have a 40-line function with the dirty low-level details than the same logic split across 50 lines of smaller functions that I need to step into to figure out the details.
And then again, who is to decide what counts as 'one abstraction'? Abstractions can sit at different levels and abstract over different things; there really isn't an objective way to do it. I would argue that this is analogous to writing -- there are all kinds of books/stories/poems, short and long, and one isn't necessarily better than another.
> I would argue that such absolutist rules are harmful.
Indeed. There are millions of us out there developing software for numerous different applications with numerous different trade-offs. “Never say never” is probably good advice here.
I once had a discussion with a prominent member of the ISO C++ standards committee about the idea of labelled break and continue statements of the kind found in various other languages, which let you affect control flow in an outer loop from within a nested inner one, something like this:
outer_label:
for (int i = 0; i < 10; ++i) {
    for (int j = 0; j < 10; ++j) {
        if (something_interesting_happened) {
            respond_to_interesting_thing();
            break outer_label;
        }
    }
}
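For context, standard C++ has no such construct, so in practice you end up with one of the usual workarounds. A rough sketch, reusing the placeholder names from the example above:

// Workaround 1: goto a label placed just past the outer loop.
for (int i = 0; i < 10; ++i) {
    for (int j = 0; j < 10; ++j) {
        if (something_interesting_happened) {
            respond_to_interesting_thing();
            goto after_outer;   // escapes both loops in one step
        }
    }
}
after_outer:;

// Workaround 2: thread a flag through every enclosing loop condition.
bool done = false;
for (int i = 0; i < 10 && !done; ++i) {
    for (int j = 0; j < 10 && !done; ++j) {
        if (something_interesting_happened) {
            respond_to_interesting_thing();
            done = true;        // extra state just to simulate the break
        }
    }
}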
They were essentially arguing that such a language feature should not be necessary, because with good coding style you should never need deeply nested loops in the first place.
At that time, I was working on code where a recurring need was to match sometimes quite intricate subgraphs within a large graph structure. This is known as the subgraph isomorphism problem¹ and it’s NP-complete in the general case, so in practice you rely on heuristics to do it as quickly as you can.
That’s a fancy way of saying you write lots of deeply nested loops with lots of guard conditions to bail out of a particular iteration as quickly as possible once it clearly can’t yield the pattern you’re looking for. Five to ten levels of indentation in a function that finds matches of a particular subgraph were not unusual. Functions of 50–100 lines were common, and longer was not rare. It probably broke every rule of thumb the advocates of short functions and shallow nesting have ever written.
To this day, I believe it was probably also the most clear, efficient and maintainable way to write those algorithms in C++ at that time. But it would have been clearer with labelled breaks.
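To make the shape concrete, here is a much-simplified, hypothetical skeleton in that spirit; the triangle-finding task, the adjacency-matrix Graph type and all the names are invented for illustration and are not the original code. The goto is doing the job a labelled continue would express more directly:

#include <array>
#include <vector>

// Hypothetical adjacency-matrix graph, purely for illustration.
using Graph = std::vector<std::vector<bool>>;

// Find one triangle through each starting node a. The nesting and the
// guard-and-bail style stand in for the much deeper matchers described above.
std::vector<std::array<int, 3>> find_triangles(const Graph& g)
{
    const int n = static_cast<int>(g.size());
    std::vector<std::array<int, 3>> matches;
    for (int a = 0; a < n; ++a) {
        for (int b = 0; b < n; ++b) {
            if (b == a || !g[a][b]) continue;        // guard: prune this b early
            for (int c = 0; c < n; ++c) {
                if (c == a || c == b) continue;      // guard
                if (!g[b][c] || !g[c][a]) continue;  // guard
                matches.push_back(std::array<int, 3>{a, b, c});
                goto next_a;  // what we really mean is "continue outer_loop;"
            }
        }
    next_a:;
    }
    return matches;
}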