But then, 7 << 100 << (7 but each access blanks out your short-term memory), which is how jumping to all those tiny functions and back plays out in practice.
Because I need to know what they actually do? The most interesting details are almost always absent from the function name.
EDIT:
For even the simplest helper, there are many ways to implement it. Half of them are stupid, some are simply incorrect, and some handle errors the wrong way, or just the wrong way for the needs of the specific caller I'm working on. The stupidity often manifests as unnecessary copying, and/or looping over a copy, and/or copying on every step of the loop - all of which gets trivially hidden by the extra indirection of a small function calling another small function. That's how you often get accidental O(n^2) in random places.
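For instance, here's a contrived sketch (names invented purely for illustration) of how a harmless-looking helper hides a copy that turns the caller quadratic:

# Hypothetical helpers, invented for illustration only.
def drop_first(items):
    return items[1:]               # looks like a one-liner, but copies the whole list

def shout(s):
    return s.upper()

def process(lines):
    out = []
    while lines:
        out.append(shout(lines[0]))
        lines = drop_first(lines)  # one full copy per iteration -> O(n^2) overall
    return out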
Many such things are OK or not depending on the context of the caller, and none of it is readily apparent from function signatures or the type system. If the helper fn is otherwise abstracting a small idiom, I'd argue it's only obscuring it and providing ample opportunities to screw up.
I know many devs don't care; they prefer to submit slow and buggy code and fix it later when it breaks. I'm more of a "don't do stupid shit, you'll have fewer bugs to fix and fewer performance issues for customers to curse you for" kind of person, so cognitive load actually matters to me, and wishing it away isn't an acceptable solution.
Strange. The longer I've been programming, the less I agree with this.
> For even the simplest helper, there are many ways to implement it.
Sure. But by definition, the interface is what matters at the call site.
> That's how you often get accidental O(n^2) in random places.
Both loops still have to be written. If they're in separate places, then instead of a combined function which is needlessly O(n^2) where it should be O(n), you have two functions, one of which is needlessly O(n) where it should be O(1).
When you pinpoint a bottleneck function with a profiler, you want it to be as obvious as possible what's wrong: is it called too often, or does it take too long each time?
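A toy sketch of that framing (invented example: deduplicating a list while preserving order). The combined form is needlessly O(n^2) where it should be O(n); the split form has a helper that is O(n) per call where a set lookup would be O(1), and a profiler points straight at it:

# Combined: the needless quadratic is visible in one screenful.
def dedup_combined(items):
    out = []
    for x in items:
        if x not in out:            # linear scan inside a linear loop
            out.append(x)
    return out

# Split: the helper does O(n) work where O(1) (a set lookup) would do;
# the profiler flags already_present() as the hot function.
def already_present(x, out):
    return x in out                 # linear scan

def dedup_split(items):
    out = []
    for x in items:
        if not already_present(x, out):
            out.append(x)
    return out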
> If the helper fn is otherwise abstracting a small idiom, I'd argue it's only obscuring it and providing ample opportunities to screw up.
Abstractions explain the purpose in context.
> I'm more of a "don't do stupid shit, you'll have fewer bugs to fix and fewer performance issues for customers to curse you for" kind of person
The shorter the function is, the less opportunity I have to introduce a stupidity.
Because jumping is disorienting, because each defn has 1-3 lines of overhead (header, delimiters, whitespace) and lives among other defns, which may not be related to the task at hand and are arranged in arbitrary order?
Does this really need explaining? My screen can show 35-50 lines of code; that can be 35-50 lines of relevant code in a "fat" function, or 10-20 lines of actual code, out of order, mixed with syntactic noise. The latter does not lower cognitive load.
I wouldn't have asked if I didn't have a real curiosity!
To use a real-world example where this comes up a lot, lots and lots of code can be structured as something like:
accum = []
for x in something():
    for y in something_else():
        accum.append(operate_on(x, y))
I find structuring it like this much easier than fully expanding all of these out, which at best ends up being something like
accum = []
req = my_service.RpcRequest(foo="hello", bar=12)
rpc = my_service.new_rpc()
resp = my_service.call(rpc, req)
req = my_service.OtherRpcRequest(foo="goodbye", bar=12)
rpc = my_service.new_rpc()
resp2 = my_service.call(rpc, req)
for x in resp.something:
    for y in resp2.something_else:
        my_frobnicator = foo_frobnicator.new()
        accum.append(my_frobnicator.frob(x).nicate(y))
and that's sort of the best case where there isn't some associated error handling that needs to be done for the rpc requests/responses etc.
I find it much easier to understand what's happening in the first case than the second, since the overall structure of the operations on the data is readily apparent at a glance, and I don't need to scan through error handling and boilerplate.
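For completeness, a rough sketch of the helpers the first version assumes, reusing the hypothetical my_service / foo_frobnicator API from the expanded snippet:

def something():
    # First RPC; request construction and boilerplate live here.
    req = my_service.RpcRequest(foo="hello", bar=12)
    rpc = my_service.new_rpc()
    return my_service.call(rpc, req).something

def something_else():
    # Second RPC, same pattern.
    req = my_service.OtherRpcRequest(foo="goodbye", bar=12)
    rpc = my_service.new_rpc()
    return my_service.call(rpc, req).something_else

def operate_on(x, y):
    # One frobnicator per pair, as in the expanded version.
    return foo_frobnicator.new().frob(x).nicate(y)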
Like, looking at real-life examples I have handy, there's a bunch of cases where I have 6-10 lines of nonsense fiddling (with additional lines of documentation that would be even more costly to put inline!), and that's in Python. In C++, Go, and Java, which I use at work and which are generally more verbose and have more RPC and other boilerplate, this is usually even higher.
So the difference is that my approach means that when you jump to a function, you can be confident that the actual structure and logic of that function will be present and apparent to you on your screen without scrolling or puzzling. Whereas your approach gives you that, say, 50% of the time, maybe less, because the entire function doesn't usually fit on the screen, and the structure may contain multiple logical subroutines, but they aren't clearly delineated.
Because 7 << 100