That is the justification I usually hear, but I don't buy it. Like, 99+% of the time, I want to use these kinds of operations on lists that are orders of magnitude away from anything even approaching a scalability issue. I just don't care about memory allocations when manipulating something like, say, a list of active users in a real-time chat. And I often find that the mess that comes from implementing those same operations without expressive constructs is worse than the messes people can create with them (though I grant I've seen those too).
This hints at some of the ideas behind Go – it's designed, perhaps, for Google-scale software. It deals with problems (e.g. memory allocation) that I don't have when working with most datasets I'm likely to encounter. Maybe we just have to accept that.
> I just don't care about memory allocations when trying to manipulate something like, say, a list of active users in a real-time chat or something.
This is exactly the kind of thinking that the Go language pushes back against.
> Maybe we just have to accept that.
I think so. At least for now. I recently watched a talk by Ian Lance Taylor that made it very clear to me that generics are coming (https://www.youtube.com/watch?v=WzgLqE-3IhY). Once we have generics, map/reduce/filter will absolutely be introduced, at the very least as a library.
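To make that concrete, here's a minimal sketch of what such a library function could look like once type parameters land. The package name, function name, and signature are my own guesses, not anything from the talk or an official proposal:

    // Package sliceops is a hypothetical utility library, not a real package.
    package sliceops

    // Map returns a new slice containing fn applied to each element of in.
    // The signature assumes generics roughly as sketched in the design drafts.
    func Map[T, U any](in []T, fn func(T) U) []U {
        out := make([]U, 0, len(in))
        for _, v := range in {
            out = append(out, fn(v))
        }
        return out
    }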
> This is dealing with problems (like e.g. memory allocation) that I don't have when working with most datasets I'm likely to need.
I don't think that's exactly it. It's more about runaway complexity. You might use these primitives to perform basic operations, but other people will misuse them in extreme ways.
Consider this: suppose there were a built-in map() function, like append(). Do you use a for loop or map()? There'll be a performance trade-off. Performance-conscious people will always use a for loop. Expressiveness-conscious people will usually use map() unless they're dealing with a large dataset. This will invariably lead to arguments over style, among other things.
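To illustrate the trade-off, here's the same transformation written both ways, using a hypothetical generic Map helper (the User type and the names are made up for the example):

    package main

    import "fmt"

    type User struct{ Name string }

    // Map is the hypothetical generic helper under discussion.
    func Map[T, U any](in []T, fn func(T) U) []U {
        out := make([]U, 0, len(in))
        for _, v := range in {
            out = append(out, fn(v))
        }
        return out
    }

    func main() {
        users := []User{{"ana"}, {"bo"}}

        // Style 1: plain for loop; allocation and iteration are explicit.
        names := make([]string, 0, len(users))
        for _, u := range users {
            names = append(names, u.Name)
        }

        // Style 2: one line, but a closure call per element.
        names2 := Map(users, func(u User) string { return u.Name })

        fmt.Println(names, names2) // [ana bo] [ana bo]
    }

Both produce the same slice; the argument is over which one you want to find in a shared codebase.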
For loops violate the rule of least power (https://en.wikipedia.org/wiki/Rule_of_least_power). Because a for loop can do anything, you have to read each one very carefully to find out what it's actually doing (which may not be what was intended). Flat-map and filter are more concise and clearer, and if my platform makes them slower, that's an implementation bug I should fix.
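As a small illustration of "more concise and clearer": a hypothetical generic Filter helper puts the predicate right at the call site, whereas a for loop makes you read the body to discover it. This is a sketch, not a standard-library API:

    package main

    import "fmt"

    // Filter returns the elements of in for which keep returns true.
    func Filter[T any](in []T, keep func(T) bool) []T {
        out := make([]T, 0, len(in))
        for _, v := range in {
            if keep(v) {
                out = append(out, v)
            }
        }
        return out
    }

    func main() {
        nums := []int{3, 14, 15, 9, 26}
        evens := Filter(nums, func(n int) bool { return n%2 == 0 })
        fmt.Println(evens) // [14 26]
    }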
In practice, reading a for-loop has been less problematic for me than deciphering the incantations of a functional programmer who's been reading about category theory.
I know all about the virtues of functional programming patterns, and I use them in personal projects. But in my day job, working with dozens of engineers in the same codebase, I appreciate not having to decode each engineer's idiosyncratic decisions about when and how to use higher-order constructs, and the performance, operational, and maintenance implications that follow. It's a lot easier for me to just read a for-loop and move on with my life.