Not sure I agree with the tone of the article, but I do find that Javascript developers have a weird relationship with "functional programming". I see so much convoluted code with arr.reduce() or long chains of arr.map().filter().filter().map() that would be so much simpler and easier to read as a classic for-loop. When I suggest this to people, they scoff at the thought of using a for-loop in Javascript.

> Consider the humble modal dialog. The web has <dialog>, a native element with built-in functionality. [...] Now observe what gets taught in tutorials, bootcamps, and popular React courses: build a modal with <div> elements

The dialog element is new! It has only been broadly supported since 2022, so I find it hard to fault existing material for not using it. Things like dialog, or a better select, are notable precisely because they were lacking for so long.
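For anyone who hasn't tried it yet, a minimal sketch of the native element; showModal() and close() are the real built-in methods, and everything else here is invented for the example:

    <dialog id="confirm">
      <p>Delete this item?</p>
      <button id="cancel">Cancel</button>
    </dialog>
    <script>
      const dlg = document.getElementById("confirm");
      // showModal() gives you focus trapping, a backdrop,
      // and Esc-to-close without any library code
      dlg.showModal();
      document.getElementById("cancel")
        .addEventListener("click", () => dlg.close());
    </script>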



Personally I use filter and map (and others like .some, .every, .flat, .flatMap, etc.) all the time, but I avoid reduce.

filter and map immediately tell me what the purpose of the loop is: filter things and transform things. A for loop does not do this.

To someone familiar with functional programming these are very normal and easier to read and grep than a bare loop. In other words, filter and map give additional context as to the intent of the loop; a bare for loop does not.

Not to mention this is not abnormal in languages outside of JS, even non-functional ones.

That said, I've seen so many convoluted uses of reduce that I just avoid it on principle.
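For what it's worth, the shape I'm describing, with made-up data:

    const users = [
      { name: "Ann", active: true },
      { name: "Bob", active: false },
    ];

    // filter and map announce their intent up front
    const names = users.filter(u => u.active).map(u => u.name);

    // the reduce version computes the same thing, but you have to
    // read the whole body to discover it's just a filter + map
    const names2 = users.reduce(
      (acc, u) => (u.active ? acc.concat(u.name) : acc),
      []
    );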


Yeah, the "problem" with reduce is that it can do anything, and so doesn't offer much over a traditional for loop with mutable local variables to accumulate into. Of course, if you can replace a reduce() with a filter and/or map or whatever (which more clearly states intent), by all means do so!

If you really need arbitrary computation, I'm not sure there's any real readability benefit to reduce() over local mutation (emphasis on local!). Sure, there's immutability and possibly some benefit to that if you want to prove your code is correct, but that's usually pretty marginal.
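A sketch of that trade-off with a hypothetical xs; either way the state never escapes the enclosing block:

    const xs = [1, 2, 3];

    // reduce with a compound accumulator
    const stats = xs.reduce(
      (acc, x) => ({ count: acc.count + 1, sum: acc.sum + x }),
      { count: 0, sum: 0 }
    );

    // local mutation: arguably no harder to read, and nothing leaks
    let count = 0, sum = 0;
    for (const x of xs) {
      count += 1;
      sum += x;
    }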


Reduce can be very useful to signal that the state used is inherently limited. My rule of thumb is to use reduce when the state is a primitive or composed of at most two primitives, and a for loop otherwise. What counts as "primitive" depends on the language of choice and abstraction level of the program, of course.
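Roughly (the orders data is invented for illustration):

    const orders = [
      { amount: 5, status: "open" },
      { amount: 7, status: "paid" },
    ];

    // state is a single primitive: reduce signals that nicely
    const total = orders.reduce((sum, o) => sum + o.amount, 0);

    // state is a richer structure: a plain loop is clearer
    const byStatus = {};
    for (const o of orders) {
      (byStatus[o.status] ??= []).push(o);
    }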


Fair observation, but just opening a local lexical scope (in an expression-oriented language) can help with that. Also ... something something ST monad :)


I like map() and filter() in Python, but unfortunately they’re 2nd class citizens compared to list comprehensions, which continue to get optimizations to further increase their speed.

I like comprehensions as well - and their syntax is quite readable - but I’d like the two to be closer to parity.


One of the main reasons that map() and filter() don’t get optimizations in Python is because they’re lazy.

It’s a lot easier to optimize comprehensions because you can make a lot more guarantees about what doesn’t happen between iterations: the outer stack doesn’t move, the outer scope can be treated as static for purposes of GC-ing its locals, various interpreter-internal checks for “am I in a new place/should I become aware of a new scope’s state?” can be skipped, the inner generator can use a simplified implementation since it never needs to work with manual send(), and so on.

Map/filter can’t take advantage of any of those assumptions; they have to support being passed around to different arbitrary places each time next() is called on them, and have to support infinite sequences (so do comprehensions technically, but the interpreter can assume infinite comprehensions will terminate in fairly short order via OOM lol).

That said, there are likely optimizations that could be applied for the common cases of “x = list(map(…))” and “for x in filter(…):” (in nongenerator functions) which allow optimizers to make more assumptions about the outer context staying static.


Thanks for the detailed explanation!


list(map(f, ...)) should almost always be replaced with [f(x) for x in ...] though.


I’m with you, but I see that pattern in enough code I read that it might be easier to fix (optimize) the language than the programmers here.


Agreed; if you’re immediately requesting the entire generator, there’s little point in using it in the first place.


It's hard to talk in the abstract because obviously people can abuse any type of code feature, but I generally find chaining array methods, and equivalents like C# LINQ, much easier to read and understand than their looping equivalents.

The fact that you single out .reduce() here is really telling to me. .reduce() definitely has a learning curve to it, but once you're used to it the resulting code is generally much simpler and the immutability of it is much less error-prone. I personally expect JS devs to be on the far side of that learning curve, but there's always a debate about what it's reasonable to expect.


The wonderful thing about .reduce() is that it can compute literally anything. The problem with .reduce() is that it can compute literally anything. As for the rest of the morphism menagerie, I like being able to break up functions and pass intermediate results around. It's literally cut and paste with map/filter; with a loop it's rewriting. Yay composability.
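To illustrate the "compute anything" point: map and filter are themselves just reduce in disguise. A sketch (the array spreads make these quadratic, so purely illustrative):

    const mapR    = (f, xs) => xs.reduce((acc, x) => [...acc, f(x)], []);
    const filterR = (p, xs) => xs.reduce((acc, x) => (p(x) ? [...acc, x] : acc), []);

    mapR(x => x * 2, [1, 2, 3]);     // [2, 4, 6]
    filterR(x => x > 1, [1, 2, 3]);  // [2, 3]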

That said, it's easy to get carried away, and some devs certainly do. I used to be one of those devs, but these days I sometimes just suck it up and use a local variable or two in a loop when the intent is perfectly clear and it's not leaking side effects outside of a narrow scope. But I'll be damned if I let anyone tell me to make imperative loops my only style or even my primary one.


Reduce cannot calculate literally anything, in the sense you mean. It corresponds in computational power to primitive recursion. And quite famously, there are problems primitive recursion cannot solve that general recursion can.

On the other hand, I don't think I've ever seen something as recursive as Ackermann's function in real life. So it can probably solve any problem you actually mean to solve.


What the previous user means is that reduce is not a function that returns a list (although it can).

It just accumulates over some value, and that value can be anything.


Naw, GP is right, I'd forgotten about the limits of primitive recursion. But for almost any given real-world problem, it's something you can get away with forgetting.


Unfortunately, since we don't have continuations, we cannot make reduce _stop_ computing. In cases where that is needed, it might be better to use a loop that can be broken out of.


Well, you can always throw an exception :) (ducks)

But yes, it's best used on sequences where you know you'll consume the whole thing, or at least when it's cheap enough to run through the rest with the accumulator unchanged.
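Concretely, with an invented xs (and for this particular shape, .find() already short-circuits):

    const xs = [3, 250, 12];

    // reduce visits every element even after the answer is known
    const firstBig = xs.reduce(
      (found, x) => found ?? (x > 100 ? x : null),
      null
    );

    // a loop (or xs.find(x => x > 100)) stops at the first hit
    let firstBig2 = null;
    for (const x of xs) {
      if (x > 100) { firstBig2 = x; break; }
    }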


> The fact that you single out .reduce() here is really telling to me. .reduce() definitely has a learning curve to it, but once you're used to it the resulting code is generally much simpler and the immutability of it is much less error-prone. I personally expect JS devs to be on the far side of that learning curve, but there's always a debate about what it's reasonable to expect.

Not only that, but the words that GP uses to single out .reduce() start with:

> I see so much convoluted code with arr.reduce() or many chained arr.map().filter().filter().map()

Which I do not doubt, but the point is diminished when one understands that a mapping of a filtering of a filtering of a mapping is itself a convoluted reduction. Just say that you prefer to read for-statements.


I say convoluted. I prefer using the functional-style array methods, but there's a time and place for everything, and I feel a lot of Javascript developers extend those methods beyond what is reasonable and into a convoluted mess, especially with reduce.

Give me a good classic `T[] => I` reduce function and I'm fine with it. Not the more common case of folks mutating the accumulator object.
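i.e., with invented data:

    const items = [
      { id: "a", price: 2 },
      { id: "b", price: 3 },
    ];

    // the classic shape: fold an array down to one value, nothing mutated
    const total = items.reduce((sum, it) => sum + it.price, 0);

    // the common case I object to: a "functional" wrapper around mutation
    const byId = items.reduce((acc, it) => {
      acc[it.id] = it;  // mutating the accumulator in place
      return acc;
    }, {});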


I’m fond of chained array methods, especially if the callbacks I pass are named functions - it can make for code that I find way more legible than for loops. But even with anonymous callbacks I’d rather do that ~90% of the time than use a for loop anyway.


> that would just be so much simpler and easier to read if it was a classic for-loop.

Only if that's how you were taught. For devs brought up in the FP manner, the for loop is as hard to grok as you find the chained stuff.

The Blub Paradox in action.


I have never, ever seen this. And I’ve worked next to some very FP-forward types and academic CS/abstract math background types who have vastly deeper knowledge of their fields than I ever will of mine.

Plenty of them don’t like imperative loops, sure. But I’ve never seen someone assert that a simple loop is not intelligible to them while chained functions are.


If you're replacing a chain of filters and maps with a nested loop, you're far past "a simple loop" and well into the realm of an unintelligible for loop. The chain of maps and filters tends to make that far easier to read by decomposing it into separately-comprehensible pieces.


Oh totally agreed; for complex transformations, being able to compose and tap the intermediate states is really nice (though I quite like generator/yielding functions with captured stack state as a happy medium).

But that’s not the claim I was pushing back on. That claim was that, given a loop and some equivalent chained list-processing functions, primarily FP-taught people could not understand the loop due to their background. That’s bogus.

That complex loops are hard to understand for anyone is inarguable. But there’s not some magical “functional mindset” in which imperative/mutative code is unintelligible to the point of total obscurity and functional code is not. If there were, how would anything ever get rewritten in a functional style?


I think it’s a stretch on the original author’s side to call the use of map().filter().map() an obsession with FP. Underscore.js simply provided super-useful functions over arrays and objects, which IMO makes for more legible code. Oftentimes you can re-use filters as named functions. If you have multiple conditions for iteration exit, it’s clearer to chain filters than it is to have nested ifs or early exits in a for loop; it refactors better.
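For example, with names invented for the sake of illustration:

    const posts = [{ status: "published", year: 2025 }];

    const isPublished = p => p.status === "published";
    const isRecent    = p => p.year >= 2024;

    // reads like a sentence, and the predicates are reusable elsewhere
    const visible = posts.filter(isPublished).filter(isRecent);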

The real obsession with FP is the pervasive use of immutable state. But in apps, state must effectively be mutated anyway, only now via a copy of the original immutable object. The FP obsession holds that this is better than directly mutating the state each and every time.


Using these functions is not the problem. It's using them past the point of being reasonable where I see the misplaced obsession.


I think the reason JS devs have "a weird relationship with 'FP'" is that JS itself is, by its nature, not a particularly well-suited language for FP. It has essential elements that are inherently global mutable state, for example timers and their ids.

Weak typing also plays into this. Operations that don't immediately raise an error when given the wrong type of thing are a recipe for a bad experience with point-free or semi-point-free style. Given the choice, it would be good to have at least one of strong typing or static typing.
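The canonical footgun when point-free style meets JS's permissive calling convention:

    ["1", "2", "3"].map(parseInt);              // [1, NaN, NaN]
    // map passes (element, index) and parseInt accepts (string, radix),
    // so each index silently becomes a radix; no error is ever raised
    ["1", "2", "3"].map(s => parseInt(s, 10));  // [1, 2, 3]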


On the other hand, from my experience, large development teams cannot write CSS at scale. "Writing clean CSS code" doesn't seem to be a skill that most (frontend) developers value, so abstractions like CSS Modules (which is barely an abstraction - it's just compile-time BEM) and CSS-in-JS came up to manage the complexity of both a large codebase and a large team.


On the other hand, scaling CSS is fundamentally hard, especially in a large team, where it's more of an organizational problem.

It takes eternal vigilance to maintain it. Refactoring it is a huge feat. And when you want to modify or add a single component, it doesn't make sense to read yourself back into the whole CSS apparatus, and it's not necessarily trivial to figure out where to make the CSS change, so CSS files tend to become append-only.

It's like how learning how to "write clean code" doesn't really change much about how hard it is to change large software systems over time in a large team.


These difficulties with CSS are how you know an org can't "design system".

You will see similar issues in their Word and PowerPoint documents: properties set on individual elements instead of styles.


Yeah, most orgs use Figma as the source of truth, so if your designers don’t “design system”, it finds its way into the codebase.

Even if they do follow a design system, it tends to evolve over time, and it can be a bumpy road to reach a mature one.


Yeah, this is exactly my point. It's hard to do, doubly so when most folks don't value it.


The benefit of FP has nothing to do with it being easier to read. FP is HARDER to read. The benefit of FP is modularity. Take your example:

   arr.map(map1).filter(filter1).filter(filter2).map(map2)
Every step on that is a modular operation.

   const x = arr.map(map1)
   const y = x.filter(filter1)
   const z = y.filter(filter2)
   const d = z.map(map2)
Now I can easily do this:

   const x = (arr) => arr.map(map1)
   const y = (arr) => arr.filter(filter1)
   const z = (arr) => arr.filter(filter2)
   const d = (arr) => arr.map(map2)

   const o = d(z(y(x(arr))))

This is the main reason why functional programming is elegant. It makes every corner of your program modular and solves the fundamental problem of program organization and modularity.

With a for loop it is easier to read initially, but if that filter chain grows, the functional approach starts to win.

Take an example of a for loop doing the same thing:

   const acc = []
   for (const x of arr) {
      const y = map1(x)
      if (!filter1(y)) {
         continue
      }
      if (!filter2(y)) {
         continue
      }
      acc.push(map2(y))
   }
Yes, more readable for now, but imagine the logic getting even more complex. At scale the functional version becomes more readable AND more modular. Modularity, however, is the main point here. You cannot split or modularize that for loop.

But I agree with you. Initially, and not at scale, functional programming loses on readability. Functional programming is harder to read because people think procedurally rather than functionally. We like numbered instructions or a todo list, not a series of composed function calls, and thus from a general viewpoint functional programming is harder to read than procedural programming. But readability has never been the benefit of functional.

Functional programming is supposed to solve the modularity problem of how you organize programs to be reusable. Functional programming objectively does this, while it is only subjectively considered less readable by the majority of people.


I don't actually have all that much of a problem with chaining fp-style functions, but I note that I see a fair number of anonymous functions and excessive chaining where none of the benefits you point out are realised.

My actual problem is with bad reduce functions that try to shoehorn multiple things into one function and often break the core concept of functional programming by mutating the input arguments. It's that code that would just be better written as a normal loop.


> It's that code that would just be better written as a normal loop.

Have to disagree with your conclusion here. It's that code that should be rewritten, so it doesn't mutate the input arguments at all.
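Something like this, with invented rows (the non-mutating version copies the accumulator each step, so it trades some speed for purity):

    const rows = [
      { key: "a", value: 1 },
      { key: "a", value: 2 },
    ];

    // before: reduce as a disguised loop, mutating the accumulator
    const totals = rows.reduce((acc, r) => {
      acc[r.key] = (acc[r.key] ?? 0) + r.value;
      return acc;
    }, {});

    // after: each step returns a fresh accumulator, nothing mutated
    const totals2 = rows.reduce(
      (acc, r) => ({ ...acc, [r.key]: (acc[r.key] ?? 0) + r.value }),
      {}
    );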


When I'm doing FP, rather than making a long chain, I assign at each step, then call the next one, as you've shown above. Sometimes I even comment on why the code is doing stuff.

Making it read as a list of steps helps me understand what it's doing a little more easily than a long chain.


Same. Chaining all of those together does nothing useful, is arguably less readable, and the JIT will optimize out the intermediate assignments. So there’s not really any reason to do that big chain (unless you’re talking about HoC/composable functions, which this example is not).


Future Me has to re-solve this issue. And Past Me is an idiot. I gotta write code both those fools can understand.


> simpler and easier to read if it was a classic for-loop

What you are expressing is that they are more familiar to you.

Nothing about map or filter makes them any more complex than a for loop, nothing.

At best they can be less familiar to a class of programmers.


> I see so much convoluted code with arr.reduce() or many chained arr.map().filter().filter().map(), that would just be so much simpler and easier to read if it was a classic for-loop.

It's interesting; when I'm writing JS in an actual .js file, I do tend to use for loops. But when I'm writing JS at the (CLI or browser-dev-console) REPL, I'll always reach for arr.map().filter().filter().map().

When you're at a REPL, with its single-line/statement, expression-oriented evaluation, it's always easier / more intuitive to "build up" an expression by taking the last expression from history, adding one more transformation, and then evaluating the result again to see what it looks like now. And you don't really want side-effects, because you want to be able to keep testing the same original line vis-a-vis those modifications without "corrupting" any of the variables it's referencing. So REPL coding leads naturally to building up expressions as sequences of pure-FP transforms.
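A typical session, with made-up data; each line is the previous one recalled from history with one more transform appended:

    > const users = [{ name: "Ann" }, { name: "bob" }, { name: "Al" }]
    > users.map(u => u.name)
    [ 'Ann', 'bob', 'Al' ]
    > users.map(u => u.name).filter(n => n.startsWith("A"))
    [ 'Ann', 'Al' ]
    > users.map(u => u.name).filter(n => n.startsWith("A")).map(n => n.toUpperCase())
    [ 'ANN', 'AL' ]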

Whereas, when you're iteratively modifying and running a script, with everything getting re-evaluated from the start each time, it's not only more intuitive to assign intermediate results to variables, but also "free" to have side-effect-y code that updates those variables. It's that context where for loops become more natural.

I bet that a lot of the chained-expression JS code you see out there was originally built up at a REPL, and then pasted into a file, without then "rephrasing" it to be more readable/maintainable in the context of a code file.

...though there's also just FP programmers who write JS (or more often TS) as if it were a pure-FP language — either literally making every variable const / using Object.freeze / using Readonly<T>, or just writing all their code as if they had done that — where for loops become almost irrelevant, because they're only really for plain iteration (they're not list comprehensions), and plain iteration has few uses if you're not going to do anything that produces side-effects.


Series (by Richard C. Waters) is a pipelining compiler for map/reduce.



