
Agreed, but it's a pretty big footgun because `useEffect` is often the seemingly easiest way to do the things you list.

I've had similar conversations many times with coworkers when they were using `useEffect` to keep a state value in sync with a prop. The officially-recommended alternative of manually storing and updating an extra piece of state containing the previous prop value is cumbersome and also has ways it can go wrong. So, since `useEffect` works well enough in most cases and is easier, often the code just sticks with that method. I'm not entirely sure what's really best all tradeoffs considered, but it definitely illustrates how rough edges often pop up in hook-based React.
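For reference, the `useEffect` version in question looks roughly like this (a sketch with made-up names):

  import { useState, useEffect } from 'react';

  function Child({ value }) {
    // Locally editable copy of the prop
    const [local, setLocal] = useState(value);

    useEffect(() => {
      // Re-sync when the prop changes - but this runs *after* a render
      // with the stale value, so there's an extra render pass
      setLocal(value);
    }, [value]);

    return <input value={local} onChange={e => setLocal(e.target.value)} />;
  }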


You don’t have to store the previous value - you can just do an if != right in the render function and call setState there. That’s something I learned from the new beta docs.


Can you point to that in the docs? Calling setState during render sounds like a fine way to get infinite loops.

Any of these sorts of hacks tends to be indicative of something being structured poorly.


Isn’t it also explicitly discouraged with a warning because of how bad it could be?

The React docs are very clear on this:

> The render() function should be pure, meaning that it does not modify component state, it returns the same result each time it’s invoked, and it does not directly interact with the browser.

https://reactjs.org/docs/react-component.html#render



That’s such a weird 180 on their part considering how explicit they have been about it.

I guess they did a bunch of work to make the case work and are confident about it, but it just seems like such a weird decision considering they’ve been adamantly telling their devs not to do this for years.


Calling `setState` while rendering has been a legitimate thing to do since hooks first came out, but _only_ in one specific scenario / usage pattern.

From the original hooks FAQ at time of release:

- https://reactjs.org/docs/hooks-faq.html#how-do-i-implement-g...

I talked about the reasoning / behavior for this in my React rendering guide:

- https://blog.isquaredsoftware.com/2020/05/blogged-answers-a-...
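Roughly, the sanctioned pattern looks like this (paraphrasing the FAQ example):

  import { useState } from 'react';

  function ScrollView({ row }) {
    const [isScrollingDown, setIsScrollingDown] = useState(false);
    const [prevRow, setPrevRow] = useState(null);

    if (row !== prevRow) {
      // Calling a setter during render is OK *here*: React throws away this
      // render's output and immediately re-renders with the updated state
      setIsScrollingDown(prevRow !== null && row > prevRow);
      setPrevRow(row);
    }

    return `Scrolling down: ${isScrollingDown}`;
  }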


That still doesn’t change my comment.

It used to explicitly be a thing you must not do, for years; it’s weird to do a straight 180 on something you actively warned developers not to do. If it hadn’t been well documented in the past, that would be reasonable, but actively spitting out console warnings and explicit documentation warning against it just means that anyone who isn’t new to React is going to be extremely wary of it.


Interesting. Thanks for following up.


It is a bit more complicated in practice though than "a React component is just a function that rerenders when called". In some ways, the function acts more like a class, and then React, internally, uses it to create "instances" of components that have their own set of data stored. (Which is why hooks like useState, useRef, etc. can work - because data is being stored internally in React tied to a component instance.)

It _is_ true that when you call a React function component it "runs its code" just like any regular old JS function. But when that function gets run and what all the side effects of its code are actually is quite complex.
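A quick sketch of what the per-instance storage means in practice:

  import { useState } from 'react';

  function Counter() {
    // React stores this state against the component *instance* in its
    // internal tree, not against the Counter function itself
    const [count, setCount] = useState(0);
    return <button onClick={() => setCount(c => c + 1)}>{count}</button>;
  }

  // Rendering <Counter /> twice creates two instances with independent counts:
  // <div><Counter /><Counter /></div>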


Yeah, this is not to belittle the complexity of React under the hood. But they are functions, and it seems you can assume they will be called in a straightforward manner when they render (whether they are invoked as explicit function calls or via returned JSX).

The only real complexity (for the developer) is the use of hooks, effects, etc., if you don't mess with useMemo (which you generally shouldn't). Certainly they aren't pure functions - they have side effects and are stateful, and that has some nuances - but (kudos to the React team) once you understand hooks as a reference to the instance value and a setter for that value, they're pretty easy to understand.

I guess I don't personally find thinking of them as a class as that useful, my mental model of "it's just a function with some external references (via hooks)" gets me there.


> if you don't mess with useMemo (which you generally shouldn't)

why? is this not the primary way to re-init expensive internal component state when specific props change?


GP's claim is a little too strong, but in my experience most uses of `useMemo` / `useCallback` are only necessary because people define things in the wrong scope, or write giant spaghetti components with 15 different props and no internal structure. The best memoization technique is not calling things repeatedly in the first place.
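For example (a sketch; `ChartImpl` is a made-up child component):

  import React, { useMemo } from 'react';

  // Unnecessary: memoizing a value that depends on nothing
  function Chart({ data }) {
    const options = useMemo(() => ({ animate: false, legend: true }), []);
    return <ChartImpl data={data} options={options} />;
  }

  // Better: hoist it to module scope; no memoization needed at all
  const OPTIONS = { animate: false, legend: true };
  function Chart2({ data }) {
    return <ChartImpl data={data} options={OPTIONS} />;
  }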


I find it necessary for a lot of different patterns that optimize performance. See the recently written beta React docs; they go over this quite well.

In general, I think it's always a mistake to tell people "don't use this tool because you may shoot yourself in the foot", best to explain the when and why. But I do have a problem when the tool has just stupid defaults that make it harder to use correctly in the first place.


He touches on this in the piece briefly.

> I think as developers, we tend to overestimate how expensive re-renders are. In the case of our Decoration component, re-renders are lightning quick.

Certainly it's there for a reason, and you may have expensive operations, but in my experience developers reach for useMemo much too early and often, and it just adds complexity to their functions. The cost of checking the parameters for changes adds overhead that may be more expensive than just redoing the "expensive" operation. My rule of thumb is that if the operation is less than O(n) where n < ~5000, I don't reach for useMemo.
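For example (a sketch; computeLayout stands in for genuinely heavy work):

  import { useMemo } from 'react';

  function Invoice({ items }) {
    // Probably not worth memoizing: a linear pass over a small array
    const total = items.reduce((sum, item) => sum + item.price, 0);

    // Plausibly worth it: an expensive derivation you don't want to redo
    // on every render (computeLayout is hypothetical)
    const layout = useMemo(() => computeLayout(items), [items]);

    return <div style={layout}>{total}</div>;
  }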

There have been some benchmarks done on this, and on when it pays off to use it.


> TypeScript's type annotations are really a DSL embedded into JavaScript. And they can, and, depending on the problem at hand, should be treated as such.

I think this is the key. If treated as you describe, meaning the advanced types are well-written, well-documented, and well unit-tested as if they are "true" code, then using them shouldn't be too much of an issue.

However, I think people often just assume that the types aren't "real" code, and thus that the normal concepts of good software engineering don't apply - and type monstrosities which nobody can understand are the result.

Imagine if this code[0] wasn't well-documented, fairly clearly written, and also tested. It would definitely be a liability in a codebase.

In addition, the rules of how advanced TypeScript concepts work can be quite nuanced and not always extremely well defined, so you can end up in situations where nobody even _really_ understands why some crazy type works.

[0]: https://github.com/sindresorhus/type-fest/blob/2f418dbbb6182...
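To illustrate, tools like tsd let you "unit test" a type by asserting it resolves the way you expect (a sketch; `Simplify` is a real type-fest type, the rest is made up):

  import { expectType } from 'tsd';
  import type { Simplify } from 'type-fest';

  type Input = { a: string } & { b: number };

  // This fails to compile if Simplify<Input> ever stops resolving
  // to { a: string; b: number }
  expectType<Simplify<Input>>({ a: 'hello', b: 1 });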


Happened to see this thread on HN (disclaimer: I'm an engineer at Hex) - very cool project, jwithington!

Wanted to take a second to address your comment about performance, lqet. All your points are absolutely fair - I personally apologize for some of those layers of divs and some of that JS as they're definitely my fault!

For context, Hex lets you write Python and SQL code in a notebook-esque format and then create apps from that to share across the web. So there's actually quite a breadth of functionality we need to support under the hood that adds to frontend complexity. We also revamped our whole app-building experience recently, so there's a couple straggling bugs (like the text selection one you mentioned, whoops!).

But I totally agree with you - thinking about all that JS makes me wince a little, haha, as we definitely care and want to improve frontend performance. We plan to make better use of code-splitting and lazy-loading of JS so that the frontend code for more complex apps is only pulled in if/when necessary. (We also want to work on building better tooling to make analysis of code-splitting effectiveness easier - we've found that a lot of existing webpack bundle analyzer tools don't provide enough visibility for our use cases. Maybe an open source project for us one day!) And we want to decrease over-the-wire data size and reduce necessary network calls so you can get a faster initial load. We're a small team, so we can't make promises about exactly when this will all happen, but hopefully with these changes and other improvements things will feel a bit snappier someday soon.
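(If you're curious, the lazy-loading piece is roughly the standard React.lazy approach - a sketch with made-up names:)

  import React from 'react';

  // The JS chunk for ChartView is only fetched the first time it renders
  // (Spinner and './ChartView' are made-up names)
  const ChartView = React.lazy(() => import('./ChartView'));

  function App() {
    return (
      <React.Suspense fallback={<Spinner />}>
        <ChartView />
      </React.Suspense>
    );
  }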


In addition to being essentially a combined "filter" and "map", it's also a "better" filter than filter itself in TypeScript, in that it narrows types much more ergonomically[0].

In TypeScript, you might have an array of multiple types (e.g. `Array<A | B>`), and use a `filter` call to only keep the `A`s. However, in many situations TypeScript can't figure this out and the resulting array type is still `Array<A | B>`. However, when you just use `flatMap` to do nothing more than filtering in the same way, TypeScript can determine that the resulting type is just `Array<A>`. It's a bit unfortunate really - `filter` is faster and more readable, but the ergonomics of `flatMap` type-wise are so much nicer! Just some interesting trivia.

[0]: https://github.com/microsoft/TypeScript/issues/16069#issueco...
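Edit: a quick sketch of the difference, with placeholder types A and B:

  type A = { kind: 'a'; value: number };
  type B = { kind: 'b' };

  declare const items: Array<A | B>;

  // filter: without an explicit type guard, the element type stays A | B
  const viaFilter = items.filter(item => item.kind === 'a');
  // viaFilter: Array<A | B>

  // flatMap: returning [item] or [] lets TS infer the narrowed type
  const viaFlatMap = items.flatMap(item => (item.kind === 'a' ? [item] : []));
  // viaFlatMap: A[]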


I wonder if it is possible to add a feature to Typescript to help with this:

You could potentially add a syntax for type-guard function types, then add a signature to filter that accepts a type guard and returns an array of the guarded type.

Shouldn't be too much of a stretch given that we have type guards.

The syntax is a bit annoying... should be something like filter<A, B>(cb: A => A is B)

:/


You can use a type guard[1] as an argument to Array.filter, but the function has to be explicitly typed as such.

I don't know why, without this weird workaround, the type isn't narrowed in Array.filter like it is in if statements.

  const array: (number | string)[] = [];
  
  const mixedArray = array.filter(value => typeof value === 'string');
  // mixedArray: (number | string)[]
  const arrayOfString = array.filter((value): value is string => typeof value === 'string');
  // arrayOfString: string[]
This example in Typescript playground: https://www.typescriptlang.org/play?#code/MYewdgzgLgBAhgJwXA...

[1]: https://www.typescriptlang.org/docs/handbook/advanced-types....


Oh so there is an overload!

  filter<U extends T>(pred: (a: T) => a is U): U[];

Additionally, getting TS better at inferring type guards is an open issue (literally): https://github.com/microsoft/TypeScript/issues/38390


Wouldn't recommend doing this - if the original array is of significant length this'll get quite slow, because `acc.concat` has to create a brand new, slightly longer array on each iteration. Better to just use `push` like you suggested before and then return the array if you want to use `reduce`.
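In other words:

  // O(n): mutate the accumulator you own, then return it
  const doubledOdds = [1, 2, 3, 4, 5].reduce((acc, n) => {
    if (n % 2) acc.push(2 * n);
    return acc; // O(1) per iteration, unlike acc.concat(...)
  }, []);
  // doubledOdds: [2, 6, 10]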


Yes, of course, that's why I used `push` at first.


Use the comma operator: (acc.push(2*n), acc) will return acc. Or e.g.

  [1, 2, 3, 4, 5].reduce((acc, n) => (n % 2 ? acc.push(2*n) : null, acc), [])


If you're just iterating through the array and mutating an object on each iteration, just use a for loop.


Obviously you can alternatively write:

  function doubleOdds(input) {
    let output = [];
    for (let i = 0; i < input.length; ++i) {
      let n = input[i];
      if (n % 2) output.push(2*n);
    }
    return output;
  }
But in some circumstances the other style can be more convenient / legible. The immediate question was about pushing to an array and then returning the array, for which the comma operator can be handy.


No argument that the comma operator is a neat trick when you need it.

FWIW, it's 2022:

  const output = [];
  for (const n of [1, 2, 3, 4, 5]) {
    if (n % 2) output.push(2 * n);
  }


Minority opinion: please `let` your mutable references. I know `const` doesn’t signal immutability, but we as humans with eyeballs and limited attention span certainly benefit from knowing when a value might change at runtime.


Disagree: virtually everything in JS is mutable, so this almost means "never use the `const` keyword". Pretending that the `const` keyword means something that it doesn't makes things harder for my limited human mind to understand, not easier. Plus using `let` inappropriately makes my linter yell at me, and I usually like to just do whatever my linter tells me.

Anyway, I use TypeScript, so if I really want to assert that my array is immutable (as immutable as stuff in JS-land gets anyway) I just write:

  const input: readonly number[] = [1, 2, 3, 4, 5];
or even

  const input = [1, 2, 3, 4, 5] as const;


I realize I could be clearer in what I’m asking for: please use const when you use reference types as values, and use let when you intend to mutate the reference. Using const and then changing a value is certainly allowed but it’s confusing and it’s missing an opportunity to signal in the code where changes might happen.


I `readonly` and `as const` everything I possibly can. I do know that const doesn’t mean immutable, as I said, but I think it should and I think there’s value in establishing the idiom even if it’s not currently adopted. Because otherwise const basically means nothing unless you’re mutating everything already.


This is a great article and I agree with it fully.

The argument that a lot of popular React voices have made, "React is fast and it's prematurely optimizing to worry about memoizing things until a profile shows you need it", has never rung true with me. First and foremost, there's a huge time cost to figuring out exactly which spots need optimization, and there's also an educational cost to teaching less experienced engineers how to correctly identify and reason about those locations.

There are only two reasonable arguments for not using `memo`, `useMemo`, and `useCallback`. The first is that it decreases devx and makes the code less readable. This one is true, but it's a very small cost to pay and clearly not the most important thing at stake, since the net effect on readability is slight. The second argument is that the runtime cost of using these constructs is too high. As far as I can tell, nobody has ever done a profile showing that the runtime cost is significant at all, and the burden of proof lies with those claiming the overhead is significant, because profiling an app doesn't typically show that it is.

So, given that the two possible reasons for avoiding `memo`, `useMemo`, and `useCallback` are not convincing, and the possible downsides for not using them are fairly large, I find it best to recommend to engineering teams to just use them consistently everywhere by default.
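Concretely, "use them consistently everywhere" looks something like this (a sketch; the names are made up):

  import React, { useState, useCallback, useMemo } from 'react';

  const Row = React.memo(function Row({ item, selected, onSelect }) {
    return (
      <li onClick={() => onSelect(item.id)}>
        {selected ? <b>{item.label}</b> : item.label}
      </li>
    );
  });

  function List({ items }) {
    const [selectedId, setSelectedId] = useState(null);
    // Stable identity, so memoized Rows skip re-rendering when List renders
    const onSelect = useCallback(id => setSelectedId(id), []);
    // Recomputed only when `items` changes
    const sorted = useMemo(
      () => [...items].sort((a, b) => a.label.localeCompare(b.label)),
      [items]
    );
    return (
      <ul>
        {sorted.map(item => (
          <Row
            key={item.id}
            item={item}
            selected={item.id === selectedId}
            onSelect={onSelect}
          />
        ))}
      </ul>
    );
  }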


I've always thought of "premature optimisation" as optimising something that's not your "hot path". If there's no clear hot path, everything is the hot path, and small optimisation gains everywhere are the only thing you're going to get. So at this point, it's not premature.

You could also rewrite your code so that there is a clear hot path - but in this case that hot path seems to be React rendering itself, and that's optimised by using memo and avoiding it completely.


The death from a thousand papercuts.

I'm not terribly convinced by memoization though. You're using extra memory, so it's not free optimization. We have Redux memoized selectors everywhere. I can't help but wonder how much of that is effectively a memory leak (i.e. a cached result that's never used more than once). Granted, components are a bit different.

I always do cringe when I see a lint rule forcing you to use a spread operator in an array reduce(). It's such a stupid self-inflicted way to turn an O(N) into an O(N^2) while adding GC memory pressure. All to serve some misguided dogma of immutability. I feel there is a need for a corollary to the "premature optimization is the root of all evil" rule.
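To be concrete, the pattern I mean versus the cheap version (a sketch):

  const items = [{ id: 1 }, { id: 2 }, { id: 3 }];

  // O(N^2): every iteration copies the entire accumulator built so far
  const byIdSlow = items.reduce((acc, item) => ({ ...acc, [item.id]: item }), {});

  // O(N): the accumulator is local to this reduce, so mutating it is safe
  const byIdFast = items.reduce((acc, item) => {
    acc[item.id] = item;
    return acc;
  }, {});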


> I always do cringe when I see a lint rule forcing you to use a spread operator in an array reduce(). It's such a stupid self-inflicted way to turn an O(N) into an O(N^2) while adding GC memory pressure. All to serve some misguided dogma of immutability. I feel there is a need for a corollary to the "premature optimization is the root of all evil" rule.

I think a rule of "don't try to use X as if it was Y" would be reasonable. I love immutability, but the performance cost in JS is really high. Many people are fine with using Typescript to enforce types at compile time and not at runtime. Maybe many people would be fine with enforced immutability at compile time (Elm, Rescript, OCaml, ...) and not runtime?


How could you not have a hot path? You're saying that you've measured actual usage and discovered that each thing happens to be called exactly the same number of times? That strikes me as extraordinarily improbable.


That's not exactly it. It's more of a "If you have nothing that takes more than 1% of your resources, no single optimisation can get you more than a 1% reduction in your resources". That seems to be how most web apps are: you parse a little bit of HTTP, a little bit of JSON, you validate a few things, you call the database, that does a few things too, you have a bit of business logic, you call the database again, then have a bit of glue code here and there, and finally respond to the user with a little bit of HTTP and maybe some HTML, maybe some JSON.

If that's how your app works and nothing can be optimised significantly on its own, that's usually where you can make big gains in performance by changing a big thing. One of these big things might be to put a cache in front of it, because a cache hit will be way faster than responding again to the same request. Another could be to change language - for example, from Python to Go. Since Go is (most of the time) a bit faster at everything, you end up being faster everywhere. Or even from Python to PyPy, a faster implementation. Another could be redesigning your program so that you have one single obvious hot path, and then optimising that.

That seems to be the case for them here: no component is taking all of the resources, but by using memo everywhere, all of them use fewer resources, which leads to a good reduction in resource usage overall.


It seems to me you're being pretty breezy about readability. At most places, developer time is by far the most expensive commodity, and the limiting factor in creating more user value.

In particular, bad readability is one of the sources of a vicious circle where normalization of deviance [1] leads to a gradual worsening of the code and a gradual increase in developer willingness to write around problems rather than clean them up. Over time, this death by a thousand cuts leads to the need for a complete rewrite, because that's easier than unsnarling things.

For throwaway code, I of course don't care about readability at all. But for systems that we are trying to sustain over time, I'm suspicious of anything that nudges us toward that vortex.

[1] https://en.wikipedia.org/wiki/Normalization_of_deviance


I don't disagree with you on readability being important or on the value of developer time. It's just that the marginal costs of `memo`, `useMemo`, and `useCallback` are quite low. They don't add cyclomatic complexity, they don't increase coupling, they can be added to code essentially mechanically and don't carry a large cognitive overhead to figure out how to use, etc.

The main downsides are that they take slightly longer to type and slightly decrease the succinctness of the code. And then there are a few React-specific complexities they add (maintaining the deps arrays and being sure not to use them conditionally) but these should be checked by lint rules to relieve developer cognitive load.

Of course I'd rather not have these downsides, but in the end, it's still much less developer overhead than having to constantly profile a large application to try and figure out the trouble spots and correctly test and fix them post-hoc. And it means users are much more likely to get an application that feels snappier, doesn't drain as much battery, and just provides a more pleasant experience overall, which is worth it imo.


> has never rung true with me.

Yeah, me neither. I'm seeing first-hand a "large" (but probably not Coinbase-large) webapp dying by 10 thousand cuts.

The "you shouldn't care if it rerenders" components are, together, affecting performance. Going back and memoizing everything would be a nightmare and not a viable business solution. Rewrite everything from scratch is also not viable. So we have to live with a sluggish app.

At the same time, memoizing everything does make your code unreadable.

Honestly, it's a mess. I only accept working with this kind of stuff because I'm very well paid for it.

On my personal projects I stay far away from the Javascript ecosystem, and it's a blessing. Working with Elm or Clojurescript is a world of difference.

Clojurescript's reframe, by the way, uses React (via Reagent) and something somewhat similar to Redux, without having any of the pitfalls of modern JS/React.

I can write a large application and ensure that there are no unnecessary rerenders, without sacrificing readability and mental bandwidth by having to memoize everything.

The conclusion I have, which is personal (YMMV) and based on my own experience, is that modern JS development is fundamentally flawed.

Apologies for the rant.


> The conclusion I have, which is personal (YMMV) and based on my own experience, is that modern JS development is fundamentally flawed.

So because web developers using a particular UI library can debate one aspect of using the library, modern JS development is fundamentally flawed unless one transpiles from Elm or ClojureScript?


I think this is a great proposal and a huge step in the right direction for JS. I am curious though, is there a reason not to just essentially duplicate the Joda[0]/Java[1]/ThreeTen[2] API? As far as I understand, they are generally considered a gold standard as far as datetime APIs.

Is it too Java-y that it wouldn't make sense to port to JS? Are there copyright implications?

The JS Temporal proposal _does_ as far as I can tell, share many of the underlying fundamental concepts, which is great, but then confusingly has some types, such as `LocalDateTime`, which mean the exact opposite of what they do in the well-known Java API [3].

There is still discussion going on about these details, but from my perspective it seems like the best thing would be to just copy the Java naming conventions exactly.

[0]: https://www.joda.org/joda-time/

[1]: https://docs.oracle.com/javase/8/docs/api/java/time/package-...

[2]: https://www.threeten.org/

[3]: https://github.com/tc39/proposal-temporal/issues/707


This. They already copied the crappy date API from Java. Why not copy the good one too?


I don't usually laugh at HN comments, but this is pretty good. And it's true. Joda-Time and its near relatives are just genuinely good. So much effort has gone into addressing so many edge and corner cases that it seems like a shame to not just tuck all that work under one's arm and steal it.


I assume it doesn't help that Oracle is arguing in court that APIs should be copyrightable.


There is already a JS port of Joda[1] which works well and, crucially, uses all the same names and concepts. Could we just replace the Temporal proposal with this and save a lot of work and confusion?

[1] https://js-joda.github.io/js-joda/



+1 to going off of the Joda API. This is a very well-thought-out datetime API with a fluent interface for building date/time objects in an immutable way. Just doing this would be a huge step forward for JS.


I find the Java 8 java.time API [1] easier to understand than Joda Time [2] when working with timezones. In particular, the OffsetDateTime and ZonedDateTime classes in java.time seem well-designed and easy to use. The equivalents in Joda Time are harder for me.

[1] https://docs.oracle.com/javase/8/docs/api/java/time/package-...

[2] https://www.joda.org/joda-time/apidocs/index.html


Agreed. Joda had the right abstractions (instants, durations, etc) but the class hierarchy for them was unnecessarily complex. A lot of this complexity comes from allowing the abstractions to be either mutable or immutable.

For example, `ReadableInstant` [1] in Joda implements 3 interfaces and has 7 subclasses. And really, what is the difference between `AbstractDateTime` and `BaseDateTime`? Whereas `Instant` from java.time [2] is an immutable value type and I haven't found it lacking in any respect.

On the whole java.time has struck me as extremely well designed (after coming from Python and previous date and time libraries in Java) and I think it would behoove other languages to liberally copy its design.

[1] https://www.joda.org/joda-time/apidocs/org/joda/time/Readabl...

[2] https://docs.oracle.com/javase/8/docs/api/java/time/Instant....


Considering that JodaTime was more or less the reference for java.time, I would be very disappointed if java.time was worse and didn't get rid of JodaTime's design flaws.


> Considering that JodaTime was more or less the reference for java.time

It would probably be more correct to say that java.time is Joda version 2. Colebourne, the original author of Joda, was also one of the leads on JSR-310, and very much intended that 310 learn from the mistakes of Joda.


> I am curious though, is there a reason not to just essentially duplicate the Joda[0]/Java[1]/ThreeTen[2] API?

They're definitely an influence, just not the only one.

> LocalDateTime

That's a strawman proposal that's essentially user feedback and not a part of the API. Not yet, at least.


> The JS Temporal proposal _does_ as far as I can tell, share many of the underlying fundamental concepts, which is great, but then confusingly has some types, such as `LocalDateTime`

I can't find a mention of LocalDateTime in the Temporal docs, did you mean something else?


It's a current WIP idea for the spec: https://github.com/tc39/proposal-temporal/pull/700

Figuring out a name is still part of the ongoing discussion, so this specific case of `LocalDateTime` isn't a huge deal, and I might have misrepresented things slightly in my original comment, sorry! But I do think the overall point still stands - that it might be best to just use the same names and terminology as Java does.



Joda Time, like Python's datetime and a lot of other date/time libraries, has a date/time class that includes a timezone. Temporal keeps that separate. I think it's a different enough design that Joda Time's popularity isn't enough of a justification for adopting its API.


It has a date/time class that doesn't include a timezone too...


Hooks are great for simple use cases like `useState`, `useContext`, some uses of `useRef`, etc. and can make the code easier to read and reason about (while conveniently working very well with TypeScript).

The rules do start to get really tricky though with complex use cases of `useEffect` and multiple levels of nested hooks, and implementation problems are often not easy to spot or debug.

Dan Abramov has written a lot about the philosophy of hooks[0] at his site overreacted[1]. I'd love to see a 'retrospective' write-up from him or another React team member about what they think the successes and failures of hooks have been so far, and whether there are any changes planned for the future!

[0]: https://overreacted.io/why-isnt-x-a-hook/, https://overreacted.io/algebraic-effects-for-the-rest-of-us/, https://overreacted.io/a-complete-guide-to-useeffect/

[1]: https://overreacted.io/


I've found js-joda is a good JS datetime library: https://github.com/js-joda/js-joda
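A small taste of the API (assuming the '@js-joda/core' package; it mirrors java.time closely):

  import { LocalDate, ChronoUnit } from '@js-joda/core';

  const today = LocalDate.now();
  const release = LocalDate.parse('2022-12-25');

  // Everything is an immutable value type: plusDays returns a new LocalDate
  const dayAfter = release.plusDays(1);
  const daysUntil = today.until(release, ChronoUnit.DAYS);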


Interesting, thanks!

I've used Joda and java.time a lot, and I have a very high opinion of both.

