> React performance concerns in the real world are typically measured in, at worst, hundreds of milliseconds.
I disagree. Try loading up a library heavy site (i.e. React plus a half dozen associated helpers for state, UI and whatever, which is pretty common) on an old or cheap Android device. It can take multiple seconds before the page is fully loaded. Even once the libraries are loaded, React sites often involve parsing a giant lump of JSON before you can do anything, that’s particularly CPU intensive and takes time on low end devices.
> Yes, there are a few specialized domains where this will matter. No, you probably don’t work in one.
Again: I beg to differ. Anyone hoping that their site will rank well on Google needs to factor in site performance via core web vitals. Time and time again I’ve seen React-based sites perform horribly and it’s very difficult to dig your way out of the hole after the fact.
I actually think most of the problem isn’t React itself, it’s the ecosystem and the philosophy that surrounds it so often. You’ll have React, you’ll have some extra state management library on top, you’ll have some hideous CSS in JS bulk on top because no one wants to actually learn CSS… it’s all prioritisation of developer experience over user experience. And it’s industry standard these days.
> I disagree. Try loading up a library heavy site (i.e. React plus a half dozen associated helpers for state, UI and whatever, which is pretty common) on an old or cheap Android device. It can take multiple seconds before the page is fully loaded. Even once the libraries are loaded, React sites often involve parsing a giant lump of JSON before you can do anything, that’s particularly CPU intensive and takes time on low-end devices.
The entire site may take multiple seconds to load, sure, but it's quite rare that this is a React issue. Typically, the real issue is something like the website making a bunch of API requests in serial rather than parallel. For instance, the issue of JSON parsing that you cited is actually an SSR issue; that's entirely orthogonal to React, and you can make the same error in any other popular web framework.
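To make the serial-vs-parallel point concrete, here's a minimal sketch with made-up fetchers (`fetchUser`, `fetchPosts`, `fetchPrefs` are hypothetical stand-ins for real API calls):

```javascript
// Hypothetical fetchers standing in for real API endpoints (~50ms each).
const fetchUser = () => new Promise((r) => setTimeout(() => r("user"), 50));
const fetchPosts = () => new Promise((r) => setTimeout(() => r("posts"), 50));
const fetchPrefs = () => new Promise((r) => setTimeout(() => r("prefs"), 50));

// Serial: each await blocks the next, so total latency is the SUM (~150ms).
async function loadSerial() {
  const user = await fetchUser();
  const posts = await fetchPosts();
  const prefs = await fetchPrefs();
  return [user, posts, prefs];
}

// Parallel: all three requests start immediately, so total latency is the MAX (~50ms).
function loadParallel() {
  return Promise.all([fetchUser(), fetchPosts(), fetchPrefs()]);
}
```

Note that neither version involves React at all; the framework just renders whatever data arrives, however slowly you chose to fetch it.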
> I actually think most of the problem isn’t React itself, it’s the ecosystem and the philosophy that surrounds it so often. You’ll have React, you’ll have some extra state management library on top, you’ll have some hideous CSS in JS bulk on top because no one wants to actually learn CSS… it’s all prioritisation of developer experience over user experience. And it’s industry standard these days.
I think you're saying "React is slow" when you mean to say "the entire frontend ecosystem is slow". To which I respond... sure? None of this is React-specific. If you don't use React, you'll still have to use state management (or manage it yourself; have fun!), figure out something to do with CSS, etc, etc.
I worked at New Relic and focused specifically on improving page load performance for one of the products using our RUM data. Even after combining/parallelizing API calls then embedding the data into the page, I found there was still an average 3-6 second page load cost using React vs an already rendered HTML page, which was nearly instant. Not huge, but frustrating to discover the constraint when it was my task to improve it and our company was trying to be an example of great performance.
If your page is taking 3-6 seconds to load, you are either doing something wrong, or not understanding where the problem is and blaming React incorrectly.
My React app is statically exported and cached by the browser. There is only a single graphql call that fetches all necessary data. Backend is my only bottleneck. Everything else happens in less than 10ms.
And yes it is entirely possible to shoot yourself in the foot with any language and any framework. Including React.
That depends heavily on the device and network being used for testing. In my experience 3-6 seconds isn't out of the question by any means for a decent sized react SPA loaded on an older device and mobile network.
That isn't necessarily specific to react, any client-side rendered framework will have issues. React tends to be noticeably worse in my comparisons though, and any client rendering is going to be worse than server rendering your HTML.
API requests at page-load are definitely going to lower the page speed score. No API requests should happen at all, ideally, and all script and CSS to render everything "above the fold" should be loaded in-line. Nothing that is visible "below the fold" should ever run or load until the page is scrolled down by the site visitor. Only the bare-minimum script parsing that is required for the content "above the fold" should happen. Sure you can load scripts in-line for stuff below the fold, but make sure it doesn't actually get parsed by the browser until that feature is likely to be visible on the screen.
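A toy sketch of that "load now, parse later" idea: keep the script source around as an inert string (cheap to ship inline), and only compile it when the feature is about to become visible. In a browser the trigger would be something like an IntersectionObserver; here a plain function call stands in for it, and `registerLazyScript`/`activate` are made-up names for illustration:

```javascript
// Script sources are stored as plain text, so the browser/engine pays no
// parse or compile cost for them at page load.
const lazyScripts = new Map();

function registerLazyScript(name, source) {
  lazyScripts.set(name, source); // stored as a string, not parsed yet
}

function activate(name) {
  // Compilation cost is paid only now, when the feature is about to be used.
  // In a browser you'd call this from a scroll/visibility handler.
  const run = new Function(lazyScripts.get(name));
  return run();
}

registerLazyScript("carousel", "return 'carousel initialized';");
activate("carousel"); // -> "carousel initialized"
```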
It doesn't matter what library or framework is used - even jQuery can score 100% page speeds on Google Lighthouse. It really only depends on following every single nitpicky thing that Google Lighthouse recommends, and finding a solution for it. I took a site (actually a few thousand sites) from scoring about 5 to 35 out of 100 on Google Lighthouse (depending on the customizations) up to 100/100. It took a lot of work, but the clients are happy now.
Where React doesn't do well is with lots of DOM elements. In another project I created a complex web application that had about 3000 div elements with drag-and-drop interface and all kinds of stuff going on. Well React couldn't handle it. The browser started crashing due to memory issues if the page was open longer than 20 minutes or so, it slowed to a crawl and eventually froze. I ended up switching that one component to a canvas based solution (Konva) and it solved the problem completely. I still use React for all the other simple UI stuff, but I learned my lesson about what React is really good at and what it's not.
> Nothing that is visible "below the fold" should ever run or load until the page is scrolled down by the site visitor.
On the other extreme, completely deferring any loading of 'below-the-fold' content until it's visible can also have horrendous consequences, if that loading involves downloading any external resources. Not every visitor can just make further requests near-instantly, and it's those RTTs that really slow pages down, in my experience. Excess API calls are just one (very common) source of excess RTTs.
The obvious compromise would be to load 'below-the-fold' content ASAP once all 'above-the-fold' content is finished, unless it's unacceptably heavy for the target device. (Then again, some people still won't be happy: I recently talked to one person who derided the load times of a certain blog, but I found that all of its meaningful content was loaded and visible very quickly: they were only taking issue with the Disqus comment widget at the bottom of the page.)
> Not every visitor can just make further requests near-instantly
We use a "loading" spinner. People know they have a shitty computer, and they'll wait an extra second instead of spending $1000 on a new computer. That kind of page speed problem isn't one we could ever fix, and it's something the user is typically well aware of being on their end, because every site is slow, not just ours.
> completely deferring any loading of 'below-the-fold' content until it's visible can also have horrendous consequences
We use a combination of lazy loading and SSR (not React SSR, it's hosted on a custom CMS). Content is SSR to maximize SEO. In some places we'll only render the first 3 or 4 items and then lazy-load the rest if it's a lot of data. Too many DOM elements at page load is also bad for the page speed score. Javascript is lazy-parsed, meaning the text of the script can be loaded in-line, but not parsed until the page is scrolled and the content is in view. Loading the text of a script isn't what slows the page down, it's the parsing of larger scripts that causes page speed issues. When the page scripts can be parsed in smaller chunks, the scrolling experience can be more fluid and the page speed test is satisfied. All images below the fold are lazy loaded. All 3rd party widgets are lazy loaded where possible, because they all suck and they mess with page speed pretty badly. There are lots of other tricks to get to a perfect 100 score on Google Lighthouse. Fortunately for us, "below the fold" is generally easy to accomplish across all of our sites, by the nature of the type of sites we are building. It doesn't work for all kinds of sites, but for ours it works very well.
You obviously didn't read what I wrote or understand it. I specifically said this:
> Sure you can load scripts in-line for stuff below the fold, but make sure it doesn't actually get parsed by the browser until that feature is likely to be visible on the screen.
I specifically said you could load a script in-line for stuff "below the fold" as long as the browser doesn't parse it until it's used. That's very different than doing an HTTP request for a script file while scrolling.
But you know what? Forget it. I'm done trying to explain things to people who think they already know it all and didn't even understand my original comment.
> users can wait an extra second [every time they scroll to load new content]
which is exactly the bad experience the commenters above are talking about.
They understood your comment, and they disagree. This is not the good advice you think it is, unless your main goal is to score 100 on lighthouse for SEO purposes, not UX.
The loading spinner is specifically for people with shitty internet connections when loading dynamic data after the initial page load. And you're completely misunderstanding practically everything I wrote and replacing what I wrote with your assumptions. Go ahead, it's the internet, bash away all you want. But I know what I did, I know it works, and I know it's not janky at all - it's your assumptions that are wrong. The advice is good, your understanding of it is not. You don't need to reply, I won't be trying to explain any of this any further just so you can misunderstand everything I wrote, again.
People scroll pages to skim. This also sounds like it might break CTRL-F.
If I can't skim your page instantly I will more than likely churn my visit.
Doesn't matter that I have a good computer on a 1Gbps connection, you ruined my experience. I'd rather wait 1 sec for the full page to load, than wait a series of 100msecs on what should've been a fully loaded page to actually load at an arbitrary point in time.
Maybe you missed the part where I said we're using SSR for content? That solves the CTRL-F problem easily.
You (and a lot of others here) are making a ton of wrong assumptions, imagining things I never said, and making up your own problems that don't exist in my code just to try to bash me, without even really understanding anything that I wrote in my comment. This entire thread sucks and is full of low-quality trolls. I've been doing front-end for ~30 years, I know what I'm doing. Don't bother replying, I won't be responding to further wrong assumptions and bashing.
How do you know that was a React issue and not a memory leak, or an error in your own code perhaps?
This smells of a memory leak, particularly if you forgot to add a dependency to a hook for example, but there is plenty of non react related code that could go wrong with drag and drop interfaces too
Not using any hooks, it's simple old-school react started from create-react-app, then converted over to Preact, and then updated to use latest webpack and all libs updated to latest versions, so this project has gone through many changes and I have no doubt if I started it new today from the ground up things would be different. I think the problem was with the drag-drop library, react-dnd I think. But the point is that switching from DOM to a canvas solution fixed all the problems I was having.
>it’s all prioritisation of developer experience over user experience.
And I have been saying this for well over a decade. Hopefully it catches on.
I have always asked for examples of sites done in React where I couldn't tell it was done in React or another JS front end. The "at worst, hundreds of milliseconds" is precisely why web apps never felt as good as native apps. And if we collectively can't make web apps good, why not go back to interactive web pages that are mostly jank free? I say mostly because it is still not at native-app level.
Recently submitted [1]: why does liking a tweet re-render the entire screen? Done using React-scan [2].
> why does liking a tweet re-render the entire screen?
They're probably using Redux, which has been the go-to for state management in React for nearly a decade. I've always been averse to it because it's implemented using contexts and triggers those full re-renders - it was designed during the class-based components era and relied on people actually implementing shouldComponentUpdate(), which isn't really a thing with modern function-based components.
If they were using modern hook-based state management like Zustand or Valtio, full rerenders wouldn't happen.
Redux uses context but does not do full re-renders except if you are using it wrong. It does a shallow compare on the object resulting from your selector to decide whether or not to re-render a component.
That being said, none of the React projects I've had the chance of working on in the last 5 or 6 years has used Redux, so stating that it's been the go-to state management library for nearly a decade kind of sounds weird to me.
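Concretely, the shallow-compare decision described above can be sketched in plain JS (this is a toy illustration of the idea, not react-redux's actual code):

```javascript
// Toy sketch of the re-render decision: after a store update, re-run the
// component's selector and re-render only if its result actually changed
// (reference equality by default).
const shouldRerender = (prev, next) => !Object.is(prev, next);

const state1 = { counter: 1, user: { name: "ada" } };
const state2 = { ...state1, counter: 2 }; // counter changed, user object reused

// Narrow selector: the component only reads `user`, which is untouched,
// so it does not re-render when `counter` changes.
const selectUser = (s) => s.user;
shouldRerender(selectUser(state1), selectUser(state2)); // false

// "Using it wrong": the selector builds a fresh object on every call, so the
// result is never reference-equal and the component re-renders on every update.
const selectAll = (s) => ({ counter: s.counter, user: s.user });
shouldRerender(selectAll(state1), selectAll(state2)); // true
```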
My Core-i5 24GB of RAM desktop PC that's 8 years old can pretty much do anything I used to do in 2016, except smoothly run React websites, some of which are verging on becoming unusable.
This is nuts. People should stop using their M3 Pro Macbook as a benchmark for what's 'fast enough.'
“Works on my machine” in general is a huge problem in dev culture.
I understand the desire to work on machines with lots of power and memory with a nice high res display and fiber connection. It makes the experience a great deal nicer. Even so, we should be keeping old and/or thoroughly mediocre devices around and testing on them periodically so we don’t succumb to illusions of good performance.
It’s just embarrassing that any image-scrolling site will eventually eat all the memory and crash, and you lose your place. It used to kill me back in the Tumblr days, and React perpetuates it.
>Try loading up a library heavy site (i.e. React plus a half dozen associated helpers for state, UI and whatever, which is pretty common) on an old or cheap Android device.
that was the case in 2015, when entry-level Android devices were quite slow; nowadays a cheap $200 Android phone has at least 4-6GB of RAM and an eight-core processor.
if that isn't enough for your react powered site, you are doing something very wrong.
Keep in mind that React is very single-threaded (as is nearly everything in JavaScript). These phones might have eight cores, but they tend to have the same single-thread performance as 10+ year old iPhones.
The performance of client-side web applications asymptotically approaches load times of "at most a few hundred milliseconds" on the high end phone models owned by developers.
Which React sites? Are you sure you're not a victim of confirmation bias? How often do you check what framework a smooth and performant website uses?
Because demo sites are snappy on even low-end devices. It's not react itself that causes slowness, but all the other libraries/tracking/ads.
Which (correctly) doesn't even have react listed, only frameworks that (may) use react as a library.
So at least be specific what is problematic, is it a particular framework, a particular library, the whole idea behind react (v=f(s)) or what? Anything can be proven against something non-concrete.
I largely agree with your points, but wish it’d be easier to have server side rendering that’s not as tightly coupled to the rest of the system.
What almost always happens is that with a SPA the core business logic is more or less separate from the bits that are shown in the browser, so that if AngularJS gets deprecated you still have an API that you can use for your Vue/React/Angular rewrite.
However, if JSF becomes too much of a maintenance burden, the chances of me migrating to HTMX or a similar technology, even a full on SPA are pretty close to 0% when I have only a similar amount of time as in the case above.
Why? Because the previous implementation is going to be in the same codebase and like it or not, as more time passes and as more people work on the codebase, there will be coupling. Those technologies will intertwine to such a degree, that a clear cut replacement will be really difficult.
I guess what I’m pondering is why we don’t have an RPC/IPC component as a central part of Django, Rails, Laravel, JSF and all the others - so I could have server side rendering but separate repos (with separately managed dependencies) for the back end and the front end.
Maybe even deploy them as separate containers that talk to one another and have resources managed separately, so the old JSF permission check logic for rendering components in a deep tree cannot eat the CPU time that’s needed for some scheduled processes in the back end.
> I disagree. Try loading up a library heavy site (i.e. React plus a half dozen associated helpers for state, UI and whatever, which is pretty common) on an old or cheap Android device. It can take multiple seconds before the page is fully loaded. Even once the libraries are loaded, React sites often involve parsing a giant lump of JSON before you can do anything, that’s particularly CPU intensive and takes time on low end devices.
Try doing it on a flagship handset like a Pixel 8 Pro, and it's still a slow and miserable experience.
These disagreements almost always boil down to one side saying "some people use React to build content-oriented websites" and the other side saying "but most of us are building web applications".
In a typical web application use case (yes, there are exceptions) you're only rarely looking at usage from old or cheap Android devices. Occasionally you'll see a tablet, but even that's rare. The vast majority of the time your users are on either Chrome or Safari on a relatively modern Mac or Windows machine. In many cases the application loads once at the start of at least 10+ minutes of work, so initial load times rarely matter much.
If you're building a website that needs SEO, then yeah, React is completely the wrong tool, and you should just make a proper static site or hook your marketing team up with WordPress (or whatever takes its place once Matt is finished blowing it up). But those use cases, while common, represent a tiny fraction of total engineering effort across the industry, so most web developers here don't have the design constraints that make React a bad choice.
I would also add that even in that last scenario React can be a great tool, there are many solutions that use React but make fully static exports (Nextjs/astro/gatsby/etc).
Pre-rendering everything has downsides of course, but it doesn't get much faster than that.
I think it's also a misconception that there is such a clear divide between app and content-oriented websites. Even the most content heavy websites, news sites for example, are highly interactive these days. Show me a popular content heavy site that doesn't have some sort of log in, interactive content, comment section, filtering, live blogs, alerts, notifications, search, and similar.
You also have to take into account the advantages of pre-fetching links, and only fetching and replacing the content instead of doing full page re-renders. All of these great things you can do easily with React and a React framework of your choice. You can even go further and have a hybrid of statically exported pages, SSR, and some SPA like highly interactive parts, reusing the same exact components in all these strategies.
All to say: React can be a great choice for a lot of use cases, also for content heavy websites.
The top sites using react are all e-commerce or social sites. I would bet the majority of react sites and devs that use them are not building apps that require SPA or something like React. Which is the problem.
> I would bet the majority of react sites and devs that use them are [n]ot building apps that require SPA or something like React. [edited to restore what I assume is the intended meaning]
On what basis do you bet this, though?
Sure, you see sites that bug you by being slow in contexts where you'd choose something else, but most devs are working on sites and apps that you will never see, because you're not the target demographic.
I don't have survey data one way or the other, but anecdotally everywhere I've worked on an SPA has been a clear case where trying to do it in backend+JQuery would have been a huge failure.
But there is a wide gulf between backend + JQuery and SPA. The frustration often shown is people treating the extremes as the only options available. Having a use case for which backend + jQuery doesn't cut it doesn't require reinventing navigation state and history in JS, or loading every stat on the page via JSON. There are middle grounds.
What middle ground do you propose that's as efficient to get rolling as React in 2024? You talk about not reinventing state and history in JS, but at this point the reinvention is already done and React is the pragmatic choice that you pick when you don't want to reinvent the frontend.
There are a bunch of stacks that I prefer to work in for my own projects, but what I need at work is almost always the standard option that everyone is already used to.
OK, let me restate something from my previous comment explicitly: a site that’s concerned with SEO is not a “specialized domain”. It is very, very common.
Many sites have a two-tier application with a CMS SEO “landing page” and conversion funnel run by the marketing department, and a user login that takes you to a separate website where SEO is irrelevant.
So… a site that doesn’t care about SEO, is not indexed, and is purely for logged-in users is also very common.
I don’t really understand the disagreement here. I didn’t claim those sites are rare. I was specifically refuting the OPs claim that page load time only matters in a small number of specialized domains. I am saying it is very common for it to matter. It’s also very common for it to not matter, you’ll get no argument from me there.
With React Server Components, you can have your cake and eat it too, by sending only the necessary HTML to the browser (thereby having good performance and SEO) but also hydrating with more interactivity if necessary. And I'm not sure why you think CSS in JS means you don't learn CSS; I'm not sure how you'd use it otherwise... Either way, there are CSS in JS (TypeScript) libraries that compile down to regular CSS classes, like PandaCSS, so again, you can have your cake and eat it too.
React Server Components strikes me as React solving a React-caused problem with yet more React. Which is fine, I guess, if you’re already locked into the React ecosystem. But as someone that isn’t, looking at the whole proposition from the outside, it just screams vendor lock-in to me. There are too many devs out there only expert in the way React does things and can’t step outside of it. RSC is an additional crutch that allows this to continue but that doesn’t mean it’s healthy.
My main contention with the OP’s point was the assertion that React is almost always the right answer. One of my biggest bugbears about any web dev discussion (particularly on HN) is that everyone treats it as a one size fits all argument and it isn’t.
If you’re making a webapp with interactivity levels like Gmail has then React is a sensible choice. The page reloads very irregularly. But if you’re making something like a blog with lots of drive by visits and only small islands of reactive content IMO it’s the wrong choice.
Perhaps it is vendor lock in, but the concept as a whole is not that difficult to generalize, as other frontend frameworks have their own sorts of implementations of the concept. Phoenix in the Elixir world has LiveView, for example. It's just that React's is more seamless, especially with TypeScript, as you can run JS (TypeScript) on both the client and server, so that you have greater control over exactly which components you want to be on the client versus the server without having to do so manually if you were to use something like Rails or Django. Therefore, even if you are making something like blogs, you can still use React as essentially a templating language via JSX and also use React for interactivity, for example a comment form component.
> React Server Components strikes me as React solving a React-caused problem with yet more React.
This is not the case. RSC solves the hydration problem, in which hydration is profoundly expensive (larger bundle sizes, more client JS to parse and execute, and slower time to interactive), when most of the UI on any website can be non-hydrated. This also gives you the ability to write server only code (which as it would turn out, reduces sending third party deps to the client even more) for free with beautiful composability to client side hydration when you need it.
Everyone hydrates at some point. Maybe you write isomorphic javascript or maybe you render a rails or python app and sprinkle in some JS. RSC enables you to do this with complete composition and re-usability.
I'll put my money where my mouth is: RSC will continue to grow in adoption and its patterns will be adopted across many UI frameworks and libraries. This wasn't a solution in search of a problem, this was a large step forward in giving us more optionality as to how we architect websites.
> There are too many devs out there only expert in the way React does things and can’t step outside of it.
This is a weird ad hominem, attacking developers' skill instead of the actual technology. There are millions upon millions of React developers, and many of us have been building successful software for a long time and step outside of React every day.
> But if you’re making something like a blog with lots of drive by visits and only small islands of reactive content IMO it’s the wrong choice.
For some use-cases it is not the best choice, and for others it's the correct one. No one is hailing React as a one-size-fits-all solution, rather it remains a great balance that scales remarkably well to many needs.
That's technically true. Indeed, "isomorphism" has been a term of art for about 200 years.
More recently though — over the past decade or so — JavaScript enthusiasts have been using this word to describe code sharing, which isn't quite right.
It's been used to describe code that does A on server and B on client side, where A and B are deeply related but definitely not the same. It's not just code sharing, that's just a small part of it.
My argument is that it's a forced and silly misuse of the word.
I'm struggling to understand also how it came to be [ab]used in this context. To take an old Greek mathematics word and use it to mean something that it doesn't really mean? Why? Isn't that silly? Isn't it pretentious?
You literally cannot. You can bake two cakes. You can’t have your cake and eat it too.
Server and client rendering? You must concern yourself with both. The best frameworks will not perfectly abstract this for you. They can’t; it’s leaky. When the cracks show, it will be painful.
CSS-in-JS? I’ve used it and fought for it. Have you ever looked at the css output? That’s not a cake I’d want to eat. Compare it to a codebase with a well architected set of css and the markup and css actually do work for you. They are clarifying and reinforce structure. There are levels of productivity you gain based solely on things being clear and not convoluted that many people in our industry cannot recognize because they are fixated on the wrong thing. Whether that be what other people are using, or what gives dopamine hits (hot reloading, cool tech, etc)
> You literally cannot. You can bake two cakes. You can’t have your cake and eat it too.
It is a common saying, not to be taken "literally."
> The best frameworks will not perfect abstract this for you. They can’t, it’s leaky. When the cracks show, it will be painful.
Better than before with pure server side solutions like Rails or Django, however. I use RSCs and they work just fine, because you are using TypeScript on both the client and the server, meaning there are greater abstractions that can be leveraged.
> Have you ever looked at the css output?
Not sure what CSS in JS library you used but with something like Typestyle or PandaCSS, you write CSS but in JS objects, so the generated CSS is simply turning those objects into the CSS you already wrote, not sure why it would be any different.
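As a rough sketch of what "compiles down to regular CSS" means, here's a toy version of the build-time transform (illustrative only, not PandaCSS's or Typestyle's actual implementation; `toCssRule` is a made-up name):

```javascript
// Toy illustration of compile-time CSS-in-JS: a style object written in JS
// is emitted as an ordinary CSS class at build time, so no styling work
// happens in the browser at runtime.
function toCssRule(className, styles) {
  const body = Object.entries(styles)
    .map(([prop, value]) => {
      // camelCase property names become kebab-case, e.g. fontSize -> font-size
      const cssProp = prop.replace(/[A-Z]/g, (c) => `-${c.toLowerCase()}`);
      return `  ${cssProp}: ${value};`;
    })
    .join("\n");
  return `.${className} {\n${body}\n}`;
}

toCssRule("card", { fontSize: "14px", backgroundColor: "white" });
// -> ".card {\n  font-size: 14px;\n  background-color: white;\n}"
```

The output is exactly the CSS you would have written by hand, just authored as a typed object.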
I know what analogies are. I was extending yours. I was just saying that there are tradeoffs. You don’t think there are, and that’s fine. I can see them and I make my choices accordingly.
I never said there weren't tradeoffs, just that RSCs enable greater functionality than before, such that one can "have their cake and eat it too," but it's not meant to mean that there are only two cakes in this entire analogical universe. Other frameworks (ie, cakes) have their own tradeoffs.