There are only a few use cases where the new kind of SSR (with hydration) is worth it. An example is e-commerce sites, where you want the customer to see all your great products as soon as possible and then, a few seconds later, be able to interact with your site fluently to buy something. These kinds of scenarios, paired with low-end devices, are the only proper use case. And I consciously say "only proper" because SSR comes with downsides as well:
- SSR is always slower than static sites
- SSR is often slower than CSR - especially when using a small and fast framework like Solid, Svelte or Vue3
- When rendering on the edge and using a central API (e.g. for the DB), SSR is always slower than CSR when interacting with the site, because of the extra hop from your central API to the edge to the browser, instead of from the central API directly to the browser
- SSR is always more complex and therefore more fragile; however, this complexity is handled by the frameworks and the cloud providers
- SSR is always more expensive - especially at scale.
- Traditional SSR with HTML templates will scale your site much better, simply because traditional languages like Go, Java, or C# scale much better than Node.js rendering the site via JS
We owe the technology of the "new" SSR, and genius stuff like islands, to many very smart and passionate people.
Overall, this article is not balanced at all. It points out only some potential benefits. It is simply a marketing post for their product.
I don't think this is _always_ true. With SSR you only render the HTML server-side on the first page load. Any interaction after that is just the JSON payload being retrieved and rendered by a JS framework, which is faster than a complete rerender.
Your comparison is valid for first page loads. But usually people visit multiple pages on a website. And then SSR usually wins, though I agree the added complexity is usually not worth it for 100% static websites.
Usually people visit multiple pages? I think this is a 90-10 situation. For 90% of the sites I visit, I reach a web page through another website, either an aggregator or a search engine, and I almost never navigate that website. Those could be purely static for all I care.
The remaining 10%, where I stay 90% of the time, could benefit from a mostly static interface with a tiny amount of server-side rendering.
Have a look at Qwik, it’s a new framework from the author of Angular. It does SSR without the need for client-side hydration. It’s fast and immediately interactive.
I’m really hoping it gains some momentum because I’d love to use it in some client projects.
Author of Angular is a real red flag after that whole AngularJS -> Angular 2.0 "transition". I worked at a company that spent a bunch of money on contractors trying to get a big AngularJS app ported; they never fully completed it, and at the point I left it still had both versions running in the same app. What a nightmare.
I much prefer the htmx way of server-rendering parts of the page dynamically. It's also totally server-side agnostic, so we can use what we prefer: Clojure in our case.
HTMX has to be my favourite thing web-related. I never really got React and always found it a bear to set up and use, but HTMX and server-side rendering? Easy and extremely productive for a non-frontend guy like me.
I really hope it or something like it becomes popular long term.
I just started using htmx in a new personal project. I'm pretty excited to see how it goes. I'm doing a sort of back-to-basics stack with PHP, a simple classless CSS lib, and htmx. So far it's been a refreshing experience.
Not OP, but classless CSS frameworks are awesome. The idea is to keep it simple and use the appropriate HTML tags where they were generally meant to go, and the framework will theme the page to improve usability and add flair. I've developed some great little sites with no classes at all!
Obviously this approach has its limits, but it works well for proof-of-concept sites or sites that don't need to be very complex or dynamic. Just a sensible font size, nicer looking form elements, etc.
It's a CSS stylesheet you include in every page/view. You don't add any classes to your HTML and rely on the defaults of that stylesheet to style the semantic markup. If those defaults are good for your project, and you have some command of HTML elements beyond using divs and spans for everything, it saves a lot of time.
Yes indeed! The core aspect, however, is that your server is returning fragments of html that htmx places in the DOM extremely quickly. They have pretty good examples on their site illustrating some "modern" UI patterns.
As an example, you may have an HTML table on your page that you want to insert a new row into for some reason on, let's say, a button click. You place some attributes that htmx understands on your button, and it will fetch the TR HTML chunk from the server. You can imagine replacing all the rows for a paging click, etc.
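A minimal sketch of that pattern (the route, markup, and data shape here are made up, and a Deno-style server is assumed): the button carries the htmx attributes, and the server answers the resulting request with nothing but the table-row fragment for htmx to append.

    // Hypothetical markup served with the initial page; htmx reads the hx-* attributes:
    //   <button hx-get="/rows/next" hx-target="#rows" hx-swap="beforeend">Add row</button>
    //   <table><tbody id="rows"></tbody></table>

    type Row = { id: number; name: string };

    function renderRow(row: Row): string {
      // The server only returns the <tr> chunk, never a whole page.
      return `<tr><td>${row.id}</td><td>${row.name}</td></tr>`;
    }

    Deno.serve((req: Request): Response => {
      const url = new URL(req.url);
      if (url.pathname === "/rows/next") {
        const row: Row = { id: Date.now(), name: "New row" }; // stand-in data
        return new Response(renderRow(row), {
          headers: { "content-type": "text/html" },
        });
      }
      return new Response("Not found", { status: 404 });
    });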
It’s much faster than that in practice, but of course it comes down to how well your backend is written. I’ve been using HTMX lately and I can blink and miss the UI updates.
I wish I had numbers, but in my experience it’s far better than you’d expect. Basically take the length of a REST call you’d have to make anyway and add a few milliseconds for the rendering.
It won’t be the right choice in all cases, but it’s a great option in many.
By "chuck away" I mean all those processor cycles spent performing cryptographic verification... and then throwing away the result because you have to do it again on the next connect anyway.
And I have written a library that does the full SSL handshake/upgrade cycle off of tcp sockets, and managed my own private CA, so I'm not a total idiot.
How is that different than all the other CSR SPAs that do a REST call on every button press as well? They do almost the same amount of work, and will also “throw away” the results.
It's not, but there are server-side options (the topic of the OP) where it's a websocket. I believe htmx supports WebSockets too, but GP was only talking about RESTful round-trips.
You for sure wouldn't want to build a spreadsheet type app with htmx because of that aspect. Many other types of web apps can benefit from the simplification of the architecture, however. And like my paging example, many times you need to go to the server anyway. But sure, like anything, I wouldn't use htmx for every situation.
True, BizarreByte. I love how htmx lets you easily add animation indicators on actions, because many times it's otherwise too fast, due to the small chunks coming back and getting placed so quickly in the DOM.
What sort of slow backends do you work with? My PHP htmx application responds in ~30ms including network transmit. Even if you’re across an ocean it’s maybe 300ms.
How well does their example work over satellite internet with 1.2sec latency? How about my cell connection when T-mobile throttles me to 64Kbps for going over my data allowance? How about my sister's cell connection, as she is on an MVNO and deprioritized sometimes to 128Kbps, sometimes to 6Mbps, and sometimes it varies within a minute between those two?
FFS, people, learn to write proper software that does everything locally!
All of the environments you describe there sound to me like they would benefit from web apps that become responsive after an initial page load of less than 100KB, followed by ~10KB round trips to the server to fetch additional data.
As opposed to the >2MB initial page loads that have become so common with heavy React SPAs in exchange for the theoretical benefits of avoiding more page loads for further interactions.
Any app with dynamic data would still need to make some sort of HTTP request before rendering some view with that new data. I don’t really get your point.
My mentor always taught "Program as if it has to run on the far side of Mars."
Software that doesn't need the internet for its function shouldn't connect to the internet. Software that does need the internet should be able to operate under adverse network conditions.
The Voyager probes are 160 AU away, far past Pluto, with a round-trip latency of 37 hours. The radio transmitter onboard is only about 10x more powerful than a cell phone. The hardware has been in the cold vacuum and hard radiation of space for four and a half decades. In spite of this, NASA maintains active two-way communication with it today, and continues to receive scientific telemetry data from the outer solar system.
I don't expect web devs to design for deep space, but the core functionality of a website should still work for a rural user with a spotty satellite uplink. Don't go loading JavaScript or other resource calls until the basics are received. I still remember the days of Facebook being fully functional on a 2G cellular connection, using little or nothing more than static HTML and CSS (and that was before the magic tricks HTML5 can do).
And what kind of apps are you using? You think Amazon and whatever else doesn’t also start a rest call on basically every action, many of which are “blocking” the UX?
Versus the overwhelmingly common SPA counterexample, where any change in the UI means sending a request to the server, waiting for it to return your JSON response, parsing that JSON response, building HTML out of that JSON response, and updating the DOM.
Personally I would use htmx and roundtrips for all sorts of modification including data (such as reorganizing two rows in a table). But you'd also do that in an SPA, right? How do you prevent desyncs there? Also for e.g. sorting you'd need a server roundtrip anyway in the (likely) case where you use something like pagination or lazy loading.
For sending data, you would just have a reply that instructs HTMX to display a success message. On an SPA you'd have the same. With both, you can interact with the page while the data is being sent.
Of course, sometimes you want purely 'cosmetic' actions, such as an "add row" button that pops open some data entry fields. For something like that you should not use htmx itself, but instead basic vanilla JS or a simple library such as https://alpinejs.dev/ which complements htmx nicely for client-side stuff.
I go with the core idea of progressive enhancement: your front end code can improve what the server sent it rather than completely replacing it - if your style changes don’t break the semantic meaning, you don’t need to change any of your front end code.
What I typically do is render the first bit of data on the server and have the client JS use that as a <template> for changes or new records. That ensures that anything done on the client always matches without you needing to code cosmetics in multiple places but also means your pages load seconds faster than a SPA because the first view is ready to go. When the user saves the entry, you can swap what you generated with the server response a few milliseconds later.
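A small sketch of that approach (element ids and class names are invented): the markup lives once in a server-rendered <template>, and the client clones it for new records so the cosmetics never have to be re-implemented in JS.

    // Hypothetical server-rendered markup:
    //   <template id="record-template">
    //     <li class="record"><span class="name"></span></li>
    //   </template>
    //   <ul id="records"><!-- server-rendered records --></ul>

    function addRecord(name: string): void {
      const template = document.querySelector<HTMLTemplateElement>("#record-template")!;
      const list = document.querySelector<HTMLUListElement>("#records")!;
      // Clone the server-authored markup instead of rebuilding it in JS.
      const clone = template.content.cloneNode(true) as DocumentFragment;
      clone.querySelector(".name")!.textContent = name;
      list.appendChild(clone);
    }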
htmx glosses over the difficult part, which would be keeping track of the relationship between all these hx-post and hx-target attributes, when they're all just strings in markup. That glossing over is why you don't see any well-established backend libraries on the server integrations page. (https://htmx.org/server-examples/)
Elixir solves this in a much smarter way, because the bindings to reactive values can be validated at compile time, inside the HEEx templates. It's all located together, but you still get the diff passing behavior. (https://hexdocs.pm/phoenix_live_view/assigns-eex.html)
It’s not a difficult part if you already have a powerful web framework. We’re using Htmx with Servant which has type-safe URLs built in. For the targets, automatic name generation is fine.
I prefer the unopinionated, language agnostic simplicity of htmx, just like HTML, over more tightly bound language specific frameworks. It provides a vocabulary that will translate across backend languages and probably endure over competitor frameworks.
I built a tiny deno+htmx experiment, you can check it out at https://ssr-playground.deno.dev, it's all server-side rendered html with htmx attributes sprinkled around.
I love the concept, but for my case, I'd need more granular filters. In particular, I only buy TVs that have analog audio outputs so I can hook them up to any speakers. That's a minority of TVs these days, but there are still a few around. Finding the good ones would be useful to me.
I'm not an audiophile. I just want something better than what the screen does, and I don't want to buy a soundbar or get into HDMI. And sometimes use headphones.
But still, I'm pretty sure, one day I'm going to get one of these optical -> analog boxes.
Thanks. Speed and quality content is what I've been focused on. I was tired of Google search spam and Amazon review rot, so I built this to try to expose good/trusted/authentic content.
Can the same server side code render that fragment, regardless of whether it's part of the initial page load or a subsequent update? You need an additional route for the Ajax call, right? Just curious how this gets structured.
I haven’t used it recently but I believe it sets a header on the request when it comes from HTMX so you can change whether you send the whole page or just the fragment back.
You can also just send the whole page and use other features to select just the part that you want to update (obviously that has a cost of sending the whole page though).
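For reference, a sketch of that header check (the render helpers are made up): htmx sends an "HX-Request: true" header with the requests it initiates, so a single route can decide whether to return just the fragment or the whole page.

    const renderFragment = (): string =>
      `<section id="profile">profile fragment</section>`;
    const renderFullPage = (): string =>
      `<!doctype html><html><body>${renderFragment()}</body></html>`;

    function handleProfile(req: Request): Response {
      // htmx-initiated requests carry this header; normal navigations don't.
      const isHtmx = req.headers.get("HX-Request") === "true";
      const body = isHtmx ? renderFragment() : renderFullPage();
      return new Response(body, { headers: { "content-type": "text/html" } });
    }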
If you don't do server-side rendering, don't you (almost) automatically get a set of nice REST endpoints that return JSON/XML/etc.?
I get that the abstraction might be nice for security, but at least for corporate intranet applications, a nicely structured, secured (e.g. OData) web API that you query for client-side rendering has the added benefit that it can be invoked programmatically over REST by other authorized parties.
Obviously you want the standard DDoS and security protections, but this fact alone has turned me off server-side rendering.
Isn't it also nice from a computation cost standpoint to let the client do the rendering? I suppose UX could suffer and for external facing apps, this is likely of the utmost importance.
Happy to be educated if I'm unaware of something else.
What difference does it make, with respect to security, whether the server returns html or json that needs to be formatted into html?
The computation for rendering (in every case I've seen, and I have to speculate in 80% of cases ever) is so trivial compared to the actual retrieval of the data to be rendered.
None really, I was not eloquent about it - just that returning well-structured data makes it easier to extract data. You could also argue that if you render locally, you might return data that's used only for rendering and never displayed, and that data leaves your server's boundary when it wouldn't need to leave at all with SSR. But like you said, these are things that are not really security.
Getting an API for free is an anti-feature for all the temporarily-embarrassed monopolists on HN.
More seriously, though, it's nice to be able to just build without thinking too hard about if you're getting your abstractions perfect. To me, this is the main advantage of SSR - moving fast doesn't leave behind a wake of idiosyncratic APIs that need to be (carefully, dangerously) cleaned up later.
In my experience moving-fast SSR absolutely does leave behind a wake of idiosyncratic APIs that definitely need to be cleaned up later.
You still need client-server communication, so you still have an API, it's just an ad hoc API that speaks HTML and form data instead of JSON. And because you didn't think of it as an API while you were building it, it actually tends to be harder to clean up later, not easier.
You probably have more experience than me. My primary SSR experience is with more recent frameworks and libraries like blitz and tRPC, which make it much easier to delete those when they are no longer used.
but then you'll have a server side path for json and one for html. If I have rendering logic client side entirely, then I just have json / xml on the server. That's it.
No. It's one path. The client merely requests the format it desires. The response handler on the server side returns the appropriate response format as requested by the client. Nothing crazy here.
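A rough sketch of that single path (the handler and data shape are hypothetical), using the Accept header to pick the response format:

    type Product = { id: number; name: string };

    function handleProducts(req: Request, products: Product[]): Response {
      // API consumers ask for JSON; browsers get rendered HTML from the same route.
      if (req.headers.get("accept")?.includes("application/json")) {
        return Response.json(products);
      }
      const rows = products.map((p) => `<li>${p.name} (#${p.id})</li>`).join("");
      return new Response(`<ul>${rows}</ul>`, {
        headers: { "content-type": "text/html" },
      });
    }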
Why are those things mutually exclusive? You can have an API and then have a different app that uses that API with server-side rendering. I.e. instead of a client SPA you have a server app using the API.
Sure they're both viable options, but running 2 server-side apps that could be 1 server-side app and a folder full of HTML/CSS/JS served over CDNs certainly has some trade-offs. e.g. in terms of security, I'd prefer to have a smaller surface area server-side.
Then you're comparing server-side apps to client-side ones. And imo the attack surface of managing a SSR webapp is fairly minimal - especially when you're already responsible for providing an API to the public. My point was more that API-first doesn't prevent you from doing server-side rendering and there are benefits to doing so.
No? Each client may be receiving the same document, but based on their device, view port, preferences, etc… the rendered result may be different.
Either way, the measurement of Joules/page is likely to be such an astronomically small number compared to the constant cost of simply having a server at all IMO.
Are you suggesting we give up on rendering layouts that respond to different window sizes, display resolutions, and zoom levels? I think what you’re suggesting is that clients requesting websites should receive essentially an image of the website with limited interactivity, but that’s not going to make anyone who’s ever used a website satisfied in C.E. 2023.
You can make web pages responsive to "different window sizes, display resolutions, and zoom levels" with no JavaScript at all, so that's clearly not what they're suggesting.
That’s true, but it’s not like translating HTML into a bitmap for a display is some spontaneous process that happens for free. Your browser is still going to interpret all that HTML and CSS, and your users are going to click on buttons and submit forms that require changing that HTML and CSS. Whether that happens through JavaScript or SSR is bike-shedding: processing will be done, computation is needed. We can sit here and argue about how the web front-end world sucks, but billions of users expect interactive web applications that are very difficult to deliver without JavaScript.
Client-side json-to-html is such a microscopic part of the compute cost related to showing a change in a website that it's a rounding error. Totally inconsequential. A single widely-used inefficient css attribute or repaint costs WAY more.
If it was truly the same computation then it could be a static site generator, but with typical server side rendering you are still doing a new render per user no?
But how many of the SSR projects you have dealt with had a requirement to enable API access for users? If it's not a requirement, then yea, you're right, it's a non-problem.
Several. I just literally do not see the problem. I'm not saying it's absolutely the most efficient way to do a given project, just that it has never been presented as any kind of problem to provide both an SSR app and an API. Obviously you can't reuse the already-existing API routes and methods that you would have if you had structured it as an SPA, but you're likely rendering via SSR from structured data anyway, which you can just send as JSON or whatever on a separate route.
I've done this multiple times and it's fine. It's really not an issue for delivering projects. I'm not saying I'd do this every time - if the API was identical for both my app/website and any other clients I'd very possibly abandon the SSR approach for the ease of a frontend API that everyone can use identically, but that's a specific project requirement not a general statement on SSR and API development.
I may be misunderstanding this, but isomorphic SSR sounds an awful lot like the Java Server Faces concept of a server-side DOM that is streamed to the client. JSF was largely dropped by Java developers because it ended up scaling poorly, which makes sense since it violates one of the main constraints that Roy Fielding proposed for the web's REST-ful architecture: statelessness.
An alternative approach is to retain the statelessness of the first option they outline (I don't understand why it isn't "true" SSR): use normal, server-rendered HTML, but improve the experience by using htmx (which I made) so that you don't need to do a full page refresh.
This keeps things simple and stateless, so there's no server-side replication of UI state, but it improves the user experience. Compared to what I understand of the isomorphic solution, this appears much simpler. And, since your server side doesn't need to be isomorphic, you can use whatever language you'd like to produce the HTML.
In the sample code, there is no "streaming" going on -- the server simply uses the client code as a template to generate HTML and sends it as a normal HTTP response. In pseudocode:
import "client.js" as client
on request:
document = new ServerDOM(),
client.render(document, data),
respond with document.toHtmlString()
> I don't understand why it isn't "true" SSR
This article seems to be using the term SSR exclusively in the frontend framework sense, where client code is run on the server. It's not how I use the term but it is a common usage.
Another possible reason that the htmx approach isn't discussed: the any-server-you-want nature of htmx is terrible for selling Deno Deploy :]
Ah, I thought there was some sort of diff-and-send going on to the client.
I do know there are folks using htmx and deno (we have a channel on our discord) so I don't want to come across as oppositional! Rather, I just want to say that "normal" SSR (just creating HTML) can also be used in a richer manner than the plain HTML example given.
I'm working on a server side framework and someone told me it reminded them of Java Server Faces. I think the approach works really well and latency is low enough when you can deploy apps all over the world. Also they didn't have HTTP2 or websockets back then... What I'm doing is basically a clone of Preact, but server side Ruby, streaming DOM-patches to the browser...
Slightly off topic, but I found JSF the most productive out of any framework. It has some not-so-nice edge cases, but when you are "in the green" and you don't need to scale to infinity (which, let's be honest, is the case more often than not) it really is insanely fast to develop with. For internal admin pages I would hardly use anything else.
> Slightly off topic, but I found JSF the most productive out of any framework.
In my experience, it has been a horrible technology (even when combined with PrimeFaces) for complex functionality.
When you have a page that has a bunch of tabs, which have tables with custom action buttons, row editing, row expansion, as well as composite components, modal dialogs with other tables inside of those, various dropdowns or autocomplete components and so on, it will break in new ways all the time.
Sometimes the wrong row will be selected, even if you give every element a unique ID, sometimes updating a single table row after AJAX will be nigh impossible, other times the back end methods will be called with the wrong parameters, sometimes your composite components will act in weird ways (such as using the button to close a modal dialog doing nothing).
When used on something simple, it's an okay choice, but enterprise codebases that have been developed for years (not even a decade) across multiple versions will rot faster than just having a RESTful API and some separate SPA (that can be thrown out and rewritten altogether, if need be).
Another option in the space is Vaadin which feels okay, but has its own problems: https://vaadin.com/
Of course, my experiences are subjective and my own.
>When you have a page that has a bunch of tabs, which have tables with custom action buttons, row editing, row expansion, as well as composite components, modal dialogs with other tables inside of those, various dropdowns or autocomplete components and so on, it will break in new ways all the time.
Everything you're describing sounds like someone was able to create requirements for features without push back or thinking.
I think part of the design process is thinking and really asking, why do we have an editable table in a row, and how useful and core to our business is this.
> Everything you're describing sounds like someone was able to create requirements for features without push back or thinking.
This might be it! However, the composability of solutions still matters, for example, while I dislike certain other aspects of React, its approach to nesting components is a breath of fresh air, especially with JSX (as long as state management is manageable).
> I think part of the design process is thinking and really asking, why do we have an editable table in a row, and how useful and core to our business is this.
I might have structured that sentence badly: the tables had editable rows (say, the ability to edit contents in a row, like Excel, but only when an edit button is pressed, as well as sometimes other action buttons are present; which may or may not get interesting when you are doing that on multiple rows and have validations against already entered data), which might sometimes open modal dialogs. For example, if you need to select some data which doesn't quite fit into an autocomplete text field, you might bring up a modal dialog for selecting what you need, maybe have a search form and so on.
Personally, I'd say that development would often be easier regardless of technology if requirements could be aligned with the available technologies (as well as what can be done well and easily within them) and not vice versa. Then again, the final say is up to the people who are giving you money, so there's that.
I did run into a few bugs (though they were from a few PrimeFaces components), and the JS interop is not too trivial, but otherwise I can't share your experience.
That's perfectly fine, it might just be that the project had certain challenges in regards to complexity, or that the codebase might have been a bit peculiar.
But I think that's why it's nice to provide even single data points to a discussion sometimes.
What exactly do you mean with violating statelessness? Most web apps have state on the server, i.e. cookie sessions. The UI state doesn't have to be state on the server though, that can still be in memory in JS and/or part of the URL.
> Performance is higher with the server because the HTML is already generated and ready to be displayed when the page is loaded.
If you can reasonably cache the response, SSR wins on first page load, no question. On a first-page dynamic render, "it depends"; it can go to the SPA or to SSR. On second-page renders, a well-built SPA just wins.
"it depends....." Server CPU cores are slower than consumer cores of similar eras. They run in energy efficient bands because of data center power concerns. They are long lived and therefore are often old. They are often segmented and on shared infrastructure. And if the server is experiencing load, seldom an issue on the client's system, you have that to deal with also. Your latency for generating said page can easily be multi-second. As I've experienced on many a dynamic site.
Using the client's system as a rendering system can reduce your overall cloud compute requirements allowing you to scale more easily and cheaply. The user's system can be made more responsive by not shipping any additional full page markup for a navigation and minimizing latency by avoiding network calls where reasonable.
On dynamic pages, do you compress on the fly? This can increase latency for the response. If not, page weight suffers compared to static compressed assets such as a JS ball that can be highly compressed well ahead of time at brotli -11. I never use brotli -11 for in-flight compression; brotli -0 and gzip -1 instead.
This is for well built systems. Crap SPAs will be crap, just as crap SSR will similarly be crap. I think crap SPAs smell worse to most - so there's that.
> Compatibility is higher with server-side rendering because, again, the HTML is generated on the server, so it is not dependent on the end browser.
If you use features the end client doesn't support, regardless of where you generate the markup, then it won't work. Both servers and clients can be very feature aware. caniuse is your friend. This is not a rule you can generalize.
> Complexity is lower because the server does most of the work of generating the HTML so can often be implemented with a simpler and smaller codebase.
Meh. Debatable. What's hard is mixing the two. Where is your state and how do you manage it?
If you're primarily a backend engineer the backend will feel more natural. If you're primarily a front end engineer the SPA will feel more natural.
Mobile cores are usually power/thermal constrained. Still parsing HTML or running an ungodly amount of JS that then spits out that HTML on the device is not that big of a difference, both kill the battery charge fast :)
As I said, server cores are also power- and thermal-constrained. An A16 is like 50% faster than a high-frequency, latest-and-greatest Sapphire Rapids single core. 50%.
Businesses are not server-side compute limited. Ask any business basically. They would gladly trade more of their own CPU power & heat for more time on the site/service from their users.
I didn't say they were compute-limited, did I? Still, pouring unlimited funds into more servers so they can sit mostly idle for best client performance is a great way to light money on fire.
Faster -> more responsive. The client's cores are faster, even on mobile. If that's such a great trade, SPAs are potentially better than dynamic SSR.
What on earth does this tangled mess "simplify"? I've been at it with web dev for over 20 years and I was left scratching my head. The trouble with going down the JS rabbit hole is that you lose perspective on simplicity. **d help us if this becomes the new hotness. Oh, wait it already is. Oh well, until next month ...
I still don't get it. We went from great backend languages with a widgets approach on the frontend to letting the whole shiny thing take over the whole system and impose weaknesses here and there in the architecture. My theory is that it's incompetent technical leads or CTOs reading too much tech Twitter and not being immune to all the echo-chamber bullshit.
I don't remember those as the good old days, especially on true web applications.
I remember more bugs than I'd care to recount with the back button and scope and that's not even talking about having to simultaneously think in JavaScript and your server side rendering language of choice.
I also think there is a lot of room for multiple choices. For web applications, I think server side rendering as a default is a poor choice. For information conveyance, I think server side, or even pure static sites, makes a lot of sense.
Simplicity is a facade. Great experiences are often complex behind the scenes. The JS ecosystem just tends to splay things open so the complexity is visible.
Most SPAs are totally unnecessary and a big waste of time.
People are worrying about the speed of SSR when they should be worrying about the developer time on the client which is several orders of magnitude more.
I think people have fallen in love so much with complex Javascript frameworks that they’ve forgotten how easy it is to get to an MVP with SSR.
Speed is important.
Speed of development is even more important for businesses in this era who have to get to revenue faster.
And that’s why things like Phoenix LiveView and its counterparts in other languages are catching on so quickly.
People are getting fatigued with the latest flavor of the month JS framework.
But what do I know… I’m just a lowly “developer” working for crumbs. Never even finished a CS degree. Sigh.
For simple projects it really doesn't make much difference. Depending on your available tools, client-side or server-side rendering might be easier. In the end the only difference is what is going down the wire: data or HTML.
That said, client-side rendering is strictly more general than server-side rendering. So I prefer to use client-side rendering everywhere so that I don't have to switch between two different modalities and maintain two sets of tooling (or worse switch in the middle of a project!) I gather this is against the current fashion but whatever.
There comes a point in a project where the amount of client-side features requested makes you wish you had started with a fully-fledged modern framework like React. New features could be a single React component plus some updates to existing callbacks, and a new API call, but instead requires adding to a big accreting ball of HTML templates and a hodge-podge of vanilla (and maybe jQuery) js amounting to a bespoke framework that someone had to develop to manage the complexity.
I absolutely love Django and old-style web frameworks, but they are not without their own complexity risks.
Mindshare will go towards rendering javascript components on the server since that's another complex problem that's fun to solve. That's good! We shouldn't have to give up the productivity gains of tools like React to improve time-to-interactive and other performance stats.
That said... I'm not going to pretend it's an urgent need and will wait for these tools to mature.
Idk, I'm not so much into web development, but isn't SSR much more expensive? I just move all the processing/calculation to my/the server side instead of the clients'. This means that for a business with many clients, I have to pay for the stuff that the clients themselves could have done instead...
I assume a modern webpage has some logic to it which, besides the rendering, also needs to be processed, and if you apply that at a scale of billions x years, I guess yes. But as I said, I'm not an expert in the field, nor do I have any numbers. It's just what I thought.
Edit:
I'm thinking of only having to serve the state once and having each action processed on the client side, instead of making a call to the backend for each action, which then has to return a fully rendered page.
The server doesn't [have to] return the entire page on a change. It can return a small chunk of HTML or a small chunk of JSON. There will be a small cost to build the HTML on the server, but there is also a cost to do the HTML on the client: sending them 50000000000kb of JavaScript initially.
Woot sorry but have you ever used any cloud provider like AWS? Processing time and bandwidth are the things you try to avoid in order to avoid unnecessary expenses.
I've been using sveltekit for years and still struggle with it.
With sveltekit, I'm never really sure when to use prerender. I'm never sure how and where my code will run if I switch to another adapter.
With pure svelte, my most ergonomic way of working is using a database like pocketbase or hasura 100% client side with my JavaScript, so the server is a static web server. It's got real time subscriptions, graphql so my client code resembles the shape of my server side data, and a great authentication story that isn't confusing middleware.
I'm sure SSR is better for performance, but it always seems to require the use of tricky code that never works like I expect it to.
In SvelteKit it's SSR on first load and then client-side from there as the components on that page change, as far as I know. How SSR is done, not where it's done, depends on the adapter. It's always server-first unless you specifically opt out.
Does anyone know the stats about what's being served?
For things like blogs, server-side HTML with a sprinkle of client-side Javascript (or WASM) makes a lot of sense.
But for applications, where you're doing, you know, work and stuff, in-browser HTML makes a lot more sense.
The thing is, as a developer, most of the work is in applications. (It's not like we need to keep writing new blog engines all the time.) Thus, even though most actual usage of a browser might be server-side HTML, most of our development time will be spent in in-browser HTML.
I love Deno and I hope it succeeds, but I'm disappointed to see them so confidently publishing a broad assertion like this that's very weakly argued and heavily biased towards promoting their own position in the stack.
> Compatibility is higher with server-side rendering because, again, the HTML is generated on the server, so it is not dependent on the end browser.
Excuse my bluntness, but this is complete nonsense. Browser incompatibility in 2023 is mostly limited, in my experience, to 1) dark corners of CSS behavior, and 2) newer, high-power features like WebRTC. #1 is going to be the same regardless of where your HTML is rendered, and if you're using #2, server-side rendering probably isn't an option for what you're trying to do anyway. I can confidently say browser compatibility has roughly zero effect on core app logic or HTML generation today.
> Complexity is lower because the server does most of the work of generating the HTML so can often be implemented with a simpler and smaller codebase.
This, again, is totally hand-wavy and mostly nonsensical. It's entirely dependent on what kind of app, what kind of features/logic it has, etc. Server-rendering certain apps can definitely be simpler than client-rendering them! And the opposite can just as easily be true.
> Performance is higher with the server because the HTML is already generated and ready to be displayed when the page is loaded.
This is only partly true, and it's really the only partly-valid point. Modern statically-rendered front-ends will show you the initial content very quickly, and then will update quickly as you navigate, but there is a JS loading + hydration delay between seeing the landing page content and being able to interact with it at the beginning. You certainly don't need "a desktop...with a wired internet connection" for that part of the experience to be good, but I'm sure it's less than ideal for people with limited bandwidth. It's something that can be optimized and minimized in various ways (splitting code to make the landing page bundle smaller, reducing the number of nested components that need to be hydrated, etc), but it's a recurring challenge for sure.
The tech being demonstrated here is interesting, but I wish they'd let it stand on its own instead of trying to make sweeping statements about the next "tock" of the web trend. As the senior dev trope goes, the answer to nearly everything is "it depends". It shows immaturity or bias to proclaim that the future is a single thing.
There are some pretty crappy bloated client-side apps but when it's done well and it is appropriate for the app in question, it's amazing.
I've been playing novelai.net text generation and I think their app is mostly client-side. It's one of the most responsive and fast UIs I've seen.
Also, the article has this sentence: "Performant frameworks that care about user experience will send exactly what's needed to the client, and nothing more. " Ironically, a mostly client-side app that's only loaded once, cached, and is careful about when to request something from the server, might be more bandwidth friendly than a mostly server-side app.
The problem is more fundamental and it’s this; web apps are broken and have been from the beginning. They were created to solve the problems related to software distribution and updates but these problems were solved in the early 2000s when broadband became prevalent and it was no longer painful to download large software packages.
The early straw man was that downloading apps was too daunting a task for users, and yet somehow they managed to download and update email clients, word processors, iTunes, and ironically browsers themselves.
Since I began my career in 1995 I’ve seen application architecture pundits proclaim the correct way to develop applications go from thick client native to thin client native to thin client web to thick client web back to thick client native (iOS & Android) and now, according to the article back to thin client web. I’ll submit the best model is thick client native using the “web” as a communication backbone for networked features.
The first example shows the server rendering a handlebars template and then sending that as a response to the client -- it's then stated that this "isn't true SSR"
Then the same thing is done without a template language, using strings instead, and this is some different kind of SSR altogether and the "true SSR".
Which also seems to insinuate that only JS/TS are capable of SSR?
Server-side rendering! Well, kinda. While it is rendered on the server, this is non-interactive.
This client.js file is available to both the server and the client — it is the isomorphic JavaScript we need for true SSR. We’re using the render function within the server to render the HTML initially, but then we're also using render within the client to render updates.
My first contact with HTTP and HTML forms was an immediate throwback to my mainframe experience. The browser was like a supermodern 3270 terminal, getting screens from the server, sending data back, getting another screen and so on.
There were a number of products that allowed a web app to maintain a 3270 connection to the mainframe and render the terminal screens as an HTML form. Fascinating stuff.
This is why I'm really excited about htmx [1]. No need to write isomorphic javascript at all. You can still use server side templates but have interactive web pages.
It really is so terrific. After using it for over a year, I agree with the creators of htmx when they say that this is how web development would have been if HTML as hypermedia had been continually improved all these years.
When you start using htmx, you raise your eyebrows and think - hmmm, this could be something interesting. When you use it for many months, you then open your eyes very wide and think - this is something special! In hindsight it's so damn obvious; why didn't it happen much earlier?!?!
I just started playing around with it after fumbling around with Vue for a bit. I really like that there is so much less magic involved, no getting lost in a twisty maze of proxies. A real breath of fresh air. But then I haven't done real frontend development since JSPs were hot, so I'm not sure my liking it is a good thing.
There are two things that are orthogonal in the current trend. This SSR buzz is not actually selling server-side rendering; they are selling 'one language to rule them all' (they call this the dumb name "isomorphic").
Therefore, they are not solving all the problems of client-server + best-UX constraints. Basically, the problems we've had all this time come from:
1) There's a long physical distance between client and server
2) Resources and their authorization have to be on the server
3) There's the need for fast interaction, so some copy of the data and optimistic logic need to be on the client
The "isomorphic" reusable code doesn't solve the [latency + chatty + consistent data] vs. [fast interaction + bloated client + inconsistent data] trade-off. At this point I don't know why they think that is innovation.
IME the big gains nearly always come from how data is surfaced and cached from the storage layer.
You may get some nominal gains from sending less JS or having the server render the html, but IME the vast majority of apps have much bigger wins to be had further down the stack.
"A script on this page may be busy, or it may have stopped responding. You can stop the script now, or you can continue to see if the script will complete." - would like to have a word with you...
The issue I have with SSR is that it offloads processing power onto the server. That means I have to pay more as the host instead of relying on user's browser to handle the compute "for free".
Surely most of the compute cycles for turning a web page into pixels happen on the client anyway? I'm not convinced the server necessarily has to do massively more work to return HTML over JSON (though it would obviously depend on how the HTML-generation was coded. If you're trying to use the client-style page rendering techniques on the server, issuing API calls over the network and interpreting Javascript code, then you have a point).
Edit: my "most" claim is probably too strong on reflection: while there's still a lot of work to do to convert an in-memory DOM into pixels, it's likely to be highly optimized code (some of it handled at the GPU level) that uses minimal compute cycles. And while the V8 engine may be similarly optimised, it still has to interpret and execute arbitrary JS code supplied to it, plus handle all the necessary sandboxing. It'd be interesting to get a breakdown of what compute cycles are used converting a typical SPA into pixels, and of course a comparison with how much time is spend waiting for data to come across the network.
The problem with this idea is that the user's browser's compute is not "free". Offloading the computing means the users have a worse experience, which will affect your userbase and page rankings.
The argument against this is the cost of a user's bandwidth. If I have all the computation done on the server side, then I have to wait for the round trip every single time just to download the results. In this case, the browser's compute is more free, as the cost to send a remote request is more than likely higher.
Like most things, there is no simple right answer, and it depends on what you are doing. But blindly assuming the experience will be worse using CSR is as silly as assuming SSR will always be worse as well.
It depends on the style of application. For most line-of-business applications the data on the client side is potentially out of date as soon as it is requested, and as a consequence very little can be done on the client side except arrange or rearrange data for display. All requests that make any substantive changes must be sent to a server somewhere anyway, and client-side data caching is almost useless.
That sort of depends. If the compute needs to happen regardless and now you add a layer of shipping data to and from the server, that could add even more latency and make for a worse experience.
Depends on the complexity of the page. A lot of sites with heavier JS and SPAs eat a lot of memory which can cause problems for users of the many laptops out there with 4/8GB of RAM, as well as many smartphone users who have 2GB of RAM or less. In the case of the latter visiting a heavy website can be enough to prompt the OS to kill some other app to make memory available, which means in that situation one's site is in direct competition with other things the user may be needing more than the site.
> Does it though? Loading a webpage barely registers in cpu usage etc on a reasonably modern device.
CPU usage is not the problem. In most cases the problem is latency including the network latency to issue and return a substantial number of remote API requests across the Internet to get the data necessary to render the page.
In many cases unless you make page specific APIs that aggregate all the necessary data into a composite object on the server side this is the number one thing that slows things down. Network turnarounds are expensive but they are a lot less expensive when made inside of a datacenter than from a thousand or more miles away.
Aggregating the data requests is pretty easy if you use something like GraphQL + one of the clients (Apollo or Relay). Probably other frameworks can do it too, I'm too lazy to check.
I'm probably not in your target demographic but when a website pushes computation to me for simple things like displaying text and images I close the tab.
>when a website pushes computation to me for simple things like displaying text and images I close the tab
How will you even know without looking at the source or blocking JS across the web? Like, sure, if they've got fancy animations across all elements from the moment you open the page, it should be obvious. But what about something like https://rhodey.org/? It opens instantaneously on my ancient laptop connected to a terrible internet line. Check the source: only a single empty div in the body. Everything is rendered with JS.
I block JS across the web by default. For some sites I'll learn the minimum set of domains to allow for temporary whitelisting. But most aren't worth the effort.
Sure, great for you, but most websites and most developers don't care about use cases like this and always use JS to "enhance" the UI and UX, however you interpret that. They (supposedly) need to achieve basic accessibility, but then do whatever they can in the UI. I don't think anyone would care about users who disable JS by default, at least not now.
And congrats on getting fired for refusing to use the company's internal tools. Not all web sites are brochure-ware. Sometimes the target demographic is a limited number of internal employees who open the app once and keep it open.
Never mind that I don't know how you would display images server-side. Your client needs to decode that image and render it to the screen at some point.
If a website is just showing text and images it shouldn't really be dynamically rendering anything anywhere. Write the content to static files during deployment and serve them.
It's not an issue, it's a trade-off. Do you want your users to experience faster initial page loads? If yes, then the cost might be worthwhile. If not, then not. Especially in ecommerce it's well worth the cost.
"A fully server side rendered version with isomorphic JS and a shared data model"
Seriously, how did we get there? Having dealt with JSP, JSF (MyFaces, Trinidad, ADF...), ASP.NET, ASP MVC, Angular, plain HTML/CSS/JS, how is it possible for FE web dev to be such a mess? So much complexity, for what? How many have to deal with millions of visits per day? Or even per month?
It seems to me history is quickly forgotten and new generations know very little about the past.
I feel behind here; my company doesn't do any SSR and there's basically no way we could port it over. Definitely missing out on some key concepts.
I could build some SSR apps on my own but like, the real hard stuff about development comes >1 year in when you start running into those deep complexity issues. Can't really simulate that in a tiny pet project.
I think the biggest issue with page size is not due to client-side rendering, but rather due to bundling and the idea that you need to download the same minified Lodash on each and every page. Why we can't just use public CDNs is beyond my understanding.
I really like client side apps. They are so much more responsive. The only problem is with bundle sizes.
It could have worked with a trusted, open broker. I'm sure there could be a compatible sustainability model. I feel like a lot of potential trust was broken by the likes of Facebook and Google.
I'm not sure. I think the issue was not the provider but that if you visited pages it was possible for that page to gain information about your history based on whether resources were cached.
You can CDN your own bundles, which include your libraries, without much issue. You don't even have to really CDN them so much as make them cache-friendly (name them with a hash) and set the TTL to 30 days. Download once, and the browser will keep a copy for future page visits.
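A sketch of that setup (the filename pattern is just an example): bundles carry a content hash in their name, so they can be served with a long-lived cache header while the HTML itself stays revalidated.

    function cacheHeadersFor(path: string): HeadersInit {
      // e.g. /assets/app.3f9c2d1b.js gets a new name whenever its contents change,
      // so it is safe to cache aggressively.
      const isHashedAsset = /\.[0-9a-f]{8}\.(js|css)$/.test(path);
      return isHashedAsset
        ? { "cache-control": "public, max-age=2592000, immutable" } // ~30 days
        : { "cache-control": "no-cache" }; // HTML: revalidate on each visit
    }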
Note: Remix is not built on React, as the article states.
Of all the new ways of thinking, Remix is the leader in not promoting a specific paid delivery platform. So in that sense I can see why people might want to mitigate its advantages by trying to tie it to React.
(having said that, Shopify might tie it down more, but I see no evidence so far)
How can there be a future if nobody develops it anymore? I think it's clear the opposite is happening.
Do you mean frontend-only devs losing their jobs? That's also unlikely since web dev went full stack over a decade ago. People don't build SPAs just because they don't know how else to build a web app. If you have specific examples of crappy SPAs you should blame that shop for sucking, not the concept.
If you are looking for server-side rendering that enables rich, react-like user experiences, check out LiveViewJS (http://liveviewjs.com). Supports both Deno and Node runtimes.
Yeah, big time. It's faster, so crawlers give you better scores for page speed, which is important. Secondly, it automatically renders all of your content, vs if you dynamically load content, the crawler may just see a page with a "Loading" element and never actually view the content itself.
Google argues that it is able to handle JavaScript-heavy client-side code in its crawlers, but the data seems to show otherwise.
Perhaps the best method is a mix of static or SSR content for the content-heavy stuff that you want indexed and SPAs for the truly dynamic experiences. This is easier said than done but there’s a good chance your marketing team is separated from “product” anyway. Marketing can continue to use WordPress or some other CMS with a static export or SSR and product gets the full app experience stuff.
It’s mentioned in other threads that SSR is more expensive as you scale, so you might as well make the “outside” layer of your site lightweight and static/SSR for fast client loading, and then give them the full SPA once they’ve clicked through your landing pages.
Yes. There's a separate queue for sites that need js rendering and it eats much more into your crawl budget. Best way to avoid it imo is to use something like Rendertron, which is made and recommended by Google.
If web sites have to be so dynamic, I much prefer that the computation involved is done on their machine than on mine. I simply don't trust random web sites enough to let them run code on my machines.
What is it you don't trust? This Fear, Uncertainty & Doubt clashes heavily with the excellent security sandbox the web browser is. What is the harm you are afraid of? What are you supposing the risk is / what's in jeopardy here?
Relying on sandboxes seems unwise to me. They're a useful backstop, but shouldn't be the primary defense. The primary defense is to minimize the exposure to risk in the first place.
As to what harm I'm avoiding, it's mostly around tracking -- which is something that browsers have a very difficult time preventing, especially if sites are allowed to run code in them.
Well, I wouldn't use such a website anyway (especially a document converter -- that is better done using a real application), regardless of where the processing was done, unless I was very certain that the website was trustworthy. For one thing, even if the website purports to not move my data to their servers, how do I know they're being truthful without going to extremes such as sniffing traffic?
There have been plenty of sites that have lied about such things.
> What I have for native applications that I don't for the web is the ability to firewall off the native applications.
There you're placing trust in the firewall's sandbox. Are you sure the application can't communicate with the outside at all? DNS exfiltration, for example?
A firewall is not a sandbox, but yes, I am sure that the applications can't communicate with the outside at all. My logs would show if they were. Any and all packets that originate from them are dropped, including DNS lookups and the like.
Actually, most people will miss out on most of the usable internet without JavaScript. Not everyone goes to the same sites as you or has the same browsing patterns.
The future is not to stick to a single religion but to apply one's brains when architecting a solution, as it all depends on multiple factors and there are no silver bullets in this universe.
> Performance is higher with the server because the HTML is already generated and ready to be displayed when the page is loaded.
I don't see why we should assume the server is faster at processing the input data into HTML than the client is. It could very easily be that the client device does this faster. SSR additionally prevents progressive rendering, since you must generate all the HTML ahead of time, which can make pages feel slower. Also, HTML+JS data size can be larger than data+JS size (and you /may/ need the data anyway for the SSR version to do hydration). Of course all this varies, which is why it's silly to claim a general principle.
Performance is not _only_ determined by the processing time. Downloading the JS bundle and parsing it takes a lot of time. There's no need for that on the server. The first page load is always very slow for CSR. Any client-side navigation after that is fast. SSR has a fast initial page load and uses client-side navigation after that.
The claim that server side rendering is faster than client side rendering is interesting...
How come one machine (the server) is better than 1000 machines (the clients)?
If you need to show data to the client then you need to transmit it, either in JSON or HTML. If you don't need to show it then why are you transmitting it?
But realistically the amount of data is likely small for most applications and it's probably not the bottleneck.
“Server side rendering” is such a terrible term. The server isn’t doing rendering, the browser is. The server is sending a complete well-formed DOM for the client to render. Well done, modern devs! A plain .html file does that.
I really hope some of the heavy front-end frameworks die a death, some common sense prevails, and we get a lighter, faster loading, more responsive web. I can dream.
> Well done, modern devs! A plain .html file does that
And then if you want to take that rendered data and do anything interactive with it you have some JS soup of parseInt(document.querySelector(".item > .item__quantity").textContent) all over the place. HN has some weird hate for this new server side rendering, when it's really the smart thing to do and equivalent to what any app is doing: the "frame" of the app is downloaded once (and we can send the initial data with it), and then it can become interactive from there. E.g. if the data needs to be reloaded we can make a small JSON request instead of reloading the whole page and re-rendering it.
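That kind of partial refresh can be a few lines; a minimal sketch, assuming a made-up /api/cart endpoint and a server-rendered <span id="cart-count"> element:

```ts
// Assumed markup: the server rendered <span id="cart-count">3</span>; /api/cart is made up.
async function refreshCartCount(): Promise<void> {
  const res = await fetch("/api/cart", { headers: { accept: "application/json" } });
  if (!res.ok) return; // keep the server-rendered value if the request fails
  const cart: { itemCount: number } = await res.json();
  const el = document.getElementById("cart-count");
  if (el) el.textContent = String(cart.itemCount); // update just this fragment, no full reload
}
```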
> And then if you want to take that rendered data and do anything interactive with it you have some JS soup of parseInt(document.querySelector(".item > .item__quantity").textContent) all over the place.
Nothing stops a dev from providing both a server-side render and an API endpoint, for those that don't want the JS soup. In fact, such a design is not uncommon, and it's fairly straightforward to write a backend interface that both the server-side rendered endpoint handler and the API endpoint handler can use.
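A minimal sketch of that split, with both handlers calling the same data layer; every name here is illustrative, not any particular framework's API:

```ts
// Hypothetical data layer shared by both handlers.
interface Product {
  id: string;
  name: string;
  priceCents: number;
}

async function getProduct(id: string): Promise<Product | null> {
  // ...would hit the database; stubbed for the sketch.
  return { id, name: "Example widget", priceCents: 1999 };
}

// Server-side rendered endpoint: returns display-ready HTML.
export async function productPage(_req: Request, id: string): Promise<Response> {
  const product = await getProduct(id);
  if (!product) return new Response("Not found", { status: 404 });
  // Real code would HTML-escape the interpolated values.
  const html = `<h1>${product.name}</h1><p>$${(product.priceCents / 100).toFixed(2)}</p>`;
  return new Response(html, { headers: { "content-type": "text/html; charset=utf-8" } });
}

// API endpoint: same data, as JSON, for clients that want to render it themselves.
export async function productApi(_req: Request, id: string): Promise<Response> {
  const product = await getProduct(id);
  if (!product) return Response.json({ error: "not found" }, { status: 404 });
  return Response.json(product);
}
```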
> HN has some weird hate for this new server side rendering, when it's really the smart thing to do and equivalent to what any app is doing: the "frame" of the app is downloaded once (and we can send the initial data with it), and then it can become interactive from there. e.g. if the data needs to be reloaded we can make a small JSON request instead of reloading the whole page and re-rendering it.
The "smart" thing to do depends on what your requirements are. For minimal latency, server-side rendering tends to fare much better, as it requires only one round trip to fetch all the necessary information to render the page contents.
> And then if you want to take that rendered data and do anything interactive with it you have some JS soup of parseInt(document.querySelector(".item > .item__quantity").textContent) all over the place
And that's what most of the web needs - few use cases (social networks, financial sites, banks, betting sites, etc.) require manipulating every bit of the DOM to push constant updates to the end user. The rest do not need these heavy frameworks and the extensive DOM-manipulating capability. The last thing you want in an ecommerce checkout process is to distract the user by manipulating the DOM to give them 'updates'. So nobody does anything like updating the user with info like 'latest prices' or 'your friend just bought this' right in the middle of the checkout process. Same goes for blogs and most of publishing.
I don't understand how jQuery and direct DOM manipulation is in any way better than something like Svelte for a modern Web app, especially something like a store.
Because what most websites and ecommerce stores need per any given view/page is a few unique, isolated jQuery functions to manipulate what is strictly necessary. Be it a category listing of products, adding to cart by clicking a button, or updating quantities in the cart, the address or the payment.
The 'modern' frontend frameworks take way too much after frameworks like React that were born from social networks at the inception of the social network age a decade ago. Facebook needed to have people poke each other, like posts and comment under them, while at the same time incrementing like and poke counters, as well as listing and updating a crap ton of friend, page and group listings on the sidebars, notifications and inboxes at the top and in the bottom bars, and a whole lot of other stuff.
So while React, for example, solved a major problem with there not being a major templating system or logic in the front end up until then, it also brought with it the baggage of a mentality which assumes that we need that kind of DOM manipulation at any given time. True, one can indeed use something like React and keep it minimal, like in the shopping cart example above. But it rarely happens so, and instead even the business logic starts seeping into the front end.
The time when social networks exploded and such extensive DOM manipulation became 'cool' as a result was a time in which the frontend was stuck between Flash and the emerging jQuery/JS mess that some preferred instead of Flash to make websites 'modern/cool'. It was 'professional' for sites and apps to interact with users back in the early internet. Flash was used for it, then it became uncool as the web moved to jQuery, JS etc. Social networks exploded right in the middle of this transition, amplifying this trend. You wanted a 'modern' website that had moving parts, not a plain HTML + simple CSS + JS website, even if it loaded fast. Every widget and form had to be active, interactive and do stuff. Facebook was _all_ the rage in that period, and everyone literally imitated them in everything they did, including tech stack and practices. Then Twitter also amplified the trend. Everything added up, and we ended up with the frontend mess where we tend to shove everything and then complain about complexity...
That's not what modern frameworks are evolving to, though. Svelte doesn't have a "touch everything" approach like React. Interactions are compiled at build time and only the JS needed to change the specific things that aren't static is generated. There is no runtime.
With jQuery, on the other hand, you're bundling a whole bunch of stuff you probably don't need to do things modern vanilla JS is perfectly capable of. And you'll have to reinvent the wheel every time either way.
Sure, your simple approach is probably enough for small stores selling a few items, but it really doesn't scale to all of them. And it makes for a way better development experience to use a modern framework for all but the very simplest of interactions.
There is nothing wrong with parseInt(document.querySelector(".item > .item__quantity").textContent), except for it not being parseInt(document.getElementById("uniqueAutogeneratedId").textContent).
Developers just shouldn't write that kind of fragile code by hand. But there's nothing wrong at all with the code being there.
That is literally fragile code. You contradict yourself. I really want to see any of the people that hate on modern frameworks build any complex web app in a reasonable amount of time with the same level of stability as using e.g. SvelteKit.
I'm of two minds. I want to agree with you about well-formed DOM for the browser to render. That's great. Now, do we have to go all the way back to flat files where the whole page has to refresh to update one silly field or selection? No, we don't have to go full cave man for that. We can still use the front end to make changes after the initial load. We don't need an app to be running in each user's browser for the large majority of places where this is happening.
Basically, by those standards, nginx is the most popular server side renderer ATM. It can beautifully render HTML and pretty much any file format. It can even render video files, and with NGINX Plus you get a bit more server side rendering for video files too.
Apache used to be a good server side renderer too but those were the old days.
You can do both POST and GET. That is really all you need to make anything work, unless you are doing spyware or graphical applications such as maps and whatnot.
Yeah, but the point of "server-side rendering" is that you can just fill in the dynamic values server-side and serve plain HTML instead of needing a bunch of JavaScript and DOM manipulation.
Point certainly taken, but I think that "rendering" is the overloaded term.
Rendering basically means to take data & logic and transform it into a view for another system (or person).
Graphical rendering is probably the needed operative word for this point? A bit of annoying semantics, but I think rendering just means to provide a structured view of some state.
Yeah, people demand this overkill without understanding what they are demanding. Everyone seems to use React, so we must also use React; then the site no longer works for mobile, so then you also need React Native. All when you can use vanilla JS to do the small bits needed for a PWA from one simple codebase.
I sure was when I had to do front end work. Finally got out of anything front-end for good and it's probably been the single most pleasant change in my career ever. I didn't start out doing front end work though, so I could see while I was doing it how ridiculous it was compared to almost any other domain in software dev and only getting worse. A good portion of front end devs I meet have not done anything else so they don't have a point of reference.
And the future beyond that will be client-side rendering. In the beginning everything was rendered on the mainframe; then CICS allowed partial screen updates and even dynamic green screen design. Then the early web where everything was server which made the job of web indexing much easier. Then we moved back to rich client apps -- applets, flash, eventually SPAs -- with no way for search engines to easily index things. A best of all worlds scenario is a rich UI that only needs to make API calls to update the display, keeping performance fast and content flicker-free (and the server-side API could have an agreed upon standard for being indexed -- or submitting updates for indexing -- to search engines).
There is no truly perfect scheme, only ways in which we think we can improve on the status quo by swinging the pendulum back and forth.
The client-server wheel of life just keeps turning, and turning, and turning. It's an eternal human truth: each generation yearns to improve on the previous generation's efforts.
This server-client zeal to improve has been tremendously productive of good ideas over the last few decades. It will continue. Hopefully saving power and CO2 can be the focus of the next couple of turns of the great wheel.
Don't know why this comment was downvoted - it's the truest take here I can spot. The fact is that the factors that make one versus the other more preferable (the state and quality of frontend / backend tooling and environments, compute power and rendering capabilities of servers vs clients, round trip time cost vs responsiveness requirements etc etc) are continually changing over time and that's what's causing the back and forth swing, but lessons from previous iterations are generally learned.
I wouldn't be shocked if we sooner or later saw language-level support (think of something like Elm, improved) for writing "just" code and then marking up which parts execute where, with the communication and state-synchronization crud and the compiling down to the native language just handled for you.
Googlebot has been able to index SPAs since 2019. They use a Headless Chrome instance and allow a number of seconds for things to render after each interaction.
With the caveat that server-generated HTML is indexed immediately, while pages that need client-side rendering get put into a render queue that takes Google a while to get to (days?).
That's why you write down your use case for every project. Have a news site which needs to be indexed by Google immediately? SSR.
Have some Jira or whatever? CSR.
Most CSR applications are behind a login wall anyway. Thinking of the core applications of services like WhatsApp, Discord, Gmail, Dropbox, Google Docs etc.
Bottom line, whether SSR really being “the future”: “it depends”.
Hence you don't build documents with SPAs, they are meant for applications. And usually you don't care about indexing the inside of applications, only the landing pages and such, which are documents (should not be a part of the SPA).
A blog built as a SPA? Sucks. A blog built as a collection of documents? Awesome.
I would have thought they could spin up headless Chrome instances to simply pull down, render, and then index websites. Apparently this is too resource intensive for them? I'm sure the idea has come up (there's no way I thought of this and they didn't).
You'd think right? There must be other reasons then... how does Google benefit from not building better SPA crawling infrastructure? It's certainly gotten _better_ over the last few years, but still seems lacking.
In theory, the "modern" frontend frameworks could be useful for a subset of applications. In practice, they are wildly overused, largely (IMHO) because front-end developers have forgotten how to build without them.
If I gave this as an example, people would say I'm being unfair to the front-end folks. But since Deno posted it, I think it's fair to say that it's overkill to use a front-end framework like React (mentioned as a comparator in TFA) to implement add-to-cart functionality on an e-commerce site. And that for users with slow browsers, slow/spotty Internet, etc., an architecture that uses a heavy front-end framework produces a worse overall experience than what most e-commerce sites were able to do in 1999.
Edit: IMHO all of this is an artifact of mobile taking a front seat to the Web. So we end up with less-than-optimal Web experiences due to overuse of front-end JS everywhere; otherwise shops would have to build separate backends for mobile and Web. This, because an optimal Web backend tends to produce display-ready HTML instead of JSON for a browser-based client application to prepare for display. Directly reusing a mobile backend for Web browsers is suboptimal for most sites.
> In practice, they are wildly overused, largely (IMHO) because front-end developers have forgotten how to build without them.
I've been a "back-end" developer who sometimes does "front-end" stuff for a long time. Both with web tech going back to classic asp, web-forms and those Java beans for JSF or whatever it was called, and, with various gui-tools for C#, Java and Python, and I think one of the reasons people use the "front-end" tools you're talking about in 2023 is because all those other tools really sucked.
I guess NextJS can also do server side rendering, but even when you just use it for React (with TypeScript and organisation-wide linting and formatting rules that can't be circumvented) it's just sooooo much easier than what came before it.
Really, can you think of a nice non-web application? Maybe it's because I've mostly worked in Enterprise organisations, but my oh my am I happy that I didn't have to work with any of the things people who aren't in digitalisation have to put up with. I think Excel is about the only non-web-based application that I've ever seen score well when they've been ranked. So there is also that to keep in mind.
> And that for users with slow browsers, slow/spotty Internet, etc., an architecture that uses a heavy front-end framework produces a worse overall experience than what most e-commerce sites were able to do in 1999.
I think this is heavily dependent on company focus (and to some extent - the data requirements of the experience)
Basically - I think you can create a much stronger, more compelling experience on a site for a person with a bad/slow connection with judicious usage of service workers and a solid front-end framework.
But on the flip side... Making that experience isn't trivial, requires up front planning, and most companies won't do it.
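For what it's worth, the service-worker half of that doesn't have to be big. A minimal cache-first sketch (cache name, file names and the "sw.ts" path are placeholders; event types are left loose for brevity):

```ts
// sw.ts - registered from the page with navigator.serviceWorker.register("/sw.js").
const CACHE = "app-shell-v1";
const SHELL = ["/", "/app.js", "/app.css"];

// Pre-cache the application shell when the worker is installed.
self.addEventListener("install", (event: any) => {
  event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(SHELL)));
});

// Answer requests from the cache first; fall back to the network on a miss.
self.addEventListener("fetch", (event: any) => {
  event.respondWith(
    caches.match(event.request).then((cached) => cached ?? fetch(event.request)),
  );
});
```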
IMO the big value add from React and friends is all of your rendering logic is in the same language and the same code base. I do not want to go back to templated HTML from Ruby/Java/PHP/whatever combined with ad hoc JS to handle whatever parts need to be dynamic. If you know your UI can be almost completely static (like with HN) then the trade-off from the old way is acceptable. But if you don't know where your site's going to go because you're a startup then it's hard to buy into old school SSR. NextJS, when done right, can be an acceptable 3rd option.
I started my webdev time with Django and Flask, switched to Spring Boot at the next job with various templating languages depending on the artefact, and some Laravel sprinkled in.
Finally, the employer decreed that moving forward all frontends had to be done in Angular (version 6 or 7 at that time) and I have to say... I don't understand the point you're trying to make.
The frontend stacks aren't particularly more complex than the equivalent application done with HTML templates and varying ways to update the DOM.
Personally I'd say they're easier, which is why UX also started to demand state changes to be animated, requests to be automatically retried, and every potential error scenario to be handled - things that were never even attempted with pure backend websites.
Nowadays I prefer using TypeScript for anything HTML-related and would not use backend templates unless the website is not going to be interactive.
> I don't understand the point you're trying to make.
> The frontend stacks aren't particularly more complex
I'm not making a point about programmer experience at all. I'm saying that for most uses of most sites, the fact that Angular (or similar) is running in the user's browser is making the user experience worse. Performance is worse, accessibility can be worse, and so forth. And (again, for most uses of most sites) there is no benefit to the end user.
Consider the blogs, brochureware sites, landing pages, and e-commerce product pages that absolutely don't need something like Angular that today nonetheless do include it. Most Web apps are much closer to those than to Google Earth, Facebook, or Spotify's Web player.
Correct. The design tradeoff is dependent on knowing how much of a Lisp interpreter you need to build. For most sites, the answer is "none" and it's not worth degrading user experiences just in case your e-commerce site ends up needing the ability to also serve as a designer for Minecraft levels.
(Even if it does, there is no requirement to ship the heavy JS needed for the Minecraft editor to all the e-commerce product description pages.)
"The future of the Web is what suits our business model" /s
But in all seriousness, the web has websites, it has apps, it has games. Pick a tool that's appropriate for the job and forget about what is the past/present/future.
The rise of metaframeworks is interesting because it brings nuance to this. The line between site and app can be blurry.
For example, my app has a main screen that needs to be client rendered. It also has a user settings screen that could be implemented as a traditional server rendered page with no JavaScript, except it's a lot more practical to build everything inside the same project and technology. Apps and their marketing pages are often put on different subdomains for the same reason.
Metaframeworks that blend rendering modes help users get a lighter page load where appropriate, with less developer effort.
'Metaframework' is a term for frameworks that wrap React or Vue or similar: Next.js, Nuxt, Gatsby, etc. I think Astro is considered a metaframework too.
They're sometimes called stuff like "a React framework", depending on whether the speaker considers React a library or a framework.
The thing that went wrong with front-end frameworks, imho, was that instead of delivering what was promised - updating UI elements with NO NEED to contact the server at all, only posting back when something needed persisting - they became an excuse for every action on the front-end to call an API or three. So we've ended up with over-complicated apps that, far from not relying on the backend, rely on it more than ever.
Any little glitch, slowdown or unavailability affects you not only once on page load but potentially with every single interaction. To make it worse, a lot of backend interactions are not made interactively or synchronously, where the user might expect to wait a little while; they are made in the background, causing all manner of edge cases that make apps somewhere from very slow to virtually unusable.
I guess it's that old adage that people will make use of whatever you offer them, even if they go too far.
I'm always amused to hear web types speak of grinding HTML, CSS, and JavaScript down to somewhat simpler HTML, CSS, and JavaScript as "rendering". Rendering, to graphics people, is when you make pixels.
It's consistent with the use of 'render' or 'paint' to describe what a UI component does to, well, render itself. For most UI systems this has involved higher level APIs than directly pushing pixels for a long time.
It's obviously nonsense. The lowest latency cache and state storage is clientside. You can piss around with multi regions and SSR to minimize latency but that's just placing a lot of regional caches near your users. The nearest place is in their actual browser -> offline first is the future
Being dependent on a reliable internet connection sucks, especially when travelling. SSR just won't work for mobile.
With offline first, the client is the lowest latency server possible.
Yes you should sync too. Offline first, not offline only
It is ridiculous. It's pretty much newspeak. Like calling installing applications "sideloading" when you're not using some megacorp's walled garden. Also, I'd say "HTML" not "HTTP". What's HTTP(/3) these days is not what HTTP(1.1) was in the past.
Depends on what exactly it is. If you, for example, take a React app that was doing rendering on the user's side and change it so that it is "pre-rendered" on the server, it makes sense to call it server side rendering.
Can someone explain to me: Deno is becoming such a confusing framework. Initially a NodeJS alternative, now it seems to me that it is trying to compete with NextJS?
It's not trying to compete with Next, but advertising how Deno's similarity to the browser, and its ability to run on CDN-like networks (which I refuse to call "the e*ge"), can let you build a better version of Next's features yourself.
> Performance is higher with the server because the HTML is already generated and ready to be displayed when the page is loaded.
but the page is loaded later because you have to wait for the server to perform this work. There is no reduction in total work, probably an absolute increase because some logic is duplicated. If there is a speed improvement it is because the server has more clock cycles available than the client, but this is not always true.
> Complexity is lower because the server does most of the work of generating the HTML so can often be implemented with a simpler and smaller codebase.
Huh? It takes less code to build a string in a datacenter than it does in a browser?
> but the page is loaded later because you have to wait for the server to perform this work. There is no reduction in total work
Removing or shortening round trips absolutely removes work. Sending you a page, letting you parse the JavaScript, executing it to find out the calls to make, sending that to the API, the API decoding it and pulling from the database, rendering the JSON and returning that, you parsing the JSON, executing the JavaScript and modifying the DOM
vs.
Pulling the data and rendering the HTML on the server, then sending it to you to render.
Yes, reducing round trips is very important for web performance. It can be done via a server-side architecture where external resources are sent immediately as prefetch headers, then the page is generated and sent after database calls etc. are made. Or via a client-side architecture where API calls needed for the initial render are either sent via prefetch headers, or included inline in the HTML response.
If you don't need page interactivity then a pure server-side approach works best because you do not need to send, parse, or execute any page logic. For highly interactive pages you tend to need all the logic to rerender each component on the frontend anyway, so client-side rendering makes sense as a simpler approach without significant performance costs. Isomorphic approaches are more complex and brittle, they tend to hurt time to full page interactivity because of duplicated work, but can be needed for SEO. Reducing overall page weight and complexity and lazy-loading where possible, and getting rid of the damn tracking pixels and assorted third-party gunk, are often more effective directions for optimization than worrying about where HTML is generated.
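A minimal sketch of the "include it inline in the HTML response" option mentioned above; the __INITIAL_DATA__ global and the data shape are just a convention for this example, not any framework's API:

```ts
// Server side: embed the data the first render needs directly in the HTML response,
// so the client doesn't pay an extra round trip before it can show anything.
function renderPage(products: Array<{ id: string; name: string }>): string {
  // Escape "<" so user data can't break out of the <script> tag.
  const payload = JSON.stringify(products).replace(/</g, "\\u003c");
  return `<!doctype html>
<html>
  <body>
    <div id="app"></div>
    <script>window.__INITIAL_DATA__ = ${payload};</script>
    <script src="/app.js" defer></script>
  </body>
</html>`;
}

// Client side (in app.js): hydrate from the inlined payload instead of calling the API.
// const initial = (globalThis as { __INITIAL_DATA__?: unknown }).__INITIAL_DATA__ ?? [];
```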
> There is no reduction in total work, probably an absolute increase because some logic is duplicated
The server is either building a JSON (or some other message format) response, or, it could just build the relevant HTML fragment. In many cases, there is no real increase in actual work on the server.
Conversely, the client side doesn't need to parse JSON and convert it to a DOM fragment.
There's solid reasons for both approaches, depending upon the context.
This does not 100% track with observed client-side performance. Another poster mentioned caching, which obviously reduces total work. I would also add shifting the work via pre-computation as another commonplace way to improve performance.
> It takes less code to build a string in a datacenter than it does in a browser?
The string build in a datacenter might be happening in a warmed-up JIT of some language, on a machine with enough capacity to do this effectively. By contrast, the browser is possibly the slowest CPU under the most outside constraints (throttling due to power, low RAM, multitasking, etc.). It is generally going to be better to do the work in the datacenter if possible.
>but the page is loaded later because you have to wait for the server to perform this work.
Client-side rendering isn't immune to this. The server APIs they hit have to render the response in JSON after hitting the same kinds of backend resources (e.g. DB).
Caching also works for client side rendering of course (you can usually cache the entire client side app so that the browser doesn't have to hit the network at all to start running client side code).
> you can usually cache the entire client side app so that the browser doesn't have to hit the network at all to start running client side code
This is also true for Web apps that do not have meaningful amounts of client-side code.
> Caching also works for client side rendering
There are obviously a lot of differences in how caching works, but client-side caching is generally strictly worse than doing so on the server. Using the e-commerce example in TFA, every browser has to maintain their own cache of the product information, which may include cache-busting things like prices, promotional blurbs, etc.
The server can maintain a single cache for all users, and can pre-warm the cache before any users ever see the page. With fragment caching added, which allows parts of a page to be cached, a server-side caching strategy will typically result in less total work being done across the user population, as well as less work done at request time for each visitor.
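A toy illustration of that kind of shared fragment cache (single-process, in-memory; the key format, TTL and renderProductBlurb are all made up):

```ts
// Tiny in-process fragment cache: one rendered fragment serves every visitor
// until its TTL expires, instead of each browser re-deriving it.
const fragments = new Map<string, { html: string; expires: number }>();

async function cachedFragment(
  key: string,
  ttlMs: number,
  render: () => Promise<string>,
): Promise<string> {
  const hit = fragments.get(key);
  if (hit && hit.expires > Date.now()) return hit.html;
  const html = await render();
  fragments.set(key, { html, expires: Date.now() + ttlMs });
  return html;
}

// Usage: the product blurb is rendered at most once per minute across all users,
// while per-user parts of the page are rendered outside the cached fragment.
// const blurb = await cachedFragment(`product:${id}`, 60_000, () => renderProductBlurb(id));
```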
As with SSR vs CSR in general, I think which is best depends on how much interactivity there is on the page. And also how much can be done entirely on the client side (it is possible to cache data client side too and make the app work entirely offline).
As an extreme example, something like https://www.photopea.com/ would be a nightmare to use if it was server-side rendered. Or consider something like Google Maps. For things like ecommerce that are mainly focussed on presenting information I agree that client side rendering doesn't make a whole lot of sense. But that isn't the whole web.
Yes, and also how much interactivity is better served by a thick browser-based client than by a round-trip to the datacenter. In practice, many Web applications we encounter daily have relatively low interactivity (where something like Google Maps or the Spotify Web player score as "high"). And then they are implemented using thick frameworks that are frequently slower than a round-trip to a server for re-rendering the entire page was even as far back as 10 or 20 years ago.
Your extreme examples, plus applications like Figma, are absolutely places where I would expect to see thick client-side Javascript. However, most Web applications that we encounter frequently are more like e-commerce, blogs, recipe websites, brochureware sites, landing pages and the like that absolutely are primarily about presenting information. Using thick browser clients is a sub-optimization for most of those Web uses.
> However, most Web applications that we encounter frequently are more like e-commerce, blogs, recipe websites, brochureware sites, landing pages and the like that absolutely are primarily about presenting information. Using thick browser clients is a sub-optimization for most of those Web uses.
I mean, sure (although I'd probably make a distinction in the terminology and call those websites as opposed to web applications). I don't see many of those kinds of websites using client side rendering though. I think the grey area is sites like Gmail which do have quite a bit of interactivity but would also be workable with SSR. Personally I think they're generally better using CSR. If done badly, as the current Gmail is, then it makes things slow, but if done well (like the older Gmail!) then it's faster.
The "modern" state of the web. I miss old school html with little to no javascript. It is all java in the browser all over again. Or flash. Same old same old. Very few websites need any of this stuff. It is just a bunch of junior devs wishing they worked for FB I guess ergo them guzzling react like there is no tomorrow.
Depends what you're building. If it's a dashboard app gated behind a user login, sure, have it be a static HTML file. SEO is irrelevant and you wouldn't be server rendering anything anyway.
If it's a public site and you want people to find it (ie SEO) you really should be server rendering and caching on a CDN.
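e.g. by sending cache headers the CDN can act on; a hedged sketch, where renderPage() stands in for whatever produces the server-rendered HTML and the header values are arbitrary:

```ts
// Stand-in for whatever actually renders the page.
async function renderPage(url: URL): Promise<string> {
  return `<!doctype html><h1>Rendered ${url.pathname}</h1>`;
}

// Serve SSR pages with headers a shared cache can use: s-maxage applies to the CDN,
// and stale-while-revalidate lets it serve a slightly old copy while it refreshes.
export async function handler(req: Request): Promise<Response> {
  const html = await renderPage(new URL(req.url));
  return new Response(html, {
    headers: {
      "content-type": "text/html; charset=utf-8",
      "cache-control": "public, s-maxage=300, stale-while-revalidate=600",
    },
  });
}
```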
HTML5 can do audio and video by itself, calling native decoders on the host such as FFmpeg, and yet people kept choosing crawling JS players. It's idiotic.
"Server-side rendering" is destined to rule the future purely because of control. In the future consumer devices will be simplified, much more streamlined, and completely locked down. They will be used for the single purpose of displaying streamed, pre-packaged, pre-layed-out content from servers.