
I don't get client side navigation. It's a worse experience in every way. It's slow, often doesn't support things like command-click, it usually breaks the back button, and even if it doesn't it breaks the restoration of the scroll position.

The only thing worse is a custom scroll UI.

Why do people try to reinvent the most basic features of a web browser? And if they do, why do they always only do a half-assed job at it?

It's infuriating.




Because it's faster. If you don't have to download all the content again and force the browser to re-render everything, then by design you get just the new content from the server, faster, if you have to download anything at all. The idea has been around since the introduction of AJAX.

Furthermore, you don't lose state, which makes things much more simple.

Imagine a simple image gallery. You just update the <img> tag, update the URL with the history API, and everybody is happy. If you were to navigate via links, you'd get the same behavior.
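
A minimal sketch of that gallery pattern (the #gallery selector and the /gallery/ URL scheme are made up for illustration):

    // Swap the image and record the step in browser history.
    function showImage(src) {
      document.querySelector('#gallery img').src = src;
      history.pushState({ src }, '', '/gallery/' + encodeURIComponent(src));
    }

    // Make back/forward restore the previous image instead of reloading.
    window.addEventListener('popstate', (event) => {
      if (event.state && event.state.src) {
        document.querySelector('#gallery img').src = event.state.src;
      }
    });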

Of course shitty implementations exist, and you only notice the bad ones. If done right, you don't notice that it's happening at all.

Lastly, and most importantly, context matters! It's not a silver bullet, but it can be really useful.


> Because it's faster.

The whole point of the article is that it's not true. It's not the only article that disproves it, and honestly, it's not difficult to notice. Just go to any blog running off a static site generator; loading times of pure HTML webpages on a good connection are so fast that the whole thing outruns client-side page switches even if the page is already in memory.


> The whole point of the article is that it's not true.

Only for the narrow condition of loading a whole page. A lot of web apps make async requests much smaller than loading the whole page: imagine deleting one record in a list of 50 items; the async response could be the HTTP 200 header alone.
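
A rough sketch of that record-deletion case (the /items endpoint and the item element ids are hypothetical):

    // Delete one record; the server can reply with an empty 200/204,
    // and only the corresponding row is removed from the DOM.
    async function deleteItem(id) {
      const response = await fetch('/items/' + id, { method: 'DELETE' });
      if (response.ok) {
        document.getElementById('item-' + id).remove();
      }
    }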

I think it would be useful for developers to separate web applications from web sites and consciously make trade-offs (such as state management: for the projects I have worked on, adding "Undo" logic was far simpler with client-side state management. YMMV).


Web apps making async requests should be making much smaller requests than loading the whole page - but they're not. They're loading a page and parsing out the "less than the page" bit to stick into the current page; or they're loading a JSON blob bigger than the current page to update the three visible elements on the page.

Imagine Software Engineers that actually engineered solutions - the use of small async requests would be a boon to everyone! But no one's doing this.

I know firsthand of one company doing this nonsense with requesting a full JSON payload (describing the whole house that goes along with the proverbial kitchen sink), rather than requesting updates for the one property on the user's screen. I've proxy-sniffed at least two other, unrelated companies doing exactly the same thing.


> They're loading a page and parsing out the "less than the page" bit to stick into the current page; or they're loading a JSON blob bigger than the current page to update the three visible elements on the page.

This is... disgusting. I was speaking of applications I have personally worked on; the teams/organizations I have worked with were concerned about performance (and measured it with regression testing). I suspect the oversized responses and cherry-picking are a result of the back end and client being the responsibilities of independent teams, with the client team told "use this pre-existing kitchen-sink endpoint".


It's certainly possible to write good client-side code. I've worked on teams that really care and are given room to do it (because they were able to demonstrate its worth after so much effort), and on teams that don't.

JavaScript, as she is spoke, is just an awful language. It has brilliant ideas and if applied correctly could make everyone’s lives better but that’s just not how it’s used.


So your argument is that many teams don't care and do a bad job and it is the fault, somehow, of the language they use?


It's a powerful argument. Defending client-side rendering here (or SPAs) is almost no-true-Scotsmanish. It is technically possible to do a good job, but it's almost never done. Your team and sangnoir's may care about performance and do actual software engineering, but that doesn't help me much when my bank doesn't do it, the places I shop don't do it, big sites like Reddit don't do it, and seemingly none of the SPAs I've visited in the past 5 years do it.


Have you used the bank's mobile applications, and are they fast? I suspect you're right to say there's a cultural issue, but it's not with the language; it's with the organization building on the platform. If they don't care about the user experience on the web, they wouldn't care about it on a native desktop or mobile application either.


Yeah, but how is this the fault of the language?

There's nothing inherent in JS to say "you must do a shitty job of optimising your page speed"

I'm working on replacing a PHP app that is currently taking 20 minutes to refresh the index page because their SQL doesn't scale. Is that the fault of PHP, SQL, or the developers who wrote it?

You can write crap code in any language (even Rust!).


It's probably not the fault of the language per se, just the culture surrounding the use of that language.

It could be argued that the same language, given another chance, would produce a similar culture, though I'm not 100% convinced of that. Anyways, what we need is a reboot of web development culture.


Maybe it's a fault of Sturgeon's law. I sort of wonder if the "necessity" to have so much web output, so many applications, so much new development, creates a situation where there is pressure to make 95% of everything crap: you need a lot of developers to make things, and some of those developers are going to be crap; you need to make lots of decisions, and some of those decisions are going to be crap; and you need to make a lot of changes in short periods of time, and that results in a lot of crap.

It just seems more likely to me than any culture about a language per se.


Might be, but I wouldn't discount culture as a mechanism reinforcing it. People aren't working in isolation; they build on each other, and enshrine "best practices" that are often enough the sources of these problems.

But thinking of it, Sturgeon's law may be at play. PHP used to suffer from a similar reputation to JavaScript, and only started regaining its status as a proper server choice once the masses moved to greener pastures. Sure, the language was a "fractal of bad design" and had footguns galore, but it wasn't that bad, and most of the traps were avoidable when you had half a brain and used it. The web may very well be crap because it's where anyone fresh to programming can find a high-paying job, and you can become a "senior engineer" after one year of job experience.

But that, still, is a problem. Outside of programming, there are quality standards on the market - often enforced by governments. Even if 90% of chairs are crap, you can't go and sell that crap to the public. Quality standards filter most of the crap out.

If that's the case, I'm not sure what to do. Introducing quality regulations to programming might help solve the problem of website bloat and constant leaks of private data, but it would also destroy the best thing about the web and programming in general - if you have an idea and a computer, you can make it and show it off to everyone.


PHP is considered a proper server choice these days? When did that happen? It's been a long time since I encountered a new project being written in PHP.


I don't do PHP, but I assume any resurgence would have something to do with Laravel.


> the use of small async requests would be a boon to everyone! But no one's doing this.

Some of us are! RFC8620 is purpose-designed for building network efficient APIs: https://tools.ietf.org/html/rfc8620
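
For the curious: RFC 8620 (the JMAP core protocol) batches small, targeted method calls into one request. A rough sketch of a call from the browser might look like this (the /jmap/api path is made up; Core/echo is the protocol's simplest method):

    // Core/echo just asks the server to echo the arguments back,
    // keyed by the client-chosen call id "c1".
    fetch('/jmap/api', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        using: ['urn:ietf:params:jmap:core'],
        methodCalls: [['Core/echo', { hello: true }, 'c1']],
      }),
    })
      .then((res) => res.json())
      .then(({ methodResponses }) => console.log(methodResponses));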


I apologize for the hyperbolic “no one.” I was rushing...


GraphQL was made to fix that issue.


GraphQL's nice if you want a third option to the choice between many bespoke endpoints or few generic endpoints, but if your problem is sending a list of 400 widgets with every single page load, then you have an easier and better way to increase performance sitting right in front of you.
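
For example, a GraphQL query lets the client ask for only the fields it will actually render (the widgets field and /graphql endpoint here are hypothetical):

    // Request just the fields the page displays, instead of a
    // kitchen-sink payload describing every widget in full.
    const query = '{ widgets(first: 20) { id name price } }';
    fetch('/graphql', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ query }),
    })
      .then((res) => res.json())
      .then(({ data }) => console.log(data.widgets));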


When I want to read an article, I don't need a "web application". I want a static page with the text and the pictures. That's it. Not a single line of JavaScript. And certainly absolutely no image lazy-loading nonsense. If my data is that limited, I'd disable images myself in my browser settings, thank you very much.

What I'm trying to say is that most of the web isn't really that interactive. It's mostly comprised of content that is static in its nature.


> If my data is that limited, I'd disable images myself in my browser settings, thank you very much.

Yeah, because most visitors to most websites can and know how to disable images in browser settings.

In fact, let me know how to do this in iOS Safari. Before you tell me to throw my iPhone and iPad in the trash, get an Android, root the damn phone, install F-droid, compile Chromium with patches, or write my own operating system, now is a good time to stop.

I seriously hope this sort of condescending “elitist” bullshit dies off.


> Yeah, because most visitors to most websites can and know how to disable images in browser settings.

Maybe they would, if today's UX culture weren't removing every feature that isn't used on every interaction, and if devs weren't building increasingly complex reimplementations of browser features on each page.

In an ideal world, lazy loading of images would be something handled purely by the browsers, and users would be aware how to operate it. The site's job is to declare what it wants to show; the User Agent's job is to decide what to show, when and how. But nah, the web culture prefers to turn browsers into TVs.
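
For what it's worth, browsers do expose a native hook for exactly this; a sketch (the image path is made up):

    <!-- The browser decides when to fetch, based on scroll position and
         connection quality; no script required. -->
    <img src="/photos/1234.jpg" loading="lazy" width="800" height="600" alt="A photo">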


> In fact, let me know how to [disable image loading] in iOS Safari

Given any browser, how do I reload a lazy-image that failed to load, without resorting to whole page refresh or diving deep into the Web Inspector?

Most browsers have a context menu option to reload regular images, but they cannot and will never handle a bunch of dynamic block elements with a background-image set.


Any reasonable img lazyloading implementation should produce plain img tags once loaded. Not sure why you would end up with background-image’d block elements.
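
A minimal sketch of such an implementation (the data-src convention is just one common approach, not a standard):

    // Defer loading by keeping the real URL in data-src, then promote it
    // to a plain src once the image scrolls near the viewport.
    const observer = new IntersectionObserver((entries) => {
      for (const entry of entries) {
        if (entry.isIntersecting) {
          const img = entry.target;
          img.src = img.dataset.src;   // from here on it's an ordinary <img>
          observer.unobserve(img);
        }
      }
    });
    document.querySelectorAll('img[data-src]').forEach((img) => observer.observe(img));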


"once loaded" is the key. If it fails to load, then bummer, no img tag.

I'm also not sure about the background-image'd block elements, for what it's worth.


If it failed to load, it would leave behind an img tag that failed to load, like any non-lazy-loaded image. Unless you’re talking about JavaScript code failing to generate an img tag (e.g. from a data-src attribute), which would be bizarre.

Edit: by “once loaded” I meant once loading is triggered, if I wasn’t clear enough.


>It's mostly comprised of content that is static in its nature

Yes, but the readers of the content are not the only customers the site is built for. The people writing and editing content are also consumers of the site. The article isn't (and doesn't have to be) interactive, but the CMS on the backend is. The advertiser's portal is. The reporting dashboards are.

Just like how Ruby on Rails is slow but we deal with it because it makes programming so much faster, dynamic websites are slow but we deal with it because it makes their administration so much faster.


The authoring tools and the published article don't have to (and I'd argue shouldn't) be joined at the hip like that, though. Obviously the editing tools benefit from being JS heavy. That doesn't justify polluting the published article itself with JS (unless the readers are editing the article themselves? Even then, though; Wikipedia seems to get by just fine without trying to replace half my browser with shitty JavaScript code).

> Just like how Ruby on Rails is slow but we deal with it

Not all of us :)


In pretty much the same time it takes for the HTTP 200 to arrive I can also download a few kilobytes of HTML. Latency will still dominate.


> Only for the narrow condition of loading a whole page

It is an interesting future we live in, is it the best one?


Case in point, navigation on my site is pretty fast (to me, at least) and doesn't use much JS at all: https://www.stavros.io

(I promise I'll reply to your email soon)


I'll admit it is pretty fast, assuming you are using a mouse. But keyboard navigation is non-existent. So you have to ask yourself: is it worth it to go against decades of effort put into standard web navigation, and for what gain? Obviously only you can answer that for your blog. I'm not having a go.

But I will give you credit for the fact that it does work with Lynx!


By "keyboard" I assume you mean TAB-key-based navigation (I don't know of any other built into browsers)? If so, it looks to me like the links are in fact TAB stops, but they're not being highlighted. It's something that should be solvable with a CSS adjustment.


If the links are not highlighted by default, then tab navigation is basically non-existent since I cannot see where I'd be redirected and I personally bother to write custom CSS only for the websites I visit very often.

Is there an actual reason to disable highlighting? It lowers usability and accessibility, but I'm not sure what you get in return?


Oops, I'll fix that, thank you. I'm assuming the designer thought it looked "better".

Also I use Vimium so I never noticed, since that mode of navigation is much faster.


> Is there an actual reason to disable highlighting?

Some browsers, Chrome especially, show the focus outline when elements are clicked with a mouse and some people think it looks unacceptably bad.

Focus-visible is a CSS pseudo-class (:focus-visible) meant to solve that, but it's only supported in Firefox and requires browser heuristics to do the right thing.

https://caniuse.com/#feat=css-focus-visible

https://css-tricks.com/keyboard-only-focus-styles/
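
Roughly, the pattern (assuming browser support or a polyfill) looks like this:

    /* Hide the outline for mouse clicks, keep it for keyboard focus. */
    a:focus:not(:focus-visible) {
      outline: none;
    }
    a:focus-visible {
      outline: 2px solid currentColor;
    }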


...also known as "directional navigation" — widely used by screen readers and browsers on Android TV or some Android Auto devices.


Yeah, but you force me to stare at blank spaces while your webfont downloads [0]. I guess I should be thankful you’re not using that font for the body text, too (most sites do!).

[0]: https://imgur.com/a/FAX6BDW


I just set browser.display.use_document_fonts to 0 in about:config.


That's conflating two different issues


It's the same meta-problem, though: modern webdev practice involves adding gimmicks for no good reason that introduce extra complexity and resource use, and then adding even more complexity and resource use trying to fix all the expected browser behavior and features that the initial gimmick broke... and doing it poorly.


I hate all these gimmicks you are talking about (slow to load fonts, unnecessary videos, fucked up scrolling, spinners everywhere, etc.).

That being said I challenge the assertion that they are used for no good reason.

- Custom fonts are used because they help shape a brand. Properly used, the choice of fonts communicates a lot (at a subconscious level) about the company or person behind it.

- Unnecessary videos unfortunately work (i.e., help grab and retain user attention). Not for me (quite the contrary!), but for the majority of people stumbling upon a company's website.

- Same for weird gimmicks involving animations and whatnot. I can feel them draining my battery and it physically hurts, but most people like them and take away the impression that "this is a modern person/company".

All in all, many websites' primary goal is not to communicate factual information, but to capture user attention and/or communicate at a subconscious level, and gimmicky things work for that purpose :S


This idea that users need to see all text on your website in a particular font face (which is usually just a poorly packaged riff on a famous font, with minor changes that will largely go unnoticed by the unwashed masses) in order to market your product is absolute BS. Aside from a very few iconic font associations (e.g. IBM), there's no actual evidence that it works.

I fully support using a custom font for your visual assets - that's what SVG with text exported as curves/outlines was created for. But why should I use your horribly hinted, terribly rendered, absolutely illegible webfont (and have to download it to boot) just to read the copy on your website? Why should anyone?

Look at Apple. Despite what I'm sure their design team tells them, even they don't have an iconic font. They've bounced around between Helvetica, Myriad, Lucida, and a half-dozen other sans serif fonts that share certain design traits (which people do identify and associate in general), yet each time they introduce a new font they update their website to trigger your browser to download the webfont to render the page. It's a pointless exercise in the name of job security.

Companies have had websites going back 30 years. Web fonts have existed for a long time. This trend of each company having to pay tens of thousands to commission an unrecognizable, indistinguishable typeface that all text on their website must appear in is a brand new phenomenon, and there's zero proof it does anything besides (poorly) accomplish what someone thought was a good idea.


I agree; I cheated by purposefully using both meanings of "good". The reasons you mention I consider bad in an ethical sense, and I believe the world would be better off if sites didn't do it.


About as fast as mine on my underpowered machine, and mine is plain HTML without much attention paid to optimizing it further.

> (I promise I'll reply to your email soon)

(Take as much time as you need; also, I didn't expect a reply over the weekend :).)


It's not as fast as https://dev.to/, which is an SPA. I.e. client side routing.


And it took me 2 minutes clicking around to break its idea of the page state. I am partially scrolled down the home page, and it just decided to deactivate scrollbars and the ability to scroll.

A great example of how it's quite difficult to reimplement stuff that works perfectly well on traditional pages.

(At least they seem to have gotten rid of some of the dark patterns they had in the past, that's nice to see)

EDIT: and within a minute more found another state bug :D

Yes, you can make perfect SPAs, but many people fail, and it's a good question whether the effort required to do it properly is worth it.


I've never found a bug on there, and I've been on it many times.

I'd love if you can show me how to reproduce this bug.

I just don't have this experience with SPAs breaking. I actually have no idea where it's coming from.


I just reproduced it following the commenter's instructions -- clicked the sidebar link, got a popup, pressed the back button, and I have no scroll bar.

I can see what is happening there: the popup removes scrolling (because of the overlay) but the back button doesn't restore it.

This certainly does lend to the conclusion that managing page state in a SPA is not trivial.


It doesn't happen 100 % of the time, but right now going to the homepage, clicking one of the listings in the "newest listings" box, and then returning to the homepage through the browser back button triggered it.


I am able to reproduce this bug with Safari on a Macbook. I clicked on an item in "newest", then quickly pressed Cmd-Left to return to the previous "page". The front page reappears, but I'm unable to scroll with arrow keys or trackpad. An additional press of Esc returns the expected functionality.

It seems to be a fast and responsive site when it works, though.


> It seems to be a fast and responsive site when it works, though.

LOL.

Y'all, I love speed as much as anyone, but your development priorities should be 1) it works and 2) it's fast.

Using HTML links where 1) is never in doubt and all focus can be placed on 2) seems like good engineering to me.

Reinventing browser navigation is like building a rocket. You should be really, really sure that you need to do it before you try.


Ah, the good old “footer at the bottom of an infinitely long page.”

Facebook and Google used to be guilty of this, but it’s been a while since I ran into that particular brand of user-hostile web design.


Addendum - I may have been too harsh here. The page most likely is not infinite. You should be able to scroll through the entire backlog of dev.to content to access the footer.

Seriously though, you can't just bolt infinite scrolling into the middle of an existing page if you have content at the bottom.

If anyone's curious, the footer contains: Home, About, Privacy Policy, Terms of Use, Contact, Code of Conduct, DEV Community copyright 2016 - 2019

They've duplicated most of that (but not the copyright) in the sidebar's "Key Links" box, so it's not as big a problem as I've seen on other sites.

If they hadn't, I wonder about the legal implications of making your privacy policy, terms of use, and copyright notice completely unreachable. And why keep them in the footer if you never leave it on screen long enough to click it? Just a "not my job" issue with whoever implemented the continuous scroll? Clearly someone thought about it long enough to put the links somewhere reachable, but not long enough to get rid of the old ones?


It is fast, but it breaks even faster. I managed to get myself into a locked-up state in some 15 seconds. Couldn't go back, the displayed page was incomplete, but no scrollbars were to be seen.


Good: it detects when you’re offline and displays a fun error page.

Bad: this only works on every other click. Half the time the links just silently fail....


For me, it always displays the "you're offline" page.


A good demonstration of one of the major hazards of reinventing basic functionality like this: it’s really easy to end up with something that behaves differently on different browsers or just for different people.


You just reminded me that slate.com frequently tells me I'm offline after clicking the back button.


How do I access the footer on mobile? It just keeps getting pushed down as soon as it appears.


Scrolling is extremely laggy on a recent mid-range Android phone using Chrome. Almost unusable.


That definitely seems faster, I'll give you that:

https://i.imgur.com/bx1LUZD.png


Scroll, click a story, click a link off the site, and then hit the back button twice. You will incorrectly jump to the top of the homepage.


Sure, I mean all you're doing is basically loading up static content, in which case server-rendered HTML will perform just fine. But you do happen to use client-side logic for your comments section, which also makes sense to me. Your site just isn't really the use case for an SPA.


Scrolling on the hamburger buttons is in my opinion much smoother on your site than on sites that force you to click their links in the page itself. I'll have to check out the source when I get home. It's overall a great UX with a focus on functionality over form, but the form is still nice enough to get the job done!


Your site is great and the perfect usecase for traditional links, but it's literally like 99% content, with very little markup. The difference is more stark on design heavy pages.


... loading times of pure HTML webpages on a good connection are so fast that the whole thing outruns client-side page switches even if the page is already in memory.

Unfortunately for us front end engineers we can't rely on the user having any internet connection let alone a good one. Most of the push behind static site generators is to get as much of the code necessary to display the whole site into your browser as fast as possible, so it's there no matter what happens later. In cases where the user has a fast, robust connection that may well be slower than loading each page on demand, but in cases where the user's connection is slow and flaky (eg on a train), static site generators do work better.

Perhaps the next generation of websites will take your connection into account better. It depends on whether browsers and users will be willing to give up that information, though. As far as I'm concerned, I will use everything I can to improve the user experience on the sites I build.


Well, I can speak from pretty solid experience here. I've travelled the US by train, the length of the UK by train, and large swathes of Europe by train.

The sites that tend to be the best to use are the ones that don't take much to load. The connection tends to be spotty; you get bouts of "some" data and then you're dry again for a while. If you can squeeze a page load in there, it's infinitely better than a half-opened page.

Your first page load is -incredibly- important here. It’s the difference between a usable site and an unusable one.

The sites that work the best are the ones that do not try to do very much fancy stuff, because that fancy stuff only half-loads most of the time; leaving you to keep refreshing and hoping the spotty connection finally lets you bring in that 2KiB that will allow the page to actually load.


Your first page load is -incredibly- important here. It’s the difference between a usable site and an unusable one.

This is the point I was making. If the server can send the user enough data on the first load to make the whole app/site usable then the user won't need to wait for the network if they're in a tunnel. They've already got the necessary resources (which shouldn't be everything, just what's necessary). In that scenario client side routing beats server side completely because server side rendering just doesn't work when the user doesn't have a network connection.

That said, it's wasteful and entirely unnecessary if the user has a good connection. Really, websites should have a good mechanism for testing the connection. The Network Information API doesn't have particularly good cross-browser support and it isn't especially reliable yet.


If. There are two failure points here, both of which are so frequent that I can't even recall seeing an exception.

One, your first page load tries to fetch a full page, instead of just some JS that bootstraps loading the rest of the page (which would let that first load finish before execution starts). Better yet, it should load the absolute minimum of a JS site kernel. Then the first load is likely to succeed on a slow/spotty connection, and we can move on to problem #2 below. This isn't being done correctly on most of the sites I visit, for some reason; either the first page is downloaded in full, or the "skeleton" of the UI is the piece that always takes the longest to load.

Two, loading UX. You have a loaded UI skeleton with boxes that need to be filled via further requests. Or I've clicked on something and a subsection of the site needs to be refreshed. What happens is either nothing, except the SPA getting unresponsive, or I get the dreaded spinners everywhere. If the requests succeed, the spinners eventually disappear. If they don't, they don't. Contrast it with the pre-JavaScript style: if something needs reloading, my page is rendered essentially top-to-bottom, complete bits of content popping up as they're loaded; the site is usable in a partial state, and if anything breaks, I get a clear error message.

Can these two problems be solved correctly in client-side rendered code? Yes. Can an SPA be faster than full page loading? Yes. Is it usually? No, because web development is a culture. When a company decides "let's do an SPA" or "let's do client-side rendering", they unfortunately inherit almost all the related dysfunction by default.


I think you're not making the same point as me at all.

I'm going to take the common case of a news article;

Imagine for a second, you're on a train and you have low bandwidth internet, when it works, which is rare. Now you're on hackernews and you've loaded a whole comment thread, you're reading through and someone posts a link to your article.

Now, the article can load with client-side routing, but it will take longer. And depending on the implementation, it might not actually have the whole article.

The page which is pure HTML with minor JavaScript is going to load, in full, and I don't need subsequent requests. And it's guaranteed to be smaller than the one you're over-engineering.


I have a family friend who lives in a part of the US where the only options are dial-up and satellite. He thus uses dial-up.

Without fail, the sites that rely heavily on JS to do page loading end up performing significantly worse (and in fact outright bugging out, and often failing to load entirely) than sites which just send ordinary HTML docs. A disturbingly-high number of the JS-heavy "web apps" out there seem to have little regard for actually handling failures on a sketchy connection.

Your point would make more sense in the context of an Electron app or something with a permanently-locally-cached copy of the site. That would at least give my elderly friend the means to predownload it when he piggybacks off the public wifi when he goes into town.


> Unfortunately for us front end engineers we can't rely on the user having any internet connection let alone a good one.

Surely some web apps need to work offline. But most web pages do not, and I don't want most sites I visit to store a bunch of data on my machine on the off chance that I'll use them offline.

"Offline first" seems really misguided to me as a rallying cry for all things on the web.


The article only tested one site with one browser, hardly a good test. It may be correct anyway, but it is not proof.

It may also be much harder to implement something like a comment section that is fast and correct with only static html and a server backend.


I don't understand why doing things the non-JavaScript way wouldn't be correct. Surely you still need to do all the correctness checks on the server anyway, even if you do some in JS, because client-side validation won't stop other people (or spammers) from sending invalid requests to the server. When I think of a correctness problem, it would be keeping the JS-rendered comment section synchronized with the server-side comment section, which seems harder than making it work with no JS.


> webpages on a good connection are so fast

Not everyone has this, especially when you're someone like me who writes informational websites for people who won't be connected to any internet for hours at a time.

Super fast response times don't cut it when your response time is nonexistent at the moment.


If your internet connectivity breaks constantly, there is a good chance that a JS heavy client side app is going to irrecoverably break, require a hard refresh and take much longer to load because of your crap internet.

I should know, my internet connectivity sucks, and single page applications are almost without exception, a completely awful experience.


I don't think blogs are a good example. How much complex logic do they have, and how much database querying and data processing do they do before they serve their content? Furthermore, how much interactivity do they have? Not much.


I'd guess as much as 90% of pages out there: fetch a blob of data from the server, render it, and have most interactions not touch the server at all.

You can view e.g. ecommerce sites as blogs with one post per combination of (search query, filter switches, page selected). This necessitates frequent trips to the server, but the site otherwise transfers roughly a page's worth of data per viewed page. I've never seen an online shop that was made better by being an SPA, over an old-school page reload on every click.

Similarly discussion forums - there's Discourse, which is arguably more gimmicky with its client-side magic; beyond that, if you want to see what would happen if you turned HN into an SPA, look no further than the dumpster fire that is the new Reddit design.


> Because it's faster.

In my experience sites with these kinds of navigation are typically extremely slow with initial page loads taking anywhere from a couple seconds (bad) to 10-20-30 seconds, sometimes even a minute (on a 100 MBit/s connection) and subsequent navigations are often slow as well.

It can be hypothetically faster, because you can theoretically get away with less data transfer and less client work, but in practice the exact opposite materializes.


Reminds me of JIT compilation, with the large initial load cost and the theoretical-but-mostly-unmaterialized reasons it could be faster.


HN is very fast. Stackoverflow is very fast. Lots of other well engineered sites are very fast. As the article shows, browsers are well optimized and really don't download all the content again. In most cases it's just the HTML, which is streamed and rendered as it comes in. All the assets are cached, and scripts are even stored in compiled state to skip reparsing.

Some sites might be slow at generating that HTML but then they would be equally slow at generating whatever JSON/API responses used in a SPA, along with loading all the heavy JS in the first place to render it all.


This is what I don't really get. It seems like some people are under the impression that generating and parsing HTML is slow or takes a lot of resources. In almost all cases it's going to be faster and less resource-intensive than generating JSON - especially if you are just using a templating language to interpolate some values. I agree that JSON could lead to less data being transferred over the wire, but that assumes your client already has cached the megabytes of JavaScript needed for your SPA. For something like a news site it doesn't make sense.

On the client, parsing is fast; the slow part is the browser laying out the page and fetching new resources, but that is going to be slow anyway, even if you do client-side rendering. To make that fast you need to do something more intelligent than just rendering a different React component, as well as prefetching resources in the background. But how many SPAs actually bother trying to do that?

I agree that SPAs have their place, and they have a lot of advantages over what we had before, but I just don't understand how it has seemingly become the default for any kind of web development - with such disregard for performance.


> I agree that JSON could lead to less data being transferred over the wire

I haven't benchmarked it, but I bet HTTP compression removes most of the difference

> but that assumes your client already has cached the megabytes of JavaScript needed for your SPA

A cache that will need to be busted every time you deploy new code. You do deploy often, right?


The idea exists but it's clearly not true, except perhaps in specific, niche uses like re-rendering a continuously refreshed graph. Even for your image gallery, it makes for confusion. Back button does what? Shift-refresh does what? Just let the browser do what it does. If you want the images to render fast, use HTML 2.

Writing a web app with server side pages forces you to think about where the state lives. This is a beneficial discipline.


Google Maps would be the classic example of client-side refresh working so well that it's now the universal choice. At the time, it was a revelation, as the Mapquest-ish predecessors (if I recall) required a click and server-side refresh to scroll or zoom the map.

Of course the revelation here was that <a> tags weren't what we needed to move a map, but rather a click-and-drag plus scroll-wheel behavior to explore a huge image at various levels of detail. If the server-side page-by-page navigation paradigm is a lousy fit for information delivered over the internet, then it may make sense to re-invent the page load.

To use the language of a sibling comment, this brought things to a much more app-ish behavior. And eventually Internet maps have become, especially on mobile devices, an app. Hence the need to break server-side navigation may have foreshadowed the need to break out of the browser.


I think this can be generalized as: if you need to break out of "click and wait a moment and see a changed screen" paradigm, for something like a continuously scrolling map or a smoothly flowing server load graph, then you can make good use of client-side loading.

If you are just trying to re-create it, don't.


Exactly. But why a blog platform or news site or other content-focused website would feel the need to do that is beyond me. Not everything on the internet needs to be an app.


The back button would do the same as it would do after you click a [next image] link :)

Not sure about your point, the history API improves UX if you do it right:

https://developer.mozilla.org/en-US/docs/Web/API/History_API

It does everything a hand-written, pure HTML page would, just faster.


"You don't have to download all the content again" is also true if you version your assets and use a CDN with far-future expiry headers.

If you need an HTTP connection to download a section of HTML for a new part of an SPA it won't be that much different from a full page of HTML, presuming you compress the transfer as you should.

"Of course shitty implementations exist" is true of a non-SPA setup too.


While it's true that the browser won't have to download the content again, it will have to re-instantiate various resources (e.g. execute all JavaScript over again, restart GIFs). If implemented correctly, JavaScript navigation should seamlessly appear like normal navigation. Not supporting streamed requests is a serious drawback.

Of course, browsers have actually gotten pretty good at AJAX-like loading instead of completely re-rendering the page. These systems tend to rely on heuristics, though, and I don't think there's any documentation for them, but even when they fail the browser tends to be more competent than even the best JavaScript solutions.


I think there are good use-cases for SPAs (Google Maps, for example), but the majority of cases that I've seen aren't good ones, and the extra complexity involved in managing state etc. far outweighs any marginal gains in not re-instantiating JS.


> While it's true that the browser won't have to download the content again, it will have to re-instantiate various resources (e.g. execute all JavaScript over again, restart GIFs).

Your JavaScript shouldn't be blocking page load anyway. Defer, defer, defer.
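
That is, something along these lines, so parsing and rendering aren't blocked while the script downloads (the script path is made up):

    <!-- Downloaded in parallel, executed only after the document is parsed. -->
    <script defer src="/assets/app.js"></script>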


JS is cached in its compiled state in modern browsers. There is no download or parsing step for repeated loads.


Without a unique key in the JS name/path, or any server action to enable it? If so, I would like to read about this particular development - can you point me to an article on how they're doing it?


Yes, Chrome uses V8 which has Isolates (also used by some FaaS platforms like Cloudflare Workers), and adds more optimizations on top like disk-based caching to share across processes. The script is keyed from a hash of its contents.

https://v8.dev/blog/code-caching-for-devs


Thanks! I guess I'll have to see if FF and Safari support the same thing. Perhaps in another year we can remove cache busting from builds.


They do, it's linked in the blog:

https://blog.mozilla.org/javascript/2017/12/12/javascript-st...

https://bugs.webkit.org/show_bug.cgi?id=192782

Also did you mean http caching? Not sure why would want to remove that. It's still important for the browser to get the latest script content before the bytecode caching happens.


How can the download step be skipped then if you are using the hash of the content as a key??


That's what HTTP caching is. Browsers use headers and heuristics.


What if there's no content to download? The client could run the same algorithm that the server would use to render it.

For example, create a melody with seed 4564342.

The client can render it, and if you access it from the server, the server does the rendering with the same seed.

Caches also exist, and now with PWAs, offline modes would benefit from the History API.
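
A toy sketch of that idea (the note set and LCG constants are just illustrative; any deterministic PRNG shared by client and server works):

    // The same seed yields the same "melody" wherever it runs,
    // so nothing has to be transferred beyond the seed itself.
    function makeRng(seed) {
      let state = seed >>> 0;
      return () => {
        state = (state * 1664525 + 1013904223) >>> 0; // plain LCG step
        return state / 2 ** 32;
      };
    }
    const rng = makeRng(4564342);
    const notes = ['C', 'D', 'E', 'F', 'G', 'A', 'B'];
    const melody = Array.from({ length: 8 }, () => notes[Math.floor(rng() * notes.length)]);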


>>Imagine a simple image gallery. You just update the <img> tag, update the URL with the history API and everybody is happy. If you were to navigate via links, you get the same behavior.

Please, don't! Just give me a page with thumbnails that are direct links to the original pictures! That's a million times easier, works blazingly fast, and I wouldn't need to spend so much time clicking multiple times to save each pic, and that's the best case! The worst is serving me blobs of PNG instead of the original pics. That's pure hostility.


> Because it's faster

That was the initial benefit. (Gmail, for example)

Then everyone started using the client-side model as the de facto architecture for every new website and web app.

The old, "give a man a hammer" adage.

Like any tool, it depends on the job.

But sometimes a tool becomes so popular that it takes courage to choose not to use it.

"Err...well... sure I can explain why I chose to go with plain old HTML.. (gulp)"


> Furthermore, you don't lose state, which makes things much more simple.

Or not... I'm thinking of infinite scrolls with no proper paging.


New Reddit.. shakes fist


It's probably not the network round trip or the actionable content making reloading the site so slow. It's almost certainly tracking scripts, unoptimized database queries, and unoptimized assets.


Sure, web performance is an afterthought at many places and the more people work on a certain project the worse it gets because each team has its own motivation, but they all have the same target to shoot at.

I think there's a connection between the organizational structure and the bad frontend experiences and this is almost always overlooked in these discussions. This is no surprise of course, we only see the crappy end result and blame the technologies, but this is superficial.

It's usually not the particular technology that's problematic (SPA, History API, React, Angular, WebAssembly...) but the use of it without understanding the problem first. That's why I find it funny when I constantly read general comments here like "SPAs are the worst".


SPAs are the worst, because companies deploy them to avoid separation of responsibilities and turn every employee into easily replaceable "full-stack developer".

Unfortunately, a lot of people write terrible applications regardless of the chosen technology. When this happens to purely server-side applications, the company is forced to optimize them to keep the hosting bill low — a positive feedback loop in action. SPA applications cause the opposite — companies move everything to client-side in order to reduce Amazon bills, and don't care if those client-side scripts are poorly optimized and contribute to global warming by causing hundreds of thousands of machines to spin up their CPU fans.

Clearly, there should be a heavy tax on single page apps. They are

1) addictive — by making more Javascript devs, who in turn write more Javascript websites

2) act as luxurious goods ("Look guys — we have created a new version of our website. It looks so cooool (but loads a bit slow)!")

3) have ugly externalities, completely ignored by most of their creators


> companies deploy them to avoid separation of responsibilities and turn every employee into easily replaceable "full-stack developer"

SPAs are much, much harder to develop when more teams are working on them. So your first sentence makes little sense.

> companies move everything to client-side in order to reduce Amazon bills

This is never the reason it happens. Seriously? The costs are not saved, just moved around. SPAs are developed because they can provide a much better UX. As a side benefit, server-side development becomes simpler by providing some REST or GraphQL API. You don't want to be in a place where tens of thousands of lines are generated on the backend by backend developers.

> client-side scripts are poorly optimized and contribute to global warming by causing hundreds of thousands of machines to spin up their CPU fans.

I appreciate your sense of humor :D


Yes, but equally, replying to these comments with "but page loads are slow" is not helpful.


CSS doesn't require JavaScript, and allows for nearly all the features of an SPA.


Like plotting an equation to canvas? Editing video? Handling drag and drop events?

I've seen that blog post where a guy demonstrated that many UI elements can be done with CSS, and I like that. I try to do that myself as much as I can, but let's not pretend that CSS is a programming language that can replace ANY JavaScript.


It is debatable whether CSS is or isn't Turing complete, but as long as the task isn't totally automated without user input, CSS could replace most JavaScript.


> Furthermore, you don't lose state, which makes things much more simple.

Nah, it makes it more complicated. You still have to handle reloads and back/forward, but now it's on you.


It is not faster, ever. It is always snappier to render everything server side and avoid as much JS as possible. If you can consolidate the whole page down to a half dozen total HTTP requests or less that's ideal.

The narrow case where you make an XHR call to just reload a small slug of data instead of the whole page effectively does not exist in the real world. I mean I'm sure a few developers have implemented that exact pattern a few times, I think I remember even doing it myself probably? But it's not something that actually happens.

What's actually happening instead is people are just adding more and more intricate tracking and analytics to all these SPAs. The XHR call to reload a blurb of text kicks off three other XHR calls to register the event with the various analytics/advertising tracking partners. Oh and we're adding a new ad partner next week so make sure you refactor all the AJAX to register event handlers for all these new interactions. And can we add mouse position tracking too?

I'm starting to see some pages break into the tens of megabytes of JS scattered across hundreds of files, nobody is paying any attention to what's fast.

I'm not sure how you fix this, I've largely given up on the web. It's just a thoroughly terrible experience from top to bottom, and effectively unusable if you're a layperson.


All major browsers cache the relevant assets (images, CSS, JS) in-memory between navigations in the same frame/tab and origin (at least).


Try a blog not built on client side tech: https://www.lukew.com - the page loads are insanely fast, just click around.


In my experience, for whatever reason, GitHub's turbolinks often take longer than opening the link in a new tab.


GitHub is simply a very slow website.


> Furthermore, you don't lose state

I'd count breaking my back button or otherwise mucking with my browser history as "los[ing] state". Maybe we have different definitions of "you" in mind.


Do you have an example of a site which does it well?


You can try PhotoStructure (disclaimer: I'm the author). It's an image gallery for your images, built with Vue and vue-router. If you look at the vue-router documentation, they've got some examples to follow.

The back/forward buttons, command-click, and ctrl-shift-t work on all modern desktop and mobile browsers.

I started with traditional page loads, but even with minimal CSS and no JS, screen flashing between pages was prominent (especially on mobile), and visual transitions between pages (like swipe navigation, where both the prior and next page content is concurrently visible) are difficult.


There are many good ones, but after a quick bookmark search, this shop is done really well, imho: https://www.shopflamingo.com/


This is what makes it worse.

If you scroll down on the page, and click on the link, it takes you to a new page. Works great.

If you use the browser back button, it takes you to the previous page, but the scroll position is lost.


It requires special attention to do it right. It should be more about what works best for your product.

https://reddit.premii.com - Uses client-side navigation. Try it on mobile and then desktop. It's not perfect, but it works really well for what I want. It's hosted as a static site. I make requests to Reddit directly to get the content.


The blog of Svelte, the framework/compiler that was featured on HN some time ago works really well in my opinion: https://svelte.dev/blog


I don't know if their javascript is to blame, but I got this when I tried to use the back button: https://imgur.com/a/YcRtDFj


>Because it's faster

Did you read the article? The entire point was that "it's faster" isn't actually true.


Do modern browsers really re-render everything without optimizations?


A media gallery website is a good example of a use-case for client-side routing.

I worked on a porn site that was basically an endless-scroll video gallery. Clicking a thumbnail opened the video in a modal overlay. All pages on the site were modal on top of the gallery in the background. You could deep link to a page and the gallery would load in behind it.

It worked really well and had great UX.

This generalizes to any website that has some sort of overall state between page transitions, like soundcloud.com playing a track as you click around.


> I worked on a porn site that was basically an endless-scroll video gallery.

So, if you're a horny teenager, you scroll down for a huge amount of time to find "the video" that will get you off... and you hear your mom coming up the stairs, Ctrl+W (close tab), and when she goes downstairs again, you press Ctrl+Shift+T (reopen last closed tab), you're back at the beginning, and have to search for that video again? That sucks.

Endless scrolling sucks. You go down and down and down, and something breaks (eg. bad wifi), and you lose your position, since refresh takes you back to top.


I hate infinite scroll with a passion. I've often been quite a way down someone's interesting Twitter feed and lost my place somehow, then just given up and gone somewhere else in frustration rather than trying to scroll down a few hundred tweets, waiting each however-many tweets for the next batch to load, just to get back to where I was.


Also the slowdown. It didn't matter whether I had 8, 12 or (currently) 32 GB of RAM; after a couple minutes of scrolling down a Twitter or Facebook feed, the whole page slows down so noticeably that I simply give up.

Also: something breaks, you press F5, and now the feed is gone, or is in a completely different place than it was before refresh.

Infinite scroll should be labeled as a dark pattern. Its only benefit is to the companies exploiting intermittent rewards; for users, it's just bad ergonomics and a bad experience.


Funny. I agree with you that infinite scroll brings a bunch of UX issues.

But a dark pattern? Definitely not. I have had multiple projects this year where the feedback from UX workshops has overwhelmingly been to use infinite scroll. This is feedback from real users, customers, and clients.

We need to be careful to align the website UX to the correct target users. Are you building something for a very technical market or power users, such as software engineers? Sure, ensure you don't interfere with the experience.

However, if you're targeting business or social users, you need to base your decisions on their priorities. This means the optimal path for their primary use cases. This means optimizing for the 98% of the time the user just scrolls down the feed, not the 2% of the time they scroll a bit and refresh.


> you're back at the beginning, and have to search for that video again? That sucks.

and yet, that's the same behavior as the social media sites; you will spend more time there (= more ads shown to you) because you are searching for that damn video again


>you're back at the beginning, and have to search for that video again? That sucks.

1. browsers have some sort of cache[1] that allows them to restore closed/previously visited pages without doing a page reload. granted, it's not very reliable, but it'd probably work most of the time as long as you're not memory constrained or visiting too many pages in-between.

2. if the infinite load mechanism also updates the url (via the history api), then this wouldn't be an issue.

[1] https://developer.mozilla.org/en-US/docs/Mozilla/Firefox/Rel...
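
A rough sketch of point 2, recording the furthest-loaded page in the URL so a refresh can resume there (the ?page= scheme, #feed container, and loadMore hook are made up):

    // When another batch is appended, reflect it in the URL without
    // reloading, so F5 or a restored tab can start from the same page.
    let page = 1;
    async function loadMore() {
      page += 1;
      const html = await fetch('/feed?page=' + page).then((r) => r.text());
      document.querySelector('#feed').insertAdjacentHTML('beforeend', html);
      history.replaceState({ page }, '', '?page=' + page);
    }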


Discourse has the only implementation of infinite scroll that I've ever seen consistently preserve state over a reload. Either everyone but them is incompetent, or it's much more difficult to do well than it seems.

(I still don't particularly like Discourse's implementation of infinite scroll because their custom scroll thing is awful to use.)


Media galleries (carousels) were client-side things at least since the original XHR implementation in IE5. Maybe well before that, if changing the src of an img let the browser load the new one from the server; I can't remember.

But sites were small and even loading all the page again was not that bad.

By the way, Rails Turbolinks [1] are a way to get the same result, with the server rendering only the HTML body and the JS code in the browser swapping it with the current one.

[1] https://github.com/turbolinks/turbolinks


IIRC, Turbolinks work by loading pages in the background in response to a mouseover event - by the time a click has registered, the remote content has already been downloaded and just needs to be injected into the page. The speed-up comes from anticipating clicks, not from JavaScript tomfoolery.


I checked the README and it never mentions mouseover

> Turbolinks intercepts all clicks on <a href> links to the same domain. When you click an eligible link, Turbolinks prevents the browser from following it. Instead, Turbolinks changes the browser’s URL using the History API, requests the new page using XMLHttpRequest, and then renders the HTML response.

I don't have a Turbolinks application to check but I found this https://github.com/turbolinks/turbolinks/issues/313

and this

https://www.mskog.com/posts/instant-page-loads-with-turbolin...

The behavior you describe is possible, but it's not the default and requires adding other libraries.


I think you're thinking of https://instant.page/ , which is pretty much as simple as "when link is moused over, tell browser to load the page in the background".
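
The core trick is tiny; a sketch of the idea (not instant.page's actual code):

    // On hover, hint the browser to fetch the target page ahead of the click.
    document.addEventListener('mouseover', (event) => {
      if (!(event.target instanceof Element)) return;
      const link = event.target.closest('a[href^="/"]');
      if (link && !link.dataset.prefetched) {
        const hint = document.createElement('link');
        hint.rel = 'prefetch';
        hint.href = link.href;
        document.head.appendChild(hint);
        link.dataset.prefetched = 'true';
      }
    });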


What happened to http prefetch or server push? Does anyone use those things?


> And if they do, why do they always only do a half-assed job at it?

Oh, that's the easy one. It's half-assed because it's hard as hell to do a good job of replacing browser navigation. It shouldn't be surprising: browsers are huge pieces of software built basically for displaying content and navigating it, and it's not reasonable to expect every web page to competently replace half of that job.


Just use the history state pushing API and it's not that hard. Put state information in the URL and load the right content on refresh.


Scroll restoration is very difficult to reproduce perfectly. Particularly when content changes between navigating from one page to another and then navigating back, or when the user closes and re-opens the tab/browser.
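
For what it's worth, the building blocks exist, even if getting every edge case right is still hard; a sketch (assumes the restored content re-renders to the same height, and the scroll handler should be throttled in real code):

    // Take over scroll restoration and stash the position in history state.
    history.scrollRestoration = 'manual';

    window.addEventListener('scroll', () => {
      history.replaceState({ ...history.state, scrollY: window.scrollY }, '');
    });

    window.addEventListener('popstate', (event) => {
      // Re-render the content for the restored URL first, then:
      window.scrollTo(0, (event.state && event.state.scrollY) || 0);
    });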


I have no idea. I do a lot of front end work in React, and the assumption that an SPA is a better experience for people because you don't have to do a page reload to see a new page is really baffling to me. It's widespread too - across industries and disciplines and age ranges, as if the people suggesting these things have never used SPAs.


Because page reloads are jarring and discontinuous experiences. They run counter to a good user experience. That's not to say that every SPA is a good user experience, but just that a page reload is not part of the recipe for a good user experience.


HN uses the "old fashioned" approach of rendering everything server side and every link forces a page reload and I wouldn't describe the experience as "jarring and discontinuous".

I'd rather have a fast full page reload than look at a spinner while complex client-side stuff does its thing.

[NB I really like React and when good SPAs are very very good - but a lot aren't].


On that note, it means HN supports for free one feature too often forgotten about in single-page apps: middle-click, or right-click-open-in-new-tab.


Well, the up/downvote transition is client side, but otherwise, yes.


AFAIK it works without js as well :-)


I occasionally suggest to people who complain on HN about the bad old days of table based layouts to do a "View Source" ;-)


The "bad old days of table based layouts" weren't really that bad, as evidenced by every generation of web developers reinventing tables in weird new ways. From "semantic" div soups through flexbox to CSS grid, it seems to me that most layout work is just building tables without using the <table> tag ;).


There were real arguments against layout tables back in the day (though the situation may have changed): https://stackoverflow.com/questions/83073/why-not-use-tables...

One thing for sure: layout tables are undeniably powerful; that's why people keep recreating them, just without the penalties that come with a real <table>.


Yeah, they were. Though arguably not in the link you posted - those are mostly clichés, as correctly pointed out by the original poster. Especially the "separation of content from layout" and CSS Zen Garden were obviously[0] nonsense, and you can observe how SPAs of today go against both.

Tables had performance problems when they got large and the content was dynamic. That I learned only many, many years later - I never built sites big enough to run into such problems back then.

The accepted answer in that post is cringeworthy. So much rationalization back then - it makes you wonder what we're rationalizing today.

--

[0] - I admit I bought into CSS Zen Garden for a while; it took me some time and experience to realize that, really, no one does that in practice, and it requires ridiculous amounts of either forethought or after-the-fact hacks to pull off.


There are definitely parts that are awkward though. I can't see the context of your comment in my reply, for example, as it's on a completely different page. HN has never really had a great UI: it used to be a massive set of nested tables that didn't render properly on mobile, it has some tiny fonts and hard-to-click buttons, and it makes downvoted comments less accessible by lowering their contrast.

It's good enough to read the content and the content is the vital part, but I wouldn't point to HN for a good user experience (beyond the content and lack of dark patterns).


Yet even this crucible of anti-design is more usable than many a designer-blessed SPA. Such is the power of server-side rendering.


It's better than many server-side rendered pages, but the things that work have nothing to do with where the rendering happens.


I feel the HN experience is awesome. Everything is accessible within one or two clicks. I never wait for something to load, never curse at it because of some obscure behavior. It's simple and efficient. The content is perfectly served. No frills. Even on mobile I don't find the buttons that hard to click, even if they are tiny. Maybe we don't use it the same way.


HN is not what I would consider an “app” though. The content is mostly static, with a few interactive bits (upvote/downvote etc.) sprinkled in.


Page reloads are not jarring, they are expected, well understood, and often add that subtle hint that something has indeed changed. People want reliability and familiarity over speed, and speed from the lack of heavy JS and wonky click handling is a bonus.

Compare the site you're on right now (HN) to Reddit's new SPA frontend. Which one is faster to browse?


A page reload is part of the expected experience on a traditional site when you're navigating to a new page. I expect it to look like I'm going to a new screen, not simply replacing the content on the existing screen. Not every website is an app or should act like one.


I believe the only upside is that the site framing (headers, navbars, side menus, etc.) doesn't flicker or jump around reflowing as you navigate across pages. Which is, I think, why almost every "webapp" out there does this - to provide a visually continuous experience, even if the client-side intra-screen navigation is slower.

Surely, on a fast-enough connection this flicker is essentially invisible, but if your connection is far from perfect (crappy hotel WiFi or poor cellular reception), it is certainly noticeable.

Also, it's easier to persist some state across navigations - like making it trivial for those (annoying) on-page support chat overlays not to lose their message history.

This rationale only applies to web "apps", not documents, of course.


I much prefer normal pages that behave more like an SPA to an SPA trying to recreate the web browser experience. Most importantly, they automatically degrade back to normal web behavior if the transitions aren't supported or the JS doesn't load.

like https://github.com/turbolinks/turbolinks


For a company I worked for, we tested client side navigation and non-client (traditional) navigation for the admin interface. And then we asked the admins who use the site in question: all of them loved the client side navigation.

So you see, I suspect most users are not like the HN crowd and don't really care about command-click or the back button.


I've got a customer with a complex UI where a lot of things happen. Basically a desktop application in a browser. We built a SPA for that.

Then there are the AWS and Google consoles where nearly every click loads a new page and I don't complain, because it's ok. And Amazon, the shopping site. Every click into a product loads a new page. They seem to be doing pretty well.

I would build an SPA where it's difficult to give what's on screen a distinct URL after every click, and use server-side rendering in every other case.


Not every project is the same. I suspect there are many applications where an SPA is a much better experience than a traditional website. In my opinion, a documentation site is not one of them.


It's also really not that hard to make cmd-click and the back button work properly with client-side nav.
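
Roughly: only intercept plain left-clicks on same-origin links and let the browser handle everything else. A sketch, assuming a hypothetical `renderPage()` that draws the view for the current URL (not any framework's actual code):

    // Only hijack plain left-clicks on same-origin links, so cmd/ctrl+click,
    // shift+click, middle-click and "open in new tab" never get intercepted
    // and keep their normal browser behavior.
    document.addEventListener('click', (e) => {
      const link = e.target.closest('a[href]');
      if (!link || link.origin !== location.origin) return;
      if (link.target && link.target !== '_self') return;    // e.g. target="_blank"
      if (e.defaultPrevented || e.button !== 0) return;       // left button only
      if (e.metaKey || e.ctrlKey || e.shiftKey || e.altKey) return;

      e.preventDefault();
      history.pushState({}, '', link.href);
      renderPage(location.pathname);                          // hypothetical renderer
    });

    // Back/forward: re-render from the URL instead of reloading.
    window.addEventListener('popstate', () => renderPage(location.pathname));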


Can you support iOS's "peek" menu as well?


Yes, I tried it just now on a random website I built in Vue. Any sane implementation of client-side navigation uses the History API[1]. It is indistinguishable from "real" navigation in the browser UI, apart from how fast it is.

If, on the other hand, you encounter code like the abomination below (which breaks cmd+click, peek and forward/back), you should not (imo) conflate it with actual client-side navigation. I suspect that's what some comments here are referring to.

    <a href="#" onclick="loadContentIntoPane('contact')">Contact Us</a>
1. https://developer.mozilla.org/en-US/docs/Web/API/History_API


Is there anything special needed to support that? I’m pretty sure you just need an <a> tag with an href. With any SPA framework/library I’m aware of, you have to go out of your way to make it not work.


But you can support both with client side navigation


I certainly can. It was just not necessary because the people who are using the site didn't care.


I think it's possible but it's hard to execute well.

I tried 4.5 years ago with https://vivavlaanderen.radio2.be/ - disclaimer: the experience isn't great on mobile (design issue, not tech) and the JS/HTML is massive (it was my first JS project ever, so I made a bit of a mess with Webpack etc).

One of the tricks I used is partial rendering. If you click an artist page (the square/rectangular people photos with a name) and have JS enabled, it first renders only the header, then adds a few body items, then the rest of the body items. Since we used a horribly inefficient handmade Markdown-to-JS thing with a renderer in old, naive React, rendering everything at once took way too long - it would easily lock up the browser for two seconds on a large page.
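
The batching itself isn't framework-specific - conceptually it's just appending the list in slices across animation frames. A sketch of the idea (not the code that site actually uses; `renderItem()` is a hypothetical function returning a DOM node for one item):

    // Phased rendering sketch: show the first slice immediately, then keep
    // appending chunks between frames so a big page never blocks the main
    // thread for seconds at a time.
    function renderInChunks(items, container, chunkSize = 20) {
      let index = 0;
      function renderNextChunk() {
        const frag = document.createDocumentFragment();
        for (const item of items.slice(index, index + chunkSize)) {
          frag.appendChild(renderItem(item));    // hypothetical item renderer
        }
        container.appendChild(frag);
        index += chunkSize;
        if (index < items.length) requestAnimationFrame(renderNextChunk);
      }
      renderNextChunk();
    }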

Another ridiculous thing was to preload enough data to render the top part of every single artist page while you're on the homepage. A complete waste of data, but otherwise navigation would need a request before being able to render something useful and that defeated the point.

I did pay a lot of attention to making it feel like real navigation though. Almost any meaningful interaction modifies the URL, and the site is practically fully functional without JS. Navigation and Cmd-clicking should all work perfectly, including scroll position handling. SEO worked really well too.

So for a classic website format, to make client side rendering work, you need:

- a URL for every view change, with regular old <a> elements that have those URLs as href.

- server rendering that actually handles all of those URLs

- something that restores scroll positions on navigation

- phased/batched rendering on click if your initial new view doesn't render in say 100ms (basically faking progressive rendering)

The development experience was a bit frustrating back in the day, and I don't think it paid off. Which is why for the next project (https://www.klara.be) we decided to go back to server rendering, but using React on the server. Some parts of the page (like the coloured box top right on desktop) are rendered as a skeleton on the server, and then React on the client reuses the same exact code to make these parts interactive again. Partial universal React basically, and it worked very well. I think klara.be feels like a nice and snappy site and it was way easier to develop for.


> with regular old <a> elements that have those URLs as href.

This is the most common problem I have. I middle-click and nothing happens (or it loads in the same page)

That being said, this is partially an API issue since the website needs to guess what a "regular click" is. It would be nice if there were an "onnavigate" event which would trigger for regular clicks (or other ways to follow a link in the same page) but not for things such as "open in new tab" shortcuts.


It's justified if you run a web app that needs to maintain complex state between clicks, such as a text/graphical editor.


I don't think anyone disputes that. Maintaining a whole editor state in server-side sessions or re-rendering everything from localStorage would be rather silly indeed. This is about changing pages, such as GitHub annoyingly does when browsing a repository.


Sadly I think this is true for a lot of sites. However my experience has been that many (particularly in the last 3-5 years) support command-click, back button navigation, and scroll position restoration.

In my experience, tracking scripts break this more than anything else - particularly command+click.

Still - it'd be great if there were an API to preserve UI state when moving into a new entry in browser history. I don't want to lose the scroll position of a sidebar when I'm drilling down into a large list of data.

Being able to add hints through markup for when to preserve UI state could go a long way towards making page-refresh based form and navigation behavior competitive again.



It also breaks a lot of accessibility features, like text-to-speech (in many instances) or navigating with keyboard keys instead of a cursor.

On the plus side, not having to reload a new page for each click can have some interesting benefits. These systems remind me of flash content, except they are perhaps less terrifying and intrusive.


I see lots of responses to this article asking "why client-side navigation?". I can share my own experience building an app a few months ago, and how/why I switched to a client-side single-page app.

The app is this: https://osmlab.github.io/name-suggestion-index/index.html

It is a worldwide list of brands that have been seeded with OpenStreetMap data, and which volunteers have linked to Wikidata identifiers. We use this data in OpenStreetMap editors to help people add branded businesses. Pretty cool!

1. We had data in `.json` files (but not too much data) and we wanted to show it to people working on the project so that they could review the brands.

2. I spent a day or two and built a static document generator. It took our data and spit out an `index.html` and a few hundred `whatever.html` files. This worked really well. As the article says, "browsers are pretty good at loading pages". A side benefit - Google is really good at indexing content like this.

3. Then users made the obvious request: "I want to filter the data. Let me type a search string and only show matching brands. Or brands that appear in a certain country".

4. OK, so: if your data is spread out over a few hundred files, short answer - you can't do this.

5. But the data is _really_ only a few megabytes of `.json`. I spent a few days learning React and switching to a single-page client-side app so that we can filter across all of it. The new version uses hooks to fetch the few `.json` files that it needs, and `react-router` to handle navigation between the index and the category pages. It works pretty ok! Most people would stop here.

6. The first version with client-side filtering performed well enough, but not great. The reason is that, as users type, these things happen: the filters get applied, the React components get a new list of brands passed in as props, and React re-renders these new lists to the virtual DOM and, eventually, slowly, the real DOM.

7. It's really easy to build React code like this, and many people do. But it is better to avoid DOM changes in the first place. I changed the components so that the lists stay the same and filtered-out items just get a hidden (`display:none`) class instead of being added to and removed from the DOM, and performance is much better now.
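
In React terms the difference is roughly this - a simplified sketch, not the actual name-suggestion-index code, and `brands` is assumed to be an array of `{ id, name }`:

    // Slow-ish version: the filtered list changes on every keystroke, so
    // React mounts/unmounts DOM nodes for everything entering or leaving
    // the filter.
    function BrandListRerender({ brands, query }) {
      const q = query.toLowerCase();
      return (
        <ul>
          {brands
            .filter((b) => b.name.toLowerCase().includes(q))
            .map((b) => <li key={b.id}>{b.name}</li>)}
        </ul>
      );
    }

    // Faster here: always render every row and hide non-matching ones with
    // a display:none class, so the DOM nodes themselves never churn.
    // CSS: .hidden { display: none; }
    function BrandListHide({ brands, query }) {
      const q = query.toLowerCase();
      return (
        <ul>
          {brands.map((b) => (
            <li key={b.id} className={b.name.toLowerCase().includes(q) ? '' : 'hidden'}>
              {b.name}
            </li>
          ))}
        </ul>
      );
    }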

Anyway hope this is helpful to someone!


I believe that a multi page app/website would have the data in a database rather than distributed across html files. The server would then populate the html files before serving them up to clients.

It sounds like you started by trying to build a static website and then decided you wanted something more dynamic, so you shifted to React, where you are doing the dynamic data manipulation on the client as opposed to on a server. Not a like-for-like comparison.


The problem isn't client side apps. YouTube is a single page app and it works very well for me. No issues with ctrl clicking or anything like that. The real issue is poor SPA implementations.


Curious, how would you implement Facebook's infinite scroll with only static pages?


Don't ask me how to solve a problem that doesn't need to be solved in the first place! :p #deletefacebook



Dude! #scrollhijacking is the worst!


Slow? I'm quite sure the whole point of client side apps is for them to be substantially faster - and most are.


I have yet to encounter a site where “client side navigation” is faster if there’s any real content.

Page loading is dominated by two things:

* the network. Browsers are written knowing that networking is terrible, so they aggressively optimize their use of any data, including starting layout, rendering, and even JS before the page has completed loading. That's not possible for client JS that is using XML or JSON data, as neither will parse until the data is complete (this is part of why XHTML was so appallingly slow and fragile).

* JS execution - people love shoehorning JS into early page rendering. If you care about your page performance, the first thing to do after networking is to push all JS out of the path to initial render. By design, "client side" navigation blocks all content rendering on executing JS to build out the HTML/DOM that a browser is much faster at handling directly.

As a side benefit of making your site faster by not manually reimplementing things the browser already does for you (and does better), you also get better back/forward behavior, correct interaction with the UI, correct behavior with any accessibility features that are operating, correct scrolling behavior, correct key handlers, ...

The benefit of client side rendering is you get to say you spent a lot of time doing something that already worked.


Did you read the article?


I read the article.

Perhaps I'm misunderstanding, but did they do this test on a 128 kilobit cellular Internet connection?


I made the video in the article using Chrome's "low-end mobile" throttling preset, which simulates a ~300 kbit connection IIRC. But I saw very similar behavior on my actual phone with an actual 128 kilobit connection in Canada.


Thanks for the clarification. Do the conclusions hold up when testing with a broadband network connection?


As I mention in the article, I'm not really able to tell the difference between old MDN and new MDN on the fast network connection I usually use. They both load pretty much instantly for me.


I sometimes wonder if website authors actually test their pages on slow connections before declaring that they’ve improved the experience for them…


Do all MDN users have low-latency, high-bandwidth connections?


I’m a bit lost. How would using a broadband connection affect the relative speed between the two versions?


For one, latency has a proportionally higher impact on "time to render" at lower connection speeds.


I’m guessing no?



