... loading times of pure HTML webpages on a good connection are so fast the whole thing outruns client-side page switches even if the page is already in memory.
Unfortunately for us front-end engineers, we can't rely on the user having any internet connection, let alone a good one. Most of the push behind static site generators is to get as much of the code needed to display the whole website into your browser as fast as possible, so it's there no matter what happens later. When the user has a fast, robust connection, that approach may well be slower than loading each page on demand; but when the connection is slow and flaky (e.g. on a train), static site generators do work better.
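The usual mechanism behind the "so it's there no matter what happens later" part is a service worker that precaches assets while the connection is still good. A minimal sketch of the idea (the cache name and asset list are hypothetical, not from any particular site):

    // sw.js: precache on install, then serve cache-first.
    const CACHE = 'site-v1';                        // hypothetical cache name
    const ASSETS = ['/', '/styles.css', '/app.js']; // whatever the site actually needs

    self.addEventListener('install', (event: any) => {
      // Pull everything down while the connection is still good.
      event.waitUntil(caches.open(CACHE).then((c) => c.addAll(ASSETS)));
    });

    self.addEventListener('fetch', (event: any) => {
      // Cache first, network as fallback: pages keep working in the tunnel.
      event.respondWith(
        caches.match(event.request).then((hit) => hit ?? fetch(event.request))
      );
    });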
Perhaps the next generation of websites will take your connection into account better. It depends on whether browsers and users will be willing to give up that information, though. As far as I'm concerned, I will use everything I can to improve the user experience on the sites I build.
Well. I can speak from pretty solid experience here. I’ve travelled the US by train, the length of the UK by train, and large swathes of Europe by train.
The sites that tend to be best to use are the ones that don’t take much to load. The connection tends to be spotty: you get bouts of “some” data and then you’re dry again for a while. If you can squeeze a page load in there, it’s infinitely better than a half-opened page.
Your first page load is -incredibly- important here. It’s the difference between a usable site and an unusable one.
The sites that work best are the ones that don’t try to do much fancy stuff, because that fancy stuff only half-loads most of the time, leaving you to keep refreshing and hoping the spotty connection finally lets you bring in that 2KiB that will allow the page to actually load.
> Your first page load is -incredibly- important here. It’s the difference between a usable site and an unusable one.
This is the point I was making. If the server can send the user enough data on the first load to make the whole app/site usable, then the user won't need to wait for the network if they're in a tunnel. They've already got the necessary resources (which shouldn't be everything, just what's needed). In that scenario client-side routing beats server-side completely, because server-side rendering just doesn't work when the user has no network connection.
That said, it's wasteful and entirely unnecessary if the user has a good connection. Really, websites should have a good mechanism for testing the quality of the user's connection. The Network Information API doesn't have particularly good cross-browser support and it isn't especially reliable yet.
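For what it's worth, the detection that exists today looks roughly like this. navigator.connection ships mainly in Chromium browsers, so it has to be feature-detected; the prefetchNextPages helper below is hypothetical:

    // navigator.connection is only available in some browsers (mainly
    // Chromium), so feature-detect before trusting it.
    declare function prefetchNextPages(): void; // hypothetical cache-warming helper
    const conn = (navigator as any).connection;

    function shouldPrefetch(): boolean {
      if (!conn) return true;           // no signal at all: pick a sane default
      if (conn.saveData) return false;  // the user explicitly asked for less data
      // effectiveType is one of 'slow-2g' | '2g' | '3g' | '4g'
      return conn.effectiveType === '4g';
    }

    if (shouldPrefetch()) {
      prefetchNextPages();
    }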
If. There are two failure points here, both of which are so frequent that I can't even recall seeing an exception.
One, the first page load. Instead of trying to load a full page, it should load just some JS that bootstraps loading the rest of the page, which lets the first load finish before execution; better yet, it should load the absolute minimum of a JS site kernel. Then the first load is likely to succeed on a slow/spotty connection, and we can skip to problem #2 below. For some reason most of the sites I visit don't do this correctly: either the first page is downloaded in full, or the "skeleton" of the UI is the piece that takes longest to load.
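To make that concrete, here's a sketch of what I mean by a site kernel, assuming the whole app lives in a single /app.js bundle (the file name and retry count are made up):

    // Inline this tiny kernel in the HTML; everything else loads after first paint.
    function loadScript(src: string): Promise<void> {
      return new Promise((resolve, reject) => {
        const s = document.createElement('script');
        s.src = src;
        s.onload = () => resolve();
        s.onerror = () => reject(new Error('failed to load ' + src));
        document.head.appendChild(s);
      });
    }

    // The kernel's only job: fetch the rest of the app, retrying on a
    // dropped connection instead of leaving a half-dead page behind.
    async function boot(retries = 3): Promise<void> {
      try {
        await loadScript('/app.js'); // hypothetical bundle name
      } catch {
        if (retries > 0) return boot(retries - 1);
        document.body.textContent = 'Could not load the page; please reload.';
      }
    }

    boot();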
Two, loading UX. You have a loaded UI skeleton with boxes that need to be filled via further requests, or I've clicked on something and a subsection of the site needs to be refreshed. What happens is either nothing, except the SPA becoming unresponsive, or I get the dreaded spinners everywhere. If the requests succeed, the spinners eventually disappear. If they don't, they don't. Contrast that with the pre-JavaScript style: if something needs reloading, the page is rendered essentially top to bottom, complete bits of content popping up as they load; the site is usable in a partial state, and if anything breaks, I get a clear error message.
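A sketch of what fixing problem #2 could look like: fill each skeleton box independently, and on failure show a real error with a retry instead of an eternal spinner (the element id and endpoint are hypothetical):

    // Fill one box of the skeleton; fail loudly rather than spin forever.
    async function fillSection(id: string, url: string): Promise<void> {
      const el = document.getElementById(id);
      if (!el) return;
      el.textContent = 'Loading…';
      try {
        const res = await fetch(url);
        if (!res.ok) throw new Error('HTTP ' + res.status);
        el.innerHTML = await res.text(); // section arrives as ready-to-use HTML
      } catch {
        // The pre-JavaScript behaviour: a clear error, not a spinner forever.
        el.textContent = 'This section failed to load. Click to retry.';
        el.onclick = () => { void fillSection(id, url); };
      }
    }

    void fillSection('comments', '/api/comments.html'); // both names hypothetical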
Can these two problems be solved correctly in client-side rendered code? Yes. Can an SPA be faster than full page loading? Yes. Is it usually? No, because web development is a culture. When a company decides "let's do an SPA" or "let's do client-side rendering", it unfortunately inherits almost all the related dysfunction by default.
I think you're not making the same point as me at all.
I'm going to take the common case of a news article:
Imagine for a second: you're on a train and you have low-bandwidth internet, when it works, which is rare. Now you're on hackernews and you've loaded a whole comment thread; you're reading through and someone posts a link to your article.
Now, the article can load with client-side routing, but it will take longer. And depending on the implementation, it might not actually contain the whole article.
The page that is pure HTML with minor JavaScript is going to load in full, and I won't need subsequent requests. And it's guaranteed to be smaller than the one you're over-engineering.
I have a family friend who lives in a part of the US where the only options are dial-up and satellite. He thus uses dial-up.
Without fail, the sites that rely heavily on JS to do page loading end up performing significantly worse than sites that just send ordinary HTML docs (in fact, they outright bug out and often fail to load entirely). A disturbingly high number of the JS-heavy "web apps" out there seem to have little regard for actually handling failures on a sketchy connection.
Your point would make more sense in the context of an Electron app or something with a permanently-locally-cached copy of the site. That would at least give my elderly friend the means to predownload it when he goes into town and piggybacks off the public wifi.
> Unfortunately for us front-end engineers, we can't rely on the user having any internet connection, let alone a good one.
Surely some web apps need to work offline. But most web pages do not, and I don't want most sites I visit to store a bunch of data on my machine on the off chance that I'll use them offline.
"Offline first" seems really misguided to me as a rallying cry for all things on the web.