
The slippery slope that scares me (as a React developer) about htmx (or Hotwire.dev in particular, which is the one I was looking at) is that you start making the assumption that the client's internet is fast.

There was a demo that showed it normally takes ~100ms to click a mouse, and if you attach to the mouse-down event, then by the time the mouse has been released (100ms later), you can have already fetched an updated rendered component from the server.

And while that's very cool, my second reaction is "is it ever acceptable to round-trip to the server to re-render a component that could have been updated fully client-side?" What happens when my internet (or the server) is slow, and it takes more than 100ms to fetch that data? Suddenly that's a really bad user experience. In the general case this is a subjective question. I personally would rather wait longer for the site to load, but have a more responsive site once it did.

There's not a perfect solution to this, because in a complex site there are times when both the server and the client need to update the UI state. But having the source of truth for the UI located on a server miles away from the user is not a general-purpose solution.

(I'm not advocating for the status quo, either. I just wanted to bring up one concern of mine.)




> you start making the assumption that the client's internet is fast.

The most common trajectory for React and other SPA-framework apps is to make this assumption too: waving away the weight of libraries and front-end business logic with talk of how the build tools strip out unused code so it must be light, while frequently skipping affordances for outright network failure that the browser otherwise handles transparently. Oh, and don't forget to load it all up with analytics calls.

But maybe more crucially: what's the real difference between the overhead of generating / delivering markup vs JSON? They're both essentially data structure -> string serialization processes. JSON is situationally more compact but by a rough factor that places most components on the same order of magnitude.

And rendered markup is, tautologically, about exactly the data necessary to render. Meanwhile JSON payloads may or may not have been audited for size. Or if they have, it's frequently by people who can't conceive of any solution other than GraphQL front-end libraries.

Whether you push html or json or freakin' xml over the wire is a red herring.

Heck, "nativeness" might be a red herring given frequent shortcomings in native apps themselves -- so many of them can't operate offline in spite of the fact that should be their strength because native devs ALSO assume client's internet is fast/on.


I think you're talking past each other: the problem isn't assuming the client's internet is fast, the problem is assuming the client's internet is stable.

If you replace most interactions that could be resolved client-side with a network transaction, you're betting on the client's internet being not just reasonably fast but also very stable. When I'm on the go, my internet is more likely to be fast than stable.


> The problem is assuming the client's internet is stable.

Yep. This is the major drawback of backend-dependent interactions. This is what scares me away from amazing technologies such as ASP.NET Core Blazor Server where I can code my frontend in C# instead of JavaScript.

If only Blazor WASM weren't so heavy. 4 MB of runtime DLLs is a bit off-putting for anything but intranet LOB applications.


Recent versions trimmed it down to about 1 MB.


Your comment dovetails with my primary point: how an app serializes or renders data matters far less to app function and user experience than planning for network availability issues does.

GP asks: "is it ever acceptable to round-trip to the server to re-render a component that could have been updated fully client-side?" This is a question oriented around what is generated on the server and pushed over the wire, rather than the fact that there is a network call at all.

If the network is not stable, a typical first-load-heavy SPA-framework app will make... a tenuous network call returning JSON with iffy chances of success, instead of a tenuous network call returning an HTML fragment with iffy chances of success.


It may be common when starting out, but we do have paths to optimize out of it.

We can do code splitting, eagerly fetch JS when the page is idle, render optimistically when a request is taking time, etc. Unlike what a lot of people like to believe, not every SPA runs 20 megs of JS on page load.
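For example, a minimal sketch of the "eagerly fetch JS when the page is idle" idea (the chunk path is made up, and requestIdleCallback isn't available in every browser, hence the fallback):

  <script type="module">
    // load a rarely-used chunk once the browser is idle, so the initial
    // bundle stays small but the feature feels instant when it's needed
    const loadEditor = () => import('/js/editor.chunk.js'); // hypothetical chunk
    if ('requestIdleCallback' in window) {
      requestIdleCallback(loadEditor);
    } else {
      setTimeout(loadEditor, 2000); // rough fallback where the API is missing
    }
  </script>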

Also, an initial load time of a few seconds followed by an app that's snappy and interactive is an acceptable compromise for a lot of apps (not everything is an e-commerce site).

When most fragments need to be server-rendered, it manifests as a general slowness throughout the interaction lifecycle that you can't do much about without adopting a different paradigm. Hey-style service-worker-based caching hits clear boundaries when the UI is not mostly read-only and the output of one step closely depends on the previous interactions.

I joined a place working on a larger Rails + Unpoly + Stimulus app which started off as server-rendered fragments with some JS sprinkled in, but after two years it had devolved into spaghetti. To figure out any bug I'd typically need to hunt down which template was originally rendered, whether it was updated via Unpoly, whether what Unpoly swapped in used the same template as the original (often it did not), and whether some JS interacted with it before or after it was swapped. All in all, I felt that if you push this to use cases where a lot of interactivity is needed on the client, it is better to opt for a framework that provides more structure and encapsulation on the client side.

I am sure good, disciplined engineers will be able to build maintainable applications with these combinations, but in my experience incrementally optimizing a messy SPA is generally more straightforward than a server-rendered-client-enhanced mishmash. ymmv.


> Unlike what a lot of people like to believe, not every SPA runs 20 megs of JS on page load

This is not a new take, it's exactly what every die-hard SPA dev says. While 20 MB is an exaggeration, the average web page size has ballooned in the past decade, from ~500 KB in 2010 to around 4 MB today. And the vast majority of those pages are just text; there is usually nothing really interactive in them that would require a client-side framework.

Others will say 2 MB or 4 MB is not that bad, but that just shows how far out of touch they are with the reality of mobile internet. Start measuring the actual download speeds your users are getting and you'll be terribly disappointed, even in major urban centers.


On a transatlantic flight I recently had the displeasure of browsing over a satellite connection. A lot of sites simply never loaded, even though the connection speed was reasonable. The multi-second latency made these sites that loaded tens to hundreds of resources completely unable to render a single character to screen.


For a real-world example of this, GitHub uses server-side rendered fragments. Working with low latency and fast internet in the office, the experience is excellent. Try to do the same outside on mobile internet, even with a 5G connection, and the increased latency makes the application frustrating to use. Every click is delayed, even for simple actions like opening menus on comments, filtering files, or expanding collapsed code sections.

I'm actually worried that this is creating an invisible barrier to access for developers in developing countries, where mobile internet is the dominant way to get online and GitHub is now the de facto way to participate in open source.


I love HTMX and similar technologies but I think GitHub is a particularly telling example of what can go wrong with these techs. The frontend is so full of consistency bugs that it's appalling: https://youtu.be/860d8usGC0o?t=693


GitHub does? Maybe that's the reason why I often get the error message "page took too long to render" after ten seconds of waiting.

example: https://github.com/pannous/hieros/wiki/%F0%93%83%80 this is admittedly a complicated markdown file, however it often fails on much simpler files.


I hate it when devs implement their own timeouts. That’s handled at the network level, and the socket knows if progress is being made. I was stuck using 2G data speeds for a couple of years and I loathed this behavior.


Sometimes the infrastructure causes this. For a long time (and still now?) AWS API Gateway has had a hard cap of 30 seconds, so the sum of all hops along the way needs to remain under that.


A timeout at that level should mean "no progress for 30s", not that a request/response needs to finish in 30s. A naive timeout that a dev randomly implements might be the latter, and that would be the source of my past frustration.
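A rough sketch of the difference, assuming a plain fetch in the browser (the 30s figure is just the example from above):

  <script type="module">
    // naive version: abort if the whole response hasn't finished in 30s
    // (this is the behaviour that hurts on slow links)
    //   const ctl = new AbortController();
    //   setTimeout(() => ctl.abort(), 30_000);
    //   await fetch(url, { signal: ctl.signal });

    // "no progress for 30s" version: reset the clock every time bytes arrive
    async function fetchWithIdleTimeout(url, idleMs = 30_000) {
      const ctl = new AbortController();
      let timer = setTimeout(() => ctl.abort(), idleMs);
      const resp = await fetch(url, { signal: ctl.signal });
      const reader = resp.body.getReader();
      const chunks = [];
      for (;;) {
        const { done, value } = await reader.read();
        if (done) break;
        chunks.push(value);
        clearTimeout(timer); // progress was made, so start the clock over
        timer = setTimeout(() => ctl.abort(), idleMs);
      }
      clearTimeout(timer);
      return new Blob(chunks);
    }
  </script>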


that's a good reason to invest in self-hosting! https://git.jeskin.net/hiero-wiki/file/%F0%93%83%80.md.html


oh, thanks for cloning! but [[links]] don't work and other (internal)[links] don't link to markdown.md.


gah, you're right. perhaps that could be fixed with a few clever grep/sed incantations. very interesting repo if you're the author, by the way.


Side note: in Thailand and the Philippines, at least, mobile internet is blazing fast and not more expensive.


As someone living in one of those countries, I beg to differ. Cheap maybe, but mobile internet is neither fast nor stable.


I'm guessing that's the Philippines :-] It's still been good enough for me to get work done, video calls, etc. And mostly better than hotel/coffee shop WiFi.


It's mostly the metal roofs everywhere blocking the signal.


That's not universally true in all areas for both countries though.


In Vietnam it is also fast.


As someone who has been writing code for 30 years and has been developing "web apps" since the late 90s, it's really funny to me how things come full circle.

You just described the entire point of client-side rendering as it was originally pitched. Computation on the server is expensive and mobile networks were slow and limited in terms of bandwidth (with oppressive overage charges) just a few years ago. Client-side rendering was a way to offload the rendering work to the users rather than doing it upfront for all users. It means slower render times for the user, in terms of browser performance, but fewer network calls and less work to do server-side.

In other words, we used to call them "Single Page Web Applications" because avoiding page refreshes was the point. Avoid network calls so as to not consume bandwidth and not make unnecessary demands of the server's limited computational resources.

Now things might be changing again. Mobile networks are fast and reliable. Most people I know have unlimited data now. And while computation is still one of the more expensive resources, it's come down in the sense that we can now pay for what we actually use. Before, we were stuck on expensive bare-metal servers; we could scale by adding a new one, but we were likely overpaying because one wasn't enough and two was overkill except for peak traffic bursts. So we really scrambled to do as much as we could with what we had. Today it might be starting to make sense to make more trips back to the server, depending on your use case.

To address your concern about latency or outages, every application needs to be built according to its own requirements. When you say "there's not a perfect solution to this", I would say "there is no one size fits all solution." We are talking about a client / server model. If either the server or client fails then you have failed functionality. Even if you can get yourself out of doing a fetch, you're still not persisting anything during an outage. The measures that you take to try and mitigate that failure depend entirely on the application requirements. Some applications strive to work entirely offline as a core feature and they design themselves accordingly. Others can accept that if the user does not have access to the server then the application just can't work. Most fall somewhere in between, where you have limited functionality during a connection interruption.


People always take a good idea too far.

There's nothing wrong with loading a page and then everything on that page loads data from the server and renders it.

Where the issues come in is that modern SPA thinking claims loading a new page is unacceptable, and that somehow doing so means you can't fetch data and render anymore.

It's just not true.


I think the term SPA is somewhat confusing. Why can't an SPA, or parts of it, be rendered on the server as well?


> we used to call them "Single Page Web Applications" because avoiding page refreshes was the point

I wonder if the problem was really that the whole page was reloaded into the browser, which caused a big "flash" because all of the page was re-rendered. Maybe the problem was not reloading the page from the server but re-rendering all of it. If you can load just parts of the page from the server, the situation changes: it's OK if it takes some time for parts of the page to change, because nothing gets "broken" while they are refreshing. Whereas if you reload the whole page, everything is broken until all of it has been updated.
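For illustration, a minimal htmx-style sketch of loading just part of a page (the URL and ids are made up):

  <!-- clicking the button re-renders only #comments; the rest of the
       page stays put and interactive while the request is in flight -->
  <button hx-get="/posts/42/comments" hx-target="#comments" hx-swap="innerHTML">
    Refresh comments
  </button>
  <div id="comments">
    <!-- server-rendered comments land here -->
  </div>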


The problem was there was no concept of reusable components. IMO htmx is not the headline here; django-components (https://pypi.org/project/django-components/) is. Managing HTML, CSS and JS in reusable component chunks on the server used to be extremely awkward, especially once you need lifecycle events (HTML appeared on the page, let's attach all the necessary event listeners, but only to the right element, even in a list of 5 elements; internal HTML changed, let's see which things need more events, etc.).
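For the client half of that lifecycle problem, htmx does at least give you a generic hook to re-attach behaviour after a swap. A hedged sketch (the data-copy widget is made up, and the exact event payload may vary by htmx version):

  <script>
    // re-attach behaviour every time htmx swaps a fragment in
    document.body.addEventListener('htmx:afterSwap', (evt) => {
      // evt.target is the element whose content was just swapped
      evt.target.querySelectorAll('[data-copy]').forEach((el) => {
        el.addEventListener('click', () =>
          navigator.clipboard.writeText(el.dataset.copy));
      });
    });
  </script>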

I would try this approach out in a typechecked language, if I'm certain a native mobile app isn't going to be needed.


I think your explanation makes it very clear.

The difficulty with web development is that there are three different languages (HTML, CSS, JS) which all need to make assumptions about what is coded in the other languages. The JavaScript refers to a DOM element by id, and it assumes some CSS that the JS can manipulate in a specific way.

The ideal goal much of the time has been: "Keep content separate from presentation, keep behavior separate from content, etc." While this has been achieved at a superficial level by keeping CSS in a .css file, content in a .html file and behaviors in a .js file, they are not really independent of each other at all. And how they depend on each other is not declared anywhere.

That means that to understand how a web page works you must find, open and read three files.

Therefore, rather than having three different types of files, a better solution might be to have three files all of which contain HTML, CSS, and JS. In other words, three smaller .htm files, each also embedding the CSS and JS it needs.
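Something like this, as a toy illustration (the file name and content are made up): one small .htm fragment carrying its own markup, style and behaviour.

  <!-- greeting-card.htm -->
  <div class="greeting-card">Hello!</div>
  <style>
    .greeting-card { border: 1px solid #ccc; padding: 1em; cursor: pointer; }
  </style>
  <script>
    document.querySelector('.greeting-card')
      .addEventListener('click', () => alert('Hi back!'));
  </script>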


We had this middle ground of returning HTML fragments and updating subsections of the page. For posting a comment, for example.

There were plenty of sites doing that in the mid 2000s.


> There was a demo that showed it normally takes ~100ms to click a mouse, and if you attach to the mouse-down event, then by the time the mouse has been released (100ms later), you can have already fetched an updated rendered component from the server.

I think what you're describing is a form of preloading content but it's not limited to React.

For example:

The baseline is: You click a link, a 100ms round trip happens and you show the result when the data arrives.

In htmx, Hotwire or React you could execute the baseline as is and everyone notices the 100ms round trip latency.

In React you could fetch the content on either mouse-down or mouse-over so that by the time the user releases the mouse it insta-loads.

But what's stopping you from implementing the same workflow with htmx or Hotwire? htmx or Hotwire could implement a "prefetch" feature too. In fact htmx already has it with https://htmx.org/extensions/preload/. I haven't used it personally but it describes your scenario. The API looks friendly too, it's one of those things where it feels like zero effort to use it.
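Usage looks roughly like this (a hedged sketch: the script paths and endpoints are placeholders, and the default trigger may vary between htmx versions):

  <head>
    <script src="https://unpkg.com/htmx.org"></script>
    <script src="https://unpkg.com/htmx.org/dist/ext/preload.js"></script>
  </head>
  <body hx-ext="preload">
    <!-- starts the GET on mouse-down, so the response is often back
         before the mouse button is released -->
    <a href="/account" preload>Account</a>
    <!-- or start even earlier, on hover -->
    <a href="/settings" preload="mouseover">Settings</a>
  </body>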

Hotwire looks like it's still fleshing out the APIs for that, it has https://turbo.hotwired.dev/handbook/drive#preload-links-into... for pre-loading entire pages. There's also https://turbo.hotwired.dev/reference/frames#eager-loaded-fra... and https://turbo.hotwired.dev/reference/frames#lazy-loaded-fram... which aren't quite the same thing but given there's functionality to load things on specific events it'll probably only be a matter of time before there's something for preloading tiny snippets of content in a general way.


I actually have a module that I built that loads pages using such an approach (prefetch). In my take, I refined pre-fetching to be triggered a few different ways. You can fetch on hover, proximity (i.e. the pointer is within x distance of the href), intersection, or by programmatic preload (i.e. informing the module to load certain pages). Similar to Turbo, every page is fetched over the wire and cached, so a request is only ever fired once. It also supports targeted fragment replacements and a whole lot of other bells and whistles. The results are pretty incredible. I use it together with Stimulus and it's been an absolute joy for running SaaS projects.


Sounds really good, do you have that code published somewhere? Any plans to get it merged into the official libs?


I do indeed. The project is called SPX (Single Page XHR) which is a play on the SPA (Single Page Application) naming convention. The latest build is available on the feature branch: https://github.com/panoply/spx/tree/feature - You can also consume it via NPM: pnpm add spx (or whichever package manager you choose) - If you are working with Stimulus, then SPX can be used instead of Turbo and is actually where you'd get the best results, as Stimulus does a wonderful job of controlling DOM state logic whereas SPX does a great job of dealing with navigation.

I developed it to scratch an itch I was having with alternatives (like Turbo) which, despite being great, leverage a class-based design pattern (which I don't really like); other similar projects were either doing too much or too little. Turbo (for example) fell short in the areas pertaining to prefetch capabilities, and this is the one thing I really felt needed to be explored. The cool thing I was able to achieve with SPX was the prefetching aspect, and I was surprised no one had ever really tried it; or if they did, the architecture around it seemed to be lacking or conflicting to some degree.

A visitor's intent is typically predictable (to an extent), and as such, executing fetches over the wire and storing the response DOM string in a boring old object with UUID references is rather powerful. SPX does this really efficiently, and fragment swaps are a really fast operation. Proximity prefetches are super cool, but equally powerful are the intersection prefetches. If you are leveraging hover prefetches you can control the threshold (i.e. the prefetch triggers only after x time), and in situations where a prefetch is in transit the module is smart enough to reason with the queue and prioritise the most important request, aborting any others, allowing a visit to proceed uninterrupted and unblocked.

In addition to prefetching, the module provides various other helpful methods, event listeners and general utilities for interfacing with the store. All functionality can be controlled via attribute annotation, with extensibility for doing things like hydrating a page with a newer version that requires server-side logic and, from there, executing targeted replacements of certain nodes that need changing.

Documentation is very much unfinished (I am still working on that aspect); the link in the readme will send you to WIP docs, but if you feel adventurous, hopefully it will be enough. The project is well typed, rather small (8 kB gzipped), and easy enough to navigate in terms of exploring the source and how everything works.

Apologies for this novel. I suppose I get a little excited talking about the project.


This looks extremely similar to Unpoly to me.


Never heard of Unpoly, but it seems really cool. I will need to have a look at it more closely, but from a brief look I'd say SPX is vastly different.

In SPX every single page visit response (the HTML string) is maintained in local state. Revisits to an already visited page will not fire another request; instead the cached copy is used, similar to Turbo but with more fine-grained control. In situations where one needs to update, a mild form of hydration can be achieved. So by default there is only ever a single request made, and it is carried out (typically) by leveraging the pre-fetch capabilities.

If I get some time in the next couple of weeks I'll finish up the docs and examples. I'm curious to see how it compares to similar projects in this space. The hype I've noticed around htmx is pretty interesting to me, considering the approach has been around for years.

Interestingly enough, and AFAIK, the founder of GitHub, Chris Wanstrath, was the first person to introduce this ingenious technique to the web with his project "pjax". To see the evolution come back around is wild.


>In SPX every single page visit response (the HTML string) is maintained in local state. Revisits to an already visited page will not fire another request,

Unpoly does this.


If the click causes a state change, it would be complicated to pre-render (but not apply) it before the click.


> I personally would rather wait longer for the site to load, but have a more responsive site once it did.

If React sites delivered on that promise, that would be compelling. However, while my previous laptop was no slouch, I could very often tell when a site was an SPA just by how sluggish it ran. Maybe it's possible to build performant websites targeting such slower (but not slow!) machines, but it seemed that sluggish was quite often the norm in practice.


I wouldn't assume a fragment is any bigger than the raw data when it's compressed.

  { "things": [
    { "id": 183,
      "name": "The Thing",
      "some date": "2016-01-01",
    },
    { "id": 184,
      "name": "The Other Thing",
      "some date": "2021-04-19",
    },
  ]}
Vs

  <tbody>
    <tr><td>183</td><td>The Thing</td><td>2016-01-01</td></tr>
    <tr><td>184</td><td>The Other Thing</td><td>2021-04-19</td></tr>
  </tbody>
They seem extremely similar to me.


The issue with this model is that many state updates are not scoped to a single fragment. When you go to another page, you'll likely want to update the number of results on one or more detached components. That's way more natural to do by getting the length of an array of data structures than from an HTML fragment.
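For context, the fragment-model answer to that case is usually an out-of-band swap: the response carries the main fragment plus extra elements that patch detached components by id. A hedged htmx-flavoured sketch (URLs and ids are made up):

  <!-- response to GET /things?page=2 -->
  <tbody id="results">
    <tr><td>185</td><td>Yet Another Thing</td><td>2021-05-02</td></tr>
  </tbody>
  <!-- swapped into the existing #result-count element elsewhere on the page -->
  <span id="result-count" hx-swap-oob="true">27 results</span>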


Possibly, yes, although many SPA sites seem to hit the server for updates on every (visual) page change anyway, to get the latest results. It's rare that a UI will hold all the records to count locally, unless it's a small and slow-changing data set, so it's asking for a count either way, whether it gets it wrapped in HTML or in JSON.


We have a set up like this at my job. The servers are based in the Pacific Northwest. We have users in Europe and India.

You can guess how awful the user experience is.


It's almost as if developers are treating latency as if it followed a kind of Moore's law, as with memory or CPU.


Would this not be a concern for React (and other SPAs) as well? I'm no UI expert, but from what I've seen of React/Vue UIs in previous companies, you still have to hit the server to get the data, though not the UI components. The difference in size between just the data in, say, JSON, and the entire HTML component would be very minimal considering both would be compressed by the server before sending.


There are frameworks that let you apply changes locally, (optimistically) instantly update the UI, and asynchronously update the server state. That is a win.
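A bare-bones sketch of that optimistic pattern, framework aside (the endpoint and element ids are made up):

  <script type="module">
    // flip the UI immediately, then reconcile with the server;
    // roll back if the request fails
    async function toggleLike(button) {
      const nowLiked = button.classList.toggle('liked'); // instant feedback
      try {
        const resp = await fetch('/api/posts/42/like', {
          method: nowLiked ? 'POST' : 'DELETE',
        });
        if (!resp.ok) throw new Error('server rejected the change');
      } catch {
        button.classList.toggle('liked'); // failed: undo the optimistic change
      }
    }
    document.getElementById('like-btn')
      ?.addEventListener('click', (e) => toggleLike(e.currentTarget));
  </script>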

On the other hand, I have seen implementations of SPA "pages" that move from a single fetch of HTML to multiple round trips of dependent API calls, ballooning latency.


Even worse for Chinese users who have to browse many US sites with a VPN (e.g. me)


> is it ever acceptable to round-trip to the server to re-render a component that could have been updated fully client-side?

htmx doesn't aim to replace _all_ interactions with a round trip; in fact the author is developing hyperscript (https://hyperscript.org/) for all the little things that happen purely client-side.
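For those purely client-side bits, it tends to look something like this (a sketch: the script path is from memory and the menu id is made up):

  <script src="https://unpkg.com/hyperscript.org"></script>
  <!-- no server round trip: toggling a menu stays entirely in the browser -->
  <button _="on click toggle .open on #menu">Menu</button>
  <nav id="menu">…</nav>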

But in any case, even an SPA does round trips to get data stored on the server. The question becomes: is it better to make the backend spout JSON and then translate it, or to make the backend spout HTML that's directly usable?


If the internet is slow, it will be horrible for the user to download a full blown-up JS bundle on first load. It also doesn't remove the fact that any resource change will require a round trip to the backend and back.



