>JSON is fine [..] With JSX on the server side, it's abstraction when there's no need to be. And in the wrong place to boot.
I really don't know what you mean; the transport literally is JSON. We're not literally sending JSX anywhere. That's also in the article. The JSON output is shown about a dozen times throughout, especially in the third part. You can search for "JSON" on the page. It appears 97 times.
Replacing innerHTML wasn’t working out particularly well—especially for the highly interactive Ads product—which made an engineer (who was not me, by the way) wonder whether it’s possible to run an XHP-style “tags render other tags” paradigm directly on the client computer without losing state between the re-renders.
HTML is still a document format, and while a lot of features have been added to browsers over the years, it remains the core of any web page. It's a given that state doesn't survive renders. In desktop software, the process stays alive while the UI is shown, which makes keeping state easy, but web pages started as documents, and the API reflects that. So calling it an issue is like saying a fork is not great for cutting.
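To make that concrete, here's a minimal sketch (hypothetical markup, plain DOM APIs) of what "state doesn't survive renders" looks like when you re-render by replacing innerHTML:

```js
// Minimal sketch (hypothetical markup): re-rendering by replacing innerHTML
// throws away DOM state like typed text and focus.
const app = document.getElementById('app');

app.innerHTML = '<input id="name" placeholder="Your name">';
const input = document.getElementById('name');
input.value = 'Ada';   // the user typed something
input.focus();         // and the field is focused

// "Re-render" the same UI from scratch:
app.innerHTML = '<input id="name" placeholder="Your name">';

console.log(document.getElementById('name').value); // "" (the value and the focus are gone)
```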
React is an abstraction over the DOM that gives you a better API when you're trying to avoid re-rendering from scratch. And you can then simplify the format for transferring data between server and client. Net win on both sides.
But the technique described in the article is like having a hammer and seeing nails everywhere. I don't see the advantages of having JSX representation of JSON objects on the server side.
>I don't see the advantages of having JSX representation of JSON objects on the server side.
That's not what we're building towards. I'm just using "breaking JSON apart" as a narrative device to show that Server Components componentize the UI-specific parts of the API logic (which previously lived in ad-hoc ViewModel-like parts of REST responses, or in the client codebase where REST responses get massaged).
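Roughly, the idea looks something like this (a sketch with hypothetical names, not code from the article): instead of a REST endpoint hand-assembling a UI-shaped JSON response, a Server Component reads the data and returns the UI-shaped tree directly. What actually goes over the wire is still JSON, a serialized description of that tree, not JSX text.

```jsx
// Hypothetical sketch: `db`, `Card`, and `LikeButton` are made-up names.
// This component runs only on the server and never ships to the client.
async function PostPreview({ postId }) {
  const post = await db.posts.find(postId); // server-only data access
  return (
    <Card title={post.title}>
      <LikeButton initialCount={post.likeCount} />
    </Card>
  );
}
```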
I'm pointing out that this particular pattern (Server Components) is engendering more complexity than necessary.
If you have a full-blown SPA on the client side, you shouldn't use ViewModels, as that will tie your backend API to the client. If you go for a mixed approach, then your presentation layer is on the server and it's not an API.
HTMX is cognizant of this fact. What it adds are useful and nice abstractions on the basis that the interface is constructed on one end and used on the other. RSC is a complex solution for a simple problem.
Note “instead of replacing your existing REST API, you can add…”. It’s a thing people do these days! Recognizing the need for this layer has plenty of benefits.
As for HTMX, I know you might disagree, but I think it’s actually very similar in spirit to RSC. I do like it. Directives are like very limited Client components; server partials of your choice are like very limited Server components. It’s a good way to get a feel for the model.
If it is new to the DOM it will get added. If it is present in the DOM (based on id and other attributes when the id is not present) it will not get recreated. It may be left alone or it may have its attributes merged. There are a ton of edge cases though, which is why there is no native DOM diffing yet.
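Very roughly, and glossing over those edge cases, the matching rule described above looks something like this (a naive sketch, not any library's actual implementation; it ignores nesting, reordering, and attribute removal):

```js
// Naive sketch of id-based morphing: nodes matched by id are kept (preserving
// their state) with attributes merged in; new nodes are added; old nodes with
// no match are dropped.
function naiveMorphChildren(oldParent, newParent) {
  const oldById = new Map(
    [...oldParent.children].filter((el) => el.id).map((el) => [el.id, el])
  );
  const nextChildren = [...newParent.children].map((newChild) => {
    const existing = newChild.id ? oldById.get(newChild.id) : undefined;
    if (!existing) {
      return newChild; // new to the DOM: it gets added
    }
    // Present in the DOM: keep the existing node, merge attributes from the new version.
    for (const { name, value } of [...newChild.attributes]) {
      existing.setAttribute(name, value);
    }
    return existing;
  });
  oldParent.replaceChildren(...nextChildren);
}
```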
it happens because people really want to participate in the conversation, and that participation is more important to them than making a meaningful point.
I don't think it would do justice to the article. If I could write a good tldr, I wouldn't need to write a long article in the first place. I don't think it's important to optimize the article for a Hacker News discussion.
That said, I did include recaps of the three major sections at their end:
Look, it's your article, Dan, but it would be in your best interest to provide a tldr with the general points. It would help keep people from misjudging your article (this has already happened). It might also make the article more appealing to people who initially dismissed reading something so long. And providing some kind of initial framework would help those who are actually reading it follow along.
That is not a good reason to make the content unnecessarily difficult for its target audience. Being smart also means being able to communicate with those who aren't as brilliant (or just don't have the time).
Yet because of that, the issue they were concerned about was shown to thread readers without their having to read 75 pages of text.
Quite often people read the forum thread first before wasting their life on some large corpus of text that might be crap. High-quality discussions can point out poor-quality (or at least fundamentally incorrect) posts and the reasons behind them, enlightening the rest of the readers.
>The old way was to return HTML fragments and add them to the DOM.
Yes, and the problem with that is described at the end of this part: https://overreacted.io/jsx-over-the-wire/#async-xhp