Even with tooling getting better, you're still faced with building a backend API whose only consumer, in all likelihood, will ever be your own frontend. You're also faced with duplicating logic on the frontend and backend in many scenarios (though of course this can be mitigated with a Node backend? I don't know... I've never used Node).
As for latency, this is over websockets. It becomes a problem if you're connecting to a server across the ocean, but otherwise it's incredibly fast and feels just as snappy as JS. I actually haven't used Stimulus Reflex, just Phoenix LiveView, so I'm not sure whether they work exactly the same way, but the TL;DR is that they send VERY tiny payloads over websockets. This video explains it well (though it's also a demo of LiveView): https://www.youtube.com/watch?v=MZvmYaFkNJI
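For a rough sense of what that looks like, here's a minimal counter sketch (module name illustrative, and it assumes a recent LiveView with the ~H sigil). After the first render, clicking sends just the event name upstream and just the changed value back down, not the whole template:

```elixir
# Minimal LiveView sketch. LiveView tracks the template's static/dynamic
# split, so after the initial render a click only ships the event up and
# the new value of @count back down the websocket.
defmodule MyAppWeb.CounterLive do
  use Phoenix.LiveView

  def mount(_params, _session, socket) do
    {:ok, assign(socket, count: 0)}
  end

  def handle_event("inc", _params, socket) do
    {:noreply, update(socket, :count, &(&1 + 1))}
  end

  def render(assigns) do
    ~H"""
    <button phx-click="inc">Clicked <%= @count %> times</button>
    """
  end
end
```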
Oh definitely, totally agree that the status quo of duplicating logic is unnecessary.
And I realize this is using websockets, and that the payloads are tiny, but you can't fight the speed of light ;), so it's more of a latency concern (vs. bandwidth). In order to truly keep all the state and rendering on the server, even the smallest UI interaction requires a round trip, and unless you have edge nodes all across the country, that seems like it will be noticeable.
With some actions like "search", "save" or "reload" I would expect a spinner and a delay anyway, so that's not a problem. But if opening a dropdown menu or adding an item to a list (like in the example) feels slow, that might not be the best user experience. Perhaps it's not a problem in practice, though; I haven't seen too many examples.
Yep, it definitely can be a concern! It takes a lot of distance before latency starts to be noticeable, so it all depends on your use case. These tools aren't one-size-fits-all. Lots of people are building things that aren't going global (at least not right off the bat), so these tools provide a way of moving very quickly out of the gate: no need to build an API for yourself, no need to duplicate any frontend and backend logic. If you do go global, you're probably in a good place to start serving your global customers from servers closer to them; the company I work for, for example, has to do that regardless. That fits under "a nice problem to have," which can be solved after your business has been validated.
But yes, there are many cases where this will not work. It's just a verrrrry attractive option when it can.
> the TL;DR is that they are sending VERY tiny payloads over websockets
If you have a server in NY and you happen to live in Germany, it's going to take about 100-120ms to do a network round trip in a best-case scenario (a high-quality wired connection), even with a 1-byte payload. For most websites running in a single datacenter, that means a massive portion of the world's population is going to feel that latency.
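A back-of-the-envelope sketch of that floor (the distance and fiber-speed figures are approximations):

```elixir
# Theoretical minimum RTT, NY <-> Germany. Approximations: ~6_200 km
# great-circle distance, and light in fiber travels at roughly 2/3 of c,
# i.e. ~200_000 km/s.
distance_km = 6_200
fiber_speed_km_per_s = 200_000

rtt_ms = 2 * distance_km / fiber_speed_km_per_s * 1_000
IO.puts("theoretical floor: #{rtt_ms} ms")
# => 62.0 ms -- before routing, queueing, and non-great-circle cable paths,
# which is how you land at 100-120 ms in practice.
```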
That's why I'm not sold on using LV and websockets for everything (such as transitioning from page A to B, etc.). Hotwire Turbo Drive / Frames use HTTP, which means you can cache responses and send back 304s when the content doesn't change. HTTP feels like the right protocol for that, and then you can save websockets for when you need to broadcast relatively small amounts of new / updated / removed content to one or more connected clients. That's what the Hotwire model provides.
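A minimal sketch of what the 304 path buys you, in a Phoenix controller (module and helper names are illustrative, and the MD5 ETag plus the simple header check are simplifications; real code would handle comma-separated If-None-Match lists):

```elixir
defmodule MyAppWeb.PageController do
  use MyAppWeb, :controller

  def show(conn, _params) do
    body = page_body()
    etag = ~s("#{Base.encode16(:crypto.hash(:md5, body), case: :lower)}")

    if etag in get_req_header(conn, "if-none-match") do
      # Client's cached copy is still valid: no body goes over the wire.
      send_resp(conn, 304, "")
    else
      conn
      |> put_resp_header("etag", etag)
      |> send_resp(200, body)
    end
  end

  # Hypothetical stand-in for the real rendering.
  defp page_body, do: "<html>hello</html>"
end
```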
Yeah, I'm silently on your side on this point everywhere you go! I've already said my piece once and fought the good fight.
Stateful (I mean really keeping things in assigns) sounds like inefficient caching. Not all assigns are per-user, and caching the same things in each process is redundant. To share those things as one set of data, you have to put them somewhere else, most likely in a process under the root supervision tree. At that point there's less of a point to using LV.
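Something like this sketch, assuming an ETS table owned by a process under the supervision tree (names illustrative): every LiveView process reads the one shared copy instead of duplicating it in its own assigns.

```elixir
# One owner process under the root supervision tree; LiveViews call
# SharedCache.get/1 instead of each caching the data in assigns.
defmodule MyApp.SharedCache do
  use GenServer

  def start_link(_opts), do: GenServer.start_link(__MODULE__, nil, name: __MODULE__)

  def init(nil) do
    # :protected = only this process writes; any process can read.
    :ets.new(:shared_cache, [:named_table, :set, :protected, read_concurrency: true])
    {:ok, nil}
  end

  def put(key, value), do: GenServer.call(__MODULE__, {:put, key, value})

  def get(key) do
    case :ets.lookup(:shared_cache, key) do
      [{^key, value}] -> value
      [] -> nil
    end
  end

  def handle_call({:put, key, value}, _from, state) do
    :ets.insert(:shared_cache, {key, value})
    {:reply, :ok, state}
  end
end
```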
Folks forget how fast Phoenix.View rendering from a controller is (a hello page renders in microseconds!). It's pretty damn good.
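For contrast, the stateless path is just this (names illustrative):

```elixir
# Plain request/response: a controller action rendering a Phoenix.View
# template per request. No socket, no long-lived process, no assigns to keep.
defmodule MyAppWeb.HelloController do
  use MyAppWeb, :controller

  def index(conn, _params) do
    render(conn, "index.html")
  end
end
```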
> which means you can cache responses and send back 304s when the content doesn't change.
Unless I misunderstand - that's still a round-trip to fetch and return a 304? Or are you thinking an edge cache closer to the end-user?
(I don't disagree that websockets aren't a universal solution, but when used for server push, it's difficult to see how plain HTTP could have lower latency. Long polling would be similar, but probably (even) harder to scale if you want 10k+ open connections.)
I addressed this in another reply but TL;DR, these solutions are totally use-case dependent and they don't pretend to be one-size-fits-all (or at least LiveView doesn't).