
TLDR:

- Took 2 months (21K LoC, mostly JavaScript)

- No reduction in user experience

- Reduced LoC by 67% (21,500 LoC to 7,200 LoC)

- Increased Python LoC by 140% (500 LoC to 1,200 LoC), good if you prefer Python to JS

- Reduced JS dependencies by 96% (255 to 9)

- Reduced web build time by 88% (40s to 5s)

- First load time-to-interactive was reduced by 50-60% (from 2-6 seconds to 1-2 seconds)

- Much larger data sets were possible than React could handle

- Memory usage was reduced by 46% (75MB to 45MB)

These are spectacular numbers that reflect that the application in question is highly amenable to the hypermedia approach.

I wouldn't expect everyone to see this level of improvement, but at least some web apps would.




I think the value of markup-based templating approaches such as HTMX and SGML comes from enabling content authors to create dynamic document-oriented sites without having to deal with dangerous tools such as JS and the endless ways to shoot yourself and your visitors in the foot (through e.g. injections and other security issues). The difference between HTMX and SGML is that HTMX integrates templating as a markup vocabulary extension, while SGML brings syntactical templating at the markup declaration level. A developer, OTOH, might choose JS-heavy approaches to avoid limitations and having to learn new syntax/concepts, and to re-use existing knowledge from/to other projects. I don't know that speed of development is of primary concern.
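
To make that concrete, here is a minimal sketch of the hypermedia style with htmx and Django (the view and model names are made up, but hx-get, hx-trigger, hx-target and the HX-Request header are standard htmx). The markup declares the behaviour, and the server answers with an HTML fragment instead of JSON:

    # views.py -- illustrative sketch; Item is a hypothetical model.
    from django.shortcuts import render
    from myapp.models import Item

    # The markup drives the interaction, e.g.:
    #   <input name="q" hx-get="/search"
    #          hx-trigger="keyup changed delay:300ms" hx-target="#results">
    #   <div id="results"></div>

    def search(request):
        q = request.GET.get("q", "")
        results = Item.objects.filter(name__icontains=q)[:20]
        # htmx requests carry an HX-Request header, so one view can serve
        # either the full page or just the fragment swapped into #results.
        if request.headers.get("HX-Request"):
            return render(request, "results_fragment.html", {"results": results})
        return render(request, "search_page.html", {"results": results})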


> Much larger data sets were possible than React could handle

I find this one difficult to believe. I'm not calling BS necessarily, but I doubt it applies in the general case.

I have a Django app that's server-side rendered, and its largest page is very large (~200 KB of content). Django on a low-tier VPS took about 500ms just to render that page. Django templates aren't faster than React. And Python isn't faster than JS.

If they're seeing performance increases I'd guess it's either that they're being more judicious about their queries (pulling only what they need from the database instead of filtering client-side), or the React app had a lot of complexity and they simplified the UI when re-writing.
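
To illustrate the query point, a rough sketch of the difference in Django (Order and its fields are invented names):

    from myapp.models import Order  # hypothetical model

    # Fetch-everything-and-filter-in-JS equivalent: full table, full rows.
    orders = list(Order.objects.all())

    # Judicious version: let the database filter, trim the columns,
    # and only pull one page of results.
    orders = (Order.objects
              .filter(status="open")            # WHERE clause, not a JS .filter()
              .only("id", "total", "status")    # just the fields the page shows
              [:100])                           # LIMIT 100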


I’d be a little surprised if rendering that same Django page as a server-rendered React app wasn’t even slower.

Historically, server-rendered React has been painfully slow compared to Django (I’ve been doing SSR with React since 2014, using Django since 2006, experienced the pain first hand multiple times). Usually at least an order of magnitude slower. That said, I haven’t benchmarked in a while, perhaps the worst of it has since been addressed.
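
If anyone wants to re-run the Django half of that comparison, it only takes a few lines standalone (the template and row count here are invented; the React half would be the analogous ReactDOMServer.renderToString loop):

    # Rough standalone micro-benchmark of Django template rendering.
    import time
    import django
    from django.conf import settings

    settings.configure(TEMPLATES=[
        {"BACKEND": "django.template.backends.django.DjangoTemplates"},
    ])
    django.setup()

    from django.template import Template, Context

    tmpl = Template("<ul>{% for row in rows %}<li>{{ row }}</li>{% endfor %}</ul>")
    ctx = Context({"rows": range(5000)})

    start = time.perf_counter()
    html = tmpl.render(ctx)
    print(f"rendered {len(html)} bytes in {(time.perf_counter() - start) * 1000:.1f} ms")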


Speed is the big one for me. 2-6+ seconds is insanity for anything.


Speed is definitely a big one.

I also am glad to see that everyone on the dev team became full stack developers, because I think the back-end/front-end split is often detrimental to development velocity. It's often better when a developer can fully realize an entire feature, with no front-end/back-end friction.


Oh for sure. For me programming is a hobby, so I can only get something made if I do it all.


I hear you, but YouTube takes 6+ seconds to load for me and it doesn't seem to hold them back. For most sites, though not all, time spent optimizing page load is probably better spent elsewhere. This is in no way meant to impugn htmx, because with htmx you seem to kill many birds with one stone.


YouTube is kind of unique in that nothing (currently) even comes close to replacing it for the average YouTube user.


Details matter, as they learned when making things faster made their metrics slower - https://blog.chriszacharias.com/page-weight-matters


I would argue that 1-2 seconds is not impressive either.

It's like seeing people breaking rocks with hand tools being impressed with a bigger mallet.

I used to aim for 15 millisecond cold load times, which is apparently unheard of these days even for front pages with entirely static content.


I'm with you on what's impressive and what isn't. I've done considerable work on page load performance engineering in the past, getting times down to the low single-digit milliseconds as you would like while maintaining high traffic levels. I know how to make every part of the system work together to minimise response and rendering latency, which is nice for people with suitably low-latency connections, and for APIs that respond nicely in client-side applications.

Unfortunately, for myself I can't even get a ping response from the ISP upstream router in 15ms, let alone a static page over HTTPS.

None of my internet connections has sufficiently low latency - neither home nor office.

HN takes 400-600ms to load, but that's understandable due to physics. Wrong country.

I just loaded news.bbc.co.uk, which is in my country and is also well connected, and saw that DNS resolution took up to 400ms, and TLS setup took up to 650ms (though not both at the same time in a single request). Total page load time was about 2s.

Those numbers seem unnecessarily high on this connection. But 15ms is too optimistic: the network latency isn't low enough, even for a small static page.
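
For anyone curious where their own numbers like these come from, a stdlib-only Python sketch that splits out DNS, TCP and TLS time (the host is just an example):

    import socket, ssl, time

    host, port = "news.ycombinator.com", 443

    t0 = time.perf_counter()
    ip = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)[0][4][0]
    t1 = time.perf_counter()                  # DNS resolved

    sock = socket.create_connection((ip, port), timeout=10)
    t2 = time.perf_counter()                  # TCP handshake done

    tls = ssl.create_default_context().wrap_socket(sock, server_hostname=host)
    t3 = time.perf_counter()                  # TLS handshake done
    tls.close()

    print(f"DNS {1000*(t1-t0):.0f} ms, "
          f"TCP {1000*(t2-t1):.0f} ms, "
          f"TLS {1000*(t3-t2):.0f} ms")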

There are a lot of people in a similar situation, living with connection latencies you would consider high, but it's all we can get.


15ms to who? I’ve never had that kind of latency on a cold connection. My pages have an LCP of around 600ms, and it’s hard to push it much lower because even static pages on a CDN end up taking 400ms to connect and download.


15 ms to anyone in the same city or on the same local network in an office setting.

50 ms to anyone in the same country, ideally lower.

Global reach is a different problem because of physics.

However, front pages of web sites tend to be largely static and can be staged in various geo-distributed regions; in other words, distributed via a CDN.

> even static pages on a CDN end up taking 400ms to connect and download.

Only if you stuff them full of megabytes of JavaScript and pull down megabytes of JSON in order to display that static content.

The fact that my comment -- a factual statement about real-world performance I've achieved regularly -- is voted down and your off-by-an-order-of-magnitude reply is voted up speaks volumes about the state of the industry.

It's like a bunch of fat people being flabbergasted about the mere concept of mountain climbing. With what... your legs!? Up there!? Madness!


My site is a static page hosted on a CDN with less than 80KB of JavaScript.

If you’re testing local network times, you’re just fooling yourself. None of your visitors are seeing that time, so it’s irrelevant.

What’s your URL? I’d love to throw it into webpagetest.org and take a look.


Well yeah, but start trimming with the biggest wins first.



