This is a brilliant list; I'm a particular fan of HTMX and Alpine.js.
The move back towards small, dependency-free JS libraries, in combination with modern JS ES modules, is absolutely brilliant.
I learnt web dev back in the late 90s by doing "view source", and was still learning about new things that way well over a decade later. If we can move back towards that, by not having a build step, it will be amazing for new devs starting out and learning new things.
Obviously larger apps, with many dependencies, will require all the incredible work that's gone into modern JS tooling, but for so many simpler (and not-so-simple) sites this process really does make sense.
> I learnt web dev back in the late 90s by doing "view source", and was still learning about new things that way well over a decade later. If we can move back towards that, by not having a build step, it will be amazing for new devs starting out and learning new things.
Same here, and I couldn't agree more. Had minified library soup with dynamic page content been the norm back then it would've been much harder to get started, and there's a high chance I would've just given up somewhere in the process.
Having a build step also increases activation energy and friction which impedes the sort of in-the-moment tinkering that often sparks projects.
You guys are singing the song of my people. For years I've been confounded at the idea that we need to be building/compiling interpreted code.
And the tooling... my God! I know this makes me sound like an old man (because I am) but, it used to be all I needed was an editor, an FTP client, and a browser. write-upload-refresh. Today, I find VS Code, `git push`, and a browser still serves me very well.
I'm also against building and compiling (I work with Python and Ruby on the backend) but there were good reasons to have those steps, at least in the past and probably still now. Short and very incomplete list, feel free to reply and add points:
* slow network, especially on mobile
* reduce the size of the download by minifying
* reduce the number of calls by bundling files in a single one
* inconsistent JS and HTML feature support among browsers
* transpiling to a common older version of JS
* add polyfills to implement missing features
* add widgets to replace broken default implementations, or widgets that are nearly impossible to standardize (date and time pickers might need tons of features and are projects in their own right)
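As an illustration of the polyfill point above, a classic guarded polyfill (this one for `Array.prototype.includes`, sketched from memory rather than copied from any particular library) only patches the method when the browser lacks it:

```javascript
// Guarded polyfill: only define includes() if the runtime lacks it.
if (!Array.prototype.includes) {
  Object.defineProperty(Array.prototype, "includes", {
    value: function (search, fromIndex) {
      // Coerce fromIndex the simple way; real polyfills follow the spec more closely.
      var start = Math.max(fromIndex | 0, 0);
      for (var i = start; i < this.length; i++) {
        // SameValueZero-style check so NaN is findable, as the spec requires.
        if (this[i] === search || (search !== search && this[i] !== this[i])) {
          return true;
        }
      }
      return false;
    },
  });
}
```

A build step typically injects these automatically (e.g. via core-js) based on a target browser list, which is exactly the convenience being traded away by skipping the build.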
Most of those points could be taken care of by building only when deploying though, skipping the phase during development. I think most of the annoyance with a build step comes from how you can’t do anything unless the tooling for it is set up, as well as how it gunks up the tweak → reload cycle.
The irony is that JavaScript handlers on elements were frowned upon in the 90s/00s because of separation of concerns: HTML should be HTML, and JS should attach its events in a separate file. Now all of these libraries have an onclick that mirrors those original JS event handlers in the same HTML, with no separation!
Only thing missing is inlining your CSS styles on each element; only a madman would try that though... [cough]... https://github.com/samwillis/x-style
Tailwind is awesome and gets so much right. However, it requires a build step using local dev tools, and it has a complete DSL that reimplements most of css as classes.
Utility classes for uniform padding, spacing, borders, colours, etc. are brilliant. But my suggestion with x-style is that placing the actual CSS on an element may be better.
Did you even click the link? The "Why?" section explains the problem with tailwind and the code sample shows it's doing a different thing, closer to htmx.
If implemented incorrectly, these onclick handlers are a security hazard, because they prevent you from using a strict content security policy. https://www.w3.org/TR/CSP2/#directives
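For reference, a strict policy like the one below (a minimal sketch, not a recommendation for any particular site) blocks inline `onclick="…"` attributes entirely; adding `'unsafe-inline'` back to `script-src` is what it takes to allow them:

```http
Content-Security-Policy: default-src 'self'; script-src 'self'
```

With that header in place, handlers have to be attached from an allowed external script via `addEventListener` rather than written inline in the HTML.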
I'll never understand CSP not allowing onclick handlers. Having all your javascript in a separate file makes it very hard to diagnose and understand what is causing the event on the element.
onclick and other in-HTML handlers have some unsafe eval-like behavior for old compatibility reasons (with ES1 and the old web/old DOM) and I feel like the CSP designers were overly-cautious of XSS exploits via DOM manipulation that are hard to do in practice, but still in theory a major security concern.
I wonder, if there were a better way to opt in to "use strict" (and maybe even ESM friendliness) in onclick handlers, whether that would have fewer CSP concerns. I doubt there are any current proposals to build such tools for HTML, though.
Unfortunately we need the JS still because the standards are so lacking, and when they’re not it takes a while to get everybody on it.
As a general example, despite three decades of the web existing across the world in many jurisdictions with more than one language, the solution to translation for a site is just some translation framework or tediously maintaining a whole copy of your site per language.
The standards are fine. The problem is that the front end web is the DOM. I love the DOM, but it scares the shit out of people. Most people would rather cut their hand off or sell their children into slavery than walk the DOM with 3 instructions. It's super weird, and it gets even weirder when those people are actually confronted with this irrational behavior.
The beauty of htmx is that it doesn't matter what you use on the backend.
Literally anything that'll render HTML will suffice. Even static HTML files on a web server will do. No need for an application server at all if you're clever and your needs are simple.
Not necessarily. It depends on what you need. For example, suppose you were implementing a news reader with two panes: the left pane is a list of titles and previews, the right pane is the full article for the selected item in the left pane.
You can make the left pane a list of links to dedicated pages, and then add an htmx attribute indicating that, when the user has JavaScript, it should instead fetch an HTML snippet with just the full article and swap it into the right pane.
If you had a reasonably bounded set of articles you could statically generate all the /article/id.html and the /article/id/snippet.html files, though a backend probably makes sense.
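A minimal sketch of that two-pane setup (the attribute names are real htmx attributes; the URLs are made up for illustration):

```html
<!-- Left pane: plain links that work without JS; htmx upgrades them -->
<ul id="titles">
  <li>
    <a href="/article/42"
       hx-get="/article/42/snippet"
       hx-target="#article"
       hx-push-url="/article/42">Article title</a>
  </li>
</ul>

<!-- Right pane: htmx swaps the fetched snippet in here -->
<div id="article"></div>
```

Without JavaScript the links navigate to full pages as usual; with it, htmx intercepts the click, fetches the snippet, and swaps only the right pane.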
At that point, how big is the difference really between sending the right pane only as opposed to the whole page performance wise?
I'm just not sure if this approach is worth it for me.
In that case you need to have everything rendered server side and perhaps get creative with the usage of forms firing off events that can be intercepted by the server and then return a new page.
Certain things like nav menu dropdowns and folding text can't be done with native CSS and HTML.
Maybe! I was messing with different UI approaches to relay this data and this made sense to me, but I'll see if switching to rows is more clear. Thanks for the idea!
The column layout is basically impossible for me to meaningfully navigate on my laptop -- the scrollbar and descriptions don't both fit on the screen (appreciate the repeated titles at the bottom, but ultimately they don't say much)
On mobile, I actually think it's pretty good like this. The names of the fields should stay visible when scrolling, though. Maybe this could be made reactive?
...IE11 hasn't been relevant on the public web for at least a decade now, and I'm fairly sure it's now entirely gone from corporate/enterprise situations too, given MS' even-more-aggressive-than-usual campaign to kill it off in February of this year: https://www.cnet.com/tech/services-and-software/rip-internet...
...why is that there? It's weird - it's like saying iPhone 14 is not compatible with 30-pin dock-connectors.
Internet Explorer 11 is still supported in Windows Server 2022, which goes out of support in October 2031. It's also supported in Server 2016 and 2019.
It's also supported in Windows 10 LTSC versions for large companies.
IE 11 mode for Edge is also supported until at least 2029 on all currently in-support Windows operating systems.
So it can be very relevant if you have legacy systems that still need to be maintained.
And McDonalds released a game about Grimace’s birthday for the Game Boy Color this week. It’s still not a relevant gaming platform for 99% of developers.
It's there because some libraries explicitly call it out as a benefit so I included it. The code is OSS (https://github.com/adamghill/unsuckjs.com) -- feel free to make a PR to be the change you want to see in the world.
Not true. There are still legacy use cases where IE11 is required for support in enterprise and regulated businesses. Ending soon? Hopefully. But not gone.
Sadly, that is a commentary on the quality and efficiency of the US healthcare system.
Before I get an onslaught of downvotes: my wife is a Nurse Practitioner, and she and literally every provider she knows that I have met agree that our healthcare system is backwards, inefficient, and anything but caring for or about people's health.
I can do the same thing via NextJS and output pure HTML and CSS without any JS at all. Or with server components, output only the minimal JS needed. Not sure why everyone wants to write a programming language inside HTML like lit does.
The king of "use JS frameworks, output pure HTML" is Astro. (https://astro.build/) It's basically an SSG where you can use a number of frameworks to write your components, and shipping any JS is explicitly opt-in.
Astro is pretty nice too, I was going to write my blog with it but I liked the instant page transitions that NextJS provides out of the box for which I couldn't find a suitable solution in Astro.
Doesn’t NextJS do that with client-side JS though?
I recently moved from Gatsby, which did that — and the transitions sure are instantaneous — to Astro, which outputs plain HTML pages. Each navigation is a request to the server. Yeah, they take a touch longer. But the response is tiny.
I used to use Gatsby, but the non standard plugin system as well as forcing GraphQL (I use GQL in my own projects but it's just not necessary for a blog site for example, which Gatsby markets itself as being for) put me off Gatsby and onto NextJS, especially when NextJS introduced their static site rendering features. Now with server components, there doesn't seem to be a real need to use Gatsby anymore.
I tried preload, it doesn't work as well as with NextJS which, yes, uses client side JS. But critically, the pages still work without JS if one so chooses. It's progressive enhancement which Astro does not yet have.
Astro is a metaframework that among others can use Svelte for both static and interactive components. Plain Svelte without a metaframework is not suitable for normal websites with multiple pages, but Astro is.
As for Astro vs SvelteKit, I generally prefer Astro's approach to filesystem-based routing (no +page etc.), I prefer its MPA approach to the client-side routing SvelteKit and others use out of the box, I like that it's very tailored towards content and being an SSG (content collections, MD and MDX support out of the box, currently experimental automatic image optimization), and the "integrations" are a huge time-saver. Some common ones you can add with `astro add`, and many others are just a quick config edit away.
I recently built a pretty performant portfolio site with Astro in a single day using DecapCMS and UnoCSS, both as integrations, all the client had to do was accept the invite from Netlify Identity and start adding content to their site!
You can use lit components within Next, or within anything really. If you’re building a single app, use what you’re comfortable with. At my job, we’re often creating components that need to be used within an increasing number of frameworks because every client is using something different. Having a custom element instead of needing to load React on every client’s website is huge.
Interesting, at the workplaces I've been at, we only used React, nothing else, in order to make the code-sharing problem much easier, as well as not have developers learning many different frameworks just to get work done.
But wouldn’t you be happier if each and every component on the page paid the price of a full templating system?
No?
Oh yeah, I forgot that Web Components are a terrible idea that are only marginally successful because they can claim to be a “standard” when they’re not any more standard than any other JavaScript.
You only "pay" for as many template systems as you use, not one for every component, and modern template systems are far smaller than the major frameworks.
Each independent script you import is another load blocker. You could have scripts A and B both import template C, but then you're slowing things down even more, and no "HTTP2 push!" is not the answer.
The answer is that they're from ~2013: before React etc. taught us how to make SPAs. It was an early attempt, so it sucks. JS is best when it "paves the cowpath": taking something we could already do slowly in the browser and giving us a better native API for the same thing, like querySelector or IntersectionObserver, or how there was no built-in way to parse a query string before URLSearchParams. Anytime JS goes first, the API is crap, like the DOM Node APIs themselves.
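The URLSearchParams case is a good concrete example of a paved cowpath: the hand-rolled query string parsing everyone used to write versus the native API that standardized it (the manual version below is a deliberately simplified sketch):

```javascript
// The old cowpath: hand-rolled query string parsing (simplified; real
// versions also had to handle repeated keys, '+' as space, etc.)
function parseQuery(qs) {
  const params = {};
  for (const pair of qs.replace(/^\?/, "").split("&")) {
    if (!pair) continue;
    const [key, value = ""] = pair.split("=");
    params[decodeURIComponent(key)] = decodeURIComponent(value);
  }
  return params;
}

// The paved road: the same thing with the native API.
const params = new URLSearchParams("?page=2&q=htmx");
console.log(params.get("q"));                   // "htmx"
console.log(parseQuery("?page=2&q=htmx").page); // "2"
```

The native version came years after the pattern was ubiquitous, which is exactly the "JS goes second" dynamic described above.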
Yeah and an operating system has 10s of millions of lines of code. Does that mean everything else comes without a cost? Also point here is you got options for a much lighter framework, which you bluntly missed.
That's all on the backend so it doesn't really matter to me as long as no JS is sent on the frontend. For the benefits NextJS gives me, like TypeScript, I'm fine with that.
I have the full React framework and ecosystem if I so choose to use it. For example, I could use something like react-three-fiber to add 3D or Framer Motion to add animations yet fall back to not having those when JS is disabled. Sure you could do the same in Lit but those libraries aren't present or as robust as in React.
NextJS sends all JS to the frontend even if it's static and server rendered. It renders once on the server, sends a React+JS bundle to the client, and re-renders the whole page on the client (which is exactly the same). As far as I could tell, there's no mechanism for Next.JS to say "this isn't dynamic" and send only HTML content. If there's a setting I missed, feel free to let me know.
> It renders once on the server, sends a React+JS bundle to the client, and re-renders the whole page on the client (which is exactly the same)
The new paradigm in Next.js is to have "server components" by default, which only render static HTML on the server side; and opt-in to "client components", which render on both server and client.
> With Server Components, the initial page load is faster, and the client-side JavaScript bundle size is reduced. The base client-side runtime is cacheable and predictable in size, and does not increase as your application grows. Additional JavaScript is only added as client-side interactivity is used in your application through Client Components.
There are a bunch of server side apps that turn JSX into static HTML with no JS. It’s just a template language that compiles into JS but you can do anything with it you could do with other template languages.
Sort of unrelated but I find it ironic this website is written in Python. It's a single HTML page. Make it a static site and host it for free/low-cost on S3 or something.
Why do you need to host this website on a server, use Docker, etc?
My personal blog is a static site using Next.js, I pay $0 to host it on S3.
To me this is less about simplicity and more about being anti JS ecosystem, and being different just to be different.
Any legacy site probably has some kind of Javascript framework, jQuery, or something set up where adding another library on this list adds complexity. Any new site that requires a decent amount of interactivity would probably be better with a battle tested framework like React. I've tried many of the libraries listed here, have tried the view engine + Alpine approach, etc, and time and time again I find it's simpler from a development perspective to just use Next.js.
That is to say, for any hobby project, use whatever you want. Try new things. But for production apps, just use React.
>I find it ironic this website is written in Python. It's a single HTML page. Make it a static site and host it for free/low-cost on S3 or something. Why do you need to host this website on a server, use Docker, etc?
Originally it _was_ just HTML + CSS, but I wanted each library's repository metadata (latest version, last commit, etc) to be dynamically retrieved and doing that client-side was brittle and way too slow. So, I used it as an excuse to see how far I could push my own personal static-site framework (https://coltrane.readthedocs.io/en/latest/).
Yep. Just today I was working on a project that I started as a simple set of forms with HTML, CSS, and light vanilla JS + jQuery which over time has naturally accreted a bespoke event-handler framework (really not a bad one, all things considered), wishing I had just done it in React from day 1. Now we are battling complexity every day to support a highly interactive app and we're going to have to try convince the client to give us some time to refactor for some breathing room.
Did you ever run into performance problems with Mithril? I like how simple it is to use, but the idea of running/diffing the entire component tree on every user interaction kinda scares me.
I've been using Mithril since 2017 or so. The answer is: no. To give you a production example, Mithril is used in the video game Guild Wars 2 to render the marketplace in-game and the lead web engineer reported that it was performant enough for their use-case [1]. (I've played Guild Wars 2 and never noticed any issues with it, so good enough for me).
In most cases, your bottleneck won't be Mithril (or React for that matter), but instead what expensive computations you're doing in your components. While React has React.memo, Mithril has the `onbeforeupdate` hook [2] you can use to memoize components if you need it.
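The shallow-compare idea behind that kind of memoization guard can be sketched framework-free (this is not Mithril's actual implementation, just the concept an `onbeforeupdate` hook typically encodes):

```javascript
// Shallow attribute comparison: the usual guard logic for "skip the
// re-render if nothing this component depends on has changed".
function shallowEqual(prev, next) {
  const prevKeys = Object.keys(prev);
  const nextKeys = Object.keys(next);
  if (prevKeys.length !== nextKeys.length) return false;
  return prevKeys.every((key) => prev[key] === next[key]);
}

// In Mithril, returning false from onbeforeupdate skips diffing that
// subtree, e.g.:
//   onbeforeupdate: (vnode, old) => !shallowEqual(old.attrs, vnode.attrs)
```

As with React.memo, the shallow comparison only pays off when attrs are stable references; passing a fresh object or closure on every render defeats it.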
Not requiring NPM was one of my original requirements for anything on this list. All of these libraries should be available from HTML directly -- let me know or make a PR if that isn't the case.
That's because the vast majority of the JS ecosystem installs libraries via npm.
We can't really put all of the different ways to use Lit on the front page, but we document how to use Lit from a CDN right on the getting started page: https://lit.dev/docs/getting-started/#use-bundles
No it doesn't, for some of these at least you can render the page server side and the reactive components become reactive a bit later on, isn't that how next would work as well? It's a blend of server side and front end interactivity right?
It's becoming increasingly more dangerous to use anything in the GPL family for anything but the narrowest of use cases. It's to the point that, even now, if it's GPL I refuse to incorporate it, even in personal projects. And I'm not against open source... Everything I write is MIT.
The GPL carves out exceptions for the use of system libraries.
But, as for the "dangerous" part:
* I work in an industry in which software patents are required to survive. Personally, I hate software patents, but it is a reality until the law changes. As such, the GPL invalidates patents, making anything GPL completely off limits. That means that I would not have a job, and then there would be fewer competitors in the marketplace, which would make prices rise considerably. Too bad, too, because we would donate back improvements that are general enough to benefit others. Rather, now we just use proprietary solutions wholly because GPL is a minefield for us.
* I have personal projects. I write code that I want to use myself as well as allow others to use. I use MIT because GPL discourages that somewhat (the point above). I may want to use my personal projects commercially some day. MIT allows that; GPL makes it impossible. In other words, if I choose GPL today, I have poisoned that code from myself for the rest of my life.
In short GPL is not "free" as in freedom, because it comes with strings attached. It's communist in philosophy, but I hesitate to say that because then everybody jumps to some *a priori* conclusion of what they are now convinced that I'm saying, without listening to what I'm actually saying.
GPL says, in short, that the software is freely available for public use, but not for private use. (Yes, there are nuances to that, so much so (and so confusing) that most people either just ignore it, or say "IANAL...") It demands that 100% of the work of anyone in the future must be donated back to the collective for free. That is not freedom.
Create as much GPL code as you like. I create MIT. You can use mine (and I'm happy for you to!), but I can't use yours because of the restrictions you put on it. We can let the future decide which is more "free".
> if I choose GPL today, I have poisoned that code from myself for the rest of my life
If you're the copyright holder, the terms of the GPL don't apply to you. It's a license that you give to other people. You can't revoke the license, of course. But as long as you haven't accepted contributions from other people, you can "fork" your own project and take it closed-source.
(I agree with your analysis in general. Most people don't understand the restrictions placed by the GPL.)
IIRC, it's not about private use but about distribution. We publish libraries under MIT, since other corporate users would very likely need to distribute those.
But the higher-level Apps we publish under GPL, so that downstream is obligated to keep it open-source (but there is no obligation to submit a PR upstream).
And there are more than a few companies that use our GPL stuff internally and don't redistribute it, and therefore don't have to make their internal modifications available under the GPL, because there is no distribution happening.
I'm not a lawyer but our decision was informed by one who has prior experience in IP, licensing and specifically FOSS-style licenses.
Suppose that my company has powerful video editing software that we sell (which is distribution). Consider that it has unique functionality and has taken a decade to develop by a team of developers, all of whom have salaries, insurance, retirement, etc. that need to be paid, otherwise the software would not exist. Proprietary code and profit are more than appropriate in this situation, as I believe that workers should reap the reward of their labor and investment.
Now, suppose that a new feature is wanted. There is a project that provides that functionality, but it is licensed GPL. Can I use it?
Absolutely not!
Because, if I do, then I am obligated to release all of my source code in addition, because it integrates with the GPL code. It is financial suicide for my business to do so (which, btw, is a political preference for many of the GPL proponents). What will I do instead? I will probably just have my developer write our own version, adding in the extra features that we need.
Contrast that with the MIT-licensed code. We can use it without fear, and we will probably even submit enhancements back to the project, simply because it makes our lives easier in the future for maintenance.
GPL poisons downstream, simple as that.
You are correct that there is no obligation to submit a PR upstream, but there is a requirement for my source code to be made available under the same GPL license. GPL is "infectious" (or "viral", take your pick of words).
The funny thing is, I believe in freedom with software, but my interpretation of "free" is vastly different than the GPL interpretation of "free". And, as I said, I put my money where my mouth is... almost everything I write (except for my job) is MIT licensed code.
It's really weird that you describe requiring reciprocation as poisoning downstream. Proprietary software also doesn't allow you to just incorporate its code into your proprietary software and distribute it. Nobody describes plain-Jane closed-source software as "infectious", even though by any reasonable definition it is vastly more so. Open source software will let you remediate the situation by removing it or open-sourcing yours without penalty, while closed-source software, if incorporated without following the license, will see the owner's lawyers crunching your bones to suck out the marrow. If the GPL is a cold, then proprietary software is Ebola.
In the overwhelming number of cases the party is selling something other than their software and you are equally free to negotiate a different license with the developer if you have different needs.
I'm not aware of any closed-source software that supplies you with source code. (Although, in contract negotiations, it is common for source code to be held in escrow for large $$$ contracts, but that is for a different purpose entirely.)
Also, suppose that you were to write something that interacted with the binaries of a closed-source system. That would not automatically remove the proprietary nature of your own software, so it is not infectious at all. You may not be able to distribute the proprietary software, but you could sell it as an add-on product without surrendering your IP. That's not ebola. That's a full quarantine and separation that respects the IP of individual owners.
GPL is particularly troublesome when it is a dependency of a dependency of a dependency of a dependency. You may not even know that you are using GPL code because the dependency that you are intending to use is not GPL. Again, the words "viral" and "infectious" are used because of this behavior... And this behavior was purposefully designed into the license as part of the political beliefs of the founders.
I'm not saying that GPL is useless, but rather that it is significant and potent enough that it is a single factor that will eliminate a project from consideration. Period. No if's, and's, or but's. So much so that I personally avoid GPL as much as possible in my own projects, and I distribute my projects as MIT.
> Also, suppose that you were to write something that interacted with the binaries of a closed-source system. That would not automatically remove the proprietary nature of your own software, so it is not infectious at all. You may not be able to distribute the proprietary software, but you could sell it as an add-on product without surrendering your IP. That's not ebola. That's a full quarantine and separation that respects the IP of individual owners.
Unlawfully including GPL code in your proprietary product doesn't magically make your code GPL. It can't. In fact, it has the EXACT SAME EFFECT as including proprietary code: it means you are committing copyright infringement by distributing the combined work. The ability to get out of jail free by complying with the GPL and releasing the combined work under the GPL, or by simply replacing the code you never had the right to distribute, without further drama or penalty, is an extra privilege that you will never receive if you infringe on distributed proprietary software. You are confusing additional benefits for penalties and pitfalls.
Once again you don't have to release your code as GPL you can simply rewrite the portion that you didn't have a right to distribute.
> GPL is particularly troublesome when it is a dependency of a dependency of a dependency of a dependency. You may not even know that you are using GPL code because the dependency that you are intending to use is not GPL. Again, the words "viral" and "infectious" are used because of this behavior...
If you are responsible for a product, knowing where the code comes from falls under the basic responsibilities of the job. Declaring in public that you own someone else's property because you couldn't be bothered to look is like catching a public indecency charge because you couldn't remember to wear pants that day. You put on pants and the problem goes away.
If you have followed any of the cases in which companies misused GPL code, it looks metaphorically like someone following you around, politely and repeatedly reminding you to cover your nudity, whereas infringing on proprietary code looks more like a SWAT team showing up and flinging flash bangs. You are lucky if you don't end up getting shot.
Again, it's quite difficult to get access to proprietary source code, so that is a false equivalence. I'm not confusing the benefits for penalties... I'm observing the emergent behavior that the GPL license produces by its viral and infectious rules. The strength of the legal threat has not been tested in court, to my knowledge, but why risk a lawsuit? And there is still the patent issue.
In terms of marketing, GPL sets itself up to be the paragon of the open source spirit and the model that all open source licenses should follow, but it has a HUGE gotcha based on its political persuasion. It proclaims freedom, but it is most decidedly not completely free, and requires parsing of sophisticated legalese to understand the nuances. Good luck if you are a beginner, non-native speaker, or have otherwise spent substantial brainpower trying to understand the do's and don'ts. It will bite you.
You may absolutely love GPL and everything that it stands for. That's fine with me. But I don't, and I advise everyone that I know of the dangers and repercussions of using GPL dependencies. You can say that those are features and virtues, and I can say that they are dangers and liabilities, but the bare facts are the same.
But, getting back to the reason for this post, when I want to compare software packages, the license is the first thing I look at, and if it's GPL, I bail on it immediately.
The GPL isn't viral; it's a simple bargain you can take or leave without agreeing or disagreeing with anyone's politics. There is no "danger" of any kind; whether you can or cannot use GPL software is a trivial matter. If you are distributing proprietary software, then you cannot. Forgoing software as a user because it is licensed under the GPL makes even less sense; the GPL simply doesn't restrict use in any fashion. It's like saying, "If I find out a brand's clothes were sewn by people who drink milk, I bail. Man, I'm lactose intolerant!" It's a basic misunderstanding of how things work, or a weird hang-up.
> I work in an industry in which software patents are required to survive
Thanks for the explanation! I wonder what the industry is (maybe finance or law or ??) It seems that in most of the tech industry software patents are primarily used as a war chest for large companies or for non-practicing entities focused on litigation. Usually execution is more important.
I work in a surprisingly innovative industry... tolling!
Seriously, there's some surprising work being done in the field, and software patents play a big part for companies that are very active in the space.
It's like the stock market. A lot of millionaires have money in stocks, but so, too, does the average Wal-Mart employee (at least it was an option when I was a cashier 20 years ago!). In other words, even small companies can gain a competitive advantage with just a patent or two.
I was praising HTMX the other day when someone pointed me to Unpoly. Indeed, after I converted one of my apps to it, Unpoly is really a better version of HTMX and I think it deserves more exposure.
I haven't studied the guts of it, but I've been impressed recently with Github. It feels like a normal, page-by-page application, with human-readable URLs, and none of the usual SPA shenanigans as far as after-page loading, infinite-scroll nonsense, or broken back button behaviour... but then stuff happens like a page that's sitting open on a PR automatically pulls new comments as they are posted, and it looks identical to if that page was loaded afresh. Spiffy.
Same, it seems like the kind of thing that would be highly useful with light interactivity, but may not scale well. I also haven't really spent much time in it, so idk.
Looks great, love the list so thanks for putting it together.
In an ideal world your table would also have a "first commit" row, so I could better compare how well established each library is (GitHub stars mostly does this, but date of first commit would help give a sense of momentum too).
I didn't post the site to HN, but I did create it. Adding the first commit is an interesting idea. Or some way to measure longevity (i.e. how long has the library been worked on). I'll try to figure out a good way to do that or if you put up a PR I will take a look at it: https://github.com/adamghill/unsuckjs.com.
Nice idea! I was just surprised to see the table rows and columns opposite of what I’m used to seeing in lists where one dimension is more likely to grow much longer than the other.
i.e. I would have expected the libraries as rows, rather than columns -- more like how such lists are typically done on Wikipedia.
It definitely might make more sense that way! I was playing around with different ways to display the data and landed on this approach even though maybe it's a little clunky. I could flip it and see how it feels -- thanks for the suggestion!
You can do a lot with Django templates. Sprinkle in any of these and I find it unlikely you'll be reaching for React. Although if you have lots of React experience I am a believer in "use what you know". Tool familiarity > trimming dependencies
Django templates are nice, but they fall flat if you want to do anything remotely dynamic with your model. The fact that you need to define custom template tags for things as simple as getting an attribute out of a dict in your model is a real pain point.
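To make that pain point concrete: Django's template language has no built-in way to look up a dict entry by a variable key, so you end up writing a custom filter. A minimal sketch (the filter name `get_item` is made up, not part of Django):

```python
# Sketch of a custom template filter for variable dict lookup.
# In a real project this would live in yourapp/templatetags/dict_extras.py
# and be registered with Django's template library.

def get_item(dictionary, key):
    """Return dictionary[key], or None if the key is missing,
    mirroring the template engine's forgiving lookup semantics."""
    return dictionary.get(key)

# With Django installed, registration looks like:
#   from django import template
#   register = template.Library()
#   get_item = register.filter("get_item", get_item)
#
# Template usage:  {{ mymodel.settings|get_item:user_key }}
```

All that boilerplate just to do what `mymodel.settings[user_key]` would do in plain Python.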
I love that so many here share the sentiment. I never liked React. I started using Vue for almost all frontend needs, but even knowing what I do, it's way too much tooling just to get started.
Now I despise all the frameworks & libraries and go vanilla 9/10.
I get these for when your stack contains python/go/ruby and your page is SSR. But more recently I've just come to enjoy the full blown Javascript Framework. I use Svelte exclusively, and the bundle sizes aren't too large thanks to the compiler. I can write TS which is way more comfortable than JS or even Python or Go IMO.
I'm working with HTMX myself, and thinking about how the back-end framework changes to make the most of it.
For instance, I have a screen with a bunch of dropdown inputs that automatically post changes to the server when you flip them, and sometimes the options change as a result. The dropdown is defined in a macro that can be used in a template or sent directly to the server. HTMX can also update several things in one request: when I add a new RSS feed to YOShInOn, it has to (1) insert the item into the feed list, (2) tear down a modal dialog, and (3) update the count of feeds. You very much need a server-side framework that makes all of that routine.
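The multi-region update described above maps naturally onto htmx's out-of-band swaps: a single response can carry extra fragments marked with `hx-swap-oob`, each replacing the element with the matching `id`. A sketch (the ids and content are made up) of what the server might return after adding a feed:

```html
<!-- Primary fragment: swapped into the htmx target as usual -->
<li>My New Feed</li>

<!-- Out-of-band fragments: matched by id anywhere on the page -->
<span id="feed-count" hx-swap-oob="true">12 feeds</span>
<div id="add-feed-modal" hx-swap-oob="true"></div>
```

One request, three regions updated: the list gains a row, the counter refreshes, and the modal is emptied out.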
Well, I like the idea of a compiler that doesn't ship "the framework" as a bundle required to run my code, but instead compiles my code so it works on its own.
One suggestion: on non-ultrawide screens, once you scroll horizontally you no longer see the left labels. It would be cool to have them float so they stay visible.
I don't think Preact belongs in the "progressively enhance HTML" category.
Progressively enhancing HTML means that whatever backend you use spits out HTML, and that HTML renders without JavaScript. Preact fails this.
These libraries are meant to progressively enhance HTML by replacing or enhancing certain actions with a little JavaScript. For example, clicking an HTML link would usually load a complete page, but with a clever htmx attribute the click only loads and replaces what is needed.
Think of progressively enhanced HTML as HTML-first: the page works even if JavaScript is disabled.
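A minimal sketch of that link enhancement (the URL and id are made up): the `href` keeps the link working with JavaScript disabled, while the `hx-*` attributes let htmx fetch just the fragment and swap it into the target.

```html
<!-- Without JS: a normal full-page navigation to /page/2.     -->
<!-- With htmx loaded: fetches /page/2 and swaps only #results, -->
<!-- updating the address bar so the back button still works.   -->
<a href="/page/2"
   hx-get="/page/2"
   hx-target="#results"
   hx-swap="innerHTML"
   hx-push-url="true">Next page</a>
```

That's the whole trick: the same markup degrades gracefully to a plain link.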
It's good that I know what htmx is already, because that description is kinda bad it doesn't really describe what htmx actually does to progressively enhance html.
Surprised not to see Qwik. It's probably more of a framework than just an HTML enhancer, yet it fits very much into that philosophy of reducing JS to the bare minimum.
Alpine.js is pretty clean, but every time I've been tempted to use it, I'm reminded that we have to load jQuery because many pages depend heavily on datatables.net. And there is no way we are dropping datatables.net.
It sucks so much it's unreal. It forces OOP-style JS, which causes extreme amounts of boilerplate code and mutation bugs; making a simple change becomes a one-week task. I'm doing a complete rewrite of a legacy codebase built with TS, web components, and Lit into a modern framework, which is about the only sane thing to do with it.
In react, mutating a model is a bug. It's left up to the developer to remember not to do that. Outside of react, mutating a model seems like a natural fit for the way stuff actually works and it's not considered a bug.
Web components were invented to make components reusable independently of frameworks, which is nice, but frameworks kept improving, and now code is much more reusable between any modern framework (React/Svelte/Vue) than with web components. They are fully encapsulated and a pain to style, so if you don't need that encapsulation for some reason, you should not use them.
Lit is a framework for building web components and managing state and routing; it adds lots of boilerplate and uses OOP hierarchies to couple abstractions that could otherwise stay separate. Lit uses object properties to detect changes and re-renders the changed parts of the app.
This combo works out terribly: you're orchestrating encapsulated components within inheritance chains, and each part carries a lot of boilerplate and a lot of mutation. It separates code by technology instead of by concern, and enforces this really hard. The end result is code that's terrible to read and very hard to change, full of mutation bugs and broken flows that don't update properly.