I love the local-first idea, but I don't love web browsers. They're the platform everyone has to have, but shoehorning 40 years of UI development and products into them seems like a mistake.
I can see why local UI development fell out of favor, but it is still better (faster, established tooling, computers already have it so you don't have to download it). I can't help but feel like a lighter weight VM (versus electron) is what we actually want. Or at least what _I_ want, something like UXN but just a little more fully featured.
I'm writing some personal stuff for DOS now, with the idea that every platform has an established DOS emulator and the development environment is decent. Don't get me wrong, this is probably totally crazy and a dead end, but it's fun for now. Still, a universally available emulator of a bare-bones, simple machine could solve many computing problems at once, so the idea is tempting. Using DOS as that emulator is maybe crazy, but it's widely available and self-hosting. To make networked applications, though, advancing the state of emulators like DOSBox would be critical.
Actually, local first: maybe the way forward is to first take a step back. What were we trying to accomplish again? What are computers for?
> shoehorning 40 years of UI development and products into them seems like a mistake.
It wouldn't be such a mistake if it were done well. For all the bells and whistles web UIs have, they still lack a lot of what we had on the desktop 30 years ago: good keyboard shortcuts, tab support in forms, good input masks, and so on.
Local-first could be the file format that lets us use devices without cloud services. Back to file-first: vendors sell software apps, and you pay for the software once (or for a year of updates, or whatever).
I would still prefer to have my stuff synced between devices. Files are OK for data that doesn't change much (your music/photo/book library) - Syncthing works great there.
CRDTs allow for conflict-free edits where raw files start falling short (calendars, TODOs, etc). I'd love to see something like Syncthing for CRDTs, so that local-first can take the next logical step forward and go cloud-free.
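For anyone new to CRDTs, the simplest useful one is a last-write-wins register: each replica keeps a value plus a timestamp, and merging keeps the newest write, so replicas can exchange state in any order and still converge. A toy sketch (not any particular library's API):

    // Toy last-write-wins (LWW) register, the "hello world" of CRDTs.
    function lwwRegister(replicaId) {
      let state = { value: undefined, ts: 0, replica: replicaId };
      return {
        set(value) {
          state = { value, ts: Date.now(), replica: replicaId };
        },
        get() {
          return state.value;
        },
        snapshot() {
          return { ...state };
        },
        // Merge is commutative, associative and idempotent, so replicas can
        // sync in any order (or repeatedly) and still end up identical.
        merge(other) {
          if (other.ts > state.ts || (other.ts === state.ts && other.replica > state.replica)) {
            state = { ...other };
          }
        },
      };
    }

Calendars and TODO lists then become maps of such registers (plus tombstones for deletes), which is roughly what the off-the-shelf CRDT libraries give you.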
The other day I was riffing on an idea: what if browsers had a third storage area called `roamingStorage`? Keep the simple, stupid key/value interface of localStorage and sessionStorage, but allow it to roam between your devices (like classic Windows roaming AppData on a network/domain configured for it). It doesn't even "need" a full sync engine like CRDTs at the browser level; if it did something as simple and dumb as basic MVCC ("last write wins, but you can pull previous versions"), you could easily build CRDT library support on top of it.
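A rough sketch of the interface I have in mind; nothing here is a real browser API, and `getItemVersions` in particular is made up:

    // Hypothetical roamingStorage: localStorage semantics, but synced across
    // the user's devices with last-write-wins plus retrievable history.
    roamingStorage.setItem("todo:42", JSON.stringify({ title: "Buy milk", done: false }));

    const latest = roamingStorage.getItem("todo:42"); // newest write wins

    // When last-write-wins isn't good enough, a CRDT library layered on top
    // could reconcile using the retained versions instead:
    const versions = roamingStorage.getItemVersions("todo:42"); // [{ value, timestamp, deviceId }, ...]
    const newest = versions.reduce((a, b) => (a.timestamp > b.timestamp ? a : b));
    console.log(newest.value);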
The hardest trick to that would be securing it, in particular how you define an application boundary so that the same application has the same roamingStorage but bad actor applications can't spoof your app and exfiltrate data from it. My riffing hasn't found an easy/simple/dumb solution for that (if you want offline apps you maybe can't just rely on website URL as localStorage mostly does today, and that's maybe before you get into confusion about multiple users in the same browser instance using the app), but I assume it's a solvable problem if there was interest in it at the browser level.
Look up CloudKit[1], many of these questions have been answered for Apple-native apps, but perhaps it's not obvious how to translate that to the web-world, or how to keep the object storage decentralised (but self-hosted shouldn't be a problem).
I'm also firmly in the native app camp. And again, Apple did this right. The web interface to iCloud works great from both Firefox and Chromium, even on OpenBSD, even with E2EE enabled (you have to authorise the session from an Apple device you own, but that's actually a great way to protect it and I don't mind the extra step).
It's probably harder to answer those questions if you can't build the solution around a device with a secure element. But there's a lot of food for thought here.
Then you are answering the wrong question. I want a "web native" answer and proposed a simple modification of existing Web APIs. As a mixed iOS/Windows/Linux user, I have selfish reasons to want a cross-device solution that works at the Firefox standardized level. Even outside of the selfish reason, the kinds of "apps" I've been building that could use simple device-to-device sync have just as many or sometimes more Android users than Apple device users. I've also seen some interesting mixes there too among my users (Android phone, iPadOS device, Windows device; all Chrome browser ecosystem though).
> It's probably harder to answer those questions if you can't build the solution around a device with a secure element.
Raw passkey support rates are really high. Even devices stuck on Windows 10 because they lack TPM 2.0 still often have reasonably secure TPM 1.2 hardware.
Piggybacking on Passkey roaming standards may be a possibility here, though mixed ecosystem users will need ways to "merge" Passkey-based roaming for the same reasons they need ways to register multiple Passkeys to an app. (I need at least two keys, sometimes three, for my collection of devices today/cross-ecosystem needs, again selfishly at least.)
> Then you are answering the wrong question. I want a "web native" answer and proposed a simple modification of existing Web APIs.
I don't see why this mechanism shouldn't be available both on the web and in native apps. The libraries would just implement the same protocol spec and use equivalent APIs, just like with WebRTC, RSS, iCal, etc. And again, ideally with P2P capability.
> [...] that works at the Firefox standardized level.
What about a W3C standard? Chrome hijacked the process by implementing whatever-the-hell they like and forcing it upon Firefox & Safari through sheer market share. It would be good to reinforce the idea that vendor-specific "standards" are a no-no.
It also just doesn't work the other way: Firefox tried the same thing with DNT, nobody respected it.
> Piggybacking on Passkey roaming standards may be a possibility here [...]
WebAuthn sounds good, that kinda covers the TPM/SEP requirement. Native apps already normalised using webviews for auth. I wonder if there's a reasonable way to cover headless devices as well, but self-hosted/P2P apps like Syncthing also usually have a web UI.
> [...] again selfishly at least.
No problem with being "selfish". Every solution should start with answering a need.
> I can't help but feel like a lighter weight VM (versus electron) is what we actually want. Or at least what _I_ want, something like UXN but just a little more fully featured.
That's basically the JVM, isn't it?
It's interesting to think how some of the reasons it sucked for desktop apps (performance, native UI) are also true of Electron. Maybe our expectations are just lower now.
I got super interested in Clojure and the Java UI frameworks a year ago, but the languages on top of languages scared me. I wanted something simpler, so now I'm writing x86-16 assembly and making bitmapped fonts and things. Probably not a good idea for anyone with a timeline, but it's been educational.
From something I wrote in 2021 [0] and based on my experience working on a "browser-based OS" in 2013:
What we need is a device-transparent way to see our *data*. We got so used to the idea that web applications let us work from "dumb terminals" that we failed to realize that *there is no such thing as a dumb terminal anymore*. With multi-core smartphones, many of them with 4, 8, 12, or 16 GB of RAM, it's not hard to notice that the actual bottlenecks in mobile devices are battery life and (intermittent, still relatively expensive) network connectivity. These are problems that can be solved by appropriate data synchronization, not by removing the data from the edge.
One of the early jokes about Web 2.0 was that to have a successful company you should take a Unix utility and turn it into a web app. This generation of open source developers is reacting to this by looking at successful companies and building "self-hosted" versions of these web apps. What they don't seem to realize is that **we don't need them**. The utilities and the applications still work just fine; we just need to manage the data and how it syncs between our mobile/edge devices and our main data storage.
If you are an open source developer and you are thinking of creating a web app, do us all a favor and ask yourself first: do I need to create yet-another silo or can I solve this with Syncthing?
This ties in nicely with the bookmarking discussion about Pinboard. I particularly like the following quote from this article:
> Now, of course, there are many advantages to this shift: collaboration, backups, multi-device access, and so on. But it’s a trade! In exchange, we’ve lost the ability to work offline, user experience and performance, and indeed true dominion over our own data.
I’ve decided that the advantages of storing my bookmarks locally far outweigh the chance that I'll want to access them from a different device or collaborate with someone else on them. Yes, it means I've created something of a 'silo', but I'm starting to think that's not a bad thing.
> I’ve decided that the advantages of storing my bookmarks locally far outweigh the chance that I'll want to access them from a different device or collaborate with someone else on them. Yes, it means I've created something of a 'silo', but I'm starting to think that's not a bad thing.
Why don't you store those bookmarks as markdown files and then upload them to a private repo you can read on other devices and even your phone through the GitHub mobile app? If they're bookmarks, they'll work perfectly as links in markdown which you can click in the GitHub app.
Note: I pay $4/mo for GitHub private repos and I absolutely defy anyone to show me a better deal in any SaaS company. I open the GitHub mobile app at least 10 times a day. This is the only subscription service that is inarguably worth it as far as I'm concerned.
Another aspect of local-first I'm exploring is trying to combine it with the ability to make the backend sync server available for local self-hosting as well.
In our case we're building a local-first multiplayer "IDE for tasks and notes" [1] where the syncing or "cloud" component adds features like real-time collaboration, permission controls and so on.
Local-first ensures the principles mentioned in the article, like guaranteed access to your data and no spinners. But that's just the _data_ part. To really add longevity to software, I think it would be cool if it were also possible to guarantee that the _service_ part remains available. In our setup we'll allow users to "eject" at any time by saving a .zip of all their data and simply downloading a single executable (like "server.exe" or "server.bin"). The idea is that you can then easily switch to the self-hosted backend running on your computer or a server if you want (or reverse the process and go back to the cloud version).
This looks like a great project and something that could be adapted into what I've been fruitlessly looking for (an OSS/self-hosted, cross-platform version of NotePlan 3 for family use). Not expecting too much movement on your part into the crowded task management space, but the screenshots and examples gave me the same feeling.
Signed up for early access and looking forward to it!
> To really add longevity to software, I think it would be cool if it were also possible to guarantee that the _service_ part remains available. In our setup we'll allow users to "eject" at any time by saving a .zip of all their data and simply downloading a single executable (like "server.exe" or "server.bin"). The idea is that you can then easily switch to the self-hosted backend running on your computer or a server if you want
Too few people are taking advantage of Redbean <https://redbean.dev/> and what it can do. Look into it.
I created https://sql-workbench.com a while ago, mainly to let people analyze data that's available via http sources, or on their local machines, w/o having to install anything.
A recent project is https://shrink.video, which is using the WASM version of ffmpeg to shrink or convert video in the user's browser itself, for privacy and similar reasons mentioned before.
I’m working on a kanji learning app (shodoku.app) which some might say fulfills this ‘local first’ philosophy. Currently it is hosted on GitHub pages and relies on static assets (such as dictionary files, stroke order SVGs, etc.) which requires a web connection to fetch. However when I make this a PWA (which I’ll do very soon) these will all be stored in the browser cache, effectively making it work offline.
I store the user data (progress, etc.) in an indexedDB in the user’s browser and I have to say:
> No spinners: your work at your fingertips
is not true at all. IndexedDB can be frustratingly slow on some devices. It also has a frustratingly bad DX (even if you use a wrapper library like idb or Dexie) due to limitations of the database design, which force you into bad schema designs that further slow things down for the user (and increase the total storage consumption on the user's device).
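To give a flavour of the DX, even a single read with the raw API means juggling request objects and event handlers (store and key names here are just illustrative):

    // Reading one record with raw IndexedDB. Everything is an event-driven
    // request object, which is exactly why wrappers like idb and Dexie exist.
    const openReq = indexedDB.open("kanji-progress", 1);

    openReq.onupgradeneeded = () => {
      openReq.result.createObjectStore("progress", { keyPath: "kanji" });
    };

    openReq.onsuccess = () => {
      const db = openReq.result;
      const tx = db.transaction("progress", "readonly");
      const getReq = tx.objectStore("progress").get("水");

      getReq.onsuccess = () => console.log(getReq.result); // undefined if missing
      getReq.onerror = () => console.error(getReq.error);
    };

    openReq.onerror = () => console.error(openReq.error);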
I also wish browsers offered a standard way to sync data between each other. Even though you can share your Firefox tabs between your phone and computer, you can't get the IndexedDB data for the same site between your computer and phone. Instead you have to either use a server or download the data and move it between the two clients yourself.
IndexedDB feels like a broken promise of a local-first user experience, one that browser vendors (and web standards committees) have given up on.
When I was at Apple (2010-2015), every app was local-first. In fact, they legally had to be in order to be sold in Germany, where iCloud cannot be mandatory (the country has a history of user data being abused).
You'll notice that when the network goes down, all your calendars, email, contacts, and photos are still there. The source of truth is distributed.
Client-side apps writing to a local DB with background sync (application-specific protocols) work excellently. You just don't write your UI as a web page.
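In browser terms the same pattern looks roughly like this; a sketch only, with a made-up endpoint and a toy localStorage queue where a real app would use IndexedDB and its own protocol:

    // Write locally first, queue the change, sync whenever connectivity allows.
    const QUEUE_KEY = "pendingChanges";

    function saveLocally(change) {
      const queue = JSON.parse(localStorage.getItem(QUEUE_KEY) || "[]");
      queue.push({ ...change, queuedAt: Date.now() });
      localStorage.setItem(QUEUE_KEY, JSON.stringify(queue));
    }

    async function syncInBackground() {
      const queue = JSON.parse(localStorage.getItem(QUEUE_KEY) || "[]");
      if (queue.length === 0) return;
      try {
        // Hypothetical endpoint; substitute your application-specific protocol.
        await fetch("/api/sync", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify(queue),
        });
        localStorage.setItem(QUEUE_KEY, "[]"); // clear only after the server accepts
      } catch {
        // Offline or server unreachable: keep the queue and retry later.
      }
    }

    setInterval(syncInBackground, 30_000);
    window.addEventListener("online", syncInBackground);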
> But I was equally surprised by how little this was being discussed, or (as far as I could tell) practiced in the real world. While there seemed to be endless threads on Twitter about server-side React (to get the UI generation closer to the data), no-one was talking about the opposite: moving the data to be closer to the UI, and onto the client!
I've wondered about this for a while. There is plenty of talk about server-side rendering, which I don't think is useful for many apps out there. SSR is quite wasteful, leaving client-side resources that could be put to use sitting idle. And I've seen many apps being developed with "use client" littered all over, which makes you wonder why you even want SSR in your app.
Wasn't the reason for SSR to have more control over security and to offload work from the client to the server? Let's not forget that the majority of the world's population is using slow-ass tech. We can't simply put huge workloads on the client.
I think the main driver for the modern version of SSR was SEO. The workload part? The whole point of SPAs was to move compute, especially fancy UI compute, down to the client so the server only had to move data... so we're just completing the circle again.
It is exactly what I am talking about. Rendering the component on the server takes up resources, as opposed to just sending the data to the client and letting the client render it.
In Nextjs, "use client" is used to force the rendering to take place in the client, because many components cannot me rendered in the server. For example maps. In this case, it's unnecessary to use an SSR framework.
You should know that the website localfirstweb.dev shows up as blacklisted in Avast and AVG antivirus. They don't like a referenced site called strut.io, claiming it is a card stealer.
Very cool. Coincidentally I just made a basic calculator with stored variables (https://calc.li/) with this philosophy in mind, though I didn't know there was a bigger movement around the idea! Mostly, I didn't want to bother with a backend or even cookies, so I just store everything in localStorage (which is criminally under-used IMHO).
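For anyone curious, persisting something like stored variables really is only a few lines (the key name is just an example):

    // Persist calculator variables across reloads with plain localStorage.
    const KEY = "calc:variables";

    function loadVariables() {
      return JSON.parse(localStorage.getItem(KEY) || "{}");
    }

    function saveVariable(name, value) {
      const vars = loadVariables();
      vars[name] = value;
      localStorage.setItem(KEY, JSON.stringify(vars));
    }

    saveVariable("tax", 0.21);
    console.log(loadVariables().tax); // 0.21, even after a reload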
The issue with local-first web dev, in my experience, is twofold:
1. It's super hard. The problem of syncing data is very hard to solve and there is little support from modern tooling, mostly because how you should handle it differs vastly from app to app.
2. Very few people in the West, or people who pay for software, are offline for long stretches. There are simply few to no requests for making apps work offline. I would argue the opposite: it's probably mostly bad business, because you will get stuck on technical details due to #1 being so hard to solve. Even people who say they will use the app offline are never offline, in my experience (from working on a real app that worked offline).
I work on an app that has clear offline benefits, although pretty much no one will use it offline in practice, and where I live people have 5G access nowadays even in places that used to be offline, like trains and tunnels. Even so, I plan to add offline support, but only after I have success with the app.
Hey browser makers, please allow file:// URLs to actually be able to load other files in the same directory without throwing a CORS error. You can't even run a JS file from the same directory! That's what's really killing "local first".
If you load a file using the "file" protocol, it shouldn't be able to load other resources (e.g. JS from the internet) over non-file protocols (http, https, etc.) from non-null origins, because that would violate cross-origin resource sharing (CORS).
I too like local HTML and would love to see it restored without requiring a local server. If I spin up a local Webrick to serve anything from my file system, I’m no better off (except maybe that it’s scoped to a particular directory and children).
People expect to be able to download an HTML file and for it still to be able to load an image from an external URL. If it can do that, it can exfiltrate data (e.g. JS can put data loaded from a local file into the src of an image on an external server).
When I save an HTML page, the browser downloads other resources the page wants. This is how it should work for The Average User. Saving just the HTML should be the exception for more technical users.
Or just separate what is a "document" and what is an "application" already, and don't mix them up. I wouldn't even mind if there were a separate "docjs" (JS code that can only see the document and act on it) and an "appjs" that can do all of the wild JS stuff our great browser vendors come up with. That way, in various cases, you could turn off the potentially harmful appjs while keeping the docjs for validating forms, changing layout, implementing TinyMCE, etc.
IMO the problem is rooted in co-mingling documents and applications on a web page / in an HTML file. Let the user save documents as .html: then they should not be able to do any harm; it's a digital sheet of paper! And web applications in, say, .hta: then the user should not expect any more isolation than for a downloaded .exe or .sh file, and the client program should treat it with due care when downloading, e.g. by putting it in a separate subfolder, setting an SELinux context, etc.
Luckily, starting a local HTTP server for development has been more or less a one-liner for a long time in most OSes (maybe even Windows might ship with Python nowadays?):
python -m http.server 8080
Or, if you're stuck with Python 2:
python -m SimpleHTTPServer 8080
At least this doesn't introduce the security issues you'd see with any file:// resource being able to load other file:// resources
> Docker routes container traffic in the nat table, which means that packets are diverted before it reaches the INPUT and OUTPUT chains that ufw uses. Packets are routed before the firewall rules can be applied, effectively ignoring your firewall configuration.
So docker is "effectively" ignoring your firewall in the case of ufw. I don't see how it can be considered to not ignoring your firewall when it ignores the rules you've setup.
I dunno. If I use UFW on Ubuntu, I use it as a firewall, and applications that ignore my firewall rules are, as far as I'm concerned, ignoring my firewall. That holds regardless of whether the fine print says it's really just NAT rules, so technically it's only bypassing one particular firewall (or something not technically called a firewall). It still ignores the firewall you've set up.
To be frank, it feels like the kind of technical nitpick I'd read from a Docker Inc employee trying to defend ignoring the user's firewall.
The end result is that you set up rules in UFW, and Docker ignores them.
Yeah, that's a good callout, thanks. Not sure why they took the less secure approach for something that is intended as a development environment, but I feel like that about lots of decisions in Python, so maybe it's not that weird in the grand scheme of things.
> (maybe even Windows might ship with Python nowadays?)
Windows doesn't ship with Python, but it does have a silly thing where, if you type `python` in a command shell of your choice, it will try to auto-install Python. (When it works it is kind of magic; when it doesn't, it's a pain to debug the PATH problem.)
It's not exactly a one-liner, but in an off-the-shelf Windows install it is a short script to create a simple static file web server inside PowerShell.
Also there are npm and deno packages/CLIs that are easy to install/run for one-liners if you want to install something more "web native" than Python. `npx http-server` is an easy one-liner with any recent Node install.
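And if you'd rather not install anything at all, a dependency-free static server in Node is also short. A sketch that skips most MIME types and edge cases:

    // serve.js, run with: node serve.js
    const http = require("http");
    const fs = require("fs");
    const path = require("path");

    const root = process.cwd();
    const types = { ".html": "text/html", ".js": "text/javascript", ".css": "text/css", ".json": "application/json" };

    http.createServer((req, res) => {
      const urlPath = decodeURIComponent(req.url.split("?")[0]);
      let target = path.normalize(path.join(root, urlPath));
      if (!target.startsWith(root)) { res.writeHead(403); res.end("Forbidden"); return; } // no path traversal
      if (urlPath.endsWith("/")) target = path.join(target, "index.html");

      fs.readFile(target, (err, data) => {
        if (err) { res.writeHead(404); res.end("Not found"); return; }
        res.writeHead(200, { "Content-Type": types[path.extname(target)] || "application/octet-stream" });
        res.end(data);
      });
    }).listen(8080, "127.0.0.1", () => console.log("Serving", root, "at http://127.0.0.1:8080"));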
I have made a notebook environment inside a single HTML file. You have to map dependencies in an importmap to blob URLs, and it works. I have a single-file website online, and you can click download to play with it locally.
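The core trick looks roughly like this; a minimal sketch, not my actual file, and the import map has to be registered before any module loads:

    <!-- a classic script builds the import map before any module script runs -->
    <script>
      // A "dependency" embedded as a string inside the single HTML file.
      const libSource = `export const greet = (name) => "Hello, " + name;`;
      const blobUrl = URL.createObjectURL(new Blob([libSource], { type: "text/javascript" }));

      // Map a bare specifier to the blob URL.
      const map = document.createElement("script");
      map.type = "importmap";
      map.textContent = JSON.stringify({ imports: { "my-lib": blobUrl } });
      document.head.appendChild(map);
    </script>

    <script type="module">
      import { greet } from "my-lib"; // resolved via the import map to the blob URL
      console.log(greet("local-first"));
    </script>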
Can this be a self-contained alarm and journaling app for cellphones?
Because I can't think of a way to do that without serviceworkers. I mean a way that doesn't involve the end user reading a paragraph of instructions, based on their "OS".
Anyhow, sorry, I just can't tell what it does from the confines of that demo and a cellphone browser...
No, I think you need service workers for notifications. You could do journalling, but by default it emits a file when saving. IndexedDB works, so maybe that's a possibility.
You can run non-module scripts, yes, though not modules.
Though it's easy enough to run a local Caddy server. It was easier in the past: in earlier Caddy versions you could just place the caddy binary in PATH and launch `caddy` from any directory to serve a local server; now it requires more arguments or a config file.
That, and bring back the custom "new tab" URL, and a way to securely host local servers without root, but also without just praying that another app doesn't port-squat me.
Localhost is an entire /8, so you have about 16 million possible addresses you can bind to if you want, all on whatever your preferred dev server port is.
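On Linux, for example, you can give each project its own loopback address and keep the same port for all of them (macOS needs the extra addresses aliased onto lo0 first). A quick Node sketch:

    // Each dev server binds its own 127.x.y.z address on the same port.
    const http = require("http");

    http.createServer((req, res) => res.end("project A")).listen(3000, "127.0.0.2");
    http.createServer((req, res) => res.end("project B")).listen(3000, "127.0.0.3");

    // curl http://127.0.0.2:3000  ->  project A
    // curl http://127.0.0.3:3000  ->  project B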
Maybe this is an opportunity. Is there already a "DNS for ports", whereby the hosts file is populated with application process identifiers mapped to localhost:port?
The hosts file (the hosts NSS database) does not speak DNS.
However, I guess this line of thought comes from web developers wanting a number of *.dev.myproject.net domain names to use in URLs, each handled by a different web listener process. Why not just run nginx as a reverse proxy for all of your *.dev.myproject.net domains? Update your port number ↔ domain name mapping in the nginx config; a reload is quite cheap.
In some ways it is an inversion of inetd. Instead of something that listens on a bunch of ports and launches the process corresponding to each port, the user has a service that listens on a random port and they wish to address it in a human-readable way.
As far as I know, the hosts file does not allow for a port number, so a solution would look like a proxy on a known port such as 443: the hosts file routes each domain name to localhost, and the proxy routes it to the service on whatever port. You'd also need to set up a local CA to sign a cert for each of the hosts-file domain names...
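Sketching just the proxy part (hostnames and ports are made up, and TLS plus the local CA are left out for brevity), host-based routing to local ports only takes a few lines:

    // Tiny host-based reverse proxy: map hosts-file names to local ports.
    const http = require("http");

    const routes = {
      "kanban.local": 3001,
      "finance.local": 3002,
    };

    http.createServer((req, res) => {
      const host = (req.headers.host || "").split(":")[0];
      const port = routes[host];
      if (!port) { res.writeHead(502); res.end("Unknown host"); return; }

      const upstream = http.request(
        { host: "127.0.0.1", port, path: req.url, method: req.method, headers: req.headers },
        (up) => { res.writeHead(up.statusCode, up.headers); up.pipe(res); }
      );
      upstream.on("error", () => { res.writeHead(502); res.end("Upstream down"); });
      req.pipe(upstream);
    }).listen(8080, "127.0.0.1");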
Isn't this "service discovery", which, at least, apple had 23 years ago "bonjour"? I'm not claiming that stuff like bonjour (I'm referring to the apparently infringing "rendezvous") and upnp did what we're discussing; but you could hack em to do it.
I generally use Fing in a pinch, or nmap, or ask the DHCP server for all the hostname <-> IP mappings and then run nmap -A -T5 (that's threat level five, the setting I use).
It's amazing to me how all of the fancy new ways eventually evolve into the thing they were trying not to be in the first place. We see this time and time again. I've been around long enough to have seen it in several situations, to the point that at the start of the trend someone looks like the old dog refusing to learn a new trick, and then eventually people realize they could have saved a lot of time/effort/money by taking the old dog's advice and experience.
I have said it before and I am unsure if I stole it: you can recreate nearly any "service" the web/internet has to offer today with only the RFCs prior to April, 1995, with the caveat that I'm technically fine with HTTP; I haven't given it much thought.
I think it was Logstash or Summit Patchy Project that let you use IRC as a sink for redirecting logs or making a copy. So I made a PCI-compliant logging service: an immutable ircd (read-only fs) that logged to itself via eggdrop locally. All the little cattle VMs would have their logs sourced and synced via Logstash to an IRC channel for the vertical the VM "belonged to". All the IRC stuff was on an append-only mount on the ircd server, so if audited, you'd just snapshot that mount and hand it over.
NOC was in the channels. I think some teams set up dashboard widgets that triggered on irc events.
All of that was a POC done purely to prove that it could be done using only stuff from pre-1996 RFCs. I'm sure the company I built it for went with some vendor product instead, and I don't blame them; I wouldn't have wanted to maintain that. Have you ever set up ircd? The volume of traffic just for the logs was already fun to deal with.
Pedants: I'm working from 15 year old memories and I changed a few details to protect the innocent. I can't remember the project name other than it was a play on "BI" for business intelligence.
I'm mostly resorting to self-hosting tools on my tiny server these days, but I would love it if I could run local web apps on a synced folder (a la Dropbox), and access them on my computer and mobile phone alike. With the CORS bypasses, it would be so convenient to have your own personal kanban or finance apps running and synced across multiple platforms, and remove the barrier of entry for many.
Why not use data: or blob: URIs instead? That ought to allow you to load resources from the "same" file, obviating the security concerns of loading external resources.
(Alternately you could bundle multiple files into a single ePub, though that requires a few adjustments compared to simple HTML. It's a widely supported format and allows for scripting.)
There really must be something less bloated for local static pages. Embedded SQLite and React just for a few nested table queries? Come on, JS has maps and hash tables. React for displaying generated content? My DOM inserter is half a page and loads instantly.
The "static page" is referring to https://localfirstweb.dev/. TinyBase, the tool with embedded SQLite and React support (among other things) is for reactive (non-static) web applications. Native JavaScript language feature like maps are not an adequate replacement for the functionality offered by TinyBase.
The react module for TinyBase is optional, and if you're just using their store module you only add 5.3kb gzipped to your final bundle, hence the name TinyBase.
I also don't think you understand the complexity of the features that TinyBase is offering. It's possible you don't personally need these features, but critiquing the software for not being totally minimalistic is a bit silly.
How do you, say, implement full-text search in a local-first manner? How about vector search? (I don't even know if that's a thing yet; it sounds possible these days.) Imagine saving a local copy of a docs site (a sizable set of pages) and having search and everything working perfectly.
Build the index as a static file and query it using js? This has been my approach so far — admittedly, this isn't with millions of pages, but I'll continue to use it for as long as it works. When hosted on the internet, the search is still blazingly fast, way better than most other sites I come across.
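Roughly, it can be as simple as this: build an inverted index at publish time, ship it as JSON next to the pages, and intersect posting lists in the browser. A toy sketch with no stemming or ranking (my real setup differs):

    // Build step (run once; write the result out as index.json).
    function buildIndex(pages) {            // pages: [{ url, text }]
      const index = {};                     // word -> [urls]
      for (const { url, text } of pages) {
        for (const word of new Set(text.toLowerCase().match(/[a-z0-9]+/g) || [])) {
          (index[word] ??= []).push(url);
        }
      }
      return index;
    }

    // Query step (runs in the browser against the fetched/cached JSON).
    function search(index, query) {
      const lists = query.toLowerCase().split(/\s+/).filter(Boolean)
        .map((word) => new Set(index[word] || []));
      if (lists.length === 0) return [];
      return [...lists[0]].filter((url) => lists.every((set) => set.has(url)));
    }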
For a very similar scenario I'm currently looking to use PGlite: https://pglite.dev/ which is a 3MB WASM build of Postgres which also includes pgvector.
I don't understand, offline-capable software is now being pitched as something profound and genius? There was software before the widespread Internet access.
I built an offline-friendly app. It was not offline-first, but it was my first PWA. The idea was that a pilot could have our web page open at high altitude, potentially with bad service, and still interact with everything; once back online, it would push any changes they made. I implemented it with vanilla JS.
The only annoying part was detecting when the browser was back online. I used some hacky solution, since there was no native API for it in any browser at the time, but it worked.
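For what it's worth, today you'd combine the `online` event with a heartbeat request, since `navigator.onLine` alone can report false positives (the pinged URL and the queue-flushing function are just placeholders):

    // "online" only means a network interface exists, so verify with a real request.
    async function isReallyOnline() {
      try {
        const res = await fetch("/ping", { method: "HEAD", cache: "no-store" });
        return res.ok;
      } catch {
        return false;
      }
    }

    window.addEventListener("online", async () => {
      if (await isReallyOnline()) {
        pushPendingChanges(); // placeholder: whatever your app queued while offline
      }
    });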
> You see, connectivity is generally good on a boat - wifi, cell coverage, and satellite options abound - so we survive. But when it isn’t good, it really isn’t good. And then suddenly, it dawns on you just how much of your life is beholden to the cloud. Your documents don’t load. Your photos don’t sync. Your messages don’t send. Without necessarily consciously realizing it, we have all moved most of our online existence to other people’s computers!
Welcome to the world a huge chunk of the population lives in. It drives me crazy how quick people were to jump onto cloud computing without ever asking the question "how well does it work when my Internet connection sucks?"
Well, it's always amusing how they discover that applications can run locally without depending on someone else's servers and maybe without pulling unverified code from 50 random repos.
Next they'll discover native applications! Innovation!
Maybe after that they'll even discover that you can give a shit about the user's battery/power consumption and ram consumption again!
The key in local-first is the _first_. The stated goal is to give people the benefits of local applications (no spinners, data outlives the app) with the benefits of cloud applications (low-friction collaboration).
This is unnecessarily combative. Local-first does not mean local-only, it means that the app should also work on the internet, meaning that you need to solve sync, which is why it took some time for the concept to gain traction. CRDTs for example are relatively new.
As satisfying as it is to point at local-first software and say they've forgotten history, it's important to remember that a lot of development happens where the friction is lowest.
The target platform for many local-first apps is the browser, because you don't have to mess with EXEs/DMGs/AppImages.
The goal is to ship, not to ship the most efficient application possible.
I never made anything fancy, but I never made anything for the web that I can't run locally. If I can't just sync the changed files to the web server and overwrite the DB (which for my use cases takes seconds), I'm not interested. If it needs to be in the domain root, I'm not interested. A bunch of files using relative paths and a config that checks for running on localhost and points to the local DB or the production one, respectively. That's it (okay plus the domain and a few other things so cookies etc. work, but describing it would take longer than making it), that's all I want out of the web kthxbai. I'm basically stuck in 2000 and I love it.
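For what it's worth, the "am I on localhost?" switch really can stay a couple of lines. A client-side JS flavour of the idea, with made-up names (the same trick works server-side against the request's host):

    // config.js: pick endpoints based on where the page is running.
    const isLocal = ["localhost", "127.0.0.1"].includes(location.hostname);

    const config = {
      apiBase: isLocal ? "http://localhost:8080/api" : "https://example.com/api",
      debug: isLocal,
    };

    export default config;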
Nah, I feel like this is the right approach. Apps got too complicated to maintain safely. Web development should be accessible and have good standards. It should have become easier to build webapps over the years, but it's becoming harder and harder.
I say it's easier today, if you keep it simple. Everything I did 20 years ago still works, or works better, or became simpler to do, or is now built into the browser or the language or what have you.
Like this little PHP script I had to spit out custom gradients before CSS had them, it's weird how fondly I remember that... it wasn't even special or complicated, but it was my grd.php and I used it everywhere :) And on some old pages I never got around to replacing it, I'm sure! Once it works, it just works.
Stay away from frameworks and pre-processors, but also study them (!) and make your own and you'll be fine (that is, you will have a lot less pointless churn). If in doubt, don't put sensitive info on it and don't allow visitors to. There is so much you can do where security really literally doesn't matter, because it's just something cool to make and show people.