Ideally the model would run locally in the browser, so the author isn't paying whatever they're paying. But the web standards for doing complicated stuff locally aren't there yet and probably never will be.
That's not a practical answer but it's my two cents.
I wish I could give him two cents without having to try. HTTP status 402 with micropayments or something needs to become a thing. The platforms do it (subs, tips, donations, rewards, etc.), so why can't the web?
Maybe, but WASM still has its limitations and pains. If you compile with Emscripten you're still using thousands of lines of generated JavaScript to glue the WASM and JavaScript together.
> ESA projects are a bit demo-like and limited in scope
I am kind of confused by that statement; what more would you expect from the Copernicus Programme? Isn't it a technical improvement over NASA's LANDSAT programme?
> 1) very stable due to rolling-release producing small changes
Having very frequent updates to bleeding edge software versions, often requiring manual intervention, is not "stable". An Arch upgrade may, without warning, replace your config files and update software to versions incompatible with the previous ones.
That's fine if you're continuously maintaining the system, maybe even fun. But it's not stable. Other distributions are perfectly capable of updating themselves without ever requiring human intervention.
> 2) the skill barrier to getting a full system is “basic literacy, to read the wiki”
As well as requiring you to be comfortable with the Linux command line and to have plenty of time. My mom has basic literacy; she can't install ArchLinux.
ArchLinux is great, but it's not a beginner-friendly operating system in the way that Fedora/Linux Mint/openSUSE/Pop!_OS/Ubuntu/elementary OS are.
> Having very frequent updates to bleeding edge software versions, often requiring manual intervention is not "stable". An arch upgrade may, without warning, replace your config files and update software to versions incompatible with the previous.
12 news posts requiring manual intervention in the last year, and only if you used all the affected software (I don’t think many people are running dovecot and zabbix), so probably more like 3 for most users: https://archlinux.org/
That’s not too dissimilar from what you’d get running stable releases of Ubuntu or Windows. And of course plenty of Windows software will auto-upgrade itself in potentially undesired ways; Windows users just don’t blame the OS for that.
I don't just mean the types of manual intervention mentioned in the news. ArchLinux ships bleeding edge software to users with very few downstream changes. ArchLinux also replaces config files when upgrading. This is inherently different behavior from stable release distributions like Ubuntu.
ArchLinux is not an operating system where you can do an unattended upgrade and forget about it. That's not "bad" or "good", that's just a design choice.
Arch replaces _unmodified_ config files when upgrading. It’s not uncommon behaviour for software to update untouched defaults to the new defaults.
If you have a modified config file, it puts the new default one in a .pacnew file for you to compare, which seems strictly better than just discarding the new default.
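As a sketch of what dealing with .pacnew files looks like in practice (file names here are just examples), `pacdiff` from the pacman-contrib package can find and merge them:

```shell
# List any leftover .pacnew/.pacsave files on the system
pacdiff --output

# Interactively merge each one with your preferred diff tool
DIFFPROG=vimdiff pacdiff

# Or compare a single file by hand
diff -u /etc/pacman.conf /etc/pacman.conf.pacnew
```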
Huh you're right, I must've confused myself by removing/installing instead of upgrading recently.
Anyway I think the discussion boils down to semantics. ArchLinux is not "unstable" in the sense that it is prone to breaking. But it also delivers none of the stability promises that stable release distros or rolling release distros with snapshotting and testing like OpenSUSE Tumbleweed deliver. To call ArchLinux stable would make every distribution stable, and the word would lose all meaning.
Most distributions promise that an upgrade always results in a working system, moving the manual maintenance to major release upgrades instead.
> without warning, replace your config files and update software to versions incompatible with the previous.
This is just nonsense, pacman doesn't do this. If you'd modified a config file, it will create a .pacnew version instead of replacing it. Otherwise you'll get the default config synced with the version of the software you've installed, which is desirable.
It's pretty rare to modify any config files outside of ~/.config these days anyway. What few modifications I have at the system level are for things like mkinitcpio, locale, etc and they never change.
> idk why Arch doesn't invest in whats standard in every other major distro
Simplicity, among other reasons. Installers force the user's hand and need maintenance. Having no installer but rather a detailed installation guide offers unlimited freedom to users. Installation isn't difficult either: you just pacstrap a root filesystem and configure the bootloader, mounts and locale.
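For a rough idea of what that looks like, here's a minimal sketch of the steps the installation guide walks through, assuming /dev/sda1 is an EFI partition and /dev/sda2 is the root (device names are examples):

```shell
# Format and mount the root and EFI partitions
mkfs.ext4 /dev/sda2
mount /dev/sda2 /mnt
mount --mkdir /dev/sda1 /mnt/boot

# Bootstrap a base system into /mnt (-K initializes the pacman keyring)
pacstrap -K /mnt base linux linux-firmware

# Generate mounts and chroot in to finish configuration
genfstab -U /mnt >> /mnt/etc/fstab
arch-chroot /mnt
# ...then set the locale, timezone, hostname and install a bootloader.
```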
ArchLinux does now have an installer called archinstall, but it's described more as a library than a tool. It allows you to automate the installation using profiles.
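From the live ISO that looks roughly like this (the config file name is just an example; archinstall can replay a saved JSON configuration):

```shell
# Guided TUI install
archinstall

# Or automate it from a previously saved configuration
archinstall --config user_configuration.json --silent
```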
Just to paint an example, if I am installing Arch I like to have:
* A user configured through systemd-homed with luks encryption
* The limine bootloader
* snapperd from OpenSUSE with pacman hooks
* systemd-networkd and systemd-resolved
* sway with my custom ruby based bar
* A root filesystem in a btrfs subvolume, often shared across multiple disks in raid0
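Two of the pieces above, sketched as commands (user name and device names are examples, not a complete setup):

```shell
# A LUKS-backed user managed by systemd-homed
homectl create alice --storage=luks

# A btrfs filesystem striped across two disks (raid0 for data),
# with the root in a subvolume
mkfs.btrfs -d raid0 /dev/sda2 /dev/sdb2
mount /dev/sda2 /mnt
btrfs subvolume create /mnt/@
```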
If you follow the installation guide, it will happily point you at all of these networking/bootloader/encryption options. But trying to create an installer that supports every bleeding edge combination like this is futile.
Also if you want 'Arch with sensible defaults' CachyOS is basically that, people think of it as a 'gaming distro' but that's not an accurate characterisation. I use it as a daily driver on my personal machine mostly for non-gaming work and it's an excellent distro.
There is the TUI installer now, though; it's not like it used to be, where you typed the commands in following the wiki. Not that there was anything wrong with the 'manual' mode: it gave you insight into the basic building blocks/configuration right from the start.
It's been a very long time since I moved to Arch, but I swear that something like 12 years ago it did have some form of menu-driven installer.
Nowadays, there are so many ways to partition the drive (lvm, luks, either one on top of the other; zfs with native encryption or through dm-crypt), having the efi boot directly a unified kernel image or fiddle with some bootloader (among a plethora of options)...
One of the principal reasons why I love Arch is being able to have a say in some of these base matters, and I would hate to have to fight the installer to attain my goals. I remember when Ubuntu supported root on ZFS but the installer didn't; getting the install going was rather involved. All it takes with Arch is spending a few minutes reading the wiki and you're off to the races. The actual installation part is trivial.
But then again, if you have no idea what you want to do, staring at the freshly-booted install disk prompt can be daunting. Bonus points for it requiring internet for installation. I would have to look up the correct incantation to get the wifi connected on a newer PC with no wired ethernet, and I've been using the thing for a very long time.
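For reference, the incantation I always have to look up: the install ISO ships iwd, so connecting goes through `iwctl` (the device name and SSID here are examples):

```shell
# Find the wireless device name, scan, and connect
iwctl device list
iwctl station wlan0 scan
iwctl station wlan0 get-networks
iwctl station wlan0 connect "MyNetwork"   # prompts for the passphrase
```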
That's assuming you do system upgrades through paru/yay. However, you may not want to upgrade the packages you've obtained from the AUR, so you upgrade using pacman alone. That may leave the installed yay/paru incompatible with the updated libalpm.
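A sketch of how that failure mode plays out and the usual fix (paths are examples):

```shell
sudo pacman -Syu       # upgrades pacman, bumping the libalpm soname
paru -Qua              # helper may now fail to load the old libalpm.so

# Fix: rebuild the helper against the new library from its AUR repo
git clone https://aur.archlinux.org/paru.git ~/builds/paru
cd ~/builds/paru && makepkg -si
```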
Iirc it was to force the extra step necessary for the user to acknowledge that the AUR can bootstrap malware if used blindly.
This seems to be a relatively consistent discussion surrounding AUR helper development; for example, adding UX to incentivise users to read PKGBUILDs, lest the AUR become an attractive vector for skids.
No one wants the AUR to become NPM, and the thing that will push it that way is uneducated users. Keeping helpers out of the main repos is a small barrier, but an effective way of preventing that.