Often, great engineering products start out crisp and clean until the product managers, sales and marketing people, growth managers and evangelists step in and start adding antipatterns. I am not surprised Ubuntu did it; I am surprised they did it so late. They have been slow-cooking this for the last 5 years. And for anyone unwilling to see it: Ubuntu was never about building a solid engineering product like RHEL, Debian or Slackware. It was always about cashing in on making a fork (i.e., of Debian unstable) more usable. Their core interest is increasing the number of installs everywhere, not shipping the next engineering marvel. Flatpak serves that cause beautifully: they only have to worry about the sandbox and container parts for software that others build.
Ubuntu has tried bold moves before - like trying to replace Xorg with Mir. That failed because Mir was raw. But this one might just stick. Making deb packages second-class citizens in a Debian-derived OS is so unfortunate.
And honestly, for desktops, Debian testing is plenty stable IME and stays quite up to date. It's what I've run on my laptop for years with absolutely no issues.
Debian Testing is the least secure Debian distribution:
- "Please note that security updates for 'testing' distribution are not yet managed by the security team. Hence, 'testing' does not get security updates in a timely manner."[1]
- "Compared to stable and unstable, next-stable testing has the worst security update speed. Don't prefer testing if security is a concern."[2]
- "[Testing's] security updates are irregular and unreliable."[3]
Most Debian users should use Stable. If a user wants a newer version of some software, they should write a Bash script to install it from source. When I used Debian Stable on the desktop, my install scripts gave me the latest versions of every piece of software I actually wanted to keep current.
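A sketch of what such an installer script can look like - Emacs is just a stand-in example here, the version is illustrative, and `apt-get build-dep` assumes deb-src lines in sources.list:

    #!/usr/bin/env bash
    # Illustrative source installer for Debian Stable - adjust name/version.
    set -euo pipefail
    VERSION=29.4                       # example version; check upstream
    cd "$(mktemp -d)"
    wget "https://ftp.gnu.org/gnu/emacs/emacs-${VERSION}.tar.xz"
    tar xf "emacs-${VERSION}.tar.xz"
    cd "emacs-${VERSION}"
    sudo apt-get build-dep -y emacs    # deps of the packaged version are close enough
    ./configure --prefix="$HOME/.local"
    make -j"$(nproc)"
    make install                       # lands in ~/.local, so no root needed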
If a Debian user wants up-to-date software but they don't want to write their own installers, they should consider using Fedora instead.
Yes. I know. For a desktop, bluntly, no, security is not a primary concern (I've also turned off spectre mitigations and so forth), and stable is waaaay too stable for desktop usage.
Frankly, this kind of purist ideology--to the point of suggesting people use a different distribution--is simply ridiculous.
For servers I'm running on the open internet, yes, you are absolutely right. But in that case I just run stable.
The choices aren't some purist notion of "secure" versus "not secure".
Security is a spectrum of practical choices informed by threat models, and it's only one (certainly important!) aspect of the complex choice of selecting an operating system.
For example, I would absolutely advise my mother in law to write complex passwords on sticky notes. She's far more likely to fall victim to credential stuffing than to have her apartment broken into and her passwords stolen, and I accept that trying to get her to use a password manager would certainly fail and she'd just fall back on reusing simple passwords.
A security purist who thinks in terms of "secure" or "not secure" would scoff at this. Writing down passwords! That cannot be done!
But given the threat model and an acceptance of expected user behavior, it's a perfectly valid choice.
If I'm running a Linux desktop, I've already made a more secure choice by getting out of the firing line of typical untargeted malware.
With some additional basic security hygiene, the greatest threats are a) phishing/social engineering, for which zero days aren't the primary concern, or b) targeted attacks where clearly they are.
As I'm not a target of interest, I'm not too terribly worried about the latter. As for the former, distro choice doesn't make much of a difference.
So yeah, given that threat model, I'm comfortable waiting the few days it takes for security fixes to trickle down from sid to testing. And if I really cared, I'd follow the guidance mentioned in one of the links you posted, and just pull patches down from sid on an as-needed basis.
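For reference, the as-needed approach is just apt pinning; a minimal sketch (the package name is only an example):

    # /etc/apt/sources.list - track testing, keep sid available
    deb http://deb.debian.org/debian testing main
    deb http://deb.debian.org/debian sid main

    # /etc/apt/preferences.d/99-sid - stop sid from being pulled in by default
    Package: *
    Pin: release a=unstable
    Pin-Priority: 100

    # then cherry-pick a fix from sid only when you actually need it:
    sudo apt-get install -t unstable openssl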
Switching to a completely different distro, by contrast, would be a ridiculous overreaction given the context and associated trade-offs.
Pretty darn up to date. Tbh I think it's not unusual for it to be more up to date than the current Ubuntu stable (which I thought started out as a snapshot of sid).
As for proprietary drivers, the non-free repos have traditionally carried everything I need.
All of Canonical's most hilarious (and sad) mistakes came directly from the top. But in fairness, so did the initial insight (the gap in the distro market).
In my mind, Ubuntu Desktop, and desktop linux in general, has always been a hobbyist project masquerading as a real production-ready environment. There's no money in building a desktop distro, so nobody really cares what happens there.
The real power of Ubuntu has really always been as a server distro. I used to run desktop Ubuntu when I was younger and had more spare time to tinker with this nonsense, but nowadays I administer enough ubuntu servers at work, I don't need to do it at home too. Ubuntu server is great -- everything is very well-documented, LTS is stable, snaps are sometimes pretty useful (I recently had to quickly spin up redis temporarily for a project and it was a one-liner), and everything just works in general.
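(The one-liner in question, assuming the package is still published under that name - it comes up as a service right away, and removal is just as quick:)

    sudo snap install redis    # up and running as a daemon
    sudo snap remove redis     # when the project is done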
I used to run CentOS (rip) servers, and Ubuntu, whilst noticeably worse, is still great to administer. It's not all sunshine and rainbows for sure, but things generally work the way you expect them to and things are generally sane.
What would you consider a "real production-ready environment" for the desktop?
I don't know if I'm just lucky, but my last 4 HP computers (3 laptops and a desktop) have all worked perfectly on Linux since day 1, whereas Windows has been a complete shitshow, having to wait as long as 1 year for some peripherals to be supported. In one case, I didn't care, since it was the webcam, which I never use. In the other, it was a PITA since it was something graphics-driver related, and I couldn't reliably use an external 4k screen without doing a plug-unplug-replug-just-right circus. Sleep is hit-and-miss, with one laptop being almost guaranteed to wake up and crash at night while left asleep. The desktop PC still has unreliable thunderbolt support.
Then there's the UI that lags like no tomorrow, the windows jumping around when switching virtual desktops, new browser windows never showing up where you expect them, windows seemingly in the foreground not getting input, inconsistent input field behavior, etc. Oh, and half the dialogs are still from the windows 95 era, so it's not even "prettier" or more "cohesive" than my i3 / gnome frankendesktop. Also, the start menu search, aside from being slow, has a good probability of not finding the app I'm looking for.
Debian is not really an alternative. There's no maintenance guarantee, no support contract, and it's usually not validated to work with packages the way Ubuntu and RHEL are. These are the kinds of things that matter for production.
Problem is, Debian is usually so far behind on software, and it's harder to install newer versions without building from source. That was the great thing about Ubuntu: you could get an LTS and pick and choose what you wanted to have newer.
If I had tried to run FreeBSD in prod I might actually have been fired. None of the BSDs are production ready: basically nothing is validated to work with them, there's no commercial support, fewer available sysadmins, no software officially supports it, etc. It's a hobby system, nothing more.
It's theoretically possible to use it in prod; I know that some companies (Netflix?) do it. But it's certainly not common and takes a lot of engineering resources to make it work.
It definitely depends on your needs. FreeBSD absolutely does support production workloads. Netflix as you mentioned, WhatsApp (Meta), and several ISPs and hosting providers make use of it.
Any competent Linux sysadmin could learn to admin a BSD in no time. The FreeBSD handbook and the man pages are excellent. In my opinion they are far better quality than their Linux equivalents.
You also won't have any issues if your server runs on an open source stack such as Python, PHP, Perl, Ruby, Java (OpenJDK), NodeJS, etc.
You're absolutely right about FreeBSD not having first party commercial support. If you need that, you're basically stuck with RHEL, SUSE, Ubuntu, or Oracle.
It's not just administration, it's everything else living on your server. It's the difference between being surprised when package X doesn't support Ubuntu and being pleasantly surprised when it does support BSD. It's the difference between running the most well-tested and well-supported configuration of all your tools and a niche thing that a couple of volunteers got working.
Many of the things I did as a traditional university sysadmin long ago probably work on FreeBSD, and if you wanted to deploy a LAMP (FAMP?)/Django/Ruby on Rails/etc stack it would probably be good enough, but the world has moved on since then. Once you want k8s, Ansible, distributed filesystems, HPC, scientific computing, etc., it starts to break down really quickly. Many of the things I listed above may work, but it's the less documented, less supported, more annoying, and more likely to break path. The extra time spent getting it to work on FreeBSD and maintaining it is not time well spent unless there's a very specific reason FreeBSD is better; just grab RHEL/SLES/Ubuntu and skip all the hassle.
FreeBSD has its place in very limited scenarios and where you have the engineering resources to deploy it, but it should not be the default choice for a production deployment.
I work at the FreeBSD Foundation. There are a couple points I'd like to make. First, FreeBSD support in Ansible and Salt is very good. There is work to do on K8s, but runj represents a great start and there are a bunch of resources available on how to use FreeBSD for cloud native. Last, I think your final point that "[FreeBSD] should not be the default choice for a production deployment" could be worded better. I gather your meaning is "...for a general purpose enterprise deployment". Assuming that’s what you mean, FreeBSD limitations that make it difficult for general purpose enterprise use is something I have heard and that I am personally involved in trying to improve. To me, that's different from "production deployment". There is a long list of production deployments of FreeBSD at the infrastructure and device layers - routers, VPNs, firewalls, storage systems, hosted security solutions, industrial control systems, CDNs, payment networks, and embedded devices.
How do you deal with init scripts that try to uninstall packages when Ubuntu runs unattended upgrades immediately, so apt-get is sometimes locked at first boot?
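The best workaround I've seen is spin-waiting on the lock before the script touches apt, roughly:

    # wait until unattended-upgrades releases the dpkg/apt lock
    while fuser /var/lib/dpkg/lock-frontend >/dev/null 2>&1; do
        echo "apt is locked, waiting..."; sleep 5
    done
    apt-get update && apt-get install -y nginx   # nginx just as an example
    # newer apt releases can also wait on their own:
    #   apt-get -o DPkg::Lock::Timeout=300 install -y nginx

...but that still feels like a hack, hence the question.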
So why does Ubuntu push snaps so hard? They've been doing this for years and I still have no idea, since pretty much everyone I know would rather have a deb file (myself included). What's their gain?
Damn, that was a depressing read. If that's how things are going, then it really is past time for me to stop using Linux entirely. This is feeling like the final straw in a series of things that have been pushing me away.
Oh, I have quite a few of them, from increased resource usage and proprietary aspects to forced updates and more. The forced updating is particularly unacceptable.
But the 10,000 ft view is that they reduce the amount of control I have over my system. They impose restrictions I chafe at while giving me no benefit that I care about.
I'm not going to sit here and say they shouldn't exist. That they're not to my taste doesn't mean that others who like them shouldn't have them. But they're not to my taste at all.
After seeing systemd get widely adopted (also not to my taste), if snaps (or Flatpaks, although I am less allergic to those) join the party, then that's just a clear indication that the Linux world and I have diverged too much and I need to move on.
Alpine feels a lot like a BSD to me, and I’m very happy with it.
But this bloatification is happening all over the place. Firefox is starting to depend more and more on Flatpak's daemons, even if you don't use Flatpak. Sadly, these daemons are becoming the de facto standard for some interfaces too. This also means that a lot of software is a lot less portable.
I'm saddened to hear that FF has dependencies on Flatpak daemons, but I'm not surprised by it. The direction Linux is going seems very clear to me, and I expect more of that sort of thing as time goes on.
Meh. There's just too much technical debt all over the place. Not even the BSDs suffice from the POV of what an OS could be. A new, modern OS written today with lessons taken from the past would solve the inherent pains of all the existing ones. My dream is to do that. Just a dream rn tho.
It's a quite powerful idea to run every package in its own container (sandbox).
However, it depends on the implementation and Snap just sucks.
(Note that the default Unix assumption is that no users can be trusted but all applications can be trusted, which is wrong imho. Containers provide a way out of this, but things get messy very fast.)
I haven't run a desktop Linux for years so this might be completely off, but what I got from it was that there will be two major paths:
- you're a hardcore-oriented distro: you assume most things will be built from source, follow all of your dependencies and maintain the glue for your distro. Gentoo/Slackware style.
- you value convenience and go the snaps/Flatpak route.
And you can still go the convenience path while building some specific apps from source, but there will be a bigger gap to bridge and it won't make sense for most applications.
I get the shift; we're already seeing it, even outside the cloud, and I think it's still the best of both worlds. Compare that to how I'm running a natively compiled Postgres but a containerized MySQL, because matching all the dependencies was too much of a pain.
Personally, mine are the fact that this isn't really "zero trust" but more "infinitely diffuse trust", where every user has to trust every application. None of the packaging alternatives I'm aware of seem to have their security story in order yet: none are secure enough (without breaking most software) to remove the need to trust every application, nor do they provide some level of assurance comparable to the Debian maintainers.
Snap makes it easier to distribute closed-source software like Skype - but people running Linux on the desktop generally have no great love of closed-source software.
For open source software, snap is the same software, but slower, more broken and with worse upgrades.
Snap changed the Firefox update process, so I now have to run 'sudo snap refresh' and wait for a download, where before I just closed and reopened the browser. Maybe it'll make my running application's dock icon disappear - hope you always use alt+tab instead of the dock. Snap can install ffmpeg - but I can't feed a screen recording to VAAPI for compression, because whoever set up the sandboxing forgot to allow that. Good luck sharing anything from, say, ~/.config/ on, say, Discord - you get a silent, unexplained failure, because hidden-folder access is blocked by the sandbox. Installing a browser? With snap you get three copies; you can adjust refresh.retain down to keep only 2 copies - but 1 copy is out of the question.
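(The retention knob, for anyone looking for it - note that 2 is the floor, snapd won't accept 1:)

    sudo snap set system refresh.retain=2    # 2 is the minimum snapd accepts
    # newer snapd versions can at least postpone the forced updates:
    sudo snap refresh --hold=72h firefox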
There's a reason canonical has to force snap down people's throats, and it's because nobody uses it by choice.
This model destroys any reason for software to be open source. What's the point of having source code if you just run the binary provided by some party?
Reminds me of the early days of TensorFlow, where everyone used whatever binary package worked and no one could run anyone else's code because people kept getting stale binaries somewhere in the stack.
What's the snap equivalent of "apt-get source"? Failing to find one meant, for me, that it was time to start purging snapd from new installs entirely (and to start fretting about Ubuntu becoming philosophically incompatible with what I want out of a system). Fortunately, even with 23.04 (server), that still seems to leave an entirely working system.
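For anyone doing the same purge, the pin below keeps apt from quietly bringing snapd back as a dependency - essentially what Linux Mint ships:

    sudo apt-get purge -y snapd
    rm -rf ~/snap

    # /etc/apt/preferences.d/nosnap.pref
    Package: snapd
    Pin: release a=*
    Pin-Priority: -10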
That's not an answer to my question. For example, I see no evidence Debian is moving toward app images as the standard way to distribute software. Same goes with Arch and I'm sure many others.
And btw SteamOS is absolutely not a sandboxed environment. It just has a read-only OS filesystem so they can safely blow it away upon upgrade.
Debian and Arch are exceptions. I believe maintainers of every other mainstream distro are exploring immutable distros or at least shipping confined apps.
The package hosting protocol is relatively trivial. I believe a couple of alternative implementations have been written, but I'm not sure they're maintained because there's no point. Anyone can publish to the Snap Store. Because snaps are sandboxed at the client end, there's no gate except automated checks.
Canonical sells access to the snap store. It's part of their Ubuntu IoT strategy.
Uploading public applications is free. Uploading proprietary software is a business model.
It also pretty much locks users into Ubuntu, because no other project uses snaps, so if an app chooses snap as its distribution method (certbot, for example) it suddenly becomes a lot easier to just download Ubuntu than it is to install Snap for a foreign platform.
> it suddenly becomes a lot easier to just download Ubuntu than it is to install Snap for a foreign platform
That's an exaggeration. It's about as easy to install snap support on a non-Ubuntu distro as it is to install any other third-party app on that platform. And the latter is exactly what you're trying to achieve by installing a third-party app on that platform in the first place, so by definition it can't be significantly harder than any alternative method.
Debs work really well when they are shipped as part of the distribution deb archive itself. They work really badly when third parties try to use them to provide add-on software to an existing distribution release.
The main problem that Snaps solve is this latter case: when third parties are trying to ship directly into someone's distribution installation[1].
Often third party debs appear to work OK, but then break future distribution upgrades. By then users have forgotten about the third party software that has hacked itself into their system, and blame the distribution for their upgrade failure. The problem is that distribution debs are designed to provide metadata about what has changed so that the package manager can accommodate. But it's not possible for distribution debs to be aware of the third party debs to handle those changes. So things break.
The breakages caused by debs aren't just limited to future distribution upgrades. A bad third party deb package can break your entire system. We routinely get reports where it turns out that this is what happened to our users!
There's also the problem of dependencies. If a third party app needs a bunch of dependencies, then they can't realistically bump those dependencies on the system as a whole without regressing all the other apps that need older versions of that dependency. So they have to bundle their dependencies, and this is something else that regular deb tooling doesn't handle well. You can theoretically construct a deb that bundles all its dependencies, but then that's exactly what snaps and snap tooling handle better - that's half the point.

Nix only partially solves the problem by better supporting concurrent installation of multiple versions of those dependencies. But those concurrent versions would still each have to be maintained; that's something that distributions try to avoid by picking one version of each dependency and making the entire distribution release work with just that one. Moving that maintenance responsibility to each third party app developer, and having the packages bundle their dependencies, is the other solution.

This was already happening with deb packages like Firefox's. Firefox upstream bundles nearly everything, and the debs (eg. in Debian) do the same for most of their dependencies. Snaps just call a spade a spade and are designed around it.
You also wouldn't really expect a third party app to have access to everything on your system. Say for example you download and install some game app to try out. Do you really want it to have access to your online banking browser session? The game developer might not secure their development infrastructure as if someone's trying to steal their users' money, because that's expensive and they're only shipping a game. But if you install their game, then that's what you risk. That game developer's infrastructure is suddenly an attack vector for an adversary that wants to get to your online banking session. On iOS and Android, each app only has system-mediated access to everything outside its sandbox. Debs fundamentally cannot provide this separation, so if you install third party debs then you're giving all those third parties access to everything, which really is unacceptable in modern security practice. Snaps give you that sandboxing.
So that's what snaps are for: 1) bundling dependencies, because that's necessary in world of third party software that ships independently of the distribution; and 2) sandboxing, because that's necessary in a world of third party software if you don't want to give all those third parties and their adversaries root on your system.
If you don't want third party apps, and only want what your distribution ships in a curated manner, then you don't really need snaps. But consider that Firefox is essentially a third party, non-curated app, regardless of your distribution or how it was packaged! See [1] below.
Snaps are also immutable, which really helps with stability and upgrade and revert cases. This is more relevant for snap-only systems like in IoT, not the Ubuntu desktop. In an IoT deployment you can't tell the user to run "apt-get -f install" to fix up the system because power got interrupted during an upgrade - because that's how debs work.
In years gone by, Ubuntu tried really hard to make third party debs work. But it didn't work for various reasons. In the meantime, it became the norm for third parties to ship debs together with all of their problems, since there was nothing better possible at the time. This is what snaps solve.
Disclosure: I work for Canonical. But here I am speaking for myself, not my employer. I'm not involved in the design of snaps, but as a distribution developer, it's clear to me what problems they solve.
[1] Packages like Firefox also use snaps because even as a deb it's really not the case that Firefox is curated by the distribution and mostly unchanged after release any more. So it suffers from mostly all the same problems that third party debs do.
In my reply I focused on why we need a bundled, sandboxed packaging format in the first place, since many people are sceptical just about that. So you're right in that I didn't go into any of the differences.
Snaps solve a bunch of problems that are out of scope of Flatpaks, such as CLI apps and the packaging of kernels and other hardware-level pieces. This allows for an entirely snap-based system that is immutable and atomically updated, which is essential for IoT devices to be reliable.
If you look at the history of snaps, they are an evolution of click packaging, which were created for the Ubuntu phone that is no more. But the history goes back further than that - back to Ubuntu's App Review Board and (deb based) third party app packaging system, which failed for exactly the reason that deb is an unsuitable packaging format for third party apps.
If you look at the history of Flatpaks, you'll find that their initial release was at a similar time to snaps. But given that they don't solve a bunch of problems that snaps do, what was/is Canonical supposed to do? Ditch snaps entirely and drop their IoT support? Integrate IoT support into Flatpaks? Does Flatpak upstream even want that? Look at the history of Canonical wanting to enhance GNOME, and the history of how Unity began, to see how that doesn't seem like it would be a practical way of delivering functionality to users in a reasonable period of time.
Consider that it is Ubuntu that is at the centre of this third party app packaging problem. It's Ubuntu that everybody targets first with their third party debs. For the same underlying reason, it's Ubuntu that gets more user support requests because of bad third party debs than every other distribution. It makes sense that Ubuntu developers are best placed to understand the problem and develop a solution.
So yes, we've ended up with two parallel contenders. But I don't think it's reasonable for Canonical to have done anything differently in regard to Flatpak. It appeared at the same time while Canonical had already been working away at the problem for years, doesn't solve or even try to solve all the problems that snaps are designed to solve, and those problems appear to be out of the scope of Flatpak anyway. As much as critics frame them as like-for-like options that Canonical could just decide to switch over, they simply aren't and thus "simply switching" doesn't even make sense.
As for "another one already exists", that statement could just as well apply the other way round: look at the timelines!
> Nix only partially solves the problem by better supporting concurrent installation of multiple versions of those dependencies. But those concurrent versions would still each have to be maintained
How so? Every dependency of every dependency is in the nix store and they’re all immutable. A nix package installed will work forever because its environment can’t change. Am I not right?
Bugs and security issues are found from time to time. When that happens, every occurrence in use needs an update. If ten different incompatible versions of a package are in use, they all need to be replaced. That's more work than one per traditional distribution release.
My hypothesis is that if everyone used snaps, it would reduce the complexity of packaging by reducing or eliminating dependencies that need to be installed separately.
> since pretty much everyone I know would rather a deb file (myself included).
Not me. For programs that are not from the repos, I do not like debs.
I needed to install dubious apps like Skype, Slack, some PDF editor, etc.
With a deb I would be giving them root, so I either find an AppImage or a snap, or try to unpack the deb file and run the binaries without root.
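The unpack route, for reference - dpkg can extract an archive anywhere without installing anything (the binary path inside varies per package):

    dpkg -x skypeforlinux-64.deb ~/apps/skype   # no root, nothing registered
    ~/apps/skype/usr/bin/skypeforlinux          # run it straight from there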
I also installed some CLI snaps on a server; they did their job and I did not have to build them from source.
I agree that for things that are in the repos, I prefer the debs 99% of the time.
I'm a long-time Debian user but I have recently fallen down the rabbit hole that is NixOS. I'm still a ways away from moving my entire dev environment to NixOS, but I definitely see its appeal.
Debian fell out of favor because stable is too damned stale and ubuntu had a lot of small polish and quality of life improvements that debian didn't have at the time.
I used Ubuntu for years, but finally gave up on it when they started pushing snaps hard. I'm on PopOS now, but I'll go to Mint Debian Edition or something else if I have to.
Right, stable being a couple years out of date near the end of the lifecycle is somewhat annoying. I feel like the Docker world probably made people care less about this; it's rarely the Linux kernel that prevents your application from running (but Node/Python runtimes are a huge problem). But for desktop stuff people read a blog post about some new feature and want it today, which is a problem when the priority is stability. So I can see that being annoying for the average end user that's not looking for a server OS.
I compromise and install most of the software I use on a daily basis from Homebrew. So I have some crufty version of `ls`, but the latest version of Emacs.
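That split looks like this in practice - Homebrew on Linux installs into its own prefix and leaves the system alone:

    brew install emacs      # current upstream Emacs from Homebrew on Linux
    /usr/bin/ls --version   # still whatever coreutils Stable shipped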
Is Pop!OS a viable alternative to Ubuntu? I don't like Snaps but I like the fact that all my hardware (even my nVidia GPU) is supported. I think rolling release would be better, though, even if stuff breaks once in a while.
Yeah, popOS is basically ubuntu without snaps, with the latest nvidia drivers, and a scheduler that gives better gaming performance (gaming on linux has really come a long way since the steam deck has been released).
Before this, I used arch, until a simple update rendered my system unbootable. When I asked an arch dev why they were shipping a master branch grub build, I was told that if I couldn't easily recover a corrupted grub install I should consider another distro. I installed pop the next day. I've been very happy with it from that point forward. If I had to use something else it'd probably be mint.
Hmm, why is it an ISO, rather than getting the drivers from a server afterwards? I have a few devices that need proprietary drivers, do I need to do the same for the others?
I have zero horse in this race other than every time I've tried snap firefox it has been unpleasant (slower, weird memory, weird hangs that sometimes OOM), so I install it in some silly manual way
in theory I like that snaps have a permission model w/ default confinement, in practice I keep getting asked to install in 'classic' mode
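(For the unfamiliar: 'classic' drops the confinement entirely, so the permission model goes out the window. The store at least makes you opt in explicitly - error text abridged:)

    $ sudo snap install code
    error: This revision of snap "code" was published using classic
           confinement [...] repeat the command including --classic.
    $ sudo snap install code --classic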
I have never understood why they pushed snaps so early and so hard as they have. Feature parity isn't always a necessity, but you'd think they would have made sure that snap apps perform at least "ok".
Ubuntu died for me when I upgraded to a new LTS and they replaced my deb Firefox install with a snap version that used some non-standard mouse cursor... only in Firefox. It apparently took them ~2 weeks to fix it, but in that time I rage-wiped my Ubuntu install and switched to Linux Mint, which I also regret, but it is snap-free.
I used Ubuntu Desktop in the past, I think they had the right idea with Unity, however it felt like they didn't push hard enough.
I was using Ubuntu Server everywhere, I stopped using it the moment they started forcing snaps. Snaps are inferior technology, I don't want to install a daemon just to install software on my server, I don't want an unpredictable auto upgrade technology and I don't want millions of mount points.
DEB and RPM have been around since forever; I can deploy software using those technologies and expect it to work in 10 years. The same cannot be said for snaps - there is nothing there to guarantee it won't just end up like Upstart, Mir and Unity.
I can ship my software as a Docker container and have it work everywhere, or I can put in a lot of work to ship a snap and have it only really work on Ubuntu. I don't even know what Canonical was thinking here. I don't care that I can install snapd on most distros; if it's not actively supported by the distro, it does not count.
Ok, so Ubuntu is going south, but Red Hat is also pulling back on RHEL - will this affect Fedora?
I use Ubuntu on my personal machines and servers, and Fedora for work, so picture me concerned.
I need a better distro for sure, because Ubuntu ships with unattended upgrades enabled, which makes the server setup from my scripts unpredictable (sometimes it's doing updates when booted for the first time, but not always; sometimes it's also very fast). Recommendations for server distros that require minimal setup? I only have small applications running.
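(I know I can make first boot deterministic by switching the feature off, along these lines, but I'd rather have a distro that doesn't do it out of the box:)

    # stop the periodic apt timers...
    sudo systemctl disable --now apt-daily.timer apt-daily-upgrade.timer
    # ...and/or neutralize the periodic jobs via apt config
    printf '%s\n' 'APT::Periodic::Update-Package-Lists "0";' \
                  'APT::Periodic::Unattended-Upgrade "0";' \
        | sudo tee /etc/apt/apt.conf.d/99-no-auto-upgrades
    # or remove the package entirely
    sudo apt-get purge -y unattended-upgrades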
Then, I was already thinking of moving to fedora on personal too, with KDE, but I am now concerned.
Any recommendations for personal desktop usage? I'd like a low-maintenance distro, so I'm not interested in massively recompiling things. And I have Nvidia, which is why I chose Ubuntu with KDE, and Fedora, where I would like to switch to KDE. GNOME's extension policy is so bad that I don't upgrade Fedora because of it.
I think I'll try Linux Mint out. Or maybe it's time to go back to my beloved Debian, which I abandoned because of Ubuntu when Ubuntu was still not too filled with crap.
Try Manjaro. It's the best user desktop experience, together with Arch way better than any point release distro like Ubuntu or Mint in terms of new software availability, hardware support, and stability (yes, ironically, rolling release distros are more stable than point release ones). If you're running servers, stick to Debian.
I use Ubuntu Server. Should I be moving to Debian? I really like Ubuntu (user since Breezy Badger, back when Mark Shuttleworth shipped CDs to everyone) but the whole snap thing has been a pain. They change around on you, don't work well (Docker or something got installed as a snap for me and failed in a strange way), and are quite slow (visually apparent load time).
I really don't want to worry about this stuff on Ubuntu Server. Has anyone else moved to Debian?
Thank you. I liked Arch, but the rolling-release thing does make it less useful in my case (on-prem compute cloud). I want to maintain this thing as little as possible, and I know from experience that if you let Arch get too far ahead of you, pacman can fail to bring you back up to date.