I don't understand why people would choose a non-rolling-release distro (e.g. Debian) over a rolling-release distro (e.g. Arch). The former comes with the implicit guarantee that every few months you have to update EVERYTHING all at once. Seems like a needless risk and pain.
Genuinely curious here. There must be an upside to this model, can someone tell me what it is?
I switched from a rolling release distribution (Chakra) to Debian because I got tired of the rolling release system constantly breaking things. If you fall too far behind on updates in Chakra, the maintainers basically tell you you're screwed. I finally complained about it in their forums and was politely told to go away.
My computer is just a tool. I just want it to work so I can get things done. Debian has been good about that in the past and enjoys the support of a very large, dedicated, active developer community.
Maybe Arch is better than Chakra at managing their updates (although Chakra was basically a modified Arch distribution), but my experience with Chakra left a really bad taste in my mouth for the rolling release approach.
FWIW, Debian's "update EVERYTHING" every few months never broke things as badly for me as Chakra's frequent smaller updates did.
Yeah, Debian stable is basically what I'm on now. (Sort of. It's complicated.)
If I can be unabashedly honest for a moment, I think the problem with Linux advocacy isn't the software any more, it's the advocates. The advocacy seems to come from a lot of people who are always after the latest, newest, greatest thing, people who are actually uncomfortable when features aren't changing all the time. They tend to be very vocal about recommending anything that comes with frequent updates, whether it's browsers or Linux distributions.
But that's not what the larger untapped home market wants. They just want things that work, and they want things not to change on them all the time. They only have enough space in their daily lives to learn new features every once in a while; any more than that, and they get frustrated. I hear this from our customers all the time, and as I get busier, I'm really starting to understand their point of view better. (As an example, we've recently been seeing more of our customers switching back to Internet Explorer on Windows 8 systems.)
So as far as signposting goes, I think boring, stable, slow-moving distributions like Debian stable should be the default recommendation, and then Arch or similar recommended as the really cool cutting edge thing for hobbyists and early adopters. That is, Linux by default should be marketed as stable and safe and not changing all the time, with the option of making it a bit more fun.
We almost opted to recommend Linux to a bunch of our customers who were moving off of Windows XP this summer, but the feedback wasn't great on a couple of experiments, and in the end we decided we were too small to really afford the support costs. It was a big missed opportunity, though, and I'm going to take another look at doing it next year.
Well, it's already pretty well signposted; the three best-known consumer distros are Red Hat, SUSE, and Ubuntu, all of which follow this stable release model.
Arch suffers from the same problems, or at least it used to. I had a system that was out of date, and it became essentially impossible (or at least very, very difficult) to get it up to date.
I run a rolling distro (Debian Testing) on personal-use machines (laptop; work development desktop; personal server).
I run fixed release distros on production servers/devices. It's important to be able to install a new package on a machine and not have a hundred dependencies that need upgrading, break, or conflict. And lots of packages are available in backports when you really do need a newer kernel on Wheezy (https://packages.debian.org/wheezy-backports/kernel-image-3....).
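For what it's worth, pulling a single backported package onto Wheezy is usually just a matter of enabling the backports line and asking for it explicitly. A sketch (the mirror and the exact kernel package name are placeholders; check packages.debian.org for the real ones):

    # enable wheezy-backports
    echo "deb http://http.debian.net/debian wheezy-backports main" | \
        sudo tee /etc/apt/sources.list.d/backports.list
    sudo apt-get update
    # pull just this package (and its deps) from backports;
    # everything else stays on stable
    sudo apt-get -t wheezy-backports install linux-image-amd64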
> I run fixed release distros on production servers/devices. It's important to be able to install a new package on a machine and not have a hundred dependencies that need upgrading, break, or conflict.
Isn't that an argument for rolling release? I don't see how this would support using fixed release distros.
Not when you have a few hundred servers. Then, you want consistency. Consistency between servers, between datacenters and between pre-production and production environments.
Now, you can get there with your own repos and a rolling distro. Or you can accept a test/upgrade cycle for a fixed release distro every half year or every year. I personally think the latter is less error prone.
That is effectively what distros like Manjaro and Chakra are. They just pick a date, grab all of Arch, test it for a few weeks, and push it to end users in batches.
I am unfamiliar with Manjaro and Chakra. My experience with Arch is limited to my Cubox-i. It's a nice distro, with an excellent community. I liken it to Gentoo in its glory days.
Having stated that limitation up front, I'd just like to point out that, when it comes to distribution stability for large server deployments, there is safety in numbers. Should a problem occur in a "stable" package, the odds that you are the first one to find the error are smaller with popular server distros (RHEL, CentOS, Debian) than with less popular distributions. It's not a statement about the intrinsic quality of the distribution; it's a statement about the overall quality of the distribution plus its installed base.
All in all, for a distribution to dislodge entrenched players, for this use case, it will have to be an order of magnitude more stable.
Using a fixed release only requires that you apply the update once every few years, so you can plan for it. You can do it at a quiet time when you are available to fix any problems. With a rolling release you have no idea if something is going to break at an inconvenient time.
Debian Testing branch is always rolling except for a couple weeks/months before a freeze for a release.
I suppose if you really needed to keep rolling you could temporarily switch to unstable.
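If you ever do need to do that, the switch is roughly a one-line change. A sketch, assuming your sources.list refers to the suite as "testing" (the mirror and exact layout of your sources.list may differ):

    # /etc/apt/sources.list: point at unstable instead of testing
    sudo sed -i 's/ testing / unstable /g' /etc/apt/sources.list
    sudo apt-get update && sudo apt-get dist-upgrade
    # switch the line back to testing afterwards; your packages will sit
    # ahead of testing until it catches up again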
"Seems like a needless risk and pain."
And the purpose of the freeze is to iron all that out so it doesn't happen.
(Edited to add: I'm sad seeing OP get downvoted. His post history shows he's an Arch guy who likely genuinely doesn't understand release tagging. As a Debian user since '97, I'm not surprised that there are both people who don't know the peculiar arcana of release tagging and people who are experts at it, so his confusion adds a little value to the conversation. A down arrow should mean a comment decreases the net worth of the conversation, not that OP got a technical test question wrong; people are learning things from his mistake. And I had to edit this about ten times to phrase it correctly.)
I don't know much about Debian. I may have slightly mischaracterized things because I don't understand. That's why I asked.
I am getting the impression that Debian Testing isn't really used in the sense of, "Be a nice volunteer and run this thing that has problems so we can fix them," which is what (to me) is implied by the name "testing." Rather, it seems to be "here is a (mostly) rolling release version of Debian, if that's what you want." In other words, it almost seems like "Debian Rolling" might be a more apt name, in a sense.
The thing is, as a user of Arch Linux (without a ton of other experience), I just don't have problems with a rolling release. For me, things don't break, and it just works. So it feels to me like Debian's whole release philosophy is based around the idea that things have to break all the time and be fixed carefully, and that just doesn't sync up with my experience. So I'm trying to figure out what is missing from my worldview.
Thanks for defending me. I think there might be somebody (maybe multiple people) who doesn't like me and just downvotes all my comments, but I don't have any hard evidence.
I think (could be wrong) that Debian Sid is the nearest to your idea of Debian Rolling. Debian Testing is a testbed for the next release, especially when the freeze happens (and just before freeze when people are trying to get patches in).
You generally want to test changes to your environment before rolling them out, and you generally want to keep disruptions in production as minimal as possible. That's why. It's a need that makes Red Hat alone a billion-dollar company.
If you're going to wait a few months to update, you are much better off on an actual non-rolling release distro than Arch. Arch frequently does not test upgrades for packages more than several versions out of date, whereas distros with proper releases will test upgrading from the previous stable version.
Debian testing, however, isn't a proper released distro -- it's sort of a mishmash between a perpetual beta and a rolling release. Debian stable has proper releases, and Ubuntu was started largely as a result of people who wanted more frequent Debian releases.
> If you're going to wait a few months to update, you are much better off on an actual non-rolling release distro than Arch.
I think that's a good point and that makes sense. And I agree with that from my experience. However, I generally think that one should not wait months to do updates.
To be clear, the whole idea with Debian stable is that you get security patches all the time, but no new (or deprecated) features/APIs. So you update every night, but you upgrade only when a new release comes out.
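Concretely, the nightly update and the once-a-release upgrade are different operations. A rough sketch (the release names are just examples, and a real release upgrade deserves a read of the release notes first):

    # Day to day on stable: pull in security fixes, versions stay put
    sudo apt-get update && sudo apt-get upgrade

    # Once every couple of years, when a new stable release comes out:
    # repoint sources.list at it (e.g. wheezy -> jessie) and dist-upgrade
    sudo sed -i 's/wheezy/jessie/g' /etc/apt/sources.list
    sudo apt-get update && sudo apt-get dist-upgrade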
Compared to that, testing does both: typically a similar frequency of security-related patches as stable (but not guaranteed!), and also migrations of new packages from unstable as soon as they "settle down" (and in "reasonable" sets, so that dependencies work).
So, you want a backported fix for the bash bug in bash 4.2, but not an upgrade to bash 4.3 that could break something depending on 4.2 behaviour (something other than an exploit for Shellshock, that is).
(Now, bash is pretty stable, so it may not be the best example -- but the point remains.)
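You can actually see that policy in the version string: on stable the upstream version stays at 4.2 and only the Debian suffix moves when a fix is backported. Hypothetical output (the exact version numbers will differ):

    $ apt-cache policy bash
    bash:
      Installed: 4.2+dfsg-0.1+deb7u3
      Candidate: 4.2+dfsg-0.1+deb7u3
    # still upstream 4.2; the +deb7uN suffix marks the backported security fix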
If you're running testing, in addition to apt-listbugs you want to have a look at "aptitude safe-upgrade/upgrade" vs "aptitude dist-upgrade" (or apt-get upgrade vs dist-upgrade). A dist-upgrade can be a little more invasive and typically warrants more vigilance than a mere "safe-upgrade". I don't think I can remember a "safe-upgrade" ever breaking anything in my ~14 years of using Debian; it's safe enough to script and run automatically, unless you have very strict policies on uptime/predictability.
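A minimal version of that nightly routine might look like this (just a sketch; it assumes aptitude and apt-listbugs are installed and that you're comfortable with unattended safe-upgrades):

    #!/bin/sh
    # Refresh package lists, then apply only "safe" upgrades: aptitude
    # avoids removing packages here (add --no-new-installs to also forbid
    # pulling in new ones).
    set -e
    aptitude update
    aptitude -y safe-upgrade
    # Leave dist-upgrades (bigger transitions, package removals) for a
    # manual run when you have time to read what apt-listbugs reports.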
I run Debian sid, and I upgrade almost every day, with a few exceptions (like "during or immediately prior to travel"). Doing so lets me catch issues before others hit them, file bug reports, obtain the latest fixes and improvements, and participate in the community that shapes future architectural decisions for the distribution.
> implicit guarantee of "you have to change EVERYTHING all at once every few months"
This is a strange conclusion to draw. What makes you think that debian/stable requires updating everything all at once every few months? To be honest, this does not even sound like a fair characterization of running unstable.