
This article is full of nonsense. The Linux desktop push isn't failing because it has experiences and apps that are similar to Windows and macOS. Being able to run Windows apps on Linux is a benefit, not a failure. As for religious wars over init systems, desktop environments and package managers, competition is making the options stronger, not weaker. Competition is a reason why package management on Linux is far better than equivalents on Windows and macOS.

The main reason Linux hasn't taken off on the desktop is that most users don't care what OS they run; they just want a computer that works. If the PC they buy comes with Windows out of the box, they're going to stick with that. Until you get manufacturers shipping PCs with Linux as the default OS, you're mainly going to see desktop Linux as an enthusiast-only option. It's no accident that one of the devices helping to spread Linux (the Steam Deck) comes with Linux as the default option.



> As for religious wars over init systems, desktop environments and package managers, competition is making the options stronger, not weaker.

Competition can definitely improve things, but it's not universally positive. In particular, endless competition in parts of the operating system makes it hard to build anything on top of them. E.g. if you want to distribute an application for Linux, do you build a Flatpak, or a Snap? Or take a more traditionalist approach and make RPMs, DEBs, etc.? You either pick your favourite and leave out a large fraction of Linux users who disagree, or you have to do more than one of these. This is definitely a drag on the ecosystem.

I agree that most users don't care about the OS, though.


Generally most Linux distributions are literally the same thing underneath. I have recently done an LFS build (using version 12.3 of the book). The same files were in the same directories in Debian, Arch and LFS for the most part.

I even had a look at the source code for makepkg in Arch, and it is literally the same commands in the script that the book has you type in manually.

The packaging critique comes up over the years, but it is a bit overblown.

Building packages for different distributions isn't super difficult. I've built Arch packages using the ABS, plus DEBs and RPMs, and they are all conceptually the same. Having had a quick skim of the Flatpak instructions, it doesn't look that different either.

If you don't want to bother with any of that, you can just have a script that drops the installation either in /opt or ~/.local/. I am pretty sure JetBrains Toolbox does that, but I would need to check my other machine to confirm and I don't have access currently.
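For what it's worth, the "drop it in ~/.local" approach is only a few lines. A rough sketch in Python (the app name, archive name and layout are placeholders, not what Toolbox actually does):

    # Sketch: unpack a release tarball into ~/.local/opt/<app> and
    # symlink the binary into ~/.local/bin (usually already on $PATH).
    # "myapp" and "myapp-1.2.0.tar.gz" are made-up names.
    import pathlib, tarfile

    APP = "myapp"
    prefix = pathlib.Path.home() / ".local"
    dest = prefix / "opt" / APP

    # Extract the archive under ~/.local/opt/myapp
    dest.parent.mkdir(parents=True, exist_ok=True)
    with tarfile.open(f"{APP}-1.2.0.tar.gz") as tar:
        tar.extractall(dest)

    # Point ~/.local/bin/myapp at the unpacked binary
    bindir = prefix / "bin"
    bindir.mkdir(parents=True, exist_ok=True)
    link = bindir / APP
    if link.is_symlink() or link.exists():
        link.unlink()
    link.symlink_to(dest / "bin" / APP)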


If you build an application, The Right Way™ has always been, and probably always will be, a tarball. Leave the hassle of distributing your software to the distributions.
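Concretely, "ship a tarball" just means cutting a versioned source archive that unpacks into a single name-version/ directory, which is what most distro tooling expects to pick up. A minimal sketch (project name, version and file list invented for illustration):

    # Sketch: build myapp-1.2.0.tar.gz with everything under a
    # versioned top-level directory. Names here are placeholders.
    import tarfile

    NAME, VERSION = "myapp", "1.2.0"
    files = ["src", "Makefile", "LICENSE", "README"]

    with tarfile.open(f"{NAME}-{VERSION}.tar.gz", "w:gz") as tar:
        for path in files:
            # arcname prefixes each entry so the archive unpacks
            # into myapp-1.2.0/ rather than the current directory
            tar.add(path, arcname=f"{NAME}-{VERSION}/{path}")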


This is absolutely not a solution. It more or less works for a few big widely used applications like Firefox & Libreoffice, but more niche applications can't realistically get someone in every major Linux distro interested in packaging them. And if someone does take an interest once, there's no guarantee they stay interested to package updates. Distro maintainers need to be highly trusted (give or take AUR), so it's not easy to add many more of them.

On top of that, some of the biggest Linux distros will only release application updates with their own OS releases, on a cadence of months or years. Software developers expect to be able to deliver updates in hours to days (at least in the consumer space - highly regulated things like banking are different).

There are good reasons why RedHat and Canonical, the companies behind some of the biggest Linux distros, are pushing app distribution systems (Flatpak & Snap) which aim to be cross-distro and have upstream developers directly involved in packaging. There are absolutely downsides to this approach as well, but it would be nice if we could stop pretending that traditional distro packaging is a marvellous system.


This is a complete and utter non-starter for most software developers.

On pretty much every other operating system out there, I as the application author have control over the release cadence of my software. On some platforms, I can simply release an installer and be done. On others, I send the app through some form of certification process, and when it's complete it appears on the App Store.

Linux is rather unique in that it has tasked the distro maintainers to be tastemakers and packagers of the software that appears in their repositories. They control what version is used, what libraries it's linked against and their specific versions, how often it's updated, which additional features are added or removed before distribution, et cetera.

From the outside, this is complete and utter insanity. In the old days, you had to either static link the universe and pray or create a third-party repository per specific distro. Thank goodness these days we have numerous ways to cut distros out of the loop - Docker, Flatpak, AppImage and of course Steam if you're a gamer.


It's not about competition. Red Hat employees pushed their idea of things, and the volunteers either ate it up or left.

Red Hat's and Canonical's paying enterprise customers are what's keeping the Linux ecosystem alive. No one else brings the required manpower to the table.


> It's not about competition. Red Hat employees pushed their idea of things, and the volunteers either ate it up or left.

This is a narrow view about how innovation happens in Linux and related software. Yes, Linux-focused companies are driving many of the changes, but there is plenty of exploration of ideas that happens outside of those companies.


I was thinking about maintaining and keeping things running (like you would do with cars and houses and anything else except software) and less about innovation and change. I doubt there is a shortage of ideas that are being explored.

Like, innovative and fresh stuff is cool, but at the end of the day you need to keep your business running and not breaking down.


Dunno about not caring about the OS. My mum, who's not techy, got persuaded to get a MacBook after Windows and is finding it a big learning curve. I remember when Walmart sold Linux machines; they gave up because buyers returned them when they found their Windows stuff didn't run. I'm a fairly normal user and certainly care whether it's Mac, Windows or Linux. I wouldn't run Linux as my main OS as I spend a lot of time in Excel.


People definitely care about applications they use a lot, and MS Office is a big one - even if LibreOffice would work just as well for many use cases, people are hesitant to give up what they know works.

The OS does make some difference, but I think that if all the same applications were available, a lot of people could switch without much difficulty. In some ways going from Windows to Linux might be easier than Windows to Mac, because many Linux distros are happy enough to conform to UI conventions that are familiar from Windows.


> do you build a Flatpak, or a Snap?

.appimage


AppImage is, in fairness, a clever idea. But it's also yet another option. It only solves the mess of competing formats if it wins, if it becomes the normal way to publish user-facing software on Linux.


It's because the software they use is not available. Set aside the argument that there are alternatives; for some software there simply are none. I would use Linux if Autodesk made Revit and AutoCAD for Linux.

I would venture to guess that kids know the difference between Mac and Windows and probably prefer one over the other.


Even more so, users don't care what kernel their OS is running, nor do they even know what a kernel is.

It's entirely possible for a large enough brand to ship a Linux-based desktop OS to mass adoption. It has already been done once with ChromeOS.

Linux will never be the name users remember, and it's not meant to be.


> Being able to run Windows apps on Linux is a benefit, not a failure.

It is a massive moral failure though. It shows that after two decades of work, the Linux community has been unable to build a simple sane functional stable development environment better than Win32.


"Sane" here is bearing a lot of weight. Developing on Linux is far easier than developing on Windows. I've never seen a Windows project as simple as nq[0] or dwm[1].

[0]: https://git.vuxu.org/nq/

[1]: https://git.suckless.org/dwm/files.html


Huh? Does the mountain of software written for Linux, to the point where Windows added Linux support to attract devs, mean nothing?

Surely WSL is not a moral failure for Microsoft.


Neither has Microsoft, Google nor Apple.


A random macOS binary is more likely to run on another macOS install from any time in the last half decade than a random Linux binary is to run on another install of the same distribution.

Even Apple's famously fast deprecation is rock solid by comparison.


I'm not sure why you think this is a good metric; the space of "random Mac binaries" is far smaller. There's probably something to be said for this "curation," but you pay for it, both literally with money and in limited selection.


... which is much less of a problem for Linux than for closed-source ecosystems.


I don’t know; you don’t think having Win32 be the unofficial API is a problem?

It literally means Windows will always exist - as the preferred IDE and Reference Spec for the Linux desktop. It also means all evolution of Linux will be ironically constrained by Win32 compatibility requirements.


It isn't and won't be. So no.


Meh. It shows a good part of software (namely, games) is written for Windows, because the userbase is Windows, because Windows is the lion's share. And it shows people on Linux want that software to run. It's an admission, but not a moral failure.



