Flatpak 1.0 Released, Ready for Prime Time (flatpak.org)
254 points by alexlarsson on Aug 20, 2018 | hide | past | favorite | 204 comments


Flatpak and Snaps are a great step forward for Linux packaging and usability.

I think there's a vocal segment of the Linux community that doesn't understand what a major roadblock it is for the general user that different distributions have completely different ways of packaging, distributing, and updating applications. Don't worry, no one's taking away your apt-get, pacman, rpm, eopkg, makefiles, etc.


Snap needs to fix their non-standard directory placement. They ignore the FHS [1] by creating /snap at the root level. And they ignore XDG [2] by persistently creating ~/snap in your home directory. I've heard Flatpak is better in this regard.

I'm sure it could help with application delivery so long as it's constrained to that. I don't want half of my standard libraries or utilities suddenly becoming sandboxed. For full blown applications sandboxing would be welcomed.

[1]: https://refspecs.linuxfoundation.org/FHS_3.0/fhs/index.html

[2]: https://specifications.freedesktop.org/basedir-spec/basedir-...


Flatpak uses /app so I’m not sure how it’s really “better”

The problem is they both need to create an area with a full separate file system, which doesn't really fit with the FHS.


For flatpak, the installed apps go to /var/lib/flatpak for system-wide, ~/.local/share/flatpak for user-specific.

Then there's the question of app-created data, that would normally go into some dot-directory. Flatpak by default redirects all XDG dirs into ~/.var/app/${appid}, but as a part of the required permissions, the apps can request access to the user homedir or specific subfolders.

Some apps abuse it. Yes, I do mind when an app litters into ~/Documents/. I'm looking at you, Viber.
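The per-app redirection described above can be sketched in a few lines. The function below mimics how the XDG base dirs end up under ~/.var/app/&lt;appid&gt; — a simplified illustration, not Flatpak's actual code, and org.example.App is a made-up app id:

```python
import os

def flatpak_style_env(app_id: str, home: str) -> dict:
    """Sketch of the per-app XDG redirection: each base dir points
    under ~/.var/app/<appid>/ instead of the usual ~/.local/share,
    ~/.config, and ~/.cache."""
    base = os.path.join(home, ".var", "app", app_id)
    return {
        "XDG_DATA_HOME": os.path.join(base, "data"),
        "XDG_CONFIG_HOME": os.path.join(base, "config"),
        "XDG_CACHE_HOME": os.path.join(base, "cache"),
    }

env = flatpak_style_env("org.example.App", "/home/alice")
print(env["XDG_CONFIG_HOME"])
# /home/alice/.var/app/org.example.App/config
```

Anything outside this private area is what the permission system (and `flatpak override`) has to grant explicitly.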


/app isn't any better. If they insist on having a fully realized filesystem, they can put it under /opt. Or they could use symlinks to construct it under /run/{app,snap}, similar to a chroot setup.


/app is not visible on the host system, only inside the container. It is chroot on steroids.


I don't mind that behavior.


/app is never visible on the host though


They just create ~/.var if you install a program for a single user; if installed as root, it goes somewhere under /usr/share.


Does it matter?


Do standards matter?

Yes.


Depends. If they're holding back a better future, then no.


I installed Ubuntu 18.04 the other day and was prompted to install some common software such as Slack, Discord, and Spotify. I installed all of them, and it mostly worked (navigating to another page in the software centre cancels an ongoing install with no warning).

When I ran mount, I noticed that every package I'd installed had its own entry in the mount list. After I rebooted the machine, these mounts were no longer listed and the software I'd "installed" was no longer available through the GNOME menu.

I'm not against finding new and better ways to package software, but this experience left me with the impression that the technology is not quite ready to be rolled out and adopted en masse.


The technology has existed since the 80s; it's just that the stuff people are building today ridiculously overcomplicates the solution. Classic Mac OS had simple single-file programs you just dragged and dropped anywhere you wanted on a disk and ran. The same was true of DOS (it was a folder, but otherwise the same), RISC OS, NeXTStep, and modern macOS via NeXTStep.

Even Linux has had several: NeXTStep style Application Bundles in GNUStep, AppDirs in ROX filer, and AppImage today. They're just not over-engineered enough for the Linux Desktop community to embrace or something.


What you call “over-engineering”, the developers of Snap/Flatpak call “security.”

Portable app bundles are not the goal here. The goal is portable app bundles that the user can install from arbitrary sources on a whim without putting their OS at risk of malware or their data at risk of exfiltration.

Y’know, like on phones.

But also, since these designs aren’t for consumer electronics appliances with a central vendor, but rather are following the FOSS philosophy, you can’t just use a weak sandbox powered by nominal capability manifests signed by a central code-signer, where the app could do arbitrary things outside of its capabilities and the sandboxing wouldn’t catch it (i.e. the macOS approach.) Instead, you have to have a strong sandbox that will actually enforce the capability manifest to restrict what the app can do.

Oh, and the bundles can update by self-modifying their contents, creating new executable binaries in the process, so your sandbox can’t be based on signing the initial contents of the sandbox, but rather has to sandbox whatever is attempting to run in there right now.

If you can come up with a design that fits those criteria and is simpler than Snap or Flatpak, by all means, share.
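To make the "enforce at run time, not at signing time" distinction concrete, here is a toy Python broker that checks a declared capability list on every file request, regardless of what the (possibly self-updated) app code currently contains. Names and structure are hypothetical; this is the principle, not Flatpak's implementation:

```python
import os
import tempfile

# Hypothetical capability manifest: the only filesystem subtree
# this "app" declared access to.
ALLOWED_ROOT = tempfile.mkdtemp()
MANIFEST = {"filesystem": [ALLOWED_ROOT]}

def broker_open(path: str):
    """Checked at call time: a signature over yesterday's binary says
    nothing about what the code is doing right now. (The prefix check
    is deliberately simplistic for the sake of the sketch.)"""
    if not any(path.startswith(root) for root in MANIFEST["filesystem"]):
        raise PermissionError(f"{path} not granted by manifest")
    return open(path)

# Inside the grant: works.
note = os.path.join(ALLOWED_ROOT, "notes.txt")
with open(note, "w") as f:
    f.write("ok")
print(broker_open(note).read())   # ok

# Outside the grant: refused, no matter who signed the app.
try:
    broker_open("/etc/shadow")
except PermissionError as e:
    print("denied:", e)
```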


" The goal is portable app bundles that the user can install from arbitrary sources on a whim without putting their OS at risk of malware or their data at risk of exfiltration."

This is not now nor will this ever be a thing. If you can run software on the host there will always be a way to compromise the host. It is not even 100% safe to run software in a vm because exploits have been found to break out.

Maintaining security on your phone requires the app store owner to invest time in proactively screening manually or automatically for malware or reactively removing it when found. It requires users to avoid stuff that looks like scammy bullshit or software from unofficial sources.

Both of these layers leak and when they do automated protections usually do as well because malware authors can easily test against existing protections, learn from one another, and distribute what works.


> If you can run software on the host there will always be a way to compromise the host. It is not even 100% safe to run software in a vm because exploits have been found to break out.

That's... inaccurate. More valid statements would be:

- In a sufficiently complex OS, it is unlikely you will be perfectly safe running untrusted executables even with reduced permissions.

- Popular modern OS desktop distributions are very complex out of the box.

- Sandboxing prevents certain classes of attacks effectively, but should not be relied upon as a sole line of defense.

- Vulnerabilities have been found in VMs and containers, but they afford greater protection and isolation than running a process directly on a host system.

IOW, don't go No-True-Scotsman on security. A vulnerability does not invalidate all benefits of an architecture.


> If you can come up with a design that fits those criteria and is simpler than Snap or Flatpak, by all means, share.

So a well packaged app utilising SELinux|AppArmor profiles?
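For concreteness, a static profile of the sort being proposed might look roughly like this (hypothetical app name and paths; a sketch of AppArmor's profile language, not a vetted policy):

```
# /etc/apparmor.d/usr.bin.exampleapp -- illustrative only
/usr/bin/exampleapp {
  #include <abstractions/base>

  /usr/bin/exampleapp                 mr,
  owner @{HOME}/.config/exampleapp/   rw,
  owner @{HOME}/.config/exampleapp/** rwk,
  owner @{HOME}/Documents/**          r,    # grants are all-or-nothing per path
  deny  owner @{HOME}/.ssh/**         rwx,
}
```

Note that rules like these are fixed when the profile is written; they can't express a grant that depends on a choice the user makes at run time.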


What SELinux/AppArmor profile would allow an application to read exactly the one file in $HOME that the user selects in the file picker, but nothing else?
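For what it's worth, Flatpak's answer to exactly this is the portal mechanism (xdg-desktop-portal): the file chooser dialog runs outside the sandbox, and the app receives access to just the chosen file, essentially as an open descriptor, rather than blanket path permissions. A toy sketch of the fd-passing idea (no D-Bus, hypothetical function names):

```python
import os
import tempfile

def trusted_picker(path: str) -> int:
    """Stands in for the out-of-sandbox file chooser: the user's
    selection is resolved here, and only a descriptor crosses over."""
    return os.open(path, os.O_RDONLY)

def sandboxed_app(fd: int) -> bytes:
    """The app side: it never sees or needs a filesystem path,
    so no static path rule has to exist for it."""
    with os.fdopen(fd, "rb") as f:
        return f.read()

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"user-selected document")

print(sandboxed_app(trusted_picker(tmp.name)))
# b'user-selected document'
```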


It isn't really that hard, just give the user the ability to determine if and how an application is sandboxed. Then it doesn't matter if the binary changes during an update, the user's level of trust in the vendor who provided that update has not changed (else they'd have disabled the updates), so there's no reason to change the sandbox permissions. You only really need to sandbox stuff you're unsure of, or stuff that misbehaves but you need to use anyway.

This condescending idea that users can't be trusted to determine this stuff for themselves and therefore we need a centralized signer and a bunch of complicated management framework to deal with it is part of the reason the FOSS philosophy has yet to produce a desktop anyone cares about.


I can determine whether an app is trustworthy, sure. But you know what? Sometimes I actually want to install an untrustworthy app. Sometimes an untrustworthy app is the only app that does what I need.

Your argument, by analogy, is “you should trust people to know not to sleep with people with STDs.” Well, you know what? Some people want to sleep with people with STDs. Sometimes those are their significant others. They still don’t want to catch something.

In both cases, the answer is the same: a condom.

A sandboxed App Store is, basically, a brothel where condom use is enforced. You can meet strange apps, play with them, and not worry about it. Because of the brothel’s policy, nobody the brothel hosts is risky. Your safety is enforced at the level of choosing the source.

Whereas something like Ubuntu’s PPAs, are more like a bar. Who knows what you’ll catch? Any individual app might decide to “wrap it up” with SELinux/AppArmor, but you can’t enforce it at the app-store level.

(Also, completely dropping the metaphor: the iOS App Store is frequently exposed—at least for free purchases—to children or even infants. This is actually a capability people want. This is certainly not a case where the user can determine for themselves whether an app is trustworthy.)


>I can determine whether an app is trustworthy, sure. But you know what? Sometimes I actually want to install an untrustworthy app.

Yeah, that's why I said this:

> You only really need to sandbox stuff you're unsure of, or stuff that misbehaves but you need to use anyway.

I'm totally for sandboxing, at the discretion of the user. It doesn't require complicated infrastructure to do this.


Apple, which employs over 100k people, has employees create automated tests and manually test apps for inclusion in the store, which requires a 99 USD fee for a developer to access.

A potential malware author must pay 99 USD to submit apps and pass review. If an app is detected as malware in review, that 99 USD is burned and will have to be spent again, potentially repeatedly.

This is probably why there are millions of infected Androids and comparatively few infected iPhones.

Red Hat, with 10% of the staff and 1% of the annual revenue, doesn't require any fee to release applications. This probably doesn't scale to Android or iOS proportions unless your sandboxing is perfect.

Unfortunately there is no 100% safe way to fuck disease ridden whores and no 100% safe way to run malware ridden apps. This is a dangerous fiction and an unworthy goal.


To be clear: clients run “malware-ridden apps” safely every day. They’re web-apps. Web browsers are actually-competent sandboxes. (Even PNaCl worked fine, despite nobody wanting to use it.)

Likewise, servers run “malware-ridden apps” every day as well. Do you think AWS or GCP is getting its infrastructure infected when customers run their arbitrary code on it? No. Not even on the shared clusters like Lambda/Cloud Functions. These are competent sandboxes.

There are numerous other examples—running everything from user-supplied DFA regexps to SQL queries on shared servers (complete with stored procedure definitions) to arbitrary Lua code, server-side, in an MMO.

We programmers know how to (automatically!) sandbox arbitrary untrusted code. We’ve done it successfully, over and over. We just haven’t done it for GUI desktop apps yet.

That fact has much more to do with the legacy architecture of these GUIs than with any inherent problem in sandboxing desktop GUI apps.


Hackers bypass the $99 fee by releasing infected Xcode tools and having lots of individual developers unknowingly submit infected apps for approval. https://en.wikipedia.org/wiki/XcodeGhost


The FOSS of today is as much driven by buzzword bingo as the big corporations (perhaps because so much of it is made by people working for big corporations nowadays). And the really big buzzword in FOSS these days is containers, in large part thanks to the massive presence of cloud-derived thinking (I have been told that basically all that mattered was cloud, as that was where the usage was).


I've noticed, after updates I believe, that the calculator app just won't open. It will sit there with the spinning wheel thing. After a reboot it will be fine. I noticed that the calc app is installed as a snap; I'd never seen this behavior before.


Wait, why is the calculator app installed as a snap? As a technology preview or proof of concept it's not very impressive—especially if it doesn't work!


I've encountered the same issue as well. I found that I could "fix" the issue by uninstalling the snap version and installing the apt version instead.


I thought that was just me, haven't reported it yet as I don't use calc enough to bother, but probably should.


Had a similar experience with Flatpak. Installed some software two days ago, and yesterday Flatpak told me it was not installed. Teething problems, I guess.


Could you file a bug report here and send me a link?

https://bugs.launchpad.net/ubuntu/+source/gnome-initial-setu...

I'll chase it up with the responsible team.


Sure. I'm at work now but I'll try to get to it this evening.


Perhaps not; I've had a good experience on Fedora 28. Even though I love Fedora, Snap and Flatpak need to work flawlessly on Ubuntu before prime time, as it's the most popular distro.


Eh, it's a step forward, I wouldn't call it a great one. A great step forward would be to remove all this complicated runtime garbage and keep things simple. GNUStep App Bundles, ROX AppDirs, and AppImage are much better ideas, if you ask me, and the wide adoption of one of those would be a great step forward.


> Eh, it's a step forward, I wouldn't call it a great one. A great step forward would be to remove all this complicated runtime garbage and keep things simple. GNUStep App Bundles, ROX AppDirs, and AppImage are much better ideas, if you ask me, and the wide adoption of one of those would be a great step forward.

I go back and forth on this one.

On one hand I think it would make things much simpler for users who have enough storage and bandwidth.

On the other, when there is a vulnerability in a major library we would have to wait for every application to be updated, if they're even being maintained.

The thing is that package managers have mostly abstracted away the headaches of dependency hell, but have also raised the barrier to entry, so any software not participating seems suspect.


> On the other, when there is a vulnerability in a major library we would have to wait for every application to be updated, if they're even being maintained.

Yes, that's a drawback, but how often is there a major library that shouldn't be part of the set of libraries included in the base OS install? And for the ones that aren't, if the vendor chooses not to update, at least you have the option of continuing to use the older version on your own terms, instead of having it broken by the package manager when a dependent library is swapped out from under it.

We live with this sort of thing every day in real world business and it honestly hasn't been a big deal.


Just hoping the host never breaks ABI is why we are here in the first place; solutions like that simply don't work, and you need a runtime to be portable.


The Linux Kernel ABI is incredibly stable, thanks to Linus, it's only the userland built up around it by other people that doesn't give a damn about compatibility. This isn't a significant issue for other OSs because they have a well defined base system and care about compatibility (because users care about compatibility).


I'm not entirely sure why you are being downvoted. The statements may not be entirely true and are a bit inflammatory, but they match my experience to a large extent. One of the things I like about Docker, Flatpak, etc. is that they allow entire apps and their dependencies to be encapsulated together. Less interruption and downtime fixing things.

When I use Linux as my main OS, I lose about a day or so every other month to dealing with upgrade/update fallout. It's discouraging to say the least. With Windows, it's been about every other year, and macOS every major version gives me a little grief for something. Both far less frequent than any Linux distro I've used.

Don't get me wrong, I do enjoy using linux, that said, I still have very weak trust of it as my primary desktop/laptop OS. I use all three regularly.


> When I use Linux as my main OS, I lose about a day or so every other month to dealing with upgrade/update fallout.

One of the big benefits of Linux is you have control over this by picking your distribution. If you used, for example, Debian you wouldn't have this problem as updates don't break things. If, on the other hand, you don't mind dealing with the occasional update issue and want bleeding edge packages then something like Arch might be better.


But choosing a distro that doesn't update gives you other problems. Flatpak allows for a more hybrid approach where you get up to date software but a stable base. (Plus you can rollback software if those updates lead to problems)


Yes. I wasn't arguing against Flatpak, just the impression that Linux was unstable. Personally I'm very much looking forward to Flatpak or Snap taking off... I run Debian stable on my main system and there are always a few packages I have to personally backport.


I'm not saying it can't be stable... but by your own statement there are packages you have to backport yourself at times. There's a pull between current and stable, and Linux distros in general don't do as good a job with that as macOS or Windows, imho. Doesn't mean I don't use it; just pointing out the flaws that cause me pain.


What distro do you use that breaks this often? Ideally this should consume a few hours every 1-2 years.

For a user whose machine is managed by others it should consume as long as the machine takes to reboot.


You might like AppFS, it solves the packaging problem. It doesn't include sandboxing, but that's a separate system that could be built on top.

https://AppFS.rkeene.org/


>I think there's a vocal segment of the Linux community that doesn't understand what a major roadblock it is for the general user that different distributions have completely different ways of packaging, distributing, and updating applications. Don't worry, no one's taking away your apt-get, pacman, rpm, eopkg, makefiles, etc.

There's a vocal segment of the mobile app developer community that doesn't understand what a major roadblock it is for the general user that iPhones and Android phones have completely different ways of packaging, distributing, and updating applications.

In reality, it doesn't matter.

I saw a lot of near-pension-age people, first-time computer users, being taught basic usage of Ubuntu at around version 5.10. In our office, we have administrative and corporate treasury staff being taught to use it. Getting them to use it without complaining "where is the start button here?" is a matter of having them RTFM* or get fired.

Not a single time did we have to fire anybody over that.

The irony here is that the few Windows machines in the office are used by engineering, for things like SolidWorks, different physics simulation packages, Cadence, Vivado, etc.

[*] Available in eloquent Chinese, unlike Microsoft's manuals, which read worse than Google Translate output.


What worries me more is that packaged applications come with 1-n standards again.

There is no value in having a package manager on top of one.

The real problem is that actual software is trying hard to be too complex and violate best practices for installing software.

User managed Apps are okay. But: only for private use and never for company use since user control IS the actual issue there.


> User managed Apps are okay. But: only for private use and never for company use [...]

Well, private use is by far the majority use case, isn't it?


So, what about all the people building the software and making a living from that?


Ready to ruin the security of Linux, you mean. The split between package vendor and package maintainer has classically been the primary reason for malware being rare on Linux. Getting maintainers out of the loop for auditing packages, ensuring security updates go out, etc - is an awful idea. Sandboxing applications is great, but it can be done without subverting the package manager.


There are several Linux distros, and I'm a single OSS developer. I once tried making my program available for Debian (deb), Ubuntu (PPA), and Fedora (rpm). I ended up supporting only the AUR. There are too many differences between package managers, different requirements, too many websites I have to register on, too many config files for each distro. Support from people who work as packagers is non-existent, questions on SO are out of date, and errors and warnings from packaging software are misleading or completely useless. It's a huge mess, and if you think the existing system is better than Flatpak/Snap/AppImage, I'm sure you haven't tried porting your software to the top 6 distributions. Also, enjoy dependency hell: Debian has old Qt and PA, Mint has PA compiled without noise-cancelling flags, Ubuntu had SQLite3 with non-working encryption, Fedora had a backported Qt5 patch (old Qt with the latest patch) which fixed one issue but broke CSS on the Cancel/OK button group.

I welcome AppImage and Flatpak.


> I'm sure you haven't tried porting your software to top 6 distributions

In his world developers don't do that; they throw a pile of source files into the wind and let package maintainers put it together for a distribution.


Yeah, maintainers also modify software, backport it, compile with different flags than were tested, remove patches they don't like, and don't upgrade software for years or decades. Remember the time the xscreensaver author asked Debian to remove it from their distro, after adding a warning in the GUI saying:

    _("Warning:\n\n"
      "This version of xscreensaver is VERY OLD!\n"
      "Please upgrade!\n"
      "\n"
      "http://www.jwz.org/xscreensaver/\n"
      "\n"
      "(If this is the latest version that your distro ships, then\n"
      "your distro is doing you a disservice. Build from source.)\n"
    ),

YES, THAT'S WHAT USERS WOULD SEE, if only the Debian maintainer hadn't removed this patch.

jwz.org/blog/2016/04/i-would-like-debian-to-stop-shipping-xscreensaver/

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=819703#84


And most of that comes from upstream not giving a damn about API stability!

Thus distros have to work around that by freezing lib versions and any software depending on said lib version for the support period of the distro release (5-10 years).

What is "funny" is that Debian here is doing no different from what RH had been doing with RHEL for ages, yet Debian gets all the flak...

Never mind that xscreensaver has a very prominent splash screen with JWZ's email on it on every launch. Maybe if he didn't want every idiot on the net emailing him, he shouldn't show it in their faces all the time. He may be brilliant in some ways, but sometimes he can be a raging asshole.

Hell, sometimes I wonder why he keeps supporting Linux, as he seems to have no love for anything but Apple these days.


Note that jwz.org redirects any requests from HN to a rather NSFW Imgur picture. (Also filled w/ ad hominem attacks towards HN users.)

If you want to view the jwz.org link, copy the URL, and open it in Incognito.


Fixed by removing https://


I once had a maintainer remove random lines of code from my modern C++ codebase until it compiled on his old GCC / Qt combination. He then went on making bug reports of random segfaults on almost every user interaction.


In their defense, if they remove enough lines, eventually the segfaults will go away.


And then your package reaches distributions 1-5-7 years later. I guess that's an acceptable software distribution practice. If you're still living in the 80's :D


Want things to make it into STABLE distros quicker? Don't break APIs just because the latest bling-bling framework/language/whatever got released!


There are many people who have got tired of wanting the absolute latest exciting version of their text editor, mail client, IRC client, spreadsheet and want things to just work and leave them alone. The larger part of me dreads debian upgrades these days.


What about the many people who do want the latest version of their text editor etc.? Those people now have few good options. Is it fair to force them to conform to your conservative standards?


Package managers work well for the large proportion of software on Linux that is open-source, and has enough users to warrant the effort of packaging it separately for each distro, and doesn't require you to have the latest version (or specific older version) all the time.

However, there is still a large amount of software out there for which the above does not hold. That software is not currently being installed with a package manager - it is the curl | bash scripts you see all the time, as well as a whole bunch of other, more manual methods of downloading and installing. It is this software that solutions like Flatpak and Snap are for, and they improve both security and usability in these cases.


It improves usability only; it also creates a false perception of security. Malware has already been found in npm packages, AUR packages, and other packaging systems without maintainers.


Do you not think Flatpaks (and Snaps) are more secure than curl | bash scripts?


Is the sandboxing in Flatpak completely useless then? Any details on this?


It is a work in progress. A few packages are very secure but many have lax permissions for now.

There are big improvements coming, like PipeWire, DConf sandboxing, etc., but it will not happen overnight.


It depends. Historically, sandboxing used to protect from takeovers, because computer time and network bandwidth were costly. But today, user information is much more valuable than computer time. Flatpak partially protects from attacks on both the system and user data, but this protection is not perfect. We should expect that Flatpak's protection is (or will be) on par with Android's anti-malware protection.


Not the person you're replying to, but you're intentionally being obtuse to try and prove a point they never argued.

v_lisivka never mentioned sandboxing as an issue. They were clearly talking about the fact that it's a free-for-all user-submitted repo, which has a proven history of people adding malware.


The comparison above was vendor-provided install scripts.

From my understanding, Flatpak provides sandboxing and asks the user to confirm exceptions, which if it works reliably does seem like I win something security-wise going from "go to vendor website, use their installation script" to "go to vendor website, get their flatpak thing, know that it can't totally screw my system without me granting it exceptions".

Parent claims it's only a usability benefit, so I'd like to know details about the problems with its sandboxing (totally willing to accept that they exist, sandboxing is tricky and not a cure-all)


You're overly aggressive in your reply, GP just asked a legitimate question.


The split might have some minimal value, but it also has many problems. The sheer amount of duplicated work, by literally hundreds of different communities, is almost beyond insane.

A massive amount of identical work that, the vast majority of the time, does not improve anything for the end user.

Security should come from moving to a well-maintained platform, automatic scanners, and many users. Instead of package maintainers being the 'security reviewers', we should have app stores and sub-repositories with security reviews.

Nothing stops a distro from only offering flatpaks from their own repo.

Centralizing and automating these things will allow for far better security processes in the long run.


Don't forget the craziness in /etc. Where do I edit the hostname? Where do I edit the network card configuration? How can I check the distribution name and its version number? Where are the configuration files for Nginx and how are they structured? Etc, ad nauseam.

Just try to do some of these steps on 2-3 versions (versions for non-rolling distributions) of each of the following distributions: RHEL, Debian, Ubuntu, Arch, Gentoo.

The Linux distribution ecosystem is insane.
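At least the distribution-identity part of this has since converged on /etc/os-release (standardized by systemd and now widely adopted). A minimal parser for its KEY=VALUE format, run here on a sample string rather than the live file:

```python
# Sample in the /etc/os-release format; values are illustrative.
SAMPLE = '''\
NAME="Ubuntu"
VERSION="18.04.1 LTS (Bionic Beaver)"
ID=ubuntu
VERSION_ID="18.04"
'''

def parse_os_release(text: str) -> dict:
    """Parse the simple KEY=VALUE (optionally double-quoted) lines."""
    info = {}
    for line in text.splitlines():
        if "=" in line and not line.startswith("#"):
            key, _, value = line.partition("=")
            info[key.strip()] = value.strip().strip('"')
    return info

info = parse_os_release(SAMPLE)
print(info["ID"], info["VERSION_ID"])
# ubuntu 18.04
```

Hostname and network configuration, on the other hand, still vary from distro to distro.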


> The split between package vendor and package maintainer has classically been the primary reason for malware being rare on Linux.

It's also one of the reasons users and proprietary software have been rare on Linux.


> The split between package vendor and package maintainer has classically been the primary reason for malware being rare on Linux.

> It's also one of the reasons users and proprietary software have been rare on Linux.

It may be why proprietary software is lagging, but I think most users prefer an OS-level package manager. They just didn't know about them until the Appstore.


IMO proprietary software lags far more because you do not know if your chosen framework (unless it is Qt, heh) will still function 5-10 years down the line with minimal investment in updates.

The Linux kernel is everywhere thanks to Torvalds (overly late in the opinions of some BSD devs) declaring the API/ABI sacrosanct, and holding the other kernel developers to that (much to the chagrin of some that want to do hail mary changes for sake of security or ego).

But the userspace above has been reimplemented again and again because the various libs and such are fundamentally unstable. In far too many cases you can't even do an x.y.z+1 update without getting some kind of breakage.

Often because someone decided that spec adherence was more important than real life stability, and thus pushed a patch through that altered the API behavior to be more in line with a spec that nobody had complained about for a decade or more.


Which is why everyone loves the Windows Appstore and uses it exclusively.

The Appstore works on phones and tablets because they are primarily consumption-only devices, and because users are not given any other choice (also, the installer/uninstaller model has a lot of its own suck). I don't know about you, but that's not what I want out of personal computing.


Most complaints about the appstore revolve around things like it being slow, clunky, failing to work, being designed around DRM, forbidding win32 early on, etc.

Nobody is complaining that applications are in a central location.


I am assuming that's sarcasm when it comes to Win10, but 'exclusively using the app store' comes closer to the truth for non-dev macOS users, at least that's my impression.


Strange. I have a whole load of proprietary software on my laptop, all installed via native packages.


The packaging system has very little to do with user adoption of a more complicated interface, or the success of proprietary software in a purposely open system. From the user's standpoint, package management on Linux is usually just hitting the "update" or "install" buttons.

Edit: I have no idea why you think this is condescending.


It's this kind of condescending attitude towards users that keeps Linux a non-option for people. What happens when a user wants to install something that isn't in the repo? Or do you just imagine that "average users" don't do that (and that no one else matters)?

Edit response: It is condescending because it imagines the user is incapable or uninterested in the way their system actually works, as a mere pusher of buttons to make the blinkenlights go. Sure, some users are like that, but largely it is a convenient excuse to dismiss criticism.


Everything you say is completely applicable to Android as well, and it's not clear how that is making the platform a non-option for people. Very few people install third-party packages.

The market share of any PC desktop operating system has mostly to do with the type of office work that took place in the 90s, when computers took over.

That's slowly changing as the web takes over, and Chromebooks, Android, Macs, and "regular" Linux desktops become viable platforms for sharing your work with others.


On Android, software developers package applications and push directly to the repo - no 3rd party is involved for a packaging step. Developers have a high degree of control in the application's packaging.


> Everything you say is completely applicable to Android as well, and it's not clear how that is making the platform a non-option for people. Very few people install third-party packages.

Consider that the use case for a personal desktop computer may be different from that of a smartphone or tablet. There's more to personal computing than a web browser.


Right, but the Android comparison was meant as a counter example to how packages outside the main repository being difficult to install is somehow a deal breaker to ordinary users. It's not. Desktop computer market share is mainly due to historical reasons.

Of course an easier to use third party packaging system would make the desktop easier to use, but that's not where the pain points are. (Especially since most third party binary packaged software is just something you unpack and run.)

Even the most easy to use packaging system you could imagine wouldn't impact the Linux desktop market share measurably.


> Desktop computer market share is mainly due to historical reasons.

Historical reasons like it being an open platform that developers could target without the requirement of third party approval, among other strengths.


Android is very different. You only need to make one file for many different stores, and that file is pretty trivial to install even if it isn't in a store.

Linux doesn't have anything like that traditionally. You have to make a deb, rpm, whatever Gentoo uses, etc. And ./configure && make && make install is laughably unintuitive.


It depends. If you use the NDK you have to make sure you can build for both x86 and ARM (the Play Store will hand out the relevant variant silently).

Also, Google has done some serious work to ensure at least some backwards compatibility (providing a support library, whose exact name I fail to recall, for backporting various functionality).

While the FOSS world seems to lean on "compile it" along with "lacking resources" as an excuse for not maintaining anything more than incidental compatibility across releases.


What problems does Flatpak solve from a user perspective? Assume the user is already using Ubuntu, and will download and run any shell script without a second thought.


That shell script may not actually work because it makes assumptions about the structure of the underlying OS that may have changed 10 minutes ago on the arbitrary whim of a distro developer.

The installer/uninstaller paradigm is almost as bad as the package manager paradigm in my opinion.


Good. Proprietary software doesn't belong on Linux, or anywhere else for that matter.


But that doesn't really work for proprietary software. I use Flatpak/Snap for Spotify, IntelliJ, Slack etc. Audited packages are not really an alternative here, and (partially) sandboxed, distro-agnostic containers are the next best thing.


Sure it does. I add that company's repo to my sources and install their packages. I'm trusting their keys to install their software. I'm not trusting them to install software they have zero control over.
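For what it's worth, the modern way to wire this up on Debian/Ubuntu is a per-vendor keyring plus a signed-by source entry; a sketch, where the vendor URL, key path, suite, and package name are all hypothetical:

```shell
# Fetch the vendor's signing key into its own keyring (hypothetical URL/paths)
curl -fsSL https://example.com/keys/vendor.gpg \
  | sudo gpg --dearmor -o /usr/share/keyrings/vendor-archive-keyring.gpg

# Add the repo, scoped to that one key via signed-by
echo "deb [signed-by=/usr/share/keyrings/vendor-archive-keyring.gpg] https://example.com/apt stable main" \
  | sudo tee /etc/apt/sources.list.d/vendor.list

sudo apt update && sudo apt install vendor-app
```

The point of the signed-by scoping is exactly the 1:1 trust model described above: packages from that source are only accepted if signed with that single vendor key.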


Flatpak repos are GPG signed, nothing changes.


Yes it does. I go from trusting one vendor (with one key) to install one package (and dependencies) to one 3rd party repo (with one key) to install N packages, and the owners of the 3rd party repo don't verify the uploaders to their repo.

That's going from trusting 1 person to trusting thousands.


There is nothing stopping one repo containing one app+runtime.


Sure, just like there is nothing stopping them doing the same with apt/yum/etc... the thing is that they don't. So if they stop publishing their native packages and push to a handful of central repos, I'm forced into a 1:N trust situation rather than the 1:1 it should be.


But they do? VSCode, Spotify, Chrome, etc are all custom repos just for their software.


Are they? I can't find a Flatpak repo for any of those apps; all are just in the global flathub/snapcraft repos. I'm happy to be wrong, but I can't see anything that backs up your claim.

Spotify: https://www.spotify.com/uk/download/linux/

Chrome: https://www.google.com/linuxrepositories/

VSCode: https://code.visualstudio.com/download


> Getting maintainers out of the loop for auditing packages, ensuring security updates go out, etc - is an awful idea.

Package maintainers can also be a bad idea! They have the ability to make the user experience worse and create a lot of unnecessary work for vendors. The version of AbiWord on Ubuntu was out of date and unsupported for a while. There was a steady stream of people going through vendor support channels reporting bugs that had been fixed a year previously. The maintainers ignored the vendor's requests to update the package to a more recent, supported version.


> The version of AbiWord on Ubuntu was out of date and unsupported for a while

You can't blame distribution maintainers for not wanting to expose their users to a constant stream of the newest exciting quirky features the authors have decided to cram into the current release just to be able to keep on top of all of the bugfixes. New releases also all tend to implicitly have new or different sets of dependencies, and distributors have a duty to make sure the various dependencies that exist in their distribution can coexist and interoperate.

Authors declaring whatever versions they no longer care about as "unsupported" is a rather unhelpful trait that ignores where their users are (distributions) and what a lot of their users actually want (stability).

I'm sure developers would ideally like to have users suckling at the teat of their latest tagged changeset and perpetually excited about the latest enhancements in each and every release. But really I'd wager that most people, of the ~20-30 applications and desktop components they use every day, by and large... they'd just like most of them to behave like they did yesterday - or last week - or last month - as long as it's reliable.


> You can't blame distribution maintainers for not wanting to expose their users to a constant stream of the newest exciting quirky features the authors have decided to cram into the current release just to be able to keep on top of all of the bugfixes. New releases also all tend to implicitly have new or different sets of dependencies, and distributors have a duty to make sure the various dependencies that exist in their distribution can coexist and interoperate.

Yes, and that is a great argument for why the unix model of applications spreading themselves over the file hierarchy and sharing everything is completely broken. Something as simple as updating to the latest and greatest version of an application should not require you to rebuild the entire goddamned OS! That Linux Desktop people don't get this is one of the big reasons there will never be a Year of the Linux Desktop.


This is the only way you can prevent having seventeen different copies (and probably versions) of e.g. libQt on your system at once. And once you have different versions of a library, there's nothing that can guarantee that they will all interpret your configuration files in the same way, etcetera.

The "year of the linux desktop" has always been a red herring. Let's leave 2006 behind, shall we?


> This is the only way you can prevent having seventeen different copies (and probably versions) of e.g. libQt on your system at once.

How many copies of the Win32 widget interface do you think exist on a Windows installation? I'll give you a clue: it's one. This is because Windows has a defined and stable base system that developers can depend on being there so they don't need to ship their own copy of things in the base system.

Linux has no base system because that would mean agreeing on something.


In this case it wasn't that Ubuntu's package lagged behind development. It was that Ubuntu shipped a version of the software a year and a half old in an LTS release.

Back-porting bug fixes isn't really an option at that point.


Patches speak louder than bugs. The source material used to prepare the packages is often available, and sending a patch to the maintainer's inbox (or to maintainers of similar software if the maintainer is unresponsive) is a sure way to make it easier for them to accommodate you.


So if the maintainers haven't done something, we should assume it's upstream's fault for not making it convenient enough for them, or not prompting them enough? For every distro that has packaged their code?

There are real downsides to the maintainer system - it creates a lot of extra, uninteresting work, and frequently no-one's that interested in doing it, especially for smaller packages. That's why there's so much interest in other models.

To give another example, if you install jupyter-notebook through apt on Ubuntu 18.04 today, you get a version with a security issue (CVE-2018-8768) that upstream released a fix for months ago. Package maintainers are not making anyone safer there.


It's everyone's fault. The upstream, the maintainer, and the users. The first person who becomes aware of an issue should take steps to resolve it. By distributing this among everyone, you make sure that there are enough hands to do the work and people can specialize in supporting the packages they want to work on. Maintaining a package is not hard.

This isn't some hypothetical, for the record. I'm explaining how this actually works in practice.


The package maintainer system doesn't add additional people to share the same work, it creates additional bits of work for different people to do. Upstream can release a fix, and it doesn't get propagated to people on distro X because the package maintainer for X is busy with work, or is a parent now, or just isn't interested in the package any more. And if one proactive maintainer patches an issue, it doesn't help users on all the other distros, or users who get it directly from upstream.

You're explaining how this works in response to concrete examples of where it hasn't worked. I understand how distro packaging works for some packages, but I've seen it fall down too many times for others, especially more niche things.


>Getting maintainers out of the loop for auditing packages

Do maintainers commonly audit source code to look for vulnerabilities? And at any rate, aren't the common security-critical libraries for flatpaks, like OpenSSL, already (in theory) provided and maintained by the runtimes?

All the major consumer OSs distinguish between system components, like cryptographic services and graphics libraries, and user-facing applications. The world hasn't collapsed for them so far, and in an ideal world that distinction allows for better delegation of responsibilities.


It's definitely not so cut and dried - maintainers have actually managed to introduce vulnerabilities into software too. The famous Debian SSH key generation issue comes to mind.

https://github.com/g0tmi1k/debian-ssh


This was in 2006. 2006. And people still have to dig back to then to make this point.

In 2006 the security landscape was very different. The common understanding of security being a tremendously subtle issue was much, much lower. A typical system from 2006 was much more trivially exploitable than one today. A lot of things can be said about security attitudes in 2006 vs now.


My experience when I worked on upstream open source software was that distribution package makers would routinely introduce subtle bugs into programs via the act of packaging. In fact on the project I worked on we stopped supporting users who didn't use our own upstream packages because the number of bugs introduced by downstream packagers was just so huge.

I can't see any real benefits to the Linux approach and never could. It's one of the reasons I ended up moving to macOS. There's hardly any malware there too and yet app developers build packages themselves.


Not always and not comprehensively, but they work alongside their users to tune packages as appropriate. It introduces a dispassionate 3rd party for users to report issues to who can fix problems which the upstream has an incentive not to fix.

>The world hasn't collapsed for [major OS vendors] so far

Hell yes it has. Malware and user-hostile software are running rampant on the "major" platforms like Windows and macOS. Have you ever used a non-technical person's computer for a few minutes? It's like a warzone.


And yet people seem to prefer that warzone to the alternative that open source software offers. You can bury your head in the sand and claim that's only because of Microsoft fud campaigns, dirty business practices, advertising, or whatever, but you're wrong because even many people who know about and have experience with Linux choose that warzone.

Personally, I think that says a lot about the Linux desktop and its community.


> Have you ever used a non-technical person's computer for a few minutes? It's like a warzone.

I have used non-technical people's Windows machines, and they were not a warzone. I've looked over the shoulder of plenty of other people using Windows & Mac computers, and generally not seen any obvious signs of malware infestation. There could be unseen malware lurking, but we can't assume that with no evidence.

Maybe this depends on the non-technical people you know, but it's definitely possible for people to use Windows/Mac computers and not be dropped in a pit of malware.


I don't agree. If you want to install Flatpak applications that you can trust to the same extent that you currently trust packages, all you need to do is encourage your distribution to host their own Flatpak repository and have current package maintainers move over. They can still implement their own fixes and patches and release cycles.

Flathub isn't trying to replace your package manager for everything, just for graphical desktop apps. As far as I'm concerned, it's more than welcome. I'm tired of hearing about a major release of a desktop app and not having it available for months. Even worse, I'm tired of hearing about a major release, experiencing horrible bugs when the maintainer updates it, and not having any long-term option to install the previous version so that I can get work done.

Flatpak solves all of this and more.
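Adding a distro-maintained remote alongside (or in place of) Flathub is already a supported workflow; a sketch, where the remote name and repo URL are hypothetical but the flags are real flatpak 1.0 commands:

```shell
# Add a hypothetical distro-maintained remote for the current user only
flatpak remote-add --user --if-not-exists mydistro \
  https://flatpak.example-distro.org/repo/mydistro.flatpakrepo

# Install an app from that specific remote, then update everything installed
flatpak install mydistro org.gnome.gedit
flatpak update
```

Because the remote is named on install, nothing forces you through Flathub; a distro's own maintainers could publish their patched builds there with their own release cadence.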


Linux security is honestly more obscurity; most Linux servers run by anyone but experts are like Swiss cheese, and desktops are just not that appealing to people who want to exploit. Hell, I could easily get around my login manager in Ubuntu without breaking a sweat. As a Linux user I'm eager for flatpak/snap/whatever.


The tone of the comments on news like this never fails to disappoint me. Sigh.

The Snap store so vastly improves my user experience of using common apps I switched from Arch Linux (where 50% of the programs I had installed came from the AUR) to Ubuntu, where everything I needed was just there. No longer do I need to run weird scripts from the internet to get simple stuff to run on my non-standard distro (which could be Arch, Fedora, or whatever I am running at the time).

You can hold the position that these things are a big security risk. Distributing monolithic packages with likely old/vulnerable dependencies is not a great idea. But on the other hand, it prevents asking the user to run random scripts (which in many cases are not vendor provided) as root to get their software, and it gives the user integrated automatic updates and other software center integration (as opposed to downloading random stuff from the internet). In terms of increased security through sheer usability and requiring less manual maintenance, the advantages of Snaps and Flatpak add up, I think. Many things in security are a tradeoff, and I feel that getting the user to do the right thing is often extremely undervalued. I think it is also undervalued in these comments.

Flatpak and Snaps still have a lot of problems. Why do we have two competing standards? Why can't I properly get all Snaps running on Fedora or other platforms that use SELinux [rhetorical question - I know the technical reasons]? Why do so many apps not use their sandboxing effectively? Why is it so hard for these things to respect my computer's theme? The list goes on.

The list of problems is long and valid. But I think it's worthy of some celebration that advancements are being made in making desktop Linux usable for users and popular for developers. And I don't think it's clear at all that this is a regression in terms of security.


Snap has no concept of multiple repos, meaning Canonical is solely responsible for choosing what software you are allowed to run, even if you aren't running Ubuntu.

Especially if you are running a non-Ubuntu distro, you are stuck with snap AND something else, rather than replacing your current system with snap.

Further, both apt and snap are strictly inferior to Nix or Guix, so you are actually stuck with two worse packaging systems in place of one superior one.

For what it's worth, my experience with Ubuntu/Ubuntu derivatives was different. I found the non-LTS releases fraught with difficulties, and the LTS was sufficiently out of date that I ended up adding over 80 PPAs to get up-to-date versions, which isn't much different than having 80-some AUR packages, save that it is less convenient.


We also have the not-so-dead autopackage (listaller)


Yeah, but it's not just about solving the problem "how do I install package X on a random Linux distribution". I agree there have been solutions for that for a long time. In fact, Snap at least does not even fully solve this problem, because its sandboxing depends on AppArmor, which clashes with SELinux on distros that use it (e.g. anything from Red Hat).

The real key to Flatpak and Snap is the entire app "experience" - integration in the software center as a trusted source, automatic updates, clean installs and removals without messing up other parts of the system. Before Flatpak and Snap, this did not exist on Linux, especially not in something that was compatible with all distros.


> But I think it's worthy of some celebration that advancements are being made in making desktop Linux usable for users and popular for developers.

I don't know about that. This "advancement" is catching up to where every other desktop OS was in the 80s.


Do you think perhaps it's not worthy of celebration that advancement happens, even at a slower pace than you would like?


On the one hand, yes, on the other there's the immense amount of shame I feel for promoting open software and computing in general because progress is either glacially slow or regressive.


I like to run a pretty lean system, and flatpak gives me the ability to install some of the bloatier packages without the deep system dependencies they bring with them in a package manager.

Installing a PDF reader should not forcibly install Udisks2 and upower.

A lot of commenters are upset that maintainers are being removed from the equation. Can't each distro just set up their own maintained repository? If I understand correctly, there's nothing about flatpak that actually prevents traditional maintaining. The only thing distros have to do is integrate flatpak, set up their own repository as default, and note that users should use other repositories at their own risk. Which is basically how things already work.

Is there a valid reason to hate flatpak itself, or are you all just too caught up in hating change to actually evaluate it?


> Can't each distro just set up their own maintained repository?

Yes Fedora is working on doing so.


A lot of the criticism has a 'Red Hat systemd conspiracy' vibe.


I've been using the Slack, GIMP, and Darktable flatpaks on Fedora Workstation (which is GNOME based), available on flathub.org, for quite a while, maybe a year - without problems. I also sometimes use Okular which is a KDE app, and by installing it, the necessary kde.Platform runtime libraries were also installed and kept up to date by flatpak - works flawlessly. There's also a LibreOffice flatpak I have installed, and it seems like the flatpak update "deltas" are smaller than RPM updates, by quite a bit.

I haven't used the feature yet, but supposedly there's a means of easily rolling back to a previous version in case an update has a bug the user can't work around. Rolling back RPM's can be non-trivial when there are many dependencies - it's way easier for me to do rollbacks of an RPM only based system by Btrfs snapshots which of course not everyone can depend on just for undoing an application update.

So I'd say this is definitely an improvement from a user perspective; and it seems no more painful and perhaps a little less painful for packagers.


I don't know, but if I just compare https://snapcraft.io/store and https://flathub.org ... I see that snap packages have a lot more adoption by big-name vendors.


One advantage of snaps for developers is that if one is already developing on Ubuntu or Debian, the build and runtime dependencies of a snap can be expressed directly in terms of the packages in the distribution's repositories. To build a flatpak requires one to hunt down the original sources and figure out how to build them like what people did before the era of package managers.


Canonical has a marketing team and outreach. Flatpak doesn't.

Also the Snap store allows a larger range of software (WINE repackages, server apps, duplicates, etc) that Flathub does not. Flathub is also hand reviewed so it takes a bit longer for new apps to get in.


I'm not talking about niche ones - I'm talking about REALLY big names.

Firefox, VSCode, etc. There is also this - https://www.docker.com/docker-news-and-press/docker-and-cano...


vscode at least is there on Flathub: https://flathub.org/apps/details/com.visualstudio.code


I found the flatpak version too sandboxed to be useful: https://github.com/flathub/com.visualstudio.code/issues/25

Snap "solves" this issue by not sandboxing some apps. I'm not sure if there is a good solution to sandboxing developer tools.


The solution is for developer tools to conceptually split the target environment from their running environment. GNOME Builder is an example of this.


I was wondering how Builder handles this when I wrote my reply, but I assumed it'd do the same thing. Thanks, I'll take a look.


My first impressions of Flatpak have been positive, with a few caveats.

As an end-user, I want my apps to be getting regular, automatic updates, which means it's vital to get them from some kind of official repo. I sympathize with the one-man developer who just wrote some cool little Electron app that he designed to be cross-platform and promptly gets bombarded by requests from his 10% Linux userbase that wants the app to be packaged for Debian/Ubuntu/Fedora/SuSE/Arch repos. I get it, I've been that annoying guy[0].

So to that end, packaging once as a Flatpak and working everywhere has been great. A handful of those pesky apps I used to have to regularly check for new RPM releases are now on flathub and I can update them automatically.

With that said, flatpak support is still spotty. DNF doesn't support flatpak yet, so I had to install GNOME Software on my Cinnamon DE just to be able to easily support and update them. There's also the issue of the greatly inflated installation sizes. I'm hopeful that support will get better soon now that it's finally at 1.0.

[0]https://github.com/MarshallOfSound/Google-Play-Music-Desktop...


> DNF doesn't support flatpak yet,

DNF probably won't support flatpak, as it is something entirely different. GNOME Software does support multiple backends though.

If you want to install/update/remove flatpaks from command line, just use the flatpak command ("flatpak update" will check all the configured flatpak repositories for newer versions and update whatever is available).


> With that said, flatpak support is still spotty. DNF doesn't support flatpak yet, so I had to install GNOME Software on my Cinnamon DE just to be able to easily support and update them. There's also the issue of the greatly inflated installation sizes. I'm hopeful that support will get better soon now that it's finally at 1.0.

Could you not just use the cli?

(Not trying to start a holy war, just a question)


The CLI wasn't terribly intuitive for app discovery, and to be honest until I googled around I couldn't even find the flatpak command to update already-installed apps.


update installed apps: "flatpak update"

list whatever is available in a repository: "flatpak remote-ls repo-name", e.g. "flatpak remote-ls flathub".

list installed packages: "flatpak list"


Searching is easy too: `flatpak search $foo`.


Flatpak works much better for me on Ubuntu 18.04 with vanilla Gnome on Wayland: the Snap packages don't appear in the Gnome menu (until you launch an X session) and some Snap packages don't work at all on Wayland. Also, Spotify is updated much more often on Flatpak hub than on Snap.


I don't understand the point of this - why is this any better than apt? Based on the top comment here this is meant to be much more user-friendly, but installing on Ubuntu already didn't work properly despite following Flatpak's own guide. And the install process for apps is then basically the same as apt. I don't get it.


User friendliness is a smokescreen; it is about being upstream friendly, in that now you get the kitchen sink of dependencies with each install, rather than upstreams actually having to care about API stability (not that they did much in the past).

This is because they do not want to admit that distros being slow to roll out new releases of their software is down to the dependency tangles they have created for distro maintainers.


They might want to take a look at snaps - it does something similar and has a good software mix.


Snaps are Canonical's NIH take on Flatpak (previously known as xdg-app). Fragmentation for fragmentation's sake.


Sure it's Canonical's NIH and not Red Hat NIH, since both got released in December 2014?


Yes, because Snap doesn't support different repositories easily and everything atm is based on Ubuntu base images. Also the license is GPL+CLA, allowing Canonical to relicense it under a proprietary license. Furthermore, they require AppArmor for some features, which some distros can't use (it isn't part of the mainline kernel AFAIK).


Okay, I was wrong: AppArmor is part of the mainline Linux kernel. I think it was some Ubuntu-specific patches, then, that were required for some of the sandbox features. I will try to find where I read it again.


Thanks for the follow-up and details - I wasn't aware of all this.


Fair enough. On the other hand, does it really matter? The result is the same. GNUstep had Application Bundles and was developed in the 90s. ROX-Filer had AppDirs and was developed in the late 90s and early aughties. Klik (now AppImage) was developed in the early aughties.

There's plenty of NIH to go around on this one.


I would be curious to know what the differences between snap and flatpak are. Is this just a "flatpak is for RHEL, snap is for debian" situation, or is there something more to it? Just curious.


You can use them side by side. They are just different options :). If it's not on snap, it could be on flathub, or the other way around.


Packaging was a solved issue in Linux, congrats on the 3 steps back.

The rush towards containers because they're "easy" strikes yet again.

My fear is that the handful of companies that build packages for their desktop apps will abandon them and move to Flatpak/Snap. From the Flatpak docs it looks like anyone and everyone can just get access, even if you don't own the thing you're packaging. So if you package $newpopularsoftware first, you can now install malware on everyone's computers with a single push.

It's like they looked at everything bad about Chocolatey/NPM/pip/AUR and just ran with it.


> Packaging was a solved issue in Linux

That's an insane thing to say. Literally 100s of people doing the same thing, adding no value to the end user. That is a vast amount of resources wasted by the open source community.

> My fear is that the handful of companies that build packages for their desktop apps will abandon them and move to Flatpak/Snap.

Good? Why does it matter to you.

> From the Flatpak docs it looks like anyone and everyone can just get access, even if you don't own the thing you're packaging. So if you pack $newpopularsoftware first you can now install malware on everyone's computers with a single push.

Nothing stops you or any distribution from having a repo with reviewed or specially selected software.

Also, the vast majority of packagers would not have found that malware anyway.


> That's an insane thing to say. Literally 100s of people doing the same thing, adding no value to the end user. That is a vast amount of resources wasted by the open source community.

Or, you know, developers could package their own software. It's some upfront effort but usually set and forget. Things like FPM[1] make this even easier. Personally I don't know why developers find packaging so hard; I've had to package hundreds of bits of software for different distros (and versions of said distros) over my career, and it's usually set and forget, with some changes when there are big underlying changes to the OS like sysv > systemd. Granted, my experience is with non-GUI apps, so I can imagine there are likely some pain points between different distros/versions when it comes to the hot mess that is DEs.
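To illustrate how little FPM asks of you, here's a sketch (the app name, paths, and version are made up) that builds a deb and an rpm from the same staged directory tree:

```shell
# Stage the files exactly as they should land on the target system
# (myapp and pkgroot are hypothetical)
mkdir -p pkgroot/usr/bin
cp build/myapp pkgroot/usr/bin/myapp

# One fpm invocation per target format; -s dir takes the staged tree as input,
# -C changes into it before packaging, and "." packages everything beneath it
fpm -s dir -t deb -n myapp -v 1.0.0 --description "My app" -C pkgroot .
fpm -s dir -t rpm -n myapp -v 1.0.0 --description "My app" -C pkgroot .
```

Dependencies, maintainer scripts, etc. are additional flags on the same command, so the per-distro delta really is small for simple packages.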

> Good? Why does it matter to you.

Because I have to go from trusting 1 vendor to install 1 package (and dependencies) to 1 3rd party repo that anyone can push to. That is a huge change in the trust model.

> Nothing stops you or any distribution from having a repo with reviewed or specially selected software.

We already have those.

> Also, the vast majority of packagers would not have found that malware anyway.

This isn't about trusting the software in the package; it's about trusting the package maintainer, who could now be absolutely anyone with no verification or validation. See malware in other user-run repos like NPM, pip, AUR etc...

1. https://github.com/jordansissel/fpm


> Or you know developers could package their own software.

When I was a Linux user, I saw so many packages that worked on one Debian derivative but had problems on another. Maybe this is a solved problem now? It definitely wasn't solved from a user perspective five years ago.


> Or you know developers could package their own software.

As a developer, I can tell you right now that I'm not going to package everything for 100s of Linux distros that I don't even know the names of.

> It's some upfront effort but usually set and forget. Things like FPM[1] make this even easier. Personally I don't know why developers find packaging so hard; I've had to package hundreds of bits of software for different distros (and versions of said distros) over my career, and it's usually set and forget, with some changes when there are big underlying changes to the OS like sysv > systemd. Granted, my experience is with non-GUI apps, so I can imagine there are likely some pain points between different distros/versions when it comes to the hot mess that is DEs.

Have you ever considered that doing something a lot makes you good at it?

Many developers are not Linux experts, and don't know the many differences between the distros and the package managers, or even between the different distros using the same manager.

Saying it's 'fire and forget' when there are so many distros, all moving targets with different goals, release cycles and so on, is basically lying.

But that's not all; you also get bug reports from many different people that you cannot reproduce.

> Because I have to go from trusting 1 vendor to install 1 package (and dependencies) to 1 3rd party repo that anyone can push to. That is a huge change in the trust model.

So the fact that some poor Debian maintainer had to wrap the vendor's app in some Debian-specific glue makes it safer?

If you trust the vendor of your software, you can trust them to maintain a repo where you can download it. That might be Flathub for some smaller ones, but others will have their own.

There is no more trust involved at all; maybe you move trust from Debian to Flathub, but that's it.

> We already have those.

Not with the advantage that the maintainer only has to publish one software package.

> This isn't about trusting the software in the package; it's about trusting the package maintainer, who could now be absolutely anyone, with no verification or validation. See the malware in other user-run repos like NPM, pip, the AUR, etc...

It is already absolutely anybody. Do you know all the people who package for Debian?

Also, if you don't want to use apps directly from Flathub, you don't have to. You can use only software that has been validated by the Flathub community, or wait until somebody like Fedora repackages the app.

Forcing thousands of man-hours of volunteer work just so you can claim "some guy loosely related to the X distro project has taken a couple of hours to see that it doesn't crash", and then basing your security on that, is an insane approach.

That's a terrible, terrible security architecture, and the only reason anybody is defending it is because that's how it used to be, and that somehow makes it good.


> Packaging was a solved issue in Linux, congrats on the 3 steps back.

Dude, seriously. Flatpak has issues, but saying "Packaging was a solved issue" when every distro rolls out its own packaging system is the reason we can't have nice things.

Ignoring the problems won't make them go away.


> Dude, seriously. Flatpak has issues, but saying "Packaging was a solved issue" when every distro rolls out its own packaging system is the reason we can't have nice things.

If every distribution was using the same packaging system, would they still be separate distributions?

EDIT: By packaging system I assumed we meant the same repositories, and packages. Since that is how Flatpak seems to work.


> If every distribution was using the same packaging system, would they still be separate distributions?

Do we even need that many distributions? What's the value proposition for all those hundreds of Linux distros?

My impression is that for many of them it's:

A. we want to make our own distro, to learn, get visibility, or maybe even make money

B. we want to make our own distro because of an already existing user base from point A.

C. we want to make our own distro because we can't get along with distro X makers

I wish that the barrier to entry to making a Linux distro was higher. People would have to put more thought and effort into it and the overall quality would be higher.


> Do we even need that many distributions? What's the value proposition for all those hundreds of Linux distros?

That is a valid point, and why I usually tell people to just use Ubuntu if they're not sure. The value proposition is that different people are doing different things, and even people doing the same thing sometimes like to do it in different ways.

> I wish that the barrier to entry to making a Linux distro was higher. People would have to put more thought and effort into it and the overall quality would be higher.

I believe that less competition equals lower quality.


> I believe that less competition equals lower quality.

While I generally believe in competition leading to better results for users/customers, the real life experience of Linux distros vs Windows or Mac OS doesn't seem to bode well for the free market :p (i.e. Linux distros should be the pinnacle of software quality and they aren't).


D. We want to make our own distro because we finally got this gigantic mess of independently designed and developed software to almost kind of work the way we want it to, and now we have to maintain it that way because it was so much work to get here.


Are Debian and Kubuntu separate distributions?

Would RedHat switching to apt make Debian redundant?


> Are Debian and Kubuntu separate distributions?

Yes, because Debian (stable) and Kubuntu have different KDE packages.

> Would RedHat switching to apt make Debian redundant?

- No, because they apply different patches.

- Yes, because I think that both Debian and Red Hat, and any distro which applies patches, are already redundant with Arch.


You seem to be confusing packages and package managers.


> EDIT: By packaging system I assumed we meant the same repositories, and packages. Since that is how Flatpak seems to work.

Flatpak is entirely decentralized, anybody can make a custom runtime (Debian/Fedora ones exist), and anybody can host a repository.
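
For example, pointing at a different repo is just a remote-add away. A sketch using the flatpak CLI (Flathub and the app ID here are only illustrative; any OSTree repo URL works the same way):

```
# Add a repository; the .flatpakrepo file carries the repo URL and GPG key
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

# See which remotes are configured
flatpak remotes

# Install an app from a specific remote
flatpak install flathub org.gnome.Calculator
```

Nothing ties you to Flathub; a distro or vendor can host its own repo and users add it the same way.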


How's that different from current package managers?


It's not; it's only different from Snap, which is centralized. Their comment implied that Flatpak was too.


Yes, of course they would. Using the same package manager absolutely does not require the same directory structure.

If they used the same packages, then there would be a question if they are separate distributions.


> every distro rolls out its own packaging system

But, they don't. Lots use Debian’s, for instance.


Umm, the software developers for X don't necessarily run the packaging systems for deb, apt, etc. There's nothing stopping Red Hat from maintaining their own Flatpak "store", ditto for Ubuntu, etc.

What this does is allow software to keep working when a library update happens, without having to spend hours/days tracking down weird scripts you don't even understand and running them as root to fix your problem. Not everyone wants to write OS-level software or fix every bug themselves.

Hell, my GF watched me install something that wasn't in the UI app center (whatever it is called) via command line, and was like, "WTF do you have to do that? That's the most stupid thing I've ever seen." The fact is, she's not wrong.


> if you pack $newpopularsoftware first you can now install malware on everyone's computers with a single push.

How does this compare to the traditional package managers?


Well, looking at the procedure for Snap [1], there's no review at all. Flatpak [2] defers to Flathub [3], which has some sort of review: a pull request on GitHub, but I'm not sure how thorough that is. Traditional package repositories have a review phase before you can get your package into the system (that's how it is with Debian at least).

[1]: https://docs.snapcraft.io/build-snaps/upload

[2]: http://docs.flatpak.org/en/latest/publishing.html

[3]: https://github.com/flathub/flathub/wiki/App-Submission


How does that review work in the case of binary/proprietary packages?


Flathub reviewer here. We do not security-audit the code of any project, as that's just unreasonable. The user puts ultimate trust in the upstream projects; we just ensure that the right upstream is used, libraries are up to date and sanely built, permissions are as reasonable as they can be, etc.
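
For the curious, the permissions being reviewed live in the app's manifest under finish-args. A minimal sketch (the app ID and command are made up; the keys and flags are standard flatpak-builder ones):

```json
{
  "app-id": "org.example.App",
  "runtime": "org.freedesktop.Platform",
  "runtime-version": "18.08",
  "sdk": "org.freedesktop.Sdk",
  "command": "exampleapp",
  "finish-args": [
    "--socket=wayland",
    "--share=network",
    "--filesystem=xdg-download"
  ]
}
```

A review question like "why does a calculator need --filesystem=home?" is exactly the kind of sanity check described above.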

This is identical to every traditional distro like Debian or Fedora.


I'm assuming you're talking about traditional package managers. Proprietary packages aren't uploaded to the main package repositories from what I've seen. The company will typically provide the packages themselves (e.g., a deb or rpm file), and sometimes they'll provide their own repositories for easier delivery. This has been my experience with Dropbox, SpiderOak, and Sublime Text.


Got ya. They create their own repositories and don't use the central one.


The review is to get you access to publish; it's not a review of the code submitted. So pass that, package honestly for six months, then deploy malware.


If I need software not found in my upstream repos, I have three options:

1. Packaged by the developer/publisher, see Virtualbox, Spotify, Chrome, Slack etc...

2. Packaged by a single 3rd party, see PPAs

3. A central 3rd party repo populated by anyone, NPM, AUR, pip, Snap/Flatpak etc..

Case 1: I have to trust the developer/publisher, which is sensible; I'm already trusting their code to run.

Case 2: I have to trust some random 3rd party. Sometimes this is possible, sometimes not, but if I do trust them, I'm trusting them for one package (and maybe its dependencies). I may have multiple options of who to trust to provide this package.

Case 3: Anyone can package anything; if they get there first, they can publish it however they like. The problem is that they are implicitly trusted by the repo, not by me.

The first two are acts of active trust: I have to verify the person/company I'm downloading from and decide whether I trust them or not. With a 3rd-party central repo (it doesn't matter what format), I do not have that option; I either trust everything that anyone can publish to be correct, or I don't use it.


AUR, Snap, and Flatpak all work differently. The AUR is a free-for-all with no trust implied or given, but packages will be removed upon user intervention. So curation is basically done by users and the administrators, but that doesn't prevent bad packages from being uploaded in the first place. Most of the AUR is building from source, so you just have to read the build scripts (PKGBUILDs) of each package before installing and do what a maintainer does.

It gets easier on updates since you only have to check diffs between versions. I believe you are making a wrong decision if you trust anything on the AUR implicitly.


> I believe you are making a wrong decision if you trust anything on the AUR implicitly.

That's basically my entire point.


> It's like they looked at everything bad about Chocolatey/NPM/pip/AUR and just ran with it.

Basically they did just that. The only "desktop" devs left in Linux land are really cloud devs that dabble in desktop on the side. Damn it, I have basically been told to clam up when I complained about the cloud mentality having no home on the desktop, because apparently only cloud mattered, as it was the big Linux use case.


I know Snap makes it hard to modify something inside the package, even as root (so it's close to the UWP nightmare). Is Flatpak any different?


Interestingly flatpak doesn't really support drag-and-drop.


It should support it as well as deb or rpm does. I.e. it is completely uninvolved in drag-and-drop.


No, it really isn't.

When you drag-and-drop a file from e.g. a file manager into some application, then the DnD events do not contain the file. Instead, they contain something like `text/uri-list` enumerating the file paths. If you sandbox some app, then you need a bit of software to map and translate these paths. If you do not provide such software, then your sandbox does not support DnD.
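
A toy sketch of the problem (this is not the real portal API; the export table and function are invented for illustration): the dropped `text/uri-list` contains host paths, but a sandboxed app can only see paths that were explicitly exported to it, so something has to rewrite the list.

```python
# Illustrative only: real Flatpak solves this with the document portal, which
# exports files under /run/user/<uid>/doc/ and rewrites URIs for the app.
def translate_uri_list(uri_list, exports):
    """Map host file:// URIs to their sandbox-visible equivalents.

    `exports` is a hypothetical table maintained by a portal-like broker,
    mapping host paths to paths visible inside the sandbox.
    """
    translated = []
    for uri in uri_list.strip().splitlines():
        path = uri.removeprefix("file://")
        if path in exports:
            # File was exported to the sandbox: rewrite the URI.
            translated.append("file://" + exports[path])
        # Otherwise the app has no access to the file, so the entry is dropped.
    return translated


exports = {"/home/alice/photo.png": "/run/user/1000/doc/abc123/photo.png"}
payload = "file:///home/alice/photo.png\nfile:///etc/shadow"
print(translate_uri_list(payload, exports))
# Only the exported file survives; /etc/shadow is silently dropped.
```

Without that translation layer in place, a drop event hands the app paths it cannot open, which is why "sandbox supports DnD" is a real feature and not a given.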


Well, if you drag-and-drop a file, things might be somewhat problematic due to the sandboxing: an app with no access to the file can't read the dropped filename.


From a technical perspective, I understand what you mean. But from a user perspective, 'except for files' is a pretty big caveat. Files are the main thing I drag and drop!


They mean the apps running in flatpak, not the flatpak installer...


What is the position of Linus and Linux Kernel Development crew regarding this?


Why would they have a position that is "official"? Linux is the kernel: it provides userspace with a very stable API (which includes the sandboxing primitives used by Snaps and Flatpaks), but it does not mandate much in userspace.


Linus himself distributes Subsurface using AppImage.


Also available in flathub.org


In Linux, you need to download, compile, and install viruses and trojans by yourself. With Flatpak, trojans can be installed in one click.


OMG. I got 2 downvotes before the page with my response even loaded after submitting. Never saw that before. :-[ Instant karma.


Jokes on HN always run a risk of downvoting, even if really funny.


People seem particularly downvote happy this morning, or this story. Is this project close to YC?


Anything related to Red Hat developers doing anything 'disrupting' or 'different' can unleash a shitstorm and lots of trolling.



