What kind of integration do you mean? Basically the only integration that distros do is forcing all packages to share a single set of library versions, which has relatively little user-facing benefit (in fact, it's mostly to make it easier for the maintainers to do security updates). This push towards AppImages and the like is basically about standardising the interface between the distro and the application, so application developers don't need to rely on the distros packaging their app correctly, or to do N different packages for N different distros and deal with N different arbitrary differences between them (and if they want to delegate this packaging work like before, they can; not all of these various packages are put out by the author of the software).
(Now, whether these various standards work well enough is a different question. There seems to be a bit of a proliferation of them, all of which have various weaknesses at the moment, so there are still improvements to be made there, but the principle is fairly sensible if you want to a) have a variety of distros and b) not have M*N work to do for M applications and N distros)
I very much work at the coalface here, and "application developers don't need to rely on the distros packaging their app correctly" occasionally happens but is most often about miscommunication. Application developers should talk to the distros if they think there's a packaging problem. (I talk to many upstreams, regularly.) Or, more often, application developers don't understand the constraints that distros have, like we need a build that is reproducible without downloading random crap off the internet at build time, or that places configuration files in a place which is consistent with the rest of the distro even if that differs a bit from what upstream thinks. Or we have high standards for verifying licensing of every file that is used in the build, plus a way to deploy security updates across the whole distro.
And likewise packagers often don't understand that the application has been extensively tested with one set of library versions, and that swapping them around to fit the distro's tastes will cause headaches for the developers of that application, or that it has a vendored fork of some libraries because the upstream version causes bugs in the application. It's a source of friction: the goals are different, and users are often caught in the crossfire when it goes poorly (and when each application is packaged N times, there are N opportunities for a distro to screw something up: it's extremely rare that a distro maintainer spends anywhere near the amount of time on testing and support that the upstream developers do, since maintainers are usually packaging many different applications, while upstream is usually multiple developers focused on one project).
Software should be written robustly, and libraries shouldn't keep changing their APIs and ABIs. It's a shame some people who call themselves developers have forgotten that. Also, you're assuming that distro packagers don't care, which is certainly not true. We are the ones who get to triage the bugs.
They should, but the world isn't perfect and occasionally you do actually need to apply workarounds (which application developers also dislike having to deal with, but it's better than just leaving bugs in). Distros would run screaming from the bare metal embedded world where it's quite common to take a dependency and mostly rewrite it to suit your own needs.
And I'm not saying distro maintainers don't care, I'm just saying they frequently don't have the resources to package some applications correctly and test them as thoroughly, especially when they're deviating in terms of dependencies from what upstream is working with. And much as the fallout from that should land on the distro maintainer's plate, it a) inevitably affects users when bugs appear in this process, and b) increases workload for upstream because users don't necessarily understand the correct place to report bugs.
The place where my argument is coming from is that the MxN nature is pretty much inescapable.
> What kind of integration do you mean?
See? The "integration" is something you only notice when it breaks (or when you're working through LFS and BLFS in preparation for your computer science Ph.D.). This kind of work is currently being done so well that it rarely breaks, so people think it doesn't even exist. Also notice that a Linux distro is what's both on the outside and the inside of most containers. If Debian stops doing integration work, no amount of containerization will save us.
So, what kind of breakage might there be? Well, my containerized desktop app isn't working. It crashed and told me to go look for details in the logfile. But the logfile is nowhere to be found. ...oh, of course. The logfile is inside the container. No problem, just "docker exec -ti <container> /bin/bash" to go investigate.

Ah, problem found. D-Bus is not being shared properly with the host. Funny. Prior to containerization I never even had to know what D-Bus was, because it just worked. Now it's causing trouble all the time. Okay, now just edit that config file. Oh, shoot. There's no vi. No problem, just "apt-get install vim" inside the container. Oh, "apt-get" is not found. Seems like this container is based on Alpine. Now what was the command to install an editor on Alpine again?

...one day later. Hey, finally got my app to start. Now let's start doing some useful work. Just File|Open that document I need to work on. The document sits on my NAS that's mounted under "/mnt/mynas". Oh, it's not there. Seems like that's not being shared. That would have been too good to be true. Now how do I do that sharing? And how does it work exactly? If I change the IP address of my NAS and remount it on the host, does the guest pick that up, or do I need to restart the app? Does the guest just have a weak reference to the mountpoint on the host? Or does it keep a copy of the old descriptor?

...damn. In 20 years of doing Linux, prior to containerization, I never needed to know any of this. ...that's the magic of "system integration". Distros did that kind of work so the rest of us didn't have to.
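(For reference, the hole-punching that ends up being required tends to look roughly like the following; the container name, app name, and paths are made up for illustration, and the exact flags depend on your runtime:)

```shell
# Poking around inside an Alpine-based container: it's apk, not apt.
docker exec -ti my-desktop-app sh -c 'apk add vim'

# Re-running the app with the host resources it needs shared in explicitly:
# the NAS mountpoint as a bind mount, plus the session D-Bus socket and the
# environment variable that tells the app where to find it.
docker run -ti \
  --volume /mnt/mynas:/mnt/mynas \
  --volume "$XDG_RUNTIME_DIR/bus:/run/user/1000/bus" \
  --env DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus \
  my-desktop-app
```

And every one of those lines is something you have to discover, debug, and maintain yourself, per application.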
God, yes. I did some training courses over Zoom. The presenter frequently shared pdf files we had to interact with, but the Zoom download button dropped them in the Zoom container. Figuring out how to get hold of them was a pita.
Of course, the Windows users didn't have this problem. Flatpak, etc. are objectively making the Linux user experience worse.
Those aren't particularly useful examples, though. They're all things that have been artificially separated in containers, and now there's a bunch of work to punch the right holes in that separation, because people want the sandboxing of containers from a minimum-trust point of view, and that's pretty hard to get right. Previously this wasn't a problem, not because the distros solved it, but because there was no separation of dbus or views of the filesystem or the like.
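(With Flatpak, for instance, that hole-punching is explicit and goes through its permission system rather than ad-hoc mount flags; the app ID below is hypothetical:)

```shell
# Grant an already-sandboxed app access to a host path and the session bus.
# The app ID is made up for illustration.
flatpak override --user --filesystem=/mnt/mynas com.example.SomeApp
flatpak override --user --socket=session-bus com.example.SomeApp

# Inspect what the sandbox currently allows:
flatpak info --show-permissions com.example.SomeApp
```

That's the "punching holes" part being standardised, at least, even if deciding which holes to punch is still left to the user.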
(D-Bus, much like a lot of the rest of desktop integration, is something that has been standardised quite heavily, such that you can expect any application that uses it to basically work without any specific configuration or patching, unless you've insisted on fiddling with the standard setup for some reason. It used to be that the init system was an area which lacked this standardisation, but systemd has evened out a lot of these differences, which distro maintainers, app maintainers, and users have all benefited from significantly. Most of containerisation is basically trying to do the same with libraries as well, but most projects are also trying to achieve some level of sandbox separation between applications at the same time)
(This is one reason why I don't much like a lot of the existing approaches here: I think the goals are admirable and the overall approach makes sense, but the current solutions fall quite short)