I have to say re: pipewire that it's one of the few (redhat?) projects of the past 15-20 years that doesn't seem to have generated a ton of hate from users so it must be pretty good all around.
I love Pipewire; Pipewire fixed bluetooth headphone reliability for me. For years I had frequent trouble where the bluetooth headphones and bluetoothctl would both say they were connected, but there was no audio sink for pulseaudio. Sometimes there would be audio, but it would be crappy mono audio in "headset mode". For three or four years I had this problem several times a week. I believed it was an issue with the linux bluetooth drivers until one day, after several unsuccessful cycles of reloading, restarting and rebooting everything, I got fed up enough to install Pipewire (which was unexpectedly painless). From that moment on, not once have my headphones failed to connect on the first attempt. I've concluded that there's something fundamentally wrong with the way Pulseaudio recognizes and responds to a bluetooth audio device being connected, and Pipewire obviously doesn't have this flaw.
There's a popular narrative of "linux enthusiasts hate anything that's new"; you hear this a lot from people defending Pulseaudio and systemd from "the trolls". It was never true. People love new things when the new thing solves their problems, and hate the new thing when it introduces new problems. The "haters" narrative is little more than cope; a way for the authors of buggy software to rationalize the negative response to their software.
This is me learning that pipewire is a RH project.
For context as to why it's not getting hate: the alternative is Pulseaudio (another RH project, headed by Poettering, who is now at Microsoft, incidentally), which was egregiously difficult to configure, heavy, and seemed to dominate any system that tried to interface with it even slightly.
The same exact concerns that people levy towards systemd and GNOME.
Pipewire is light, interfaces with programs on their terms and seems to follow the philosophy that "it's just a tool" meaning that it should not be something you have to care about as an application developer or as a user.
Pulseaudio was adopted too early, and I think distros have learned their lesson this time around.
Honestly, it fell under "just worked" for me. One time I had been reading about PA using too much CPU, so I checked, and indeed it was using a fair amount of processing power "just" to feed data from the media player to ALSA. So I tried turning it off, and the media player alone used more CPU playing audio directly to ALSA than the media player and PA had used running together.
I had tons of problems with PA: jitter, noise, nonsensical defaults. E.g. why upmix stereo to 5.1 just because I have the speakers, and why make disabling it an override?
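If I remember right, on the PulseAudio versions of that era the only way to stop the upmixing was a daemon.conf override, something along these lines (option name from memory, so treat it as approximate):

    # ~/.config/pulse/daemon.conf (or /etc/pulse/daemon.conf)
    enable-remixing = no

And then you still had to kill the daemon (pulseaudio -k) so it picked up the change when it respawned.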
I never used onboard audio; I always had some higher-end card with more resolution than a bog-standard onboard audio chip, and PA struggled to deal with those cards for a long time.
Also, if the daemon crashed or needed a restart, getting sound back was a dance of restarting things in exactly the right order, with other fiddly details.
Pipewire is just invisible. It works the way it should and doesn't bend the system to fit in.
Pipewire is invisible because Pulseaudio (PA) exercised a lot of sound stack features, exposing bugs (which were often attributed to PA) and prompting a lot of bug fixes. It wouldn't be nearly as good if it weren't building on PA's foundations.
Pipewire also doesn't need to bend the system to fit in, because the system is already the right shape.
(And I'm perennially annoyed that my work Mac won't upmix to 5.1. I've got the speakers, why only use two of them?)
I'd need to read the list of ALSA bugfixes that actually came out of PA to see that and believe it.
On the other hand, I've been using multichannel audio on Linux for the last 20 years or so, and it has always worked with what I have: first with a Live! and an Audigy, then with an Asus card.
No, pipewire is much gentler in how it handles and takes streams from other applications. It doesn't make a heavy-handed attempt to replace ALSA and the drivers at the same time. It's a much thinner layer and does what it should do. Most importantly, it doesn't alter the streams that go through it.
(Sorry, but upmixing a good stereo sound source to 5.1 is just butchering the sound. The resulting sound stage is an abomination of what it should be. For a musically inclined person (read: ex-orchestra player), it's just torture. It's so wrong on so many levels.)
GNOME Shell uses 0.75% of my CPU and 136 MB of memory. All processes in gnome-system-monitor that are somehow related to GNOME clock in at around 400-500 MB of memory. How is that heavy?
It's barely usable when paired with the Realtime patchset.
An OS with an actually "good audio stack" would be able to provide hard realtime (i.e. formally guaranteed maximum latencies), like seL4 does.
Linux-rt is just probabilistic. It behaves much better than mainline, but latency could spike anytime.
More specifically, it's fully compatible with PulseAudio/JACK/ALSA and is capable of completely replacing them while still providing the benefits of what it's replacing. It's a complete joy to work with too, especially if you've ever experienced the nightmare that is getting your pro-audio JACK setup working nicely with pulse.
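As a rough illustration of the compatibility claim (a sketch assuming the JACK-Client Python package and pipewire-jack are installed): an ordinary JACK client doesn't need to know or care that PipeWire is on the other end, it just connects and sees the graph.

    import jack  # JACK-Client package; with pipewire-jack this talks to PipeWire

    # Connect as a regular JACK client; PipeWire's JACK replacement answers.
    client = jack.Client("compat-check")
    print("sample rate:", client.samplerate, "block size:", client.blocksize)

    # List the physical audio ports the server exposes (PipeWire's devices).
    for port in client.get_ports(is_audio=True, is_physical=True):
        print(port.name)

    client.close()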
Video capture is something that's "just worked" for me for as long as I can remember (unlike audio). What's the value gained from the added latency of an intermediary? Is it only to ease containerization?
With audio, accessing kernel interfaces directly in the olden days led to problems with multiple apps using the device, and to the inability to provide inputs/sinks from userspace (e.g. post/preprocessing, or a hw driver implemented in userspace). It's still that way with video.
People have been using an out-of-tree loopback video kernel driver (v4l2loopback) for these things, which is hacky, hard to use and slow.
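Roughly what the status quo looks like from an application's point of view (a sketch assuming OpenCV's V4L2 backend and a webcam at /dev/video0): every app opens the kernel device node itself, so a second process usually can't stream from the same camera until the first one lets go.

    import cv2

    # Opens /dev/video0 directly through the V4L2 backend; there is no
    # intermediary that could share the camera or post-process the frames.
    cap = cv2.VideoCapture(0)
    if not cap.isOpened():
        raise RuntimeError("camera missing or already in use by another process")

    ok, frame = cap.read()  # pull one raw frame straight from the kernel buffers
    print("got frame:", ok, frame.shape if ok else None)

    cap.release()  # only now can another application start streaming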
That still is the case; it's just that nobody runs hardware DSPs, so nobody needs to deal with anything but a single userspace sound server. Video just has orders-of-magnitude higher processing requirements, so dumping raw buffers into userspace and letting a "video server" do the work isn't usable for use cases that care about latency, throughput or resource usage (e.g. game streaming).
So you end up with a bunch of kernel interfaces for hardware-accelerated video transforms (many driver-specific), a few competing userspace APIs (gstreamer, vulkan extensions, vaapi, vdpau, etc.), zero-copy kernel buffer APIs, the list goes on. You need something like Pipewire just to abstract over the complexity.
This is huge news for people who run Linux on Surface devices, as they need to compile a custom v4l module for the custom kernel, and also need to run a hacky v4l2loopback workaround to use it for video calls.
I use it with Teams every day. The only thing that doesn't work is notifications (and I blame Microsoft for that). What else do you find isn't working?
I was waiting for support for downloading YouTube videos with YT Premium on FF desktop, and the other day the button just appeared! Not sure if it was something missing on FF's end or YT just didn't want to support it before, but yeah, no more waiting.