How useful are hardware decoders? My understanding is that they are not necessarily faster than software decoders (in my experience it is the opposite), so the only benefit I can see is less CPU consumption. I see the benefit when you run on a battery (laptop / smartphone) but on a desktop or server, unless you critically need the CPU at that time, I am not sure how useful it is.
>I see the benefit when you run on a battery (laptop / smartphone) but on a desktop or server, unless you critically need the CPU at that time, I am not sure how useful it is.
Yes it is. Very useful. The vast majority of people use laptops now, not desktop computers with 120mm tower coolers, and if you want to switch to Linux, then not having video decode nuke your battery life and turn your tiny fans into a Pratt & Whitney is another welcome change that closes the gap with Windows.
People, especially on the Linux side, need to stop viewing the world as if everyone still has a desktop tower PC tucked under their desk, while disregarding the software issues and deficiencies that only plague laptops, which make up the majority of the market share and have vastly different constraints from desktops.
> People, especially on the Linux side, need to stop viewing the world as if everyone still has a desktop tower PC tucked under their desk, while disregarding the software issues and deficiencies that only plague laptops, which make up the majority of the market share and have vastly different constraints from desktops.
Linux (especially the kernel) isn’t even tuned for towers. It’s tuned for servers.
The defaults make terrible optimization tradeoffs, all in the name of throughput over latency.
Software video decoding isn't some monstrous workload, though. We're no longer in the era of h.264 decoding grinding every Pentium 4 out there to a screeching halt; the CPUs of today hardly acknowledge the load, even in the low-end tiers.
The benefits of hardware decoding are lower power consumption (read: electricity bills) and offloading the work to a dedicated co-processor, freeing up resources on the CPU proper; both are indeed far less relevant on desktops, and even on laptops, which can just pack big batteries.
The biggest beneficiaries are by far mobile devices and really low-power hardware: computers which either cannot practically carry bigger batteries, or whose hardware simply isn't that performant, for other, more important reasons.
Desktops and laptops, meanwhile, are better positioned to benefit from the perks of software decoding: better quality and versatility, and fine-tuning of filters and rendering paths.
With all due respect, I think you must be out of touch if you don't acknowledge the negative effects of lacking HW decode on video playback and content consumption on regular notebooks. People do notice this even if they don't know where the fault comes from.
Even on a modern Ryzen laptop APU, if I run H.265 or AV1 4K decode in software, the CPU usage is very noticeable. On my older Intel i5-6300HQ, they peg it at 100%.
And it's made even worse nowadays by video conferencing: you often need to decode multiple streams (video from multiple participants) while also encoding one or more streams (camera + slides/screen) from your end, at the same time, in real time.
I watch software decoded video all the time because I like watching anime. I have never, ever had any problems for quite literally over a decade thanks to how powerful CPUs have gotten.
Way back, 20 years ago, you would have had a point. But today? CPUs are orders of magnitude overpowered for the task.
> I have never, ever had any problems for quite literally over a decade thanks to how powerful CPUs have gotten.
I wasn't talking about having problems decoding; I was talking about software decode screwing your battery life due to high CPU usage on certain streams. Which is a problem, but not something most people who don't use laptops care about.
Do you watch your anime on a desktop or a laptop on battery?
The very slight increase in CPU processing and thus power use doesn't bother me at all. The screen takes more power than the CPU.
I habitually have Task Manager minimized in the tray area because I like monitoring my CPU usage at a glance, and decoding video in software simply does not even register as a noticeable blip anymore.
So yes, for 720p low-bitrate stuff it's not much of a difference, but as others said, watch some real content in 1080p, or better yet 4K, and your CPU will definitely break a sweat and the fans will spin up.
Now that I think of it, given your claim that it doesn't even register as a blip in Task Manager, I'd actually argue you're probably using hardware decoding without noticing, because I can't think of a player (especially on Windows) that wouldn't make use of it.
The decoding horsepower required increases sharply with resolution and frame rate. 720p30 doesn't even cause my fan to run, but 1080p60 does, and 4K30/60 drops frames.
Just to be clear, 4K is 9 times the load of 720p. 4K60 is 18 times.
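The arithmetic behind those multipliers is just pixel throughput; a rough first-order sketch (my own toy model, ignoring codec complexity and bitrate, which also matter):

```python
# Rough first-order model: decode load scales with pixel throughput
# (resolution x frame rate). Codec and bitrate also matter, but this
# captures why 4K is so much heavier than 720p.
def pixels_per_second(width: int, height: int, fps: float) -> float:
    return width * height * fps

base = pixels_per_second(1280, 720, 30)          # 720p30 reference
print(pixels_per_second(3840, 2160, 30) / base)  # 4K30 -> 9.0
print(pixels_per_second(3840, 2160, 60) / base)  # 4K60 -> 18.0
```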
My setup is Media Player Classic Home Cinema, splitting and decoding via LAVFilters, then fed through ffdshow for some filters, before finally rendering on Enhanced Video Renderer.
Nowhere in the pipeline is the GPU involved as far as decoding is concerned, it's all deliberately software on the CPU.
Generally 8-bit or 10-bit h.264, occasionally h.265, at either 1280x720 or 1920x1080 progressive, with a frame rate of usually 23.976. Split and decoded in software via LAVFilters, then run through ffdshow before rendering on Enhanced Video Renderer.
CPU: anything ranging from an i7-14700K to an i3-2100 (yes, Sandy Bridge). Seriously, decoding video hasn't been a significant workload for over a decade.
>4K h.265 / AV1 video on a couple of years old laptop dual-core CPU.
Kindly, why the hell would I even watch 4K video on a laptop? Y'all keep throwing out contrived situations like that; meanwhile, I'll be a sane man living in reality and re-encode it (in software, with finer tuning of parameters) in my spare time, down to 1080p or 720p, so I save myself precious disk space and CPU usage while travelling.
> Kindly, why the hell would I even watch 4K video on a laptop?
Because that's the file I have on hand. Why on earth would I re-encode it if I want to watch it once?
I also connect my laptop to my 32" 4K screen. There the battery life is not a consideration, but the spinning fan is.
> in my spare time down to 1080p or 720p so I save myself precious disk space and CPU usage while travelling.
Your use case might work for you, and that's fine, but you claim that software decoding is universally not a problem. It is, if you don't limit yourself to 720p h.264 pre-encoded at home. Most people are not fine with having to do this and with limiting themselves to low resolutions / bad image quality.
> Kindly, why the hell would I even watch 4K video on a laptop
How about if you have a 4K monitor plugged in? Or if your notebook display is itself 4K (which is a completely valid configuration nowadays)?
I have a pretty beefy laptop with an RTX 3080. I regularly watch BDRips that exceed 50 Mbps, and software decoding, even on my 8-core Intel Xeon, will cause some stutters. Hardware decoding is just so much faster.
mpv (definitely the best player for advanced users) does not use it by default. I'll simply quote `man mpv`:
> Hardware decoding is not enabled by default, to keep the out-of-the-box configuration as reliable as possible. However, when using modern hardware, hardware video decoding should work correctly, offering reduced CPU usage, and possibly lower power consumption. On older systems, it may be necessary to use hardware decoding due to insufficient CPU resources; and even on modern systems, sufficiently complex content (eg: 4K60 AV1) may require it.
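If you do want to opt in, mpv exposes this via the hwdec option; a minimal sketch, assuming a reasonably recent mpv (check `man mpv` for the exact values your build supports):

```
# ~/.config/mpv/mpv.conf
# "auto-safe" tries only the whitelisted, known-good hardware decoding
# APIs and falls back to software decoding if none of them works.
hwdec=auto-safe
```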
Decoding has a component that is proportional to the bitrate. Anime has a much lower bitrate than, say, a 4K Blu-ray movie. The truth, though, is somewhere in the middle: recent CPUs are getting to the point where they can do all the heavy decoding on their own, but older models (that are still widespread) still struggle.
But also, as I understand it, the GPU uses a video encoding/decoding block which has fixed performance whatever the model of GPU within a generation, whereas software can fully utilise your CPU and scale up if you run multiple streams in parallel. All of that is irrelevant for real-time decoding, but for converting files, my experience is that if you are using hardware encoding, hardware decoding can become the bottleneck, and you may get faster performance with software decoding + hardware encoding.
Definitely not. Many anime release groups are early adopters of new video coding standards. It was the first popular media that saw heavy use of AVC back in probably 2006 or so, then HEVC (roughly in 2014 or thereabouts), then AV1 (since ~2021), and sometimes even VVC, although that's currently being held back by a lack of support in mainline ffmpeg.
It was also the first to start using 10 bit color profiles.
Some release groups are particularly insistent on producing high quality releases and use FLAC for audio (even if the source tracks were in lossy format), and very high bitrate video.
Haha, a normal 1440p / 4K YouTube video is a pretty monstrous workload for my Intel Core i5-8259U, at least! I get by entirely thanks to the Intel Iris Plus 655 HW decoder. Otherwise my Intel NUC would indeed be a screeching, hot mess, and that's with only H.264 and H.265.
You have to consider hardware limitations: you want maximum battery life (thus minimal power burn, by reducing CPU and GPU load), and that can only be achieved by optimizing for the highest possible speed.
Battery life is everything, no one cares about electricity bills on a laptop or phone, but they do care if the phone or laptop dies sooner than it has to.
I can tell you from my Chromebook and MacBook experience: HW decoders and encoders are the difference between a productive remote video meeting with good collaboration, and a frustrating, laggy, glitchy, stuttering meeting with no work done, if CPU-only.
Asahi Linux is a good example. There is no HW acceleration support, but video decoding works pretty flawlessly. Of course, at the expense of battery life.
Encoding is very different. Most codecs are designed to be slow to encode and fast to decode. HW encoding is night and day versus software. But I see ffmpeg mentioned in the post. Switching to hardware decoding is likely to slow down your pipeline.
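For ffmpeg specifically, decode and encode are selected independently, so the combinations are easy to compare; an illustrative NVIDIA-flavoured sketch (assumes an ffmpeg build with CUDA/NVENC enabled; file names are placeholders):

```
# Software decode + hardware encode (per the comment above, often the
# faster transcode path, since HW decode can become the bottleneck):
ffmpeg -i in.mkv -c:v h264_nvenc out.mp4

# Full hardware path, decoding on the GPU as well:
ffmpeg -hwaccel cuda -i in.mkv -c:v h264_nvenc out.mp4

# Full software path, for quality-per-bit comparisons:
ffmpeg -i in.mkv -c:v libx264 out.mp4
```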
> Switching to hardware decoding is likely to slow down your pipeline.
If that's your experience, then something is off with your setup.
Hardware decoding on a modern computer might seem just as fast as software until you look at power usage, where HW decode is a hard win.
Even so, the resources used to decode modern high-end codecs like H.265 or AV1 will be major and hard to miss.
But anyway, both encoding and decoding are being discussed here.
And video encoding goes from something like 3 fps in software to hundreds of fps in hardware, which is what makes it possible at all at any resolution and quality worth speaking of.
You need encoding to stream your video, like in a video meeting.
I barely know what I'm talking about, but in my experience, to use software decoding I had to explicitly enable it with "-allow_sw", though this may only have been necessary because of VideoToolbox.
What type of "setup" are you talking about? Servers or home use?
I've been working with video transcoding/broadcast a lot, and software decoding was still worth it in a large number of cases, mostly because the CPUs these days (Threadrippers & co.) can handle significantly more concurrent encodes than the HW decoders can.
HW decoders are built to play video on your PC so you can watch a movie; they usually don't support all that many concurrent streams and aren't all that fast (they "just" need to be realtime, after all). That's amazing on playback devices (pretty much mandatory for H.265/AV1), but for "2U racks at Amazon" it's not very useful, and big many-core CPUs are still king. Especially since software encoders are still massively winning on visual quality per second per MB of video.
(Why am I talking about servers? Because this thread has started with AWS 2U video racks, not Apple TV boxes.)
Yes, real-time decoding on a desktop is just nice-to-have, with a couple of exceptions:
(1) if your decoder software runs too slowly for real-time decoding. I think this has historically happened every once in a while, until CPUs and decoder optimizations catch up again. E.g., 4K H.265 was too much for many desktops for a while.
(2) if you do a lot of it, in which case low-power decoding (i.e., saving electricity) is still good.
E.g., Linux web browsers for ages had SW-only decoding and it worked mostly fine; it was just laptops having their battery eaten in browser apps & Electron-based apps (videoconferencing).
As codecs are historically a top source of remote-code-execution vulnerabilities that get our devices pwned, the poor sandboxability of HW decoders makes them a security problem as well, e.g. as a pathway to exploit the flaky and highly privileged GPU driver code paths.
Software decoding 4K x265 requires insane resources. You could live transcode it to 1080p quite reasonably on 1 average core, but simply decoding it requires ~3 average cores
With JPEG you can scale down in power-of-two factors by just not processing the higher-frequency components.
You're still left with the non-lossy compression step, so one could do something like "unzip", repackage data throwing away the high-frequency stuff, "zip".
Wouldn't surprise me if you can do something similar with h264/h265.
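That frequency-truncation idea can be sketched in 1-D with a toy orthonormal DCT (illustrative only: the function names are mine and libjpeg's actual fast path works differently). Keep the low-frequency half of an 8-sample block's coefficients and inverse-transform at the smaller size:

```python
import math

def dct(x):
    """Orthonormal DCT-II of a sequence."""
    N = len(x)
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
        out.append(s * (math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)))
    return out

def idct(X):
    """Orthonormal DCT-III, the inverse of dct() above."""
    N = len(X)
    return [
        X[0] * math.sqrt(1.0 / N)
        + sum(X[k] * math.sqrt(2.0 / N) * math.cos(math.pi * (n + 0.5) * k / N)
              for k in range(1, N))
        for n in range(N)
    ]

def downscale_2x(block):
    """Halve a block's resolution by dropping the high-frequency half of
    its DCT coefficients, rescaled so the mean brightness is preserved."""
    X = dct(block)
    M = len(block) // 2
    low = [c * math.sqrt(M / len(block)) for c in X[:M]]
    return idct(low)

print(downscale_2x([10.0] * 8))  # a flat block stays flat: ~[10.0] * 4
```

This only works so cleanly for purely intra-coded formats like JPEG; for video, inter-frame prediction gets in the way, which is the crux of the reply below.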
I'd be surprised if that worked for interframe compression like video, but even if it did, and extremely well, you still have to encode to the target format, and that's where the majority of the work should be anyway, even when you just decode the source normally.
Sorry if I'm misunderstanding your comment's implication. But this is not hardware decoding. This is software decoding on the GPU.
NVIDIA, for example, has specialized (en/de)coding hardware on their GPUs, which is different. And even that works together with their GPGPU CUDA cores.
No, the entire point of those Vulkan extensions is to use the hardware-accelerated video processing units. You could use compute shaders for software decoding on the GPU; there is no need for special extensions to do that. AMD and Intel have specialized encoding/decoding HW too.
All three big vendors (Nvidia, AMD, Intel), and also a bunch of ARM SoC vendors, have specialized blocks in their graphics solutions whose only purpose is to decode and encode specific codec profiles. It is hardware decoding, not decoding on the GPU.
I suppose the above comment was about amdvlk. radv is indeed often ahead of it these days.
If anything, I think AMD should drop amdvlk on Linux (and collaborate on radv there), and instead try to develop radv for Windows, if that's doable, to replace amdvlk altogether.
Many of the people working on RADV are already employed by AMD (or paid by them indirectly), so arguably it's already collaborating.
I see the amdvlk/radv drivers a bit like the Windows "Radeon Pro"/"Adrenalin" driver split: one is for people who stare at SolidWorks all day, the other for everyone else.
I think AMD wanted to phase out that distinction for OpenGL, for example, so why wouldn't they want the same for Vulkan? There is really no need for them to be different in that sense.
amdvlk and radv exist for historical reasons, since they started independently and amdvlk was designed to support Windows too. But today I don't really see a big point in amdvlk except where radv doesn't exist.
So AMD could do what Intel does: support radv on Linux and amdvlk on Windows, for example, or, as above, try to make radv work on Windows too.
Seems like with AV1, Google had to stop self-sabotaging (turning VA-API on has worked for most users for more than half a decade) and kind of had to move, because AV1 is too beastly to do on the CPU. Sounds like VA-API has been turned on for almost a month now, but I haven't confirmed. https://www.phoronix.com/news/Google-Chrome-Wayland-VA-API