> Given that x264 already has a 2-pass mode, I don't see why that is necessarily the case. Even CRF mode uses mbtree by default, which is a pretty complicated rate control algorithm. x264 also has pretty intelligent keyframe determination; I almost always see I-frames on scene transitions.
x264 will not align keyframes across resolutions and encodes. In addition, the per-shot encodes optimize more than just IDR frame placement: they also optimize other encoder parameters such as AQ (adaptive quantization).
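To make that concrete, here's a rough sketch of what driving libx264 per shot could look like. Everything specific is an assumption for illustration: the shot boundaries, the candidate AQ strengths, and the filenames. A real per-shot pipeline would also score each candidate with a quality metric and keep the winner.

```python
import subprocess

# Hypothetical shot list (start, duration in seconds) from a prior
# scene-detection pass; the per-shot AQ strengths to try are made up.
shots = [(0.0, 4.2), (4.2, 7.8), (12.0, 3.5)]
aq_strengths = [0.6, 1.0, 1.4]

def encode_shot(src, start, dur, aq, out):
    # Encode a single shot with libx264, overriding the adaptive
    # quantization strength for just this segment.
    subprocess.run([
        "ffmpeg", "-y", "-ss", str(start), "-i", src, "-t", str(dur),
        "-c:v", "libx264", "-crf", "23",
        "-x264-params", f"aq-mode=1:aq-strength={aq}",
        out,
    ], check=True)

for i, (start, dur) in enumerate(shots):
    for aq in aq_strengths:
        # A real optimizer would measure each candidate (e.g. with VMAF)
        # and keep only the best; here we just emit the candidates.
        encode_shot("input.mkv", start, dur, aq, f"shot{i}_aq{aq}.mp4")
```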
> I think it's probably the other way around. The closer you are to a screen, the more likely you are to notice increased resolution. But if there are large patches of the image full of artifacts, you'll likely see that even from far away.
That isn't how this works. Changing resolution isn't a strict tradeoff between more artifacts and less sharpness. Downscaling -> upscaling is simply another form of lossy compression; it may look better or worse than spending those bits elsewhere.
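You can check this for yourself. The sketch below (the filenames and the bitrate are made-up assumptions) encodes the same clip twice at the same bit budget, once at native resolution and once downscaled, then scores both against the source with ffmpeg's ssim filter; which variant wins depends entirely on the content.

```python
import subprocess

SRC = "source_1080p.mkv"  # hypothetical reference clip
BITRATE = "1500k"         # the same bit budget for both variants

# Variant A: encode at native 1080p.
subprocess.run(["ffmpeg", "-y", "-i", SRC, "-c:v", "libx264",
                "-b:v", BITRATE, "native.mp4"], check=True)

# Variant B: downscale to 720p before encoding -- the
# "downscale -> upscale" form of lossy compression.
subprocess.run(["ffmpeg", "-y", "-i", SRC, "-vf", "scale=1280:720",
                "-c:v", "libx264", "-b:v", BITRATE, "downscaled.mp4"],
               check=True)

# Upsample each encode back to 1080p and score it against the source.
for f in ["native.mp4", "downscaled.mp4"]:
    subprocess.run(["ffmpeg", "-i", f, "-i", SRC,
                    "-lavfi", "[0:v]scale=1920:1080[d];[d][1:v]ssim",
                    "-f", "null", "-"], check=True)
```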
> x264 will not align keyframes across resolutions and encodes. In addition, the per-shot encodes optimize more than just IDR frame placement: they also optimize other encoder parameters such as AQ (adaptive quantization).
Sure, that's true. (Though I don't know why it matters that keyframes aren't aligned.) But at the end of the day the point is that Netflix has a better rate control algorithm, and this could be built into x264, even if it might require a significant amount of work. (Which I'm sure the x264 developers would be willing to do for a substantial quality improvement.)
> That isn't how this works. Changing resolution isn't a strict tradeoff between more artifacts and less sharpness.
Of course it's not a strict tradeoff. It's a loose one. And yes, it may look better or worse. That's really my only point in that section of the comment: that bumping up the resolution earlier in the ladder as they're doing is not a pure win, and it may look worse to some people depending on their viewing conditions.
> Sure, that's true. (Though I don't know why it matters that keyframes aren't aligned.) But at the end of the day the point is that Netflix has a better rate control algorithm, and this could be built into x264, even if it might require a significant amount of work. (Which I'm sure the x264 developers would be willing to do for a substantial quality improvement.)
You can't ABR adapt without aligned GOP boundaries.
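For the player to switch renditions mid-stream, every segment has to start on an IDR frame at the same timestamp in every rendition. Here's a minimal sketch of producing aligned renditions with ffmpeg/libx264 (the source name, heights, and bitrates are assumptions):

```python
import subprocess

SRC = "input.mkv"  # hypothetical source

# Force an IDR frame every 2 seconds and disable scene-cut keyframes,
# so every rendition has identical GOP boundaries. A segmenter can then
# cut all renditions at the same timestamps and a player can switch
# between them without a decode glitch.
for height, bitrate in [(1080, "5000k"), (720, "2800k")]:
    subprocess.run([
        "ffmpeg", "-y", "-i", SRC,
        "-vf", f"scale=-2:{height}",
        "-c:v", "libx264", "-b:v", bitrate,
        "-force_key_frames", "expr:gte(t,n_forced*2)",
        "-x264-params", "scenecut=0",
        f"rendition_{height}p.mp4",
    ], check=True)
```

This is exactly what per-shot encoding would give up if each rendition placed keyframes wherever its own shot analysis preferred, unless the shot boundaries themselves are shared across the ladder.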
> Of course it's not a strict tradeoff. It's a loose one. And yes, it may look better or worse. That's really my only point in that section of the comment: that bumping up the resolution earlier in the ladder as they're doing is not a pure win, and it may look worse to some people depending on their viewing conditions.
Of course not, that's why they perform an analysis of both options and select the better one. That's what the algorithm does...
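In toy form, "analyze both options and pick the better one" is just a per-shot argmax over measured (bits, quality) candidates. All numbers below are invented; a real system (Netflix has described doing this over the convex hull of rate-quality points) would measure quality with a metric like VMAF:

```python
# Candidate encodes per shot, measured as (bits, quality score).
# The figures are made up for illustration.
shots = {
    "shot1": [(900_000, 78.0), (1_400_000, 84.5)],
    "shot2": [(600_000, 91.0), (1_100_000, 92.1)],
}

def pick(candidates, max_bits):
    # Highest-quality candidate that fits the per-shot bit budget;
    # fall back to the cheapest candidate if none fits.
    feasible = [c for c in candidates if c[0] <= max_bits]
    return max(feasible, key=lambda c: c[1]) if feasible else min(candidates)

for name, cands in shots.items():
    bits, quality = pick(cands, max_bits=1_200_000)
    print(f"{name}: chose {bits} bits at quality {quality}")
```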
> You can't ABR adapt without aligned GOP boundaries.
Yes, that's true of course, but not really relevant to whether you could port Netflix's work on scene-adaptive rate control to x264. Maybe you'd lose aligned GOPs... but for a lot of purposes (offline?) that doesn't matter.
> Of course not, that's why they perform an analysis of both options and select the better one. That's what the algorithm does...
The only point I've ever tried to make on this subject is that in some cases this approach fails. "Perform an analysis" is such a high-level description that it glosses over the fact that the analysis is done according to some objective metric, and that metric may disagree with an individual viewer's personal preferences or viewing environment. In fact, just because an objective metric says 4K > 1080p doesn't mean the difference will be noticeable at the viewer's actual viewing distance, whereas the additional artifacts introduced by moving from 1080p to 4K without a significant bitrate increase may very well be visible!
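The viewing-distance half of that argument is easy to quantify. Normal visual acuity is often cited as about one arcminute, i.e. roughly 60 pixels per degree; past that, extra resolution is unresolvable. A quick sketch (the screen size and distance are assumptions):

```python
import math

def pixels_per_degree(screen_width_m, horizontal_pixels, distance_m):
    # Angular width of the screen as seen by the viewer, then how many
    # pixels land in each degree of that angle.
    width_deg = 2 * math.degrees(math.atan(screen_width_m / (2 * distance_m)))
    return horizontal_pixels / width_deg

# A 55" 16:9 TV is about 1.21 m wide. At a 3 m couch distance, compare
# 1080p and 4K pixel density against the ~60 px/deg acuity limit.
for pixels, label in [(1920, "1080p"), (3840, "4K")]:
    print(f"{label}: {pixels_per_degree(1.21, pixels, 3.0):.0f} px/deg")
```

At that distance 1080p already lands around 84 px/deg, past the acuity limit, so the extra 4K detail is largely invisible, while any blocking artifacts it introduces can still be.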