Back in ye olden tymes, video was an analog signal. It was wibbly wobbly waves. You could look at them with an oscilloscope. One of the tricks used to make everything work was encoding control information in band. This is sorta like putting editor's notes in a text file. There might be something like <someone should look up whether or not this is actually how editor's notes work>. In particular, it used blacks that were blacker than black to signal when it was time to go to the next line, or to the next frame.
In later times, we developed DVDs. DVDs were digital. But they still had to encode data that would ultimately be sent across an analog cable to an analog television that would display the analog signal. So DVDs used values darker than 16 (out of 255) to denote blacker than black. This digital signal would be decoded to analog and sent directly onto the wire. So while DVDs are ostensibly 8 bits per channel of color, it's more like 7.9 bits per channel. This is also true for Blu-ray and HDMI.
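If you want to sanity-check that 7.9 figure: with codes 0-15 reserved, 240 of the 256 levels remain usable, and log2(240) is about 7.91. Quick check in Python (my arithmetic, not anything from a spec):

    import math

    # Codes 0-15 are reserved for signaling, so 240 of 256 levels remain.
    usable_levels = 255 - 16 + 1        # 240
    print(math.log2(usable_levels))     # ~7.907 "bits" per channel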
In more recent times, we've decided we want that extra 0.1 bits back. Some codecs will encode video that uses the full 0-255 range, with nothing reserved for in-band signaling.
The problem is that sometimes people do a really bad job of telling the codec whether the signal range is 0-255 or 16-255. And it really does make a difference. Sometimes you'll be watching a show or movie or whatever and the dark parts will be all fucked up. There are several reasons this can happen, and one of them is the black level being wrong.
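To make that concrete, here's a toy sketch (Python/numpy, 8-bit luma assumed; the 255/219 expansion factor is the standard one) of the mismatch in one direction: limited-range black shown by a player that thinks the signal is full range.

    import numpy as np

    limited_black = np.full((2, 2), 16, dtype=np.uint8)  # true black, limited range

    # Correct handling: expand limited range to full range before display.
    expanded = (limited_black.astype(np.float32) - 16) * 255 / 219
    print(expanded[0, 0])        # 0.0 -- renders as black

    # Wrong handling: pass values through as if they were already full range.
    print(limited_black[0, 0])   # 16 -- renders as washed-out dark gray

The other direction is just as bad: a full-range frame squeezed as if it were limited crushes everything at or below 16 into solid black.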
It looks like this function scans frames to determine whether all the pixels are in the 16-255 range or the full 0-255 range. If a codec can be sure that the pixel values are 16-255, it can save some bits while encoding. But I could be wrong.
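For what it's worth, a naive version of that scan might look like the sketch below. The function name and numpy-frame representation are my guesses, not whatever the actual code does.

    import numpy as np

    def guess_sample_range(frame: np.ndarray) -> str:
        # If any sample dips below 16, the frame can't be limited range;
        # if none do, limited range is at least plausible.
        return "full (0-255)" if frame.min() < 16 else "maybe limited (16-255)"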
I do video stuff at my day job, and much to my own personal shame, I do not handle black levels correctly.
I'm positive you know this already, but for anyone else, this section [0] of the lddecode project has a wonderful example of all the visible and non-visible portions of an analog video signal.
The project as a whole is also utterly fascinating, if you find the idea of pulling an analog RF signal from a laser and then doing software ADC interesting.
Maybe you could get some savings from the codec by knowing if the range is full or limited, but probably the more useful thing is to just be able to flag the video correctly so it will play right, or to know that you need to convert it if you want, say, only limited-range output.
Also, as an aside, "limited" is even more limited than 16-255: it's limited on the top end too. Max white (luma) is 235, and the color (chroma) components top out at 240.
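If you do need to convert full range down to limited range yourself, the standard 8-bit mapping is a linear squeeze: luma scales by 219/255 and chroma by 224/255, both offset by 16. A rough sketch (assuming numpy arrays of 0-255 values; rounding and clamping details vary between implementations):

    import numpy as np

    def full_to_limited(y, cb, cr):
        # Luma: 0-255 -> 16-235; chroma: 0-255 -> 16-240.
        y_out  = np.round(16 + y.astype(np.float32)  * (219 / 255)).astype(np.uint8)
        cb_out = np.round(16 + cb.astype(np.float32) * (224 / 255)).astype(np.uint8)
        cr_out = np.round(16 + cr.astype(np.float32) * (224 / 255)).astype(np.uint8)
        return y_out, cb_out, cr_out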