That's a gigantic savings, but not surprising since film grain (or noise generally) is the worst thing to compress. It's hard to think of any other single "feature" improvement that could result in such massive gains.
I've always found it funny that the "HBO" branded intro to all their shows consists of pure visual static. ([1] for anyone unfamiliar.) They literally couldn't have picked something worse to encode for streaming.
I always wondered if this was a deliberate pre-show test of the transmission channel's compression.
I remember noticing as a kid that some channels on cable TV were noticeably worse quality than the analogue broadcast equivalent (far more blocky IIRC, looked very MPEG-artifacty), and that others looked very different even between cable and satellite TV (IIRC More4 was one of the more obvious ones: it looked OK on Sky but awful on NTL).
The HBO static intro screen was worst case for compression but gave a pretty good indication of the quality (of the bitrate/compression) of the rest of the show; which worked for broadcast, recordings or downloads. Probably not deliberate, but actually kinda useful.
Assuming they really cared, they wouldn't bother encoding it; they would embed it in the binary of their app. The Shannon entropy of this clip is zero, since it occurs with probability one in every show, hence it is infinitely compressible :-)
Did anyone else notice that when HBO adapted that snow intro to widescreen, it looks like they just cloned an existing chunk of snow onto the left and right borders?
Maybe it's just an illusion, but I swear I can see the seams where the widescreen pixels start.
Are modern digital video formats good at compressing chunks of pixels far apart in the video screen? It's not like zlib where identifying duplicated chunks is part of compression.
The formats absolutely have the capability, but it's another question whether the encoder is capable of noticing the duplication.
(Modern video formats are not deterministic in the sense that the same source always encodes to the same output for a given format. They are more like programming languages in that they provide a set of tools you can use to compress various kinds of redundancy out, but it's the job of the encoder to find that redundancy. For example, one of the tools typically available is to provide a source image to an off-screen buffer, and then, when rendering the screen, occasionally provide offset/length pairs into that buffer instead of pixels. This could be used for efficiently encoding the duplicated noise.)
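As a toy numpy sketch of that last idea (nothing here is real codec bitstream syntax; the frame sizes and the copy_ops structure are made up purely for illustration):

    # Toy illustration (not a real codec API): reconstructing pixels by pointing
    # into a previously decoded reference buffer instead of re-sending them --
    # conceptually how a widescreen "cloned noise" border could be coded cheaply.
    import numpy as np

    rng = np.random.default_rng(0)
    reference = rng.integers(0, 256, size=(1080, 1440), dtype=np.uint8)  # 4:3 noise frame

    # "Instructions" an encoder might emit: copy a 1080x240 strip from the
    # reference at x=0 to each widescreen border, rather than coding new pixels.
    copy_ops = [
        {"src_x": 0, "dst_x": 0,    "width": 240},   # left border
        {"src_x": 0, "dst_x": 1680, "width": 240},   # right border
    ]

    widescreen = np.zeros((1080, 1920), dtype=np.uint8)
    widescreen[:, 240:1680] = reference               # original 4:3 picture in the middle
    for op in copy_ops:
        widescreen[:, op["dst_x"]:op["dst_x"] + op["width"]] = \
            reference[:, op["src_x"]:op["src_x"] + op["width"]]

    # The borders are now bit-identical to a chunk of the centre -- exactly the
    # kind of seam the grandparent comment noticed.
    assert np.array_equal(widescreen[:, :240], widescreen[:, 240:480])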
I shot beautiful 4K telephoto video of hummingbirds cleaning themselves in a fast-running stream, and YouTube is totally unable to display the content at a satisfactory level. It's such a bummer: it was such a cool shot and I can't really share it!
Similarly whenever I take photos in the forest they come out looking like complete garbage. It occurred to me it’s the image compression completely falling apart. Color gets totally desaturated too.
It's also that a forest is incredibly three-dimensional and layered, which contributes a lot to how you experience it. A photograph shows only a 2D projection of this, obviously.
Capturing depth in a satisfying way in a photograph is a skill that can be learned, but often it's impossible. (The skill is really about learning to see the small set of perspectives that do translate meaningfully to a 2D projection.)
> Capturing depth in a satisfying way in a photograph is a skill that can be learned, but often it's impossible.
My photography teacher told us "If you can't see the same thing (that you see with your eyes) via the viewfinder, don't take that photo". It took me a decade to completely understand what she meant.
One day it clicked at a very mundane moment, but it was pure enlightenment.
But even the viewfinder is forgiving in this respect, because parallax allows you to sense depth through small perturbations of the position of the camera! So eventually, you will learn to imagine in your head what a scene looks like when projected onto a still sheet of paper.
(Bonus: the parallax effect can actually be exploited to convey depth on a print as well. Depending on light levels, a comparatively long exposure time will emphasise or de-emphasise motion closer to the camera via motion-blur effects that vary in size and intensity with distance and light level.)
They could encode just the intro with bespoke settings and concatenate that to the bitstream of their other content if they really cared about making it look good.
I wonder how much savings they’d get by extracting things like that HBO intro or series intro songs and keeping a cached copy on the device, instead of downloading it with each episode? Or maybe they already do that?
First, set up the scenario. You're going to binge-watch an HBO Max series that is 10 x 1 hour episodes. Each episode has the HBO static card (7s), a 1m45s intro sequence, and a 2m30s trailing credits sequence where the first 30s is episode-dependent but the last 2m is constant across the series.
This is a best-reasonable-case scenario, since we know that people do sometimes binge-watch a series like this. We can construct unreasonable scenarios which would do better (10 minute loop of an aquarium played for 24 hours) but they would be unreasonable. More often, I think, people tend to watch one movie or one episode of a show and then switch to something else or leave.
The receiving device needs to have 3m52s of storage available for reuse across episodes. That's not unreasonable. At a 4K streaming rate of 10GB per hour (YouTube 4K uses more than this, Netflix uses less) that would be roughly 650MB.
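Back-of-the-envelope check of that figure (a sketch; the 10GB/hour rate is the assumption stated above, not a measured Netflix/HBO bitrate):

    # Rough arithmetic for the binge-watch scenario above.
    reusable_seconds = 7 + 105 + 120          # static card + intro + shared credits
    gb_per_hour = 10                          # assumed 4K rate from the comment above
    reusable_gb = gb_per_hour * reusable_seconds / 3600
    print(f"{reusable_seconds}s of reusable footage ~= {reusable_gb:.2f} GB")  # ~0.64 GB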
That number does sound unreasonable, though. A typical streaming device has between 1 and 4GB of RAM available -- an Amazon FireStick 4K has 2GB, various modern Roku devices have 1 to 2 GB -- and speculatively eating a third or more of that does not make sense to me.
I was recently compressing some Blu-Ray rips to H.265 and made the rather funny discovery that H.264 in its film profile with some additional NR can achieve better bitrates, a faster encode, and a smaller filesize at equivalent quality levels than H.265, for sources with heavy film grain.
For some reason ffmpeg's H.265 implementation doesn't have a film profile, I wonder why?
You are probably talking about the x264 and x265 encoders? The former was created by some obsessed people, the latter by employees. x264 goes above and beyond being a "good implementation". It is also less tuned according to "objective" (actually terrible) metrics than most codecs. Simplistic "objective" metrics encourage blurry images and sometimes odd spatial and temporal distributions of blurriness.
It would be nice to have a fantastic implementation of an h.265 encoder like x264, but alas.
My favourite novel things about x264 are probably the tuning preset called "touhou", literally for better encoding things that look like a bullet hell game, and that the highest quality preset is called "placebo".
I can't edit the post now but you are correct. I thought the profiles were inherent to the spec, and I seem to be mixing up profiles and tunings, but it sounds like they're just implementation details in x264/x265
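For anyone wanting to reproduce that comparison, here's a hedged sketch of the kind of ffmpeg invocations involved (the CRF values and filenames are illustrative; "film" and "grain" are real x264 tunes, and x265 ships a "grain" tune but no "film" tune):

    # Sketch of the x264-vs-x265 grainy-source comparison, driven from Python.
    import subprocess

    SRC = "bluray_rip.mkv"  # placeholder input

    def encode(codec: str, tune: str, crf: int, out: str) -> None:
        subprocess.run([
            "ffmpeg", "-i", SRC,
            "-c:v", codec, "-preset", "slow",
            "-tune", tune, "-crf", str(crf),
            "-c:a", "copy", out,
        ], check=True)

    encode("libx264", "film",  18, "x264_film.mkv")   # x264 with its film tune
    encode("libx265", "grain", 20, "x265_grain.mkv")  # x265's closest equivalent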
Not with modern codecs, but there's probably a random seed somewhere that'll get you close enough.
Noise detection could be useful, if you can get noise that's close enough, it'd be a more than adequate substitute.
It'd also be useful for rain, which video codecs also struggle with.
Another thing is, a lot of modern video was generated and composited - why not have that reflected in the codec?
You could directly encode the elements as understood by the process used to generate them, rather than attempting to reconstitute things from a stream of bitmaps.
> You could directly encode the elements as understood by the process used to generate them, rather than attempting to reconstitute things from a stream of bitmaps.
They could have bundled it into the app to save hella bandwidth, or done something similar to Samsung's awful generator that comes on when you select an input without a signal.
My guess is because there’s a lot of ways of making things digitally and it’d need to be a really complex codec.
On the other hand, deep-learning based compression with autoencoders or something similar is pretty promising, since it can learn the constituent elements in a more general way, independent of what program was used to create it.
You would assume that an updated version of the HBO intro would go from a seriously degraded version of the video to a perfect picture.
Update it every decade or so to reflect artefacts of whatever in-house codec they use at the time. The ramp-up would be a creative exercise for the techies working in the studio.
I'm not sure if it's because of growing up watching most things online with limited bitrates, but I realized I dislike film grain a lot. I know that to a certain degree, film grain helps humans resolve finer details. Similar to using noise shaped dithering to allow you to hear more than the theoretical dynamic range in audio or see a larger range of color tones in a picture. But I found that on certain 4K uhd blurays, if I am watching on a small screen the film grain looks very obnoxious. The problem doesn't exist when watching on a big TV, so I know it's not related to projecting on a screen. I really dislike when directors go out of their way to add artificial film grain. Will media players eventually allow users to adjust the film grain that is added back in?
This reminds me of Steve Yedlin, cinematographer on everything from Brick to Star Wars: The Last Jedi to Knives Out and its 2022 sequel, Glass Onion: A Knives Out Mystery, who showed through research that film can be completely emulated using digital techniques, and who mixed digital and film shots during Star Wars.
>Similar to using noise shaped dithering to allow you to hear more than the theoretical dynamic range in audio
I’m not sure how apt this analogy is. Dither is only useful for mitigating artifacts of quantization noise; it does not generally increase perceived detail of any arbitrary signal. More specifically, it is introduced to reduce the harmonic content of quantization noise, at the expense of a higher overall noise floor. Suppose you have a 1kHz sine wave; quantizing it will introduce harmonics that peak at, say, an average of -70dB, with an absolute noise floor of -120dB. Adding dither will raise the noise floor to -90dB, but reduce the harmonic peaks to -100dB. So while it actually decreases the true dynamic range, it increases perceived signal quality by removing the harmonic content.
These spurious harmonics occur because quantizing a signal introduces periodic artifacts. For example, suppose our analog sine wave can continuously vary between 0-7, and we quantize it to 3 bits (discrete values 0,1,2,…,7). Any analog value 4.7 will always be rounded up to 5; in a sine wave, the value 4.7 will occur periodically, thus resulting in a periodic rounding artifact, leading to harmonic distortion.
In order to prevent these periodic rounding errors, dither needs to be added pre-quantization, so that 4.7 can sometimes randomly become 4.4 and get rounded down to 4 during quantization. Adding “dither” to an already quantized signal (e.g. digital video) would just make the apparent picture noisier.
As a test, try quantizing a full-color image to 16 colors, and then adding back some random noise. It won’t look any better. You need to strategically dither the 16 colors with knowledge of the original full-color image.
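To see the audio version of this argument numerically rather than rhetorically, here is a small numpy sketch (bit depth, amplitude, and dither level are arbitrary choices for the demo):

    # Quantize a 1 kHz sine to 8 bits with and without TPDF dither added
    # *before* quantization, then compare the spectra.
    import numpy as np

    fs, f0, n = 48_000, 1_000, 1 << 16
    t = np.arange(n) / fs
    signal = 0.5 * np.sin(2 * np.pi * f0 * t)

    def quantize(x, bits, dither=False):
        step = 2.0 / (2 ** bits)                 # full scale is [-1, 1]
        if dither:
            # TPDF dither: difference of two uniform variables, +/- 1 LSB peak
            x = x + (np.random.rand(len(x)) - np.random.rand(len(x))) * step
        return np.round(x / step) * step

    def spectrum_db(x):
        win = np.hanning(len(x))
        mag = np.abs(np.fft.rfft(x * win))
        return 20 * np.log10(mag / mag.max() + 1e-12)

    plain = spectrum_db(quantize(signal, 8))
    dithered = spectrum_db(quantize(signal, 8, dither=True))

    # Expected qualitative result: without dither the quantization error
    # concentrates into correlated harmonic spikes; with dither it spreads
    # into a slightly higher but featureless noise floor.
    print("undithered peak above 1.5 kHz:", plain[2000:].max())
    print("dithered   peak above 1.5 kHz:", dithered[2000:].max())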
It's impossible to have a pristine signal when talking about film and tv. Having to transmit over wires and having to go from analog to digital will introduce quantization noise. Additionally raw capture and playback isn't an option because of transmission bandwidth limitations. And displays aren't in 12bit color depth, the most obvious visual example of poor quantization is color banding. While less obvious, film grain does the same for increasing the perceived texture. The quantization errors observable in video are different than in audio and I was trying to make an analogy that feels intuitive. I think the street picture in this link is helpful.
Film grain looks good until you hit a resolution (and color detail) that consistently exceeds the perceived boost that grain provides... which barely exists in pixelated screens in the first place. It provides some anti-aliasing and moire-reduction (at the cost of fuzziness), but that's about it.
When those smooth-then-pixelated-then-smooth lines transition to even-smoother, like 4K / HDR consistently hits, grain looks like the muddying noise that it really is.
I think it has more to do with the artificial boost in resolution not matching the “native” resolution of what the film grain would be in actual film.
Lots of stuff looks wrong with higher resolution and framerate; it's not that the thing (like grain) is inherently wrong, it's just being done poorly.
Foley sounds and lighting are the things that annoy me the most about modern cinema. Those two characters looking into the sunset in the background… where exactly is the cool white light illuminating their faces coming from?
Film grain that is an artifact of the recording medium I think is fine, because it is inherent to the medium (film).
Added film grain to digital to me is very dumb. It is hard for me to think that it is a strong authorial intent and not just a legacy idea that is basically assumed by default because that is how it used to be. Same thing bothers me in games. I can’t even begin to understand why someone wants film grain slapped over a game. Another reason why I think of this as an assumed convention instead of a meaningfully intended or utilized convention in modern media.
IMO grain and artifacts are a tool, like any other part of the post processing pipeline. Those tools can be abused, but they aren't valueless. I'd think that most people reach for these tools for two reasons, adding detail to bland sets, or adding authenticity to your setting. Our brains are weird and sometimes seeing less of the scene allows your mind to fill in the gaps better than the budget allows a texture artist or a set designer to do. The lack of post processing in general is what gives bad games that Unity Engine look that people hate.
As for the other reason, it's sort of like how we perceive people as older due to the clothing they wear and their hair style; our cultural reference for how old a piece of media is can be tricked by mimicking the imperfections of the filming techniques of the time. Take a photo of an old brick building with a Polaroid camera and ask folks when that was shot; I would bet that most folks would say the 80s or 90s more than the 2020s.
Knives Out is an interesting edge case there. It was shot almost completely on digital (one scene was on analog) but was post-processed by the Director of Photography to achieve the look of analog recordings. A process he has spent many, many hours perfecting.
Most film grain since the mid-2000s has either been fake or augmented.
Anything that went through a VFX pipeline was denoised (so that trackers, painters & rotoscopers can work more efficiently). Once the effects were applied, the film grain was put back in (even if it was to be lasered back out onto film).
Film stock improved significantly from 2000 to maybe 2008: grain got smaller, optical resolution got better (well, maybe the digital intermediate got better).
I wish Netflix allowed you to specify that you have a beefy internet connection and to stream higher bitrate 4K. I hate watching Netflix shows directly because of how crushed the blacks are. Ironically, pirated Netflix shows don't have that problem.
Netflix doesn't give you that option because even if you have a beefy internet connection, streaming high bitrate 4k content costs them a lot too. In Netflix's case (and a lot of streaming providers for that matter), I think the bottleneck is on their side and not the viewer's.
In a similar vein: is it possible to "fool" Netflix into giving me the highest bitrate stream?
I pay for Netflix Premium but currently don't have any 4k screens, but I imagine the extra bitrate would improve the quality at no cost (to me) when watching at e.g a 2560×1440 monitor.
Thanks to chroma subsampling done with most encoded content, your 8bit screen can likely display way more than the usual SDR content contains. Meaning less banding and better dark scenes.
A lot of the problem with compressing dark scenes is that 8-bit already throws away a lot of info, so the compressor takes a look, figures that since there isn't a lot of info it isn't an important part of the picture, and then proceeds to gut it. 10-bit color means you start off with a much cleaner image, which makes it easier for the compressor to do a good job.
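As a concrete example of handing the compressor a cleaner starting point, a sketch of a 10-bit re-encode (assumes an ffmpeg build with 10-bit libx265; the filenames and CRF are placeholders):

    # Re-encode with a 10-bit pixel format so the encoder has finer gradations
    # to work with in dark scenes.
    import subprocess

    subprocess.run([
        "ffmpeg", "-i", "source.mkv",
        "-c:v", "libx265",
        "-pix_fmt", "yuv420p10le",   # 10-bit 4:2:0 instead of 8-bit
        "-crf", "20", "-preset", "slow",
        "-c:a", "copy",
        "out_10bit.mkv",
    ], check=True)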
I'm a bigger fan of just de-noising video, and leaving it gone. I don't want it replicated, UNLESS the de-noised video is such a low-bitrate blocky mess that you absolutely NEED the noise in the video to hide it.
Conan The Barbarian on DVD is the noisiest film I can think of off the top of my head. Strong de-noising makes it look far, far better. But just about every movie out there benefits from modest de-noising.
If you're into horror you should take a look at Hellraiser. Good god is that a noisy film. I know there are a lot of purists when it comes to film grain, but that film needs some denoising applied.
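For anyone who wants to experiment with this, a sketch of a denoise pass using ffmpeg's hqdn3d filter (the strength values are just a starting point to tune by eye; the filename is a placeholder):

    # Spatial/temporal denoise of a grainy DVD rip before re-encoding.
    import subprocess

    subprocess.run([
        "ffmpeg", "-i", "noisy_dvd_rip.mkv",
        "-vf", "hqdn3d=6:4:9:6",     # luma/chroma spatial, luma/chroma temporal strengths
        "-c:v", "libx264", "-crf", "18", "-preset", "slow",
        "-c:a", "copy",
        "denoised.mkv",
    ], check=True)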
Can we please just get rid of the 'noise' look and embrace digital acuity already? This might be one reason I tend to prefer animated content over live action.
To many film artists, what you’re suggesting seems like the equivalent of hardcoding a custom massive bass boost into commercial headphones because “it makes music better”. It might seem better to you, but you’re modifying someone’s art, which is actively detracting from the film for many.
This analogy seems off. In your analogy, it is suggested that playback devices be made somehow worse, while the request is for the recording to not have the effect applied.
I’d say it is more like asking musicians to stop adding feedback — another phenomenon that began as an error but became an artistic tool — to their songs. Or requesting that sound engineers stop engaging in the loudness wars.
I recently learned that Darabont intended "The Mist" to be shot in B&W, and the studio wouldn't allow it. While I'm sure the cinematography would have been vastly different if filmed in B&W, a quick Handbrake recompress let me simulate it, and honestly it worked really well. In addition to the "Twilight Zone" vibe, the CGI monsters looked much better without color.
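For anyone curious to repeat the experiment without Handbrake, a sketch of the same recompress with ffmpeg's hue filter (filenames and CRF are placeholders):

    # Drop saturation to zero for a black & white re-encode, keep the audio as-is.
    import subprocess

    subprocess.run([
        "ffmpeg", "-i", "movie.mkv",
        "-vf", "hue=s=0",            # saturation 0 -> black & white
        "-c:v", "libx264", "-crf", "18", "-preset", "slow",
        "-c:a", "copy",
        "movie_bw.mkv",
    ], check=True)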
Why even use Handbrake and not just turn the saturation on your TV/display to zero? Seems like a much faster solution, and you can decide whether you want the color or B&W option with the same video file.
I'm a data-hoarding AV nerd who keeps hand-rips on a big NAS anyway, wasn't exactly out of my way :) I now have both the color and B&W versions on tap with the rest of the collection.
That's an hour or two longer than it takes to adjust saturation to 0 on the display device; even just setting up the encoding software takes longer, so it's a waste of time.
What's the benefit of doing this? You waste your time and you lose the color option; I see no advantages over just adjusting saturation.
Sure but it is a lot easier to attack people’s preferences if we consider having a preference to be equivalent to wanting dictatorial control over every creative project.
The artists' intent argument doesn't hold up, but is one that continues to be used as justification for "tradition". Most people aren't watching movies strictly in IMAX or even a color calibrated TV. Most default TV settings, even on the most high end devices, have things like dynamic saturation and motion smoothing on by default. The most popular brands of headphones do have custom eq from the factory, most people don't listen to studio monitor style flat audio. Most people don't realize there is a difference.
The meaning of art is up to the viewer, not the artist. There's nothing wrong with having a preference, like using things like temporal denoising or increasing/decreasing saturation. Even an IMAX movie theater isn't enough because that's not what the film was graded with. Pretending that art shouldn't be modified because it moves away from the artist's intent is a bad faith argument. There were certain House of the Dragon color grading decisions that made me feel very disappointed, and I was watching in Dolby Vision on a nice high dynamic range TV.
Artists' intent is useless if 99.999% of the population can't enjoy the actual art.
Your first paragraph seems to be “others are already doing it”. That’s a problem, not a justification for doing more of it. It’s a shame to see embedded analog grain stripped out because it was inconvenient for a dev team. It strikes me as a good example of the bad reputation developers hold for constantly prioritizing their own convenience over the end user’s experience.
To address your second point: I think that's an example of the "slippery slope" fallacy. I'm stating only that film noise and grain are a big part of the atmosphere of many movies and shouldn't be removed; I'm not suggesting that a viewing experience needs to be done in a world-class theater under perfect conditions to capture artistic intent. Anyone with a basic HD television can see analog noise and grain.
We fundamentally disagree on how removing film grain, or changing things like saturation, affects end user experience. I don't believe it harms the viewing experience in a meaningful way. I respect that you care about artists' intent, but I think you're assuming more people care or notice than actually do. I don't believe adhering so strictly to the artists' intent will lead to a better watching experience. For some films where only a 1080p master remains, if you don't denoise before upscaling and then adding in artificial film grain, it'll look terrible.
There is an endless list of poor Bluray and UHD Bluray transfers where the film grain makes it unwatchable. The unfortunate thing is that the nicer the display is, the sharper the image and the more distracting the grain. In some cases, I would trust the AV1 encoder/decoder to do a better job at adding a more reasonable amount of grain than the 4K transfer did. It's not a slippery slope; bad transfers and poorly color graded films/shows already exist. Mistakes exist in both directions: the UHD remaster release of Terminator 2 is an exceptionally horrible example of too much DNR. In my opinion T2 is bad enough to bring real actors into the uncanny valley.
The developers aren't doing anything different from what films do when going through an analog to digital transfer.
... you don't think changing things like saturation affects viewing experience in a meaningful way???
Jesus. Every engineer who's worked on color implementation just choked on their drink.
You can literally scroll down a little further and find blog posts on how iMessage green vs blue saturation creates tiny changes in readability. The human experience matters a lot more than what engineers (who can't tell apart two shades of women's fingernail polish) decide matters.
You're making a lot of incorrect and dismissive generalizations. If you're able to notice the differences in color calibration and film grain, more power to you. Most people can't explain the difference between the saturation in video when watching something on a phone compared to watching it on their TV, if they even detect it. For what it's worth, I just did a Farnsworth-Munsell 100 hue test and got a perfect score again. I'm not sure why telling apart fingernail polish shades would make a difference in how meaningful or noticeable changes in saturation or film grain are to people.
Readability is dependent on screen brightness and difference in contrast between background and foreground colors. These decisions aren't based on what engineers assume matters, it's dependent mostly on A/B testing and perception psychology. It's the same kind of research that went into making the OPUS audio codec and deciding which parts people will notice missing and which parts don't really make a difference. Tiny to moderate changes in visibility/readability make a much smaller difference than you imagine.
The problem is, like 24fps, that film grain makes things feel "filmic".
The Hobbit doesn't feel like a film in 48fps; it feels like a play, or something from TV.
Now, to your point that motion smoothing stops this: it's not strictly true. There are still artefacts that happen because it's film; this then becomes the marker for "film".
> The meaning of art is up to the viewer, not the artist.
Yes, but the delivery is key. Your point about IMAX doesn't really add up. Unless you are printing out IMAX prints (which most people don't), you'll at least do a technical grade for the IMAX to make it look like the version that the DoP/director agreed on.
House of the Dragon chose to make it look like that, mainly because I suspect the director was trying to chase the fashion for removing colour from everything. Grading, like most things, is prone to fashions, and pushing against "what came before" is par for the course. Some people do it well, others are shite.
To be fair, in the mid-2010s that's basically what consumer headphones did -- e.g. Beats, etc.
Before I got my Shure SE215-es I ended up doing a lot of research to figure out which headphones were actually going to give me a reasonable balance between treble and bass.
Headphones/speakers and music production are locked in a prisoner’s dilemma. If you get accurate reproduction equipment, you have to hope the content wasn’t mastered to sound good on Beats/etc.
In music, many engineers add "color": artifacts like tape noise and analog "warmth" (subtle saturation/distortion that were undesirable side-effects of the original circuitry).
But it is a tool to be used tactically, on a specific instrument or sound, to achieve specific effect. They don't master in 24bit/192kHz then just dump it to tape and send it.
If used as element to paint the scene noise is fine and can work well, but the trend to just slap the grain on anything "coz that's how movies looked before" is just silly.
That is typically done to genres of music where it makes sense though. A lot of hip hop, funk and stoner rock have a certain vintage aesthetic they're going for which tape noise and overdrive help achieve. The analogy to video would be VHS artifacts for retro-80s content or sepia tones for early/mid-20th century stuff.
To my eyes, a lot of film sets look too pristine when shot as-is, the grains help to make the sets look a little dirty and worn for that added realism.
Alternatively you can paint/airbrush every set piece to add that used/worn look, but I don’t think it’s economically feasible.
I know it’s referred to as film grain, but digital cameras also have grain, and the amount of grain will vary from shot to shot depending on lighting, shutter speed etc.
Adding extra fake noise to the movie solves a few problems such as ensuring a consistent look from scene to scene, as well as avoiding video artifacts like banding.
It’s also an artistic choice just like the choice to use telephoto lenses that blur backgrounds into fuzzy balls of light.
>I know it’s referred to as film grain, but digital cameras also have grain, and the amount of grain will vary from shot to shot depending on lighting, shutter speed etc.
2004's Collateral and 2009's Public Enemies were early all-digital movies full of digital noise, and it looks BAD.
The noise floor seems to get lower and lower with every generation of camera. Eventually, no one will remember a time when using a high gain (ISO) revealed sensor/pipeline noise.
Total tangent here, but in my opinion, most HPPD symptoms are optical/perceptual defects that everyone experiences all the time, but that normally our minds block, filter, or compensate for. Taking psychedelics makes people aware of these defects, and once you notice them, you can't "unsee" them.
My personal experience with this is that very mild astigmatism causes diffracted halos around light sources. I never noticed these until taking psychedelics. Now I always notice them, but they're certainly caused by defects in my eyes' lenses and not by some permanent direct effect of psychedelics. Similar with floaters, visual snow, POV effects and so on.
Many of these effects/defects become exaggerated whilst tripping and hence get noticed, then when people sober up and still notice them, they think it's something new, rather than realizing it's something that's been going on unnoticed the whole time.
I agree, but AFAIK the most common symptom is amplified “visual snow”. While everyone has some level of visual snow, an increased amount such that it’s visible during the day/on bright surfaces would certainly be annoying
I think many people complaining about visual snow really just never noticed that it's a normal part of human visual perception to have a bit of it (I remember noticing it as a kid, staring at the ceiling), but I do believe that there are these extreme cases that are real, and can't be explained with my model. It would be very annoying and concerning.
Similar to dithering, I believe film grain does help you resolve finer visual details. Having no texture would cause issues when trying to match things shot on different cameras as well. I agree with you that many movies and shows are guilty of adding so much grain it becomes distracting, but I was watching Schindler's List in UHD and loved how natural it looked. Things like clothing just felt more real and I didn't realize until the end that it was because of the expertly used film grain. The value of grain becomes obvious when you look at clips of that movie online.
That is an interesting way to frame it. I see it more as an expert use of 35mm film and lighting, and the grain is just present because it is inherent to the medium.
When going from film to digital, you have to modify the source noise in some way and there's no way to avoid it. A 1 to 1 transfer is possible, but I haven't come across any. Instead I have seen so many times where films either denoised too much or added too much artificial grain during the transfer. Starting from a very good source definitely helped, but film is limited in certain ways that digital isn't. Additionally, color grading for HDR10 and Dolby Vision was done. I didn't think that HDR would make a huge difference in a mostly black and white film but it does. The second picture comparison is what encouraged me to rewatch the movie again. [1]
Personally I find it helps with immersion for content of that era.
E.g. the non-digital aesthetics of Tarantino films contribute a lot to their style in my opinion.
Plastering fake film grain on top of something is not really my ideal, but going full film is also probably not something people can really afford in terms of time and budget...unless you're someone like Tarantino who's already at the top.
There are good reasons why both the VFX/post production and music production industries have done vast amounts of research into accurately modelling every 'imperfection' that resulted from the pre-digital processes. If you remove all the imperfections, you end up with something that is sterile.
Digital production methods are great at producing data in very predictable, reliable ways. But our brains are great at spotting patterns, and they can tell really quickly if something is unchanging and can be ignored. So while the challenge in the analogue world was creating order (a faithfully reproduced signal) from chaos, in the digital world it's flipped. We need to find ways of adding some chaos to make the resulting signal interesting to our brains.
We could do more FPS too, but the issue is that these defy the audiences' expectations, so suspension of disbelief is impacted, so the story suffers because of that. In fact, producers kind of have to do a lot of things purely because audiences now expect them on screen. TVTropes has a great article about this issue in general, not just regarding film grain and frames per second:
I think a lot of digitally introduced noise looks pretty bad, but it's hard to imagine noise removal actually improving movies that actually were shot on film.
Why is it nice? Is it that you're used to seeing it and its absence feels wrong? Wouldn't you become accustomed to its absence with exposure? I've found this to be the case for me.
Maybe it's like photography. I can edit a photo of a landscape to look how it looks in person, or I can edit it to look, to me, how it feels to see it in person, or edit it for some other feeling, of course.
this. In a hotel in Mexico with decent bandwidth. Netflix buffers to 20% and stops. Pirate/free sites play entire films no problem.
Netflix (and most of the other streaming services) are dying because they no longer focus on customer experience, but on management goals. The recent move to declare password-sharing illegal is evidence - the nail in the coffin of the recording industry was when they started suing their own customers too.
That first 20% of buffering is a lie. It'll manage to do that with no internet connection at all. Fair chance the hotel blocked netflix, or otherwise just doesn't have the bandwidth for streaming video.
HBO is pretty rough too. I travel a lot and I usually download a set of shows and movies to watch on various streaming services. 100% of the time, HBO fails to play its downloaded content.
Have you ever tried recording NTSC video to audio cassettes? I tried it once around the end of the CRT era and it's just good enough to keep a fuzzy, mostly-in-sync picture.
That wouldn't scale because each new set of printer dots would require a complete re-encode. You can find raw rips on the high seas, so anything less than hard-coded dots would just get stripped anyway. If you take rips from two sources, you could also just subtract one identical frame from another to isolate the dots and remove them.
This sounds like they're taking the source material, removing the film grain artificially (which must degrade the data a lot), then re-adding it. It's interesting that they aren't starting from a grain-free version - at least for new productions that should be available.
Many productions specifically want film grain. Directors and DPs have very concrete aesthetic intentions when they go for film grain, so you're likely to find lots of movies with film grain added in post when they were shot on almost noiseless digital cameras. (For instance, one might scan the grain from an old film stock and apply it to the digital footage later.)
Netflix is only optimizing the pipeline here, trying not to mess with the artistic intent.
I understand that. I would expect Netflix to tell the production crew "I want the film grain on a separate layer so we can apply it after compression".
High-end digital cinema cameras are not _that_ grainy per se. The sensors are very clean.
The level of grain you get from digital cinema photography is mostly by artistic choice and added through the camera or in post production. Sometimes the movie is shot digitally and transferred to film, then scanned. This was done for Dune (2021), for instance.
Also, ARRI (who make the most renowned cameras for cinematic use) now specifically let you choose a grain texture on the Alexa 35 that is imprinted in the digital material. [1]
> Also, ARRI (who make the most renowned cameras for cinematic use) now specifically let you choose a grain texture on the Alexa 35 that is imprinted in the digital material. [1]
That is so sad. Such a waste if it can just be done during decoding.
From a technical perspective that would make sense, but it's unrealistic to achieve this in practice. It would be impossible for the entire market to support that. In order to maintain the artistic intent across different platforms, it's better to imprint it in the source. I guess lots of directors and DPs are frustrated enough with motion smoothing and other "enhancements" on modern TVs.
The same way Dolby Vision has kind-of enforced better color consistency, I imagine grain could be done as well.
One could call it something like Dolby Vision Film Enhancements, that would just mandate decoder support. Those who don't support it get a fallback, like the usual.
First you take video with no grain artifacts. Then you use something like Filmlook to add grain. (Dust and scratches are optional.) Then, in preparation for transmission, you remove the grain, characterize it, and transmit grain statistics to the receiver, which re-inserts fake grain.
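For anyone wanting to try that denoise/characterize/re-synthesize step end-to-end today, libaom's grain-synthesis path does roughly this in one go. A hedged sketch, assuming an ffmpeg build whose libaom-av1 wrapper exposes denoise-noise-level (the level and CRF values are illustrative):

    # AV1 encode that denoises the source, fits a grain model, and signals it
    # in the bitstream for the decoder to re-apply.
    import subprocess

    subprocess.run([
        "ffmpeg", "-i", "grainy_master.mkv",
        "-c:v", "libaom-av1",
        "-crf", "30", "-b:v", "0",       # constant-quality mode for libaom
        "-denoise-noise-level", "25",    # enables denoising + film grain table
        "-c:a", "copy",
        "out_av1_grain.mkv",
    ], check=True)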
Grain is probably something that will disappear in a few years as the old film-oriented directors die off. Along with 24FPS.
24FPS is gonna stick around for a while. I've not seen any high framerate movies that didn't make the movie set look like a...movie set. This is great for sports and documentaries where a better visualization of the subject is always better, but in movies it just makes everything look like it was manufactured as quickly and as cheaply as humanly possible--because it was. I feel like high framerate, even more than 4k, lets you "see" the set and the makeup and the props as a hastily crafted illusion. Everything looks fake. Even the body language looks like acting in high framerate.
Meanwhile, in sports and documentaries everything looks more real, because it is real; there's no set, no props, no act--well less acting. More acting in soccer.
I'm sure at some point in the future set designers and the props departments and all that will figure out how to make 48FPS and above look good, but we don't live in that future yet.
> 24fps I suspect will be displaced by some sort of VR as the predominate high-end story medium.
I seriously doubt it. For one, VR headsets are broadly incompatible with the theater business. You can make it work at small scales or in amusement parks (where the park admission and food sales are where the money is made), but broadly it doesn't work because the headsets are expensive and you need to disinfect them to prevent lice. It would be like bowling alley shoes except you wear them on your face... very gross.
At home, wearing a VR headset to watch a movie works for a solo experience, but ruins the fun of watching movies with others. VR is bachelor tech, doomed to be niche.
The beauty of film grain is back in the digital world. The grain will be reduced before encoding and transmitted via a grain table in the AV1 bitstream. After decoding the grain is added back and now it is possible to enjoy film look even at ultra low bitrates.
This is a very subjective thing to do! I don't like it.
I really dislike chroma subsampling and Bayer-pattern sensors and all the other little steps in the video pipeline that sacrifice color density. I'm stuck in a world where color is always blurry but apparently, everyone else seems fine with it.
That's true. When you see the raw data coming from a CCD it doesn't look good. I understand a lot of voodoo is required to get the beautiful digital images we have come to expect.
But for some time now that magic was employed with a perceptual goal to make the raw data look "correct".
I'm not sure if I would object to fake depth-of-field or fake film-grain if it were done absolutely convincingly or if I just dislike the whole idea of trying to add "artistry".
What's appealing about film grain isn't the grain as such so much as it is the knowledge that it's an artifact of an analog process in a world where nigh everything is digitally produced. It has a warmth that's inextricably tied to a sense of authenticity. Removing the real grain and adding synthesized faux-grain doesn't just remove that attraction, it feels actively deceptive, even if it's indistinguishable from the real thing visually.
I used to work in the VFX industry, specifically DI.
However I can't find any reference to it online. There are loads of "degrain" workflows for things like Nuke and AE, but that's not really the kind of evidence I would accept if I were you!
Appreciate the insight. When you say even stuff shot in film is de and re-grained, are you referring to works shot recently? Or do you mean that the sort of restorations put out by Criterion/BFI/etc have fake grain added in?
Would be interesting, and to me a bit disappointing, if so.
The reason why it's good is that it's not a realtime scanner (like a Spirit), so it has the time to do "perfect" registration. This means that you should be able to re-scan the same thing twice, and any VFX/added-in stuff will still line up.
It has the light source well away from the film, so it doesn't heat it up. It also does an infrared pass to allow for dust busting (that is, the removal of blemishes from the film).
This scanner is the one you'd use for restorations/rescans and "digital remastering".
With that in mind, it's capable of scanning at 8K (which is then down-res'd to 4K, from memory).
If you are doing a straight film-to-digital with no remastering (which are pretty rare) then thats what you'd do.
Now, to answer your question!
For restoration, there might be a bit of degraining and re-painting & reconstruction, followed by a regrain. This is because film grain is really hard to rotoscope/paint over: the edges all move, and it's a general pain. So for some limited restorations there will be some "fake" grain, but only as much as needed to make it look like the rest of the film.
Think of it as like a conserved oil painting, a light cleaning, some post processing to bring out the colours/contrast, bam off to digital.
for "re-mastering" there might be some upscaling. Now, its my understanding that upscalers remove grain, because otherwise you'd just get large amounts of noise on the screen, rather than detail. But there are a large range of ways to upscale. The gold standard is basically re-painting everything: https://www.youtube.com/watch?v=IrabKK9Bhds or with a 4k remaster of blade runner: https://youtu.be/tN0MtsIz6H0?t=15 where they have spent a lot of time removing film noise.
in those, the film grain is removed almost completely.
the BFI from what I have seen only do restoration and dust removal.
But my perception is entirely the opposite, that grain is the result of low quality film, low quality digital sensors, or a gimmick. Why should artists be targeting your perception over mine?
Again, not making a normative statement about the superiority of film grain or analog or anything. If your perception is that grain is a gimmick, then you should likewise not want it synthetically reinserted into the stream. The point I'm making is that those who do like film grain overwhelmingly hold that view in part because of the source of its creation and not because of its detached-from-context aesthetic effect per se -- much the same way that people enjoy the warm crackle of a vinyl record, but don't want a facsimile of it overlaid on their Spotify audio.
So, for the majority of those that like film grain it feels deceptive, and for those that don't it's an added annoyance. It's only appealing to what I surmise is a vanishingly small subset of Netflix's customers that just want the visual artifact of grain and don't care where it came from or if it is original to the analog transfer.
H264 and H265 have an optional similar feature for grain synthesis, but very few decoders implement it so it's not something you'd find at the codec level in blurays.
Of course, film grain is a common effect added during the editing process.
Strictly speaking ("film noise"), I suppose you're right. But we are all aware that there is such a thing as CCD noise, right? That's not being synthesized.
I wonder if one is preferable to the other: that is, is CCD noise worse than emulsion noise? My sense is that the CCD, with the Bayer filter in place, gives you wild chroma noise while film gives more of a tonal noise.
Oh, I hadn't even thought of that! Yes, digital cameras would of course not have film grain!
I was making a more stupid point: these days the image you are seeing on screen is always synthesized from 0s and 1s, no matter how that stream of data was originally produced (i.e. by scanning actual film stock).
The noise floor for cameras (when working within the correct parameters) is ridiculously low.
Modern slow 35mm film (now there are loads of types and speeds) has an optical resolution of something like 5-12 megapixels; a full-frame CCD easily has an optical resolution of 50MP.
We are probably talking about CMOS here, right? I haven't seen CCD camera for like 15 years, and that had resolution like 10 mpx. I know CCD chips are still used in astrophotography because of their other qualities, but aside from that they seem to have pretty much died out.
[1] https://www.youtube.com/watch?v=O08PQRA6MLo