Dithering was never a compression technique; it's a filtering technique for reducing banding on devices/displays/images that have a small color palette.
In fact, even in the '80s, dithered images were often larger than their un-dithered counterparts, sometimes by a lot. But it was worth the trade-off when the alternative was an image with so much banding that it could be confused for a European flag.
Unless you're trying to display your image on a retro console (or have aesthetic reasons for wanting to achieve that effect), you should not use dithering. Essentially all modern devices have a sufficiently enormous color palette, and modern compression algorithms use other techniques to achieve their efficiency.
In fact, modern compression will do a much better job giving you a smaller file size if you don't use dithering.
Edit:
Don't get me wrong, dithering is a super interesting topic, and designing a good dither can be surprisingly hard; it's just not going to help you if your goal is to shrink the images on your website the way the article claims.
If you haven't seen the trailer for "Return of the Obra Dinn" you owe it to yourself to take a look:
https://youtu.be/ILolesm8kFY
Super cool aesthetic, and writing that shader must have been all sorts of difficult/fun. But you don't do this sort of thing for compression efficiency.
Directly related to dithering and compression, Return of the Obra Dinn is, to a certain extent, an "unstreamable game".
When this streamer on Twitch tried to play it [1], the quality of his composited webcam would immediately drop as the compression algorithm spent its bitrate on the high-frequency dithered sections of the screen. As soon as he aimed the camera away from the fancy rendering (at the sky, for instance, or the menu), the video quality would immediately improve. Really a fascinating clip.
The devlog [1] for that game is incredibly interesting; I'd strongly recommend it to anyone interested in game development. If I recall correctly, this compression problem almost made him reconsider the whole game - how can your indie game get any momentum if it's unwatchable on YouTube/Twitch? Luckily for us he persisted. Obra Dinn is one of the most interesting games I've ever played.
I hadn't thought of that, but this is hilarious, and illustrates my point perfectly.
If your compression algorithm isn't aware of the exact dither you're using, the decompressor can't reproduce the dither on the other end from its prediction rules and the surrounding image data alone. The compressor has to encode every single dither pixel as an expensive "Hey decompressor, you're never going to be able to guess this pixel value, so here's the whole thing" residual value.
This is also why old image compression algorithms that were aware of simple dithers (e.g. a handful of fixed grid patterns) could produce small-ish images that looked slightly better than un-dithered ones, but still kind of bad. But as soon as you customized the dither to a more random-looking pixel arrangement that looked significantly better, the file size would explode -- the compressor was blissfully unaware of the more complicated dither and had no choice but to encode all of the seemingly random pixels directly.
Do you know if they tried the other rendering modes (shaders?) included in the game? From memory, there were at least five, and some of them looked more suitable for livestreaming.
> In fact, modern compression will do a much better job giving you a smaller file size if you don't use dithering.
Not necessarily. The idea of dithering is to use a representation with a smaller color space, meaning fewer bits per pixel, possibly palettized.
The idea is to control where the lossiness "damage" happens. You deliberately discard information in the area of color depth, rather than whatever the modern compression might choose to discard. It's possible you could get results that to an observer appear subjectively better per file size.
Imagine a photo of masonry brickwork. What's important is the edges between the brick and mortar, while you don't really care about the grain within a brick. General-purpose image compression tends to smear sharp edges like that. It's possible you could do subjectively better by reducing the color depth, intentionally discarding more of the data you don't need (using dithering to keep a little of it), while keeping more of the information you do want in the sharp edges.
I'm not claiming any of this would pan out for real-world use, but there are certainly hypothetically feasible cases for dithering.
In practice, video-inspired image compression techniques (e.g. HEVC Main Still Picture) will do a significantly better job on that masonry brickwork image.
The algorithm is already looking for patches of image that have moved by possibly hundreds of pixels (to sub-pixel accuracy), both within a single frame and across multiple frames.
Basically, it'll find a patch of image that, when shifted horizontally and vertically by a certain amount, looks pretty close to the patch of image that it's trying to encode. The compressor will then say "just copy that region that you decompressed half a frame ago to here first, and now the residual values I give you are differences between the first patch and the second."
Even if the brick/mortar phase is slightly off (e.g. from perspective or lens effects), this will give you about an order of magnitude more compression efficiency (and perceptual quality) than anything that tries to use color depth to preserve edges.
Your example is about saliency and perception. Modeling these to guide lossy compression is an important feature of high-end encoders, but that is largely independent of compression techniques used.
It's possible to do optimal-ish highly compressible dither (it's been done for LZW), but the results are still pretty disappointing compared to even old JPEG.
Specifically, modern formats use gradients where possible. If something transitions smoothly from one color to another, they can represent that as what it is - to oversimplify, one pixel is the first color, a different pixel some distance away is the second color, and the decompression will generate the intermediate colors for all the pixels between those two. If you dither manually, the encoder has to encode each pixel individually, because the smooth transitions are gone.
Really? In my experience, images compressed really hard with modern techniques look pretty terrible. Dithering is much less efficient if you're looking for low loss, but if you're trying to get something quite lossy, it looks better to my eyes.
I helped with something recently that makes a great case study. We launched a new website for our game studio and went all-out on supporting modern compressed images: AVIF and WebP with PNG fallback.
Originally, the hero image came from art in which the glow around the planet was dithered. The resulting PNG was over 2 MB, resisted crushing, and didn't downscale well. Trying to use AVIF and WebP with aggressive compression made the image look awful.
We asked if they could remove the dithering, and with some tweaking we suddenly got super great compression: 50 kB as AVIF, 68 kB as WebP, 797 kB as PNG (oof!).
This is a large banner image. Smaller images can get _much_ smaller with AVIF and WebP with no sacrifice of quality. It took some tweaking, and the tools were pretty bad in my experience. We wrote a couple of utilities to do this, fiddled with knobs for a while, and it turned out great.
EDIT: Looking at this page again closely, I can see interesting artifacts because of AVIF. Look at the robo-dog's left ear! You could probably use slightly higher settings than we did.
Not sure how you created those AVIFs. The reference AVIF encoder[0] wants to use 4:4:4 chroma, but it looks like that hero image is 4:2:0. There is a small size hit for 4:4:4, but edges around saturated colors are much better.
Sometimes it is helpful to first reduce the number of colors (preferably to 256, if that doesn't cause too much banding, depending on the number of color shades used). PNG then usually compresses a lot better. PNG compresses badly when the image contains too many different colors.
> PNG compresses badly when the image contains too many different colors.
Old trick to squeeze a few KB out of a PNG: use the Posterize filter in Photoshop with very light settings. Basically, it just reduces the number of distinct colors.
Nah, just use Photoshop's "Export -> Save for Web (Legacy)", then set the file to PNG-8.
Now you can mess around with the number of colors in the color table, customise the color palette selection algorithm, dithering algorithm and dither amount.
The loop filters in modern image/video compression systems don't know anything about dithering. If you want a pixel-perfect dither like what you remember from the '80s, you're going to need way more bits to encode the image, because those pixels have to be encoded as expensive residuals that can't be ignored, rather than as pixels that are obvious (to the decompressor) because they can be inferred from the decompressed pixels in neighboring regions.
The best counterexample for the claim that dithering is a good idea is the post itself. It shows a high quality (albeit downscaled) picture of the dog that is only twice as big as the horrible-looking dithered versions. And at 30 KB vs. 14 KB, HTTP header sizes already start to make the marginal savings questionable.
https://imgur.com/a/eBxFlL5 has 4 images next to each other - the original scaled-down image, the 14 KB dithered image, a 14 KB JPEG, and an 8 KB WebP (both the JPEG and the WebP were encoded at the full 500x500 resolution and downscaled afterwards, since in my experience that often yields better results).
You should still use dithering, even with modern palettes. You can absolutely see banding on undithered 24 bit images. 256 color levels per channel is barely adequate, even when optimally allocating those levels perceptually (gamma correction is a poor man's approximation of this).
Yes, dithering will increase the file size of a losslessly compressed image. That's because it contains more information. If you're sufficiently bothered by file size to degrade color accuracy, why are you using lossless compression to begin with?
Dithering is an essential component of any digital signal processing pipeline, not some weird retro artifact.