
When your GPU is rasterizing the edges of polygons, it computes (sometimes just approximates) how much of a little square is covered by that polygon and uses that as the weight when averaging what color to assign to that pixel. The resulting rendered image is most correctly interpreted as an array of little squares, not point samples and definitely not truncated gaussians.
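To make that weighting concrete, here is a minimal sketch in C (hypothetical names; it assumes the rasterizer has already produced a coverage fraction in [0, 1] for the pixel):

    /* Blend a polygon's color into a pixel, using the fraction of
       the pixel square the polygon covers as the weight. */
    typedef struct { float r, g, b; } Color;

    Color shade_pixel(Color background, Color polygon, float coverage)
    {
        Color out;
        out.r = coverage * polygon.r + (1.0f - coverage) * background.r;
        out.g = coverage * polygon.g + (1.0f - coverage) * background.g;
        out.b = coverage * polygon.b + (1.0f - coverage) * background.b;
        return out;
    }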



Actually no, that's not the case: rasterization is on-off at the hardware level. You need anti-aliasing to get the behaviour you are describing, and it very rarely works the way you describe - the best we have right now in terms of quality uses multisampling.


2D graphics uses coverage-based antialiasing which computes the coverage over the pixel square. https://nothings.org/gamedev/rasterize/
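For a flavor of what "computes the coverage over the pixel square" means, here is a tiny C sketch of the easiest exact case (an assumption-laden illustration, not the linked article's actual algorithm):

    /* Exact coverage for the common case where a polygon edge enters
       the unit pixel through the top side and leaves through the
       bottom side: the covered region left of the edge is a trapezoid.
       Assumes 0 <= x_top, x_bottom <= 1; real rasterizers also handle
       the clipped cases where the edge crosses the pixel's sides. */
    float edge_coverage(float x_top, float x_bottom)
    {
        return 0.5f * (x_top + x_bottom);
    }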


And that is not done by the GPU. The top comment started with "your GPU does...."

In any case, that is an extreme edge case of software renderers that doesn't even come close to a significant part of 2D graphics in real life. Indeed, most 2D graphics is really flat 3D graphics done using GPU routines, and it does not work that way. I know that some extreme edge cases do use coverage-based rasterization, but:

>You need anti-aliasing for the behaviour you are describing, which very rarely works the way you describe

This is a case of anti-aliasing (read the title of the article) and is extremely rarely used. It's essentially irrelevant when discussing how graphics work in real life.

I really cannot overstate just how rarely software rasterizers are used for interactive graphics in 2020; coverage-based rasterizers are an even smaller subset of that. It really makes a ton more sense to use a GPU rasterizer and use MSAA or oversample the whole image.
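"Oversample the whole image" just means rendering at a higher resolution and box-filtering back down. A sketch of the 2x case in C (hypothetical buffer layout, grayscale for brevity):

    /* Downsample a 2x-oversampled image with a box filter.
       src is (2*w) x (2*h) pixels, dst is w x h. */
    void box_downsample_2x(const float *src, float *dst, int w, int h)
    {
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                const float *p = src + (2 * y) * (2 * w) + (2 * x);
                dst[y * w + x] = 0.25f * (p[0] + p[1] +
                                          p[2 * w] + p[2 * w + 1]);
            }
        }
    }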


2D graphics on the GPU is an open research problem. My understanding is that piet-gpu and Pathfinder, both state-of-the-art research projects, use coverage-based solutions on the GPU. MSAA 16x, which is incredibly expensive on mobile devices, only provides 16 distinct coverage values, and in my limited tests it gave poorer quality than a coverage-based solution.
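The "16 coverage values" point is just quantization: with n sample points, coverage can only come out as some k/n. A best-case model in C (real sample grids are noisier than this):

    #include <math.h>

    /* MSAA with n samples can only express coverage as k/n: the
       fraction of sample points the polygon happens to hit. */
    float msaa_quantized_coverage(float true_coverage, int samples)
    {
        return roundf(true_coverage * (float)samples) / (float)samples;
    }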


2D graphics on the GPU is not an open research problem in practice. In real life you either use Direct2D or OpenGL/Vulkan/Direct3D and just... ignore a dimension.
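"Ignore a dimension" in practice means an orthographic projection and a constant z, so the 3D pipeline draws flat 2D. A sketch of the matrix (column-major, OpenGL-style; illustrative only):

    /* Map (x, y) in [0,w] x [0,h] to clip space, y-down, z ignored. */
    void ortho_2d(float m[16], float w, float h)
    {
        for (int i = 0; i < 16; i++) m[i] = 0.0f;
        m[0]  =  2.0f / w;   /* x: [0,w] -> [-1,1]         */
        m[5]  = -2.0f / h;   /* y: [0,h] -> [1,-1], y-down */
        m[10] =  1.0f;       /* z passes through, unused   */
        m[12] = -1.0f;
        m[13] =  1.0f;
        m[15] =  1.0f;
    }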

Yes, MSAA 16x is incredibly expensive on mobile devices, and it provides a worse result than a coverage-based approach. But MSAA 16x is done by an ASIC and is simpler than coverage-based AA. A GPU ROP trounces any programmable compute unit as far as performance goes; it's not even close, because it is specialized, dedicated silicon. And in practice MSAA 8x is more than good enough, especially on mobile devices. You certainly will not notice a difference on a phone with a density of 563 dpi between MSAA 4x and 8x, let alone between 16x and coverage-based.
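For reference, the resolve step that hardware performs is conceptually just this (a sketch, not any vendor's actual implementation):

    /* MSAA resolve: the final pixel is the average of its n
       per-sample colors. */
    typedef struct { float r, g, b; } Color;

    Color msaa_resolve(const Color *samples, int n)
    {
        Color sum = { 0.0f, 0.0f, 0.0f };
        for (int i = 0; i < n; i++) {
            sum.r += samples[i].r;
            sum.g += samples[i].g;
            sum.b += samples[i].b;
        }
        sum.r /= (float)n;
        sum.g /= (float)n;
        sum.b /= (float)n;
        return sum;
    }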

At those densities, the resolution of the phone is literally greater than the optical resolution of your eyes. There is no point in anything beyond MSAA 4x in reality, and a lot of people with displays in the 200 dpi range use MSAA 2x when they could use MSAA 8x, because they really can't tell the difference.
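A back-of-the-envelope check of that claim, assuming roughly 1 arcminute of visual acuity and a 30 cm viewing distance (both assumed numbers):

    #include <stdio.h>

    int main(void)
    {
        double arcmin  = 3.14159265358979 / (180.0 * 60.0); /* 1' in rad */
        double dist_mm = 300.0;              /* ~30 cm viewing distance  */
        double dot_mm  = dist_mm * arcmin;   /* smallest resolvable dot  */
        printf("eye limit ~ %.0f dpi\n", 25.4 / dot_mm);    /* ~291 dpi */
        return 0;
    }

On those assumptions the eye tops out around 290 dpi, well under a 563 dpi panel.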

The final nail in the coffin is that these compute-based rasterization engines so far more or less match the performance of CPU rasterization. This is simply unacceptable when direct GPU rasterization can give nearly indistinguishable results at multiple times the performance and much lower power usage. This is literally taking something done by a highly optimized ASIC on a 7-12 nm process and trying to do it in compute for a tiny improvement. It's absurd.


Some rasterization algorithms are done this way, but they are arguably getting suboptimal results, and would do better to apply some other filter instead of a box. (As pixels keep getting smaller and smaller it matters less, though.)
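For example, a tent (triangle) filter spreads each sample's contribution over a two-pixel footprint instead of the box's single pixel; a minimal C sketch:

    #include <math.h>

    /* Tent filter weight: a sample at sample_x contributes to the
       pixel centered at pixel_x with weight 1 - |distance|, and
       nothing beyond one pixel away. */
    float tent_weight(float sample_x, float pixel_x)
    {
        float d = fabsf(sample_x - pixel_x);
        return d < 1.0f ? 1.0f - d : 0.0f;
    }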

> resulting rendered image is most correctly interpreted as an array of little squares

Still nope. What matters in the end is the viewer’s eyes/brain reconstruction of the image, and given the frequency response of human eyes to typical screens at typical viewing distances, there is little if any practical difference between convolving some eye-like reconstruction filter with pixels thought of as uniform-brightness squares vs. point samples.
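A 1D toy comparison of the two interpretations, using a Gaussian as the eye-like filter (all parameters here are made up for illustration; a 1-wide box convolved with a Gaussian is just a pair of erfs, so the two reconstructions land very close):

    #include <math.h>
    #include <stdio.h>

    /* Reconstruct at x from samples s[0..n-1] treated as points. */
    double recon_points(const double *s, int n, double x, double sigma)
    {
        double num = 0.0, den = 0.0;
        for (int i = 0; i < n; i++) {
            double w = exp(-0.5 * ((x - i) / sigma) * ((x - i) / sigma));
            num += w * s[i];
            den += w;
        }
        return num / den;
    }

    /* Same, but treating each sample as a 1-wide box: the weight is
       the Gaussian integrated across the box, i.e. a pair of erfs. */
    double recon_boxes(const double *s, int n, double x, double sigma)
    {
        double num = 0.0, den = 0.0;
        for (int i = 0; i < n; i++) {
            double a = (x - i + 0.5) / (sigma * sqrt(2.0));
            double b = (x - i - 0.5) / (sigma * sqrt(2.0));
            double w = 0.5 * (erf(a) - erf(b));
            num += w * s[i];
            den += w;
        }
        return num / den;
    }

    int main(void)
    {
        double s[] = { 0, 0, 1, 1, 0, 0 };   /* an edge, in pixels */
        for (double x = 1.0; x <= 4.0; x += 0.5)
            printf("x=%.1f  points=%.4f  boxes=%.4f\n", x,
                   recon_points(s, 6, x, 0.8),
                   recon_boxes(s, 6, x, 0.8));
        return 0;
    }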

If you want to improve your results you’ll get much more bang for your buck from considering RGB subpixels to be point samples offset the appropriate amounts for the given physical display than you’ll get from thinking of any of them as being an area light source instead of a point.
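A sketch of that idea, assuming a standard horizontal RGB-stripe panel (the 1/3-pixel offsets and the scene callback are illustrative):

    /* Sample the scene at the physical positions of a pixel's R, G
       and B subpixels: G at the center, R and B offset by a third
       of a pixel to either side. */
    typedef double (*SceneFn)(double x, double y);

    void sample_subpixels(SceneFn scene, double px, double py,
                          double *r, double *g, double *b)
    {
        *r = scene(px - 1.0 / 3.0, py);
        *g = scene(px, py);
        *b = scene(px + 1.0 / 3.0, py);
    }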



