Hacker News

Is there a theoretical ideal of what line-drawing and anti-aliasing algorithms are aiming for? Is there a perfect line-drawing algorithm you can use if you have the time?



There is a very good signal theoretic framework to discuss this.

You can start with, for example, Alvy Ray Smith's "A pixel is not a little square": http://alvyray.com/Memos/CG/Microsoft/6_pixel.pdf

But in the end, the sampling kernel you choose to sample your line with, and what sort of geometric primitive your line is in the first place, come down to matters of taste. So there is a very good theoretical basis for discussing all of this, but it provides no ultimate, absolutely correct answer.
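To make the "kernel is a choice" point concrete, here is a small sketch (names are illustrative, not from any library) that rasterizes one column of an ideal 1-unit-wide horizontal line with two different kernels. A box kernel (the "little square" model) and a tent kernel give different pixel values for the same geometry:

```python
# Sketch: the pixel values you get for the *same* ideal line depend on the
# sampling/reconstruction kernel you choose. All names here are illustrative.

def line_profile(y, y0=1.3, w=1.0):
    """Indicator of an ideal horizontal line of width w centered at y0."""
    return 1.0 if abs(y - y0) <= w / 2 else 0.0

def sample_pixel(kernel, py, n=1000, support=1.0):
    """Numerically convolve the line profile with `kernel` at pixel row py."""
    total = wsum = 0.0
    for i in range(n):
        dy = -support + (2 * support) * (i + 0.5) / n  # midpoint rule
        k = kernel(dy / support)
        total += k * line_profile(py + 0.5 + dy)       # pixel center at py + 0.5
        wsum += k
    return total / wsum

box  = lambda t: 1.0                      # "little square" kernel
tent = lambda t: max(0.0, 1.0 - abs(t))  # triangle (bilinear) kernel

for py in range(3):
    print(py, round(sample_pixel(box, py), 3), round(sample_pixel(tent, py), 3))
```

Neither column of output is "wrong"; they are just different answers to the same rasterization question, which is exactly why the choice ends up being a matter of taste (or of the display, per the replies below).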


The signal theoretic framework depends on bandwidth limitations that often don't hold for signals we want to produce. Sharp lines and boundaries are reasonable things to want.

I'd say the sampling kernels aren't fully matters of taste, but matters of the display or sampling technology instead. Pixels on CRTs are indeed not little squares, but pixels on anything else pretty much are -- though they can be non-contiguous squares, with different patterns for different colors, even, which eliminates any simplicity that the little-square picture intuitively captures.

Fortunately, all this starts to matter less and less as pixel size and spacing gets smaller and smaller.


I’d say no, because there are different goals that can be incompatible, i.e., “perfect” is not well defined. This article’s focus is on accuracy and speed but not visual quality. Antialiasing goes for visual quality and sometimes speed, but most techniques land in an area that balances those or leans a bit one way or the other, while very few achieve the known maximum of either, let alone both at the same time.

Sibling comment mentions the famous paper “A Pixel is Not a Little Square”, the implication being that using squares when computing analytic coverage for antialiasing is not “perfect”. The problem is we don’t have a single definition of perfect, because it depends on what display device you’re using. CRTs and LCDs and printers all have completely different ‘pixels’, so no single solution exists that works for all of them. Combine that with the fact that getting close to perfect for any given device is very computationally expensive, and you can see why we don’t usually even shoot for perfect.
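For what "analytic coverage with squares" means in practice, here is a minimal sketch (function names are mine, not from any particular renderer): the exact fraction of a pixel square lying on one side of an edge, computed by clipping the square against the half-plane and taking the area. This is the building block for box-filter analytic antialiasing of line and polygon edges:

```python
# Sketch: exact box-filter coverage of a pixel by the half-plane
# a*x + b*y <= c, via Sutherland-Hodgman clipping plus the shoelace formula.

def clip_halfplane(poly, a, b, c):
    """Clip polygon `poly` (list of (x, y)) to the half-plane a*x + b*y <= c."""
    out = []
    n = len(poly)
    for i in range(n):
        (x0, y0), (x1, y1) = poly[i], poly[(i + 1) % n]
        d0, d1 = a * x0 + b * y0 - c, a * x1 + b * y1 - c
        if d0 <= 0:
            out.append((x0, y0))
        if (d0 < 0) != (d1 < 0):  # edge crosses the boundary
            t = d0 / (d0 - d1)
            out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out

def area(poly):
    """Polygon area by the shoelace formula."""
    n = len(poly)
    return 0.5 * abs(sum(poly[i][0] * poly[(i + 1) % n][1]
                         - poly[(i + 1) % n][0] * poly[i][1]
                         for i in range(n)))

def pixel_coverage(px, py, a, b, c):
    """Fraction of the unit pixel at (px, py) satisfying a*x + b*y <= c."""
    square = [(px, py), (px + 1, py), (px + 1, py + 1), (px, py + 1)]
    return area(clip_halfplane(square, a, b, c))
```

Note this is already noticeably more work per pixel than point sampling, and it still bakes in the little-square assumption; swapping in a device-appropriate kernel (a Gaussian spot for a CRT, subpixel stripes for an LCD) replaces the cheap area computation with an integral, which is the computational expense mentioned above.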



