
No, they're mathematically exactly the same. Just a different way of thinking about the same mathematics that the author argues (probably correctly) makes more intuitive sense.


The development environment would have to be set up so that the parts of the language using GA primitives could be efficiently transpiled into the corresponding matrix operations, to avoid losing performance on the GPU at runtime.
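To make the "transpile to matrices" idea concrete, here's a minimal sketch. It uses the standard fact that 3D rotors are isomorphic to unit quaternions (w, x, y, z): the rotor sandwich product R v ~R is computed directly, and also "lowered" once into a plain 3x3 rotation matrix so the per-vertex hot path is an ordinary matrix multiply. The function names (`sandwich`, `lower_to_matrix`) are illustrative, not from any real toolchain.

```python
import math

def quat_mul(a, b):
    # Hamilton product of quaternions (w, x, y, z).
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def sandwich(q, v):
    # GA-style rotor application R v ~R, written in the
    # isomorphic quaternion form q v q*.
    w, x, y, z = q
    conj = (w, -x, -y, -z)
    return quat_mul(quat_mul(q, (0.0,) + tuple(v)), conj)[1:]

def lower_to_matrix(q):
    # "Transpile-time" step: turn the rotor into a 3x3 rotation
    # matrix once, so the runtime loop is a matrix multiply.
    w, x, y, z = q
    return [[1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]]

def mat_apply(m, v):
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

# Rotor for a 90-degree rotation about z (half-angle construction).
h = math.pi / 4
q = (math.cos(h), 0.0, 0.0, math.sin(h))
v = (1.0, 0.0, 0.0)
direct = sandwich(q, v)
lowered = mat_apply(lower_to_matrix(q), v)
```

Both paths rotate (1, 0, 0) to (0, 1, 0); the point is that the matrix form is derivable mechanically, so the GA notation can stay in the source language.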


It's unclear that you really need to convert to matrices before sending off to the GPU. [1] finds that quaternions are faster; I find that questionable (does it scale?), but it shows that the two are comparable.

1: https://tech.metail.com/performance-quaternions-gpu/
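For reference, rotating a vector by a quaternion without ever building a matrix is just a few cross products. This is the standard formula, not a reproduction of the linked benchmark's shaders; the bandwidth angle is that a quaternion is 4 floats against 9 (often padded to 12) for a matrix.

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def quat_rotate(q, v):
    # Rotate v directly by unit quaternion q = (w, x, y, z):
    #   v' = v + 2 r x (r x v + w v),  with r = (x, y, z).
    # No 3x3 matrix is built; q is only 4 floats to fetch.
    w, r = q[0], q[1:]
    t = cross(r, v)
    t = (t[0] + w*v[0], t[1] + w*v[1], t[2] + w*v[2])
    u = cross(r, t)
    return (v[0] + 2*u[0], v[1] + 2*u[1], v[2] + 2*u[2])

h = math.pi / 4  # half-angle for a 90-degree turn about z
q = (math.cos(h), 0.0, 0.0, math.sin(h))
out = quat_rotate(q, (1.0, 0.0, 0.0))
```

Per vector this costs more arithmetic than a 3x3 multiply, which is why "which is faster on a GPU" comes down to whether you're bound by ALU or by memory traffic.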



