It's not only the precision (i.e., the bit-width), but also the variable exponent (i.e., the "floating-point" part).

Unfortunately, simulations often cover a wide range of exponents during a single run (one matrix element might be 5.74293574325e8, while the one next to it might be 3.25356343e-9, and you still want to preserve their precision), and the exponents of the inputs might vary a lot between different runs. You can only use fixed-point if you have a good idea of the exponents of the input numbers, and how those change during the course of the computation. That works well for typical digital signal processing applications, and not so well for generic number crunching libraries.
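To make the dynamic-range problem concrete, here is a minimal Python sketch (illustrative values only, not from any particular library): quantizing two matrix elements with very different exponents to a single shared fixed-point scale wipes out the small one, while a float keeps both, because each value carries its own exponent.

    def to_fixed(x, frac_bits):
        # Round x to an integer with `frac_bits` fractional bits.
        return round(x * (1 << frac_bits))

    def from_fixed(q, frac_bits):
        return q / (1 << frac_bits)

    big, small = 5.74293574325e8, 3.25356343e-9

    # To keep `big` inside a signed 32-bit word (2**31 ~ 2.1e9), only about
    # one fractional bit is left over for the whole matrix.
    frac_bits = 1

    for x in (big, small):
        q = from_fixed(to_fixed(x, frac_bits), frac_bits)
        print(f"{x:.6e} -> {q:.6e}  (relative error {abs(q - x) / abs(x):.1e})")

The large element survives with a relative error around 1e-10, but the small one rounds to exactly zero. A 32-bit float, by contrast, keeps roughly seven significant digits for both, because the exponent travels with each value.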

> That works well for typical digital signal processing applications, and not so well for generic number crunching libraries.

Even then, fixed-point breaks down with more complex signal processing algorithms. FPGAs are great for simpler algorithms like FFTs, digital filtering, and motion compensation. They aren't quite as good at more complicated algorithms like edge detection, and they really break down when you want to use ML-based algorithms.
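For contrast, here is the kind of kernel where fixed point does fit: a Q15 FIR filter, sketched in plain Python with made-up coefficients (not code from any particular toolchain). Because audio-style samples and taps are assumed to stay in [-1, 1), one fixed scale of 2**15 covers every value, which is exactly the sort of arithmetic that maps cleanly onto FPGA DSP blocks.

    Q = 15                      # Q1.15: 1 sign bit, 15 fractional bits
    SCALE = 1 << Q

    def to_q15(x):
        # Assumes |x| < 1, so the result fits in a signed 16-bit word.
        return int(round(x * SCALE))

    def fir_q15(samples, taps):
        # Direct-form FIR on Q15 integers; a Q15*Q15 product is Q30,
        # so shift right by 15 to get back to Q15.
        out = []
        for n in range(len(samples)):
            acc = 0
            for k, t in enumerate(taps):
                if n - k >= 0:
                    acc += samples[n - k] * t
            out.append(acc >> Q)
        return out

    # 3-tap moving average over a toy signal (values chosen arbitrarily).
    taps = [to_q15(1 / 3)] * 3
    signal = [to_q15(x) for x in (0.0, 0.5, 0.25, -0.5, 0.125)]
    print([y / SCALE for y in fir_q15(signal, taps)])

Everything here is integer multiplies, adds, and shifts with a range known in advance, so no per-value exponent handling is needed.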
