
That said, people are always discovering algorithms that get more performance out of new hardware than you'd expect.

For instance, a frightening amount of CPU time in financial messaging systems is spent validating UTF-8, parsing XML and JSON, converting numbers written in decimal digits to binary, and things like that. You'd think these are "embarrassingly serial" problems, but with clever coding and advanced SIMD instructions such as AVX-512 they can be accelerated for throughput, latency, and economy.
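To give a flavor of the trick (a minimal sketch of my own, using AVX2 rather than AVX-512 to keep it short, with a made-up function name): a common fast path in SIMD UTF-8 validators is to check 32 bytes at a time for pure ASCII and only fall back to a byte-by-byte state machine when a high bit shows up.

    #include <immintrin.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Returns 1 if buf is pure ASCII (trivially valid UTF-8), 0 if a
       non-ASCII byte appears and a full validator has to take over. */
    int is_ascii_avx2(const uint8_t *buf, size_t len) {
        size_t i = 0;
        for (; i + 32 <= len; i += 32) {
            __m256i chunk = _mm256_loadu_si256((const __m256i *)(buf + i));
            /* movemask gathers the high bit of all 32 bytes at once;
               any set bit means a non-ASCII byte somewhere in the chunk. */
            if (_mm256_movemask_epi8(chunk) != 0)
                return 0;
        }
        for (; i < len; ++i)          /* scalar tail */
            if (buf[i] & 0x80)
                return 0;
        return 1;
    }

Production validators go much further, checking multi-byte sequences with table lookups and shuffles, but the shape is the same: replace a per-byte branchy loop with wide, branch-free operations.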

The benefits of the GPU are great enough that you might do more "work" overall yet still get the job done faster, because that work can be done in parallel.

For instance, the algorithms used by the old A.I. ("expert systems") parallelize better than you might think (though not as well as the Japanese hoped they would in the 1980s) despite being super-branchy. Currently fashionable neural networks (called "connectionist" back in the day) require only predicated branching (which side of the ReLU are you on?) but spend a lot of computation on parts of the network that might not be meaningful for the current inference. It depends on the details, but you might be better off doing many more operations if you can do them in parallel.
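Concretely, "predicated branching" means something like the following sketch of mine (again plain AVX2 on the CPU for brevity; a GPU does the moral equivalent across thousands of lanes): every lane executes the same max instruction whether its value is positive or not, so there is no branch to mispredict or diverge on.

    #include <immintrin.h>
    #include <stddef.h>

    /* Apply ReLU in place: y = max(x, 0), eight floats per instruction,
       with no data-dependent branch inside the loop. */
    void relu_avx2(float *data, size_t n) {
        const __m256 zero = _mm256_setzero_ps();
        size_t i = 0;
        for (; i + 8 <= n; i += 8) {
            __m256 x = _mm256_loadu_ps(data + i);
            _mm256_storeu_ps(data + i, _mm256_max_ps(x, zero));
        }
        for (; i < n; ++i)            /* scalar tail */
            data[i] = data[i] > 0.0f ? data[i] : 0.0f;
    }

The downside is the flip side of the same design: because every lane does the work regardless, you also compute activations that contribute little or nothing to the current inference.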

Given that GPUs are out there and that so many people are working on them, I think the range of what you can do with them is going to increase. Few people will be writing application logic on them directly, though; they will increasingly use libraries and frameworks. For instance, see

https://arxiv.org/pdf/1709.02520.pdf


