
We built machines that could do calculations far faster than humans long before we had any idea what neural configuration humans use to do them. We gave those machines short- and long-term memory without understanding how human brains do the same. Then we wrote software on those machines that could outperform the best humans at chess without the slightest inkling of how human brains play it. And then we started making software that vastly exceeded those early chess bots even though we didn't understand how that software performed its calculations (i.e., neural networks). And now we have software that can read and write, which we understand even less than we understood those earlier NNs.

Empirically, it does not seem necessary to understand one version of a thing in order to build a superior one. Why should the remaining unsolved cognitive tasks break that pattern?


