Hacker News

Pretty much the same thing has happened in natural language processing. Previously, trained linguists would spend a lot of time on carefully crafted features. Now you just throw a bidirectional LSTM at the problem, along with enough training examples, and you are close to the state of the art.
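To make the contrast concrete: instead of hand-engineering linguistic features, the model learns them from data. A minimal sketch of the kind of bidirectional LSTM tagger being described, assuming PyTorch; the class and parameter names here are illustrative, not from any particular library:

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    """Hypothetical minimal BiLSTM sequence tagger (e.g. for POS tagging or NER)."""
    def __init__(self, vocab_size, embed_dim, hidden_dim, num_tags):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Reads each sentence left-to-right and right-to-left simultaneously.
        self.lstm = nn.LSTM(embed_dim, hidden_dim,
                            bidirectional=True, batch_first=True)
        # Bidirectional output concatenates both directions: 2 * hidden_dim.
        self.out = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, token_ids):
        h, _ = self.lstm(self.embed(token_ids))
        return self.out(h)  # per-token tag scores

# Toy usage: a batch of 2 "sentences" of 7 token ids each.
tagger = BiLSTMTagger(vocab_size=1000, embed_dim=32, hidden_dim=64, num_tags=5)
scores = tagger(torch.randint(0, 1000, (2, 7)))
print(scores.shape)  # one score vector over 5 tags per token
```

No feature templates, no morphological analyzers; given labeled examples and a cross-entropy loss, the network learns its own representations.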


Where "enough training examples" has proven to be the truly difficult part.



