
Yep, this is why I talk about the virtuous feedback loop between these two modes: empirical methods feed theory, which in turn feeds empirical methods, ad infinitum.

In ML, a concrete example might be the tool XGBoost (#1) and the original work that developed Gradient Boosting itself (#2). XGBoost is an implementation of that theory, and probably one that has helped refine the underlying theory as well.
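To make the #1/#2 split concrete, here is a minimal sketch of the gradient boosting idea that XGBoost implements: repeatedly fit a weak learner (here a depth-1 stump) to the residuals, i.e. the negative gradient of squared-error loss, and add it to the ensemble with a shrinkage factor. This is an illustrative toy, not XGBoost's actual implementation, which adds second-order gradients, regularization, sparsity-aware split finding, and a great deal of systems engineering.

```python
import numpy as np

def gradient_boost(X, y, n_rounds=50, lr=0.1):
    """Toy gradient boosting for squared-error loss using depth-1 stumps."""
    pred = np.full(len(y), y.mean())  # start from the constant best guess
    stumps = []
    for _ in range(n_rounds):
        resid = y - pred  # negative gradient of squared error w.r.t. pred
        best = None
        # exhaustive search for the best single-split stump on the residuals
        for j in range(X.shape[1]):
            for t in np.unique(X[:, j]):
                left = X[:, j] <= t
                if left.all() or (~left).all():
                    continue  # degenerate split, skip
                lv, rv = resid[left].mean(), resid[~left].mean()
                err = ((resid - np.where(left, lv, rv)) ** 2).sum()
                if best is None or err < best[0]:
                    best = (err, j, t, lv, rv)
        _, j, t, lv, rv = best
        # shrink the stump's contribution and add it to the ensemble
        pred += lr * np.where(X[:, j] <= t, lv, rv)
        stumps.append((j, t, lv, rv))
    return pred, stumps
```

The shrinkage factor `lr` is the same "learning rate" knob XGBoost exposes as `eta`; smaller values need more rounds but generalize better.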

ML has lots of examples where the researcher(s) for #2 were also doing #1. A famous paper in NLP comes to mind as an excellent example of this overlap (PDF: https://www.csie.ntu.edu.tw/~b92b02053/print/good-turing-smo...)
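The linked paper concerns Good-Turing smoothing, which is itself a nice #1/#2 case: the core estimate is simple enough to sketch in a few lines, while making it robust in practice required further work (the paper's "Simple Good-Turing" smooths the frequency-of-frequencies before applying it). A minimal sketch of the unsmoothed Turing estimate, assuming the standard formulation (r* = (r+1) N_{r+1}/N_r, with N_1/N reserved for unseen events):

```python
from collections import Counter

def good_turing(samples):
    """Unsmoothed Good-Turing estimates from a list of observed tokens."""
    counts = Counter(samples)          # r: how often each species was seen
    Nr = Counter(counts.values())      # N_r: how many species were seen r times
    N = len(samples)
    # Probability mass reserved for unseen species: N_1 / N
    p_unseen = Nr[1] / N
    # Adjusted count r* = (r + 1) * N_{r+1} / N_r. This is undefined when
    # N_{r+1} = 0, which is exactly the gap Simple Good-Turing smoothing
    # fixes; this sketch just falls back to the raw count there.
    adjusted = {}
    for w, r in counts.items():
        if Nr[r + 1] > 0:
            adjusted[w] = (r + 1) * Nr[r + 1] / Nr[r]
        else:
            adjusted[w] = float(r)
    return p_unseen, adjusted
```

On real corpora the raw N_r values are noisy and full of zeros at high r, which is why the theory side (smoothing N_r) and the implementation side fed each other so directly.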
