
The initial claim was that distillation can never produce a model B smarter than model A, because B only has access to A's knowledge. The argument you're responding to was that play and reflection can yield improvements without any additional knowledge, so distillation can serve as a starting point for creating a model B that is smarter than model A, with no new data except A's outputs and then B's own outputs. That refutes the initial claim: distillation alone doesn't need to be enough if it can be made enough with a few extra steps afterward.
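A toy sketch of the shape of this argument (not real LLM distillation; every name here is hypothetical): a "student" that learns only from a noisy "teacher's" outputs can still beat the teacher, because aggregating many teacher outputs filters out the teacher's errors.

```python
import random

random.seed(0)

def true_label(x):
    """Ground truth the teacher only approximates."""
    return x > 0

def teacher(x):
    """Noisy teacher: agrees with ground truth 80% of the time."""
    return true_label(x) if random.random() < 0.8 else not true_label(x)

def student(x, queries=25):
    """'Distilled' student: a majority vote over many teacher outputs,
    standing in for training on the teacher's output distribution.
    It sees nothing but the teacher's answers."""
    votes = sum(teacher(x) for _ in range(queries))
    return votes > queries / 2

inputs = [random.uniform(-1, 1) for _ in range(500)]
teacher_acc = sum(teacher(x) == true_label(x) for x in inputs) / len(inputs)
student_acc = sum(student(x) == true_label(x) for x in inputs) / len(inputs)
print(teacher_acc, student_acc)
```

The student's accuracy lands well above the teacher's despite using only the teacher's outputs, which is the same loophole the comment points at: "only A's knowledge" doesn't bound B if extra steps (here, aggregation; in the comment, play and reflection) are applied on top.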



You’ve subtly confused “less accurate” and “smarter” in your argument. In other words, you’ve swapped the benchmark of representing the base data for the benchmark of reasoning score.

Then, you’ve asserted that was the original claim.

Sneaky! But that’s how “arguments” on HN are “won”.


No, I didn't confuse the two. There is no formal definition of "smart", but if you're claiming that factual accuracy is unrelated to it, I can't imagine that claim is in good faith.



