
They are saying, correctly, that we _can_ understand the learning of models as a continuous process, and that we do not need to assume that something fundamentally different happens at one stage than at others. So no emergence.

This is massive because it means, for example, that you can study what is happening in these models using small ones that perform worse but are simpler to understand, without missing something fundamental. I don't think they claim we already understand what is happening in these models, though.
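To make the continuity point concrete, here is a minimal sketch (my own illustration, not something from the paper): a per-token success probability that improves smoothly with scale can still look like a sudden "emergent" jump when measured with an all-or-nothing metric. The logistic curve and answer length are arbitrary toy choices.

    import math

    def per_token_accuracy(scale: float) -> float:
        # Toy model: smooth, continuous improvement with scale (logistic curve).
        return 1 / (1 + math.exp(-(scale - 5)))

    def exact_match(scale: float, answer_length: int = 20) -> float:
        # Probability of getting every token of a 20-token answer right: p**n.
        # This stays near zero and then shoots up, even though p changes smoothly.
        return per_token_accuracy(scale) ** answer_length

    for scale in range(1, 11):
        p = per_token_accuracy(scale)
        em = exact_match(scale)
        print(f"scale={scale:2d}  per-token={p:.3f}  exact-match={em:.3f}")

Under this toy view, the "jump" lives in the metric, not in the learning dynamics, which is one way of reading the claim that nothing qualitatively different happens at any stage.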

I don't know what exactly you are talking about with your academia bashing. If anything, it seems to me academia is just as awash in barely substantiated AI hype as everywhere else. Disparaging and psychologizing critical voices just makes it seem like you might be missing the point of academia...
