
Why is modifying weights sensibly impossible? Is it because a modification's "sensibility" is measurable only post facto, and we can have no confidence in any weight-based hypothesis?
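
To make the "post facto" point concrete, here is a minimal, hypothetical sketch (not from the thread, all names invented): a toy self-modification loop on a small linear model. The only way the loop can judge whether a weight edit was "sensible" is to apply it and then measure the loss, i.e. after the fact; there is no reliable way to score the edit in advance.

    # Hypothetical sketch: blind weight edits can only be judged post facto.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy task: recover a hidden linear map from noisy data.
    X = rng.normal(size=(256, 8))
    true_w = rng.normal(size=8)
    y = X @ true_w + 0.1 * rng.normal(size=256)

    def loss(w):
        return float(np.mean((X @ w - y) ** 2))

    w = rng.normal(size=8)      # current "weights"
    current = loss(w)

    for step in range(1000):
        candidate = w + 0.05 * rng.normal(size=8)  # propose a weight edit blindly
        trial = loss(candidate)                    # its "sensibility" is known only after trying it
        if trial < current:                        # keep the edit only if it actually helped
            w, current = candidate, trial

    print(f"final loss: {current:.4f}")

This is just random hill climbing, but it illustrates the asymmetry the comment describes: the hypothesis "this weight change is an improvement" carries no confidence until the change has been applied and evaluated.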


This just doesn't feel like something current LLMs can do: the model would need to understand its own weights well enough to make general improvements, and clear a high enough bar for those improvements to be non-trivial.

