Training large language models is characterised by diminishing returns: the first billion training tokens reduce the loss more than the second billion, the second billion more than the third, and so on. The same holds for increases in model size; the improvement is sublinear.
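A minimal sketch of this, assuming the loss follows a power law of the form L(n) = a·n^(−b) + c in the number of training tokens n (the general shape reported in scaling-law work; the constants here are illustrative, not fitted to any model):

```python
def loss(n_tokens: float, a: float = 10.0, b: float = 0.3, c: float = 1.7) -> float:
    """Illustrative power-law loss curve: L(n) = a * n**(-b) + c.

    The constants are made up for demonstration; only the shape matters.
    """
    return a * n_tokens ** (-b) + c

billion = 1e9

# Loss reduction from the 2nd billion tokens vs. the 3rd billion.
drop_2nd = loss(1 * billion) - loss(2 * billion)
drop_3rd = loss(2 * billion) - loss(3 * billion)

# Each additional billion tokens buys a smaller loss reduction.
print(drop_2nd > drop_3rd > 0)
```

Any b > 0 gives the same qualitative picture: each marginal slice of data helps, but less than the slice before it.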
