That's not how it works. LLMs were trained only on text, so this is new data they've never seen before. There's no train-test leakage.
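To pin down what "train-test leakage" means concretely: contamination checks are typically done by looking for long n-gram overlaps between the benchmark and the pretraining corpus. A minimal sketch in Python with toy data; the function names (ngrams, contamination_rate) are illustrative, not from any particular library:

    def ngrams(text, n=8):
        # Word-level n-grams of a text, returned as a set for fast overlap tests.
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def contamination_rate(train_corpus, test_examples, n=8):
        # Fraction of test examples sharing at least one n-gram with the training data.
        train_grams = set()
        for doc in train_corpus:
            train_grams |= ngrams(doc, n)
        hits = sum(1 for ex in test_examples if ngrams(ex, n) & train_grams)
        return hits / len(test_examples)

    # Toy data: the first test item copies an 8-gram from training, the second doesn't.
    train = ["the quick brown fox jumps over the lazy dog near the river bank"]
    test = ["the quick brown fox jumps over the lazy dog near the old mill",
            "a completely unrelated sentence about neutron stars colliding far away"]
    print(contamination_rate(train, test))  # 0.5

The point being: if the benchmark really is a modality that text-only training could not have contained, a check like this comes back empty by construction.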


You are right, but I am sure there are patterns in the weights that (accidentally or not) predict the kinds of patterns commonly seen in media.



