
LLMs quite literally work at the level of their source material, that's how training works, that's how RAG works, etc.
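To make the RAG point concrete, here is a minimal retrieval sketch (a toy word-overlap matcher, not a real embedding search; names and data are illustrative): the retrieved unit is the source text itself, pasted verbatim into the prompt.

```python
def retrieve(query: str, chunks: list[str]) -> str:
    """Return the source chunk with the most word overlap with the query.
    Real RAG pipelines use vector embeddings instead of word overlap,
    but either way the thing retrieved and fed to the model is the
    original text, not some abstracted 'idea'."""
    q = set(query.lower().split())
    return max(chunks, key=lambda c: len(q & set(c.lower().split())))

# Hypothetical source chunks:
chunks = [
    "The mitochondria is the powerhouse of the cell.",
    "Copyright grants authors exclusive rights to their works.",
]

context = retrieve("who holds rights to a creative work", chunks)
# The model is then prompted with that chunk verbatim:
# f"Context: {context}\n\nQuestion: ..."
```

The model never sees a distilled concept, only the author's own words.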

There is no proof that LLMs work at the level of "ideas"; if you could prove that, you'd solve a whole lot of incredibly expensive problems that are current bottlenecks for training and inference.

It is a bit ironic that you'd call someone wanting to control, and be paid for, the thing they themselves created "selfish", while at the same time writing apologia for a trillion-dollar private company stealing someone else's work for its own profit.

It isn't some moral imperative that OpenAI gets access to all of humanity's creations so they can turn a profit.
