
The problem is showing that humans aren't just doing next word prediction too.


I don't see that as a problem. I don't particularly care how human intelligence works; what matters is what an LLM is capable of doing and what a human is capable of doing.

If those two sets of accomplishments are the same there's no point arguing about differences in means or terms. Right now humans can build better LLMs but nobody has come up with an LLM that can build better LLMs.


That’s literally the definition of takeoff: once it starts, it gets us to the singularity within a decade. And there’s no publicly available evidence that it has started… emphasis on publicly available.


> it gets us to singularity

Are we sure it's actually taking us along?


> but nobody has come up with an LLM that can build better LLMs.

Yet. Not that we know of, anyway.


Given the dramatic uptake of Cursor, Windsurf, Claude Code, etc., we can be 100% certain that LLM companies are using LLMs to improve their products.

The improvement loop is likely not fully autonomous yet - it is currently more efficient to have a human-in-the-loop - but there is certainly a lot of LLMs improving LLMs going on today.


I feel like people are going to find it hard to accept that this is how most of us think (at least when thinking in language). They will resist it the way people resisted heliocentrism.

I'm curious what others who are familiar with LLMs and have practiced open monitoring meditation might say.



