
Thanks a lot for that detailed explanation. That makes perfect sense.

I wonder whether all these additional capabilities can be bolted onto existing LLMs, or whether they require another iteration of the transformer architecture plus retraining.



LLMs are not the end of AI research. See LMMs (large multimodal models), cognitive architectures, spiking neural networks, I-JEPA, etc. There are going to be multiple totally different types of AI that may be called AGI, depending on who you ask.

For emotions, see Pei Wang's research.


> That makes perfect sense.

On the contrary, it has multiple glaring flaws. Given that you can ask an LLM a single question and get a nonsensical answer, the claim that stopping them from forgetting will bring them closer to “universally-accepted” AGI has no basis in reality. Humans can’t even universally agree that the Earth is not flat; it is a pipe dream to think LLMs will bring any such consensus in a few months.

Another poster points out other issues with the answer: https://news.ycombinator.com/item?id=37915367



